Enterprise Architect
Competitive salary + 30% bonus + 30 days holiday, health & dental insurance, £2,000 learning budget, free gym membership, pension contribution
Leeds - Hybrid working (travel to Leeds 1-2 times per month)

Fruition IT are recruiting an Enterprise Architect on behalf of a market-leading 24x7 online business based in Leeds.

Why Apply?
This is an exciting role contributing to the future growth of a 24x7 enterprise business. As an Enterprise Architect you'll shape the forward view of technology linked to the long-term business strategy. You'll be working with multiple brands within the group to create shared platforms and ways of working. A key part of the role is supporting technology M&A due diligence and integration activities.

What will I be doing?
-Support technology M&A due diligence activities.
-Undertake architecture activities for customer migrations between brand platforms within the group.
-Provide architectural leadership and support for cross-brand strategic technology initiatives.
-Establish and evolve the architecture practice centrally and within brands.
-Work with key business personnel to identify solutions that will provide the best possible return. This is likely the single most important part of this role.
-Keep pace with change in new technologies, methods and products.
-Help and advise senior executives in setting technology strategy, then work with technology teams to turn vision into reality.
-Lead and participate in architectural reviews of software, infrastructure, data and other architectures.
-Work with product owners, architects, developers and other technical resources across the organisation to ensure consistency and alignment of technology roadmaps across the whole organisation.
-Participate in product evaluations, RFPs, POCs and business decisions to ensure fit and scale into the platform and the organisation.

What do I need?
-Significant experience in an Enterprise Architect role.
-Cloud architecture and design experience, including architecting cost-efficient, scalable cloud environments, specifically on AWS.
-Significant experience in a software design and development role using web technologies and Server Side applications such as C++, JEE, .NET or Kafka-based technology.
-Experience designing complex, scalable solutions.
-Experience with, and buy-in on, agile/lean development methodologies and continuous delivery principles.
-Capable of translating architectural designs into software, taking account of the target environment, performance requirements and existing systems.
-Excellent communication skills at all levels, from users up to C-level executives.
-Knowledge of data architectures and technologies, including both relational/SQL and NoSQL databases.
-Willingness to learn new things, take ownership and solve complex issues.
-Knowledge of enterprise infrastructure: Servers, deployment, provisioning, networking and security.
-Significant experience of solutions delivery/business architecture of COTS platforms.

To apply for this role, please send your CV for consideration. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
05/07/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent Full time role*

A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming computing. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments.

Responsibilities:
-Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
-Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
-Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
-Drive implementation of best practices for efficient data serialization, compression, and network communication.
-Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
-Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
-Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
-Bachelor's or Master's degree in an engineering discipline
-10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures
-10+ years of experience with Java
-5+ years of specific Kafka and Flink experience
-5+ years of Kubernetes experience
-Expert-level knowledge of Kafka
-Expert-level knowledge of Flink
-Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink
-Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
04/07/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role*
*Position is bonus eligible*

Prestigious Financial Institution is currently seeking a Java Software Engineer. The candidate will support and work collaboratively with business analysts, team leads and the development team; contribute to developing scalable and resilient hybrid and Cloud-based data solutions supporting critical financial market clearing and risk activities; and collaborate with other developers, architects and product owners to support the enterprise transformation into a data-driven organization. The Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment.

Responsibilities:
-Support the application development of Real Time and batch applications for business requirements in the agreed architecture framework and Agile environment
-Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation
-Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented
-Perform application and project risk analysis and recommend quality improvements
-Assist Production Support by providing advice on system functionality and fixes as required
-Communicate all time delays or defects in the software immediately, clearly and concisely, to appropriate team members and management
-Experience with resolving security vulnerabilities

Qualifications:
The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
-[Required] 3+ years of experience in building high-speed, Real Time and batch solutions
-[Required] 3+ years of experience in Java
-[Preferred] Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
-[Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
-[Preferred] Experience with cloud technologies and migrations; experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM
-[Preferred] Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google
-[Required] Experience writing unit and integration tests with testing frameworks like JUnit, Citrus
-[Required] Experience working with various types of databases like Relational, NoSQL
-[Required] Experience working with Git
-[Preferred] Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines
-[Preferred] Familiarity with monitoring-related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics
-[Required] Hands-on experience with Java version 8 onwards, Spring, Spring Boot, REST APIs

Technical Skills:
-[Required] Java-based software development experience, including a deep understanding of Java fundamentals like data structures, concurrency and multithreading
-[Required] Experience in object-oriented design and software design patterns

Education and/or Experience:
-[Required] BS degree in Computer Science or a similar technical field
03/07/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent Full time role*

A prestigious company is looking for an Atlassian SME. This role will focus on Atlassian products and administering/integrating Confluence and Jira. This person will need experience working in a Windows environment and the AWS cloud.

Responsibilities:
-Provide technical leadership for planning, designing, installing, testing, and implementing Atlassian solutions.
-Provide subject matter expertise on the SDLC platforms we maintain (Confluence, Jira, SpiraTest).
-Implement Atlassian plugins and support integration with other enterprise software.
-Support Knowledge Management (KM) program strategy, transformation, and technical implementation.
-Create knowledge documentation related to requirements and solution design.
-Facilitate knowledge transfer sessions for administration and self-service.
-Develop a train-the-trainer model for support and administration.

Qualifications:
-Bachelor's degree; 4 years of additional related work experience may be substituted for the degree.
-3-5 years of experience in SaaS platform implementation and/or system administration.
-3+ years of hands-on experience developing and maintaining cloud platform technologies.
-Certifications in Atlassian products are preferred.
-3+ years of experience in implementing Atlassian products.
-Experience with RESTful APIs, JSON, and XML.
-Experience with Agile/Scrum or DevOps methodologies.
-Experience with SQL, Python, PowerShell, or other Scripting languages.
-Experience with System and Data Architecture.
-Experience or knowledge of SDLC pipeline tools such as Git, Jenkins, SonarQube or similar tools.
03/07/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent Full time role*

A prestigious financial company is looking for a Java Back End Developer. This developer will need experience with Java, Real Time environments, Spring, Spring Boot, multithreading, etc. Any experience with Kafka and DevOps tools is a plus.

Responsibilities:
-Support the application development of Real Time and batch applications for business requirements in the agreed architecture framework and Agile environment
-Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation
-Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented
-Perform application and project risk analysis and recommend quality improvements
-Assist Production Support by providing advice on system functionality and fixes as required
-Communicate all time delays or defects in the software immediately, clearly and concisely, to appropriate team members and management
-Experience with resolving security vulnerabilities

Qualifications:
-Java-based software development experience, including a deep understanding of Java fundamentals like data structures, concurrency and multithreading
-Experience in object-oriented design and software design patterns
-BS degree in Computer Science or a similar technical field
-3+ years of experience in building high-speed, Real Time and batch solutions
-3+ years of experience in Java
-Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
-Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
-Experience with cloud technologies and migrations; experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM
-Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google
-Experience writing unit and integration tests with testing frameworks like JUnit, Citrus
-Experience working with various types of databases like Relational, NoSQL
-Experience working with Git
-Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines
-Hands-on experience with Java version 8 onwards, Spring, Spring Boot, REST APIs
03/07/2024
Full time
Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We do this by showcasing their talents, skills, and unique experiences in an inclusive environment that helps them thrive. DLP Cloud Security Engineer Dublin, leopardstown - Hybrid (2 to 3 days in office) 6 months contract Euros 650 - Euros 700 per day Are you a seasoned Cloud Security Engineer with a passion for protecting digital assets and data within cloud environments? Our client, a highly reputable organisation and the Awarded World's best financial institution, is seeking a talented individual like you to join their team as a Cloud Security Engineer. In this role, you will have the exciting opportunity to design, implement, and maintain robust DLP security measures that align with the organisation's goals and compliance requirements. Get ready to collaborate with cross-functional teams, identify vulnerabilities, and develop proactive security solutions! You'll also have the chance to provide expert consultation, create impactful presentations, and contribute to the execution of long-term strategic plans. Skills required for this role include extensive experience in information security, proficiency in cloud platforms such as AWS, Azure, or Google Cloud, and a strong understanding of network security, DLP, and encryption. Join our client's dynamic team and make a significant impact by safeguarding their digital assets and data. You'll have the opportunity to work with cutting-edge technologies, collaborate with a diverse group of professionals, and thrive in an environment that encourages innovation. Don't miss out on this exciting opportunity! Apply now to become our client's Cloud Security Engineer. 
Responsibilities: -Develop and implement a comprehensive cloud DLP security strategy that aligns with the organisation's goals and compliance requirements. -Design and architect secure cloud solutions, considering aspects such as network security, identity and access management (IAM), encryption, and data protection. -Continuously monitor cloud environments for security threats, vulnerabilities, and anomalies. Take charge in implementing proactive measures to address concerns effectively. -Ensure that cloud security practises adhere to industry standards and regulatory compliance, such as GDPR, HIPAA, or SOC 2. -Establish reporting routines that provide visibility to the effective execution of long-term maturity/strategic plans. -Provide expert consultation to control owners and stakeholders in developing complete and repeatable control processes, including documentation of controls and metrics. -Develop impactful presentations tailored for executives and stakeholders, highlighting key security measures. Required Skills: -Extensive experience in information security, with a strong focus on cloud security. Prior experience as a Security Architect or similar role is highly desirable. -Proficiency in cloud platforms such as AWS, Azure, or Google Cloud, including knowledge of their security services and best practises. -Strong understanding of security principles, protocols, and technologies, with expertise in areas like network security, DLP, and encryption. -Excellent analytical and problem-solving abilities, with the capacity to assess complex security risks and devise effective solutions even in ambiguous situations. -Experience in administration of a DLP tool, including configuration, upgrade, and disaster recovery planning. -Understanding of technical and organisational security vulnerabilities, threats, and risks. -Exceptional communication skills, with the ability to articulate complex concepts concisely and accurately. 
- Ability to troubleshoot and solve complex problems rapidly.
- Certified Cloud Security Professional (CCSP) preferred, or the ability to obtain it within 3 months.

Desired Skills:
- Experience operating and tuning DLP technologies.
- Familiarity with CASB solutions, Microsoft Purview, Proofpoint, M365.
- Cloud platform familiarity as it relates to DLP solutions (AWS, Azure).
- Operating systems (Windows/Mac/Linux).
- Basic networking - VPN, TCP/UDP protocols.
- Basic encryption - SSL, AES, IPsec, key management, certificates.
- Ancillary services - DNS, web servers, LDAP/AD, database technologies.
- Intermediate-level scripting - eg, Python, PowerShell.

If you feel you have the skills and experience and want to hear more about this role, 'apply now' to declare your interest in this opportunity with our client. Your application will be reviewed by our dedicated team. We will respond to all successful applicants as soon as possible; however, please be advised that we may contact you again should we need further applicants or if other opportunities arise relevant to your skill set.
03/07/2024
Project-based
Principal Software Engineer
Amsterdam - Hybrid
€140 - €160 per hour
Initially until end of October, dependent on start date

Are you a Principal Software Engineer? Do you live in the Netherlands and are you seeking a new contract role? Brookwood Recruitment is working with a global company that puts the customer first and is heavily focused on sustainability.

The successful Principal Software Engineer will need the following skills: AWS, Kubernetes, Java, Terraform, the GitLab stack, and Apache Spark; Flink and Snowflake runtimes are ideal.

What you can expect to be doing as a Principal Software Engineer:

Building software applications
Is responsible for building software applications by using relevant development languages and applying knowledge of systems, services and tools appropriate for the business area, and is the go-to person on this topic for the area. Is responsible for writing readable and reusable code by applying standard patterns and using standard libraries, and is the go-to person on this topic for the area. Is responsible for refactoring and simplifying code by introducing design patterns when necessary, and is the go-to person on this topic for the area. Is responsible for ensuring the quality of the application by following standard testing techniques and methods that adhere to the test strategy, and is the go-to person on this topic for the area. Is responsible for maintaining data security, integrity and quality by effectively following company standards and best practices, and is the go-to person on this topic for the area.
Architectural Guidance
Is responsible for advising product teams towards a technical solution that meets the functional, non-functional and architectural requirements by challenging the rationale for an application design and providing context in the wider architectural landscape. Is responsible for setting a clear direction for a technical capability by evaluating and aligning the target architecture improvements, and reframing architectural designs and decisions for varied stakeholders. Is responsible for owning a service end to end by actively monitoring application health and performance, setting and monitoring relevant metrics and acting accordingly when they are violated, and is the go-to person on this topic for the area. Is responsible for reducing business continuity risks and bus factor by applying state-of-the-art practices and tools and writing the appropriate documentation such as runbooks and OpDocs, and is the go-to person on this topic for the area. Is responsible for reducing risk and obtaining customer feedback by using continuous delivery and experimentation frameworks, and is the go-to person on this topic for the area. Is responsible for independently managing an application or service by working through deployment and operations in production, and is the go-to person on this topic for the area. Is responsible for evaluating possible architecture solutions by taking into account cost, business requirements, technology requirements and emerging technologies, and guides more junior members of the team in this topic. Is responsible for describing the implications of changing an existing system or adding a new system to a specific area, by having a broad, high-level understanding of the infrastructure and architecture of our systems, and guides more junior members of the team in this topic.
Is responsible for helping grow the business and/or accelerate software development by applying engineering techniques (eg prototyping, spiking and vendor evaluation) and standards, and guides more junior members of the team in this topic. Is responsible for meeting business needs by designing solutions that meet current requirements and are adaptable for future enhancements, and guides more junior members of the team in this topic.

Technical Incident Management
Is responsible for addressing and resolving live production issues by mitigating the customer impact within SLA. Is responsible for improving the overall reliability of systems by producing long-term solutions through root cause analysis. Is responsible for keeping track of incidents by contributing to postmortem processes and logging live issues.

Coaching/Mentoring
Is responsible for coaching, guiding and improving the overall performance of stakeholders and colleagues at all levels, when appropriate, by sharing experience, knowledge and approaches to work.

Critical Thinking
Is responsible for systematically identifying patterns and underlying issues in complex situations, and for finding solutions by applying logical and analytical thinking, and guides more junior members of the team in this topic. Is responsible for constructively evaluating and developing ideas, plans and solutions by reviewing them, objectively taking into account external knowledge, initiating 'SMART' improvements and articulating their rationale, and guides more junior members of the team in this topic.

Continuous Quality and Process Improvement
Is responsible for identifying opportunities for process, system and structural improvements (ie performance gains) by examining and evaluating current process flows, methods and standards. Is responsible for designing and implementing relevant improvements by defining adapted/new process flows, standards, and practices that enable business performance.
Effective Communication
Is responsible for delivering clear, well-structured, and meaningful information to a target audience by using suitable communication mediums and language tailored to the audience. Is responsible for achieving mutually agreeable solutions by staying adaptable, communicating ideas in clear, coherent language and practising active listening. Is responsible for asking relevant (follow-up) questions to properly engage with the speaker and really understand what they are saying, by applying listening and reflection techniques.

If this contract Principal Software Engineer role in Amsterdam motivates and inspires you, please apply with Brookwood Recruitment today. We'd love to help you get your next role. Brookwood has a consultative and inclusive approach to business. We take time to understand our client's needs, structure and culture to enable a fully tailored service that delivers time and time again.
03/07/2024
Project-based
Python Developer - 6-month contract - Hybrid (1 day in Paris)

My client, a global pharma company, is seeking a Python Developer with AWS experience for a Data Lake project.

Requirements:
- 5-10 years of development experience.
- Strong Back End development skills and proficiency in AWS cloud services.
- Some Front End development skills (optional, but preferred).
- Key expertise required: AWS architecture, Python, JavaScript, database design (SQL, NoSQL).

Responsibilities:
- Provide technical development support to implement the clinical data lake.
- Contribute to the solution implementation.
- Identify short-term and long-term solutions with the project team.
- Identify and resolve key obstacles/problems.
- Escalate technical issues promptly.
- Assist in framing and implementing the Clinical Data Lake through various prototypes.
- Work closely with the Clinical Domain Architect, RAPID Tech Lead, and Product Owner.

Key Activities: Implement MVPs and POCs for the Clinical Data Lake project.

Additional Requirements: Proficiency in English and French is ideal. Previous pharmaceutical industry experience is ideal.
03/07/2024
Project-based
ASSOCIATE PRINCIPAL, SOFTWARE ENGINEERING (JAVA)
SALARY: $160k - $170k plus 15% bonus
LOCATION: Chicago, IL - Hybrid, 3 days onsite and 2 days remote
NO SPONSORSHIP

Looking for a candidate with 5+ years of Back End Java development (version 8 or above); financial experience is a big plus. Must have event-driven systems experience with cloud-based AWS data solutions; any DevOps experience (Terraform, Ansible, Jenkins) is a big plus, as is knowledge of the memory model, data structures, concurrency and multithreading, strong testing, and Flink, Apache Spark, Kafka Streams, etc. Screening questions:
- Java: do you understand multithreading? What is your level of experience in Spring?
- Kafka: can you answer basic user/developer questions?
- Flink: do you have any experience?
- Do you have any skills or understanding of Big-O notation?

This role supports and works collaboratively with business analysts, team leads and the development team: a contributor in developing scalable and resilient hybrid and cloud-based data solutions supporting critical financial market clearing and risk activities, collaborating with other developers, architects and product owners to support the enterprise transformation into a data-driven organization. The Specialist, Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment.

Primary Duties and Responsibilities: To perform this job successfully, an individual must be able to perform each primary duty satisfactorily.
- Support the development of big data applications for business requirements in the agreed architecture framework and Agile environment.
- Thoroughly analyzes requirements; develops, tests, and documents software to ensure proper implementation.
- Follows agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented.
- Performs application and project risk analysis and recommends quality improvements.
- Assists Production Support by providing advice on system functionality and fixes as required.
- Communicates all time delays or defects in the software immediately, clearly and concisely, to appropriate team members and management.
- Experience with resolving security vulnerabilities.

Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
- 5+ years of experience in building high-speed, data-centric solutions.
- 5+ years of experience in Java.
- Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
- Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- Experience with cloud technologies and migrations.
- Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google.
- Experience writing unit and integration tests with testing frameworks like JUnit, Citrus.
- Experience following Git workflows.
- Working knowledge of DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines, etc.
- Familiarity with monitoring-related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics.

Technical Skills:
- Java-based software development experience and multithreading.
- Fluent in object-oriented design.
- Strong testing experience.
- Experience working with two or more of the following: Unix/Linux environments, event-driven systems, transaction processing systems, distributed and parallel systems, large software system development, security software development, public-cloud platforms.
- Hands-on experience with Java version 8 onwards, Spring, Spring Boot, Microservices, REST APIs.
02/07/2024
Full time
AWS Cloud-Based Performance Testing
Chicago - Hybrid, 3 days on site - Long-term contract role, C2C or W2

Must be AWS certified, with heavy cloud experience in the setup and maintenance of a cloud-based performance system to automate and troubleshoot environmental issues. Performance testing, automation testing, and financial experience strongly preferred. Python scripting: converting Java to Python. Candidates don't have to be application developers, but should have as much DevOps and containerization experience as possible: Splunk, Confluence, Jira, API testing, UC4 or similar. The work is all about a cloud testing system: the client is migrating from an old system to a new system, and Kafka experience is a HUGE plus.

WORK TO BE PERFORMED: Performance testing with open-source tools like JMeter, Gatling. Perl scripting, PowerShell scripting, solid Python scripting and Java. Setting up parallel testing environments that will be used to compare existing system business processes and data to a new cloud-based system/platform; the goal is to ensure that the new system is producing correct results and performing as expected before it can become the official system of record. The ability to take raw data, mask it, and create algorithms and solutions that increase the data load feeding into our new Clearing System with no issues, duplicates or other data problems that would cause it to be rejected. Assist in the setup and maintenance of cloud-based performance and functional test environments in the Cloud (AWS) and define the steps to automate the process for continuous testing and iterations of cycles.

SKILL AND EXPERIENCE REQUIRED: Python scripting - familiarity with creating modules that multiply transactional data and other data-multiplier strategies that will be used in test cycles of the Real Time Clearing System. SDET automation testing skills/QA automation engineering. Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor.
Solid utility building with Python, Perl and PowerShell. Test automation using CI/CD concepts. AWS Certified SysOps Administrator or Certified Developer (required).

Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP Web Services (JAX-WS), RESTful Web Services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, Shell Scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics.

Software Tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and Real Time and historical monitoring tools on-prem and in the Cloud.

Web Servers/App Servers/Containers experience; Database Technologies: DB2, PostgreSQL; Operating Systems experience; Methodologies: Agile, Iterative Waterfall.
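For candidates unfamiliar with the "data multiplier" activity this posting describes, a minimal Python sketch is shown below: it masks a sensitive field and replicates transactions with unique IDs so the copies would not be rejected as duplicates during load testing. The field names (`txn_id`, `account`) and the SHA-256 masking approach are illustrative assumptions, not the client's actual tooling.

```python
import hashlib

def mask_value(value: str, salt: str = "demo-salt") -> str:
    """Deterministically mask a sensitive field (hypothetical masking scheme)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def multiply_records(records: list[dict], factor: int) -> list[dict]:
    """Replicate each transaction `factor` times, giving every copy a unique ID
    and a masked account so the multiplied load carries no real data."""
    out = []
    for i in range(factor):
        for rec in records:
            copy = dict(rec)
            copy["txn_id"] = f'{rec["txn_id"]}-{i}'   # unique ID per copy avoids duplicate rejection
            copy["account"] = mask_value(rec["account"])
            out.append(copy)
    return out

if __name__ == "__main__":
    sample = [{"txn_id": "T1001", "account": "ACC-889"}]
    loaded = multiply_records(sample, factor=3)
    print(len(loaded))  # 3
```

In practice such a module would feed the multiplied records into the performance-test harness (JMeter, Gatling, or a Kafka producer) rather than printing them.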
02/07/2024
Project-based
AWS Cloud based performance testing Chicago - Hybrid 3 days on site. - Long term contract role C2C or W2 Must be AWS certified heavy cloud experience setting up and maintenance of a cloud-based performance system to automate and troubleshoot environmental issues. Performance testing, automation testings, financial experience strongly preferred. python Scripting: converting Java to python. Don't have to be application developers and as much. Devops and containerization as possible splunk confluence Jira API testing uc4 or similar. All about cloud testing system they are migrating from an old system to a new system kafka is a HUGE plus WORK TO BE PERFORMED: Performance Testing with open-source tools like JMeter, Gatling. Perl Scripting, PowerShell Scripting, solid Python Scripting and Java. Setting up of parallel testing environments that will be used to compare existing system business processes and data to a new cloud-based system/platform. Goal is to ensure that new system is producing correct results and performing as expected before it can become the official system of record. The ability to take raw data, mask it and create algorithms and solutions that increase the data load that will feed into our new Clearing System and with no issues, duplicates or any other data issues that will cause it to be rejected. Assist in the set up and maintenance of cloud-based performance and functional test environments in the Cloud (AWS) and define the steps to automate the process for continuous testing and iterations of cycles. SKILL AND EXPERIENCE REQUIRED: Python Scripting - familiarity with creating modules that multiply transactional data and other data multiplier strategies that will be used in test cycles of the Real Time Clearing System SDET automation testing skills/QA automation engineering Experience with Performance Engineering concepts and methodologies as well as cloud technologies and migrations using public cloud vendor. 
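The data-masking and "data multiplier" duties described above can be illustrated with a minimal, hypothetical Python sketch; the field names, salt and multiplication strategy are all invented for illustration, not taken from the client's system:

```python
import hashlib
import itertools

def mask_account(account_id: str, salt: str = "test-salt") -> str:
    # Deterministic masking: the same input always maps to the same token,
    # so masked records stay joinable across test cycles.
    return hashlib.sha256((salt + account_id).encode()).hexdigest()[:12]

def multiply_transactions(rows, factor: int):
    # Clone each transaction `factor` times with unique ids, growing test
    # volume without the duplicates a clearing system would reject.
    counter = itertools.count(1)
    for row in rows:
        for _ in range(factor):
            clone = dict(row)
            clone["txn_id"] = f"{row['txn_id']}-{next(counter):06d}"
            clone["account"] = mask_account(row["account"])
            yield clone

sample = [{"txn_id": "T1", "account": "ACC-123", "amount": 100}]
expanded = list(multiply_transactions(sample, factor=3))
```

The two properties the listing cares about, uniqueness of the multiplied records and masking of the raw identifiers, both fall out of the id counter and the deterministic hash.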
Contract - UC4 Automation Engineer. Rate: Open. Location: Chicago, IL. Hybrid: 3 days on-site, 2 days remote.
Qualifications: Python scripting. SDET automation testing skills/QA automation engineering. Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services such as AWS VPCs. Solid utility building with Python, Perl and PowerShell. Test automation using CI/CD concepts.
Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API Testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, Open API/Swagger, SOAP Web Services (JAX-WS), RESTful Web Services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, Shell Scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFX, AppDynamics.
Software Tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and Real Time and historical monitoring tools on-prem and in the Cloud.
Web Servers/App Servers/Containers experience. Database Technologies: DB2, PostgreSQL.
Responsibilities: Performance testing with open-source tools such as JMeter and Gatling. Perl scripting, PowerShell scripting, solid Python scripting and Java. Setting up parallel testing environments that will be used to compare existing system business processes and data to a new cloud-based system/platform. The goal is to ensure the new system is producing correct results and performing as expected before it can become the official system of record.
The ability to take raw data, mask it, and create algorithms and solutions that increase the data load feeding into our new Clearing System without duplicates or other data issues that would cause it to be rejected. Assist in the set-up and maintenance of cloud-based performance and functional test environments in the Cloud (AWS), and define the steps to automate the process for continuous testing and iteration of cycles.
02/07/2024
Project-based
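The performance-testing responsibilities above rest on the same core idea that JMeter and Gatling implement: issue requests concurrently and summarise latency percentiles. A toy Python sketch of that idea follows; the target function is a stand-in, not a real system under test:

```python
import concurrent.futures
import statistics
import time

def timed_call(fn):
    # Measure one request's latency in milliseconds.
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0

def load_test(fn, requests=50, concurrency=5):
    # Fire `requests` calls across a thread pool and summarise latencies:
    # the same shape of result a JMeter or Gatling report produces.
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(lambda _: timed_call(fn), range(requests)))
    return {
        "count": len(latencies),
        "p50_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies)) - 1],
        "max_ms": latencies[-1],
    }

# Stand-in for a real HTTP call against the system under test.
report = load_test(lambda: time.sleep(0.001))
```

Real tools add ramp-up profiles, assertions on responses and distributed load generation on top of this skeleton, but the percentile report is the common currency of performance comparisons between an old and a new platform.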
Job Title: Product Architect. Job Location: London/Leeds/Edinburgh. Job Type: Perm.
About the FCA: The FCA regulates the conduct of 50,000 firms in the UK to ensure our financial markets are honest, fair and competitive. We do this to make sure markets work well for individuals, businesses and the economy as a whole.
The team/department: The Data & Analytics Product Group (DAPG) sits within the Regulatory Systems Department of the Data, Technology & Innovation (DTI) Division. The DAPG supports the delivery of the FCA's Digital and Data Strategies, optimising the FCA's performance as a digitally led regulator. The Analyse & Insight (A&I) product team (within DAPG) specifically supports the FCA's Data Science and Advanced Analytics functions and the FCA's Data Science strategy.
What you will get from the role: Stimulating, innovative and experimental work to solve the biggest challenges facing financial regulation, and the opportunity to make a tangible impact on the organisation. Exposure to new ideas and the opportunity to increase your knowledge and understanding of new technologies in financial regulation. Exposure to senior industry leaders and an opportunity to work with international regulators. Working on key strategic initiatives supporting the FCA's Data Science and AI Strategy.
The skills and experience you will have - Minimum: We are a signatory to the Government's Disability Confident scheme. This means that we will offer an interview to disabled candidates entering under the scheme, should they meet the minimum criteria for a role.
Experience with Amazon Web Services and AI/Data Science related components, e.g. AWS SageMaker, AWS Cognitive Services. Exposure to cloud technologies. Experience of microservices architecture design and implementation.
Essential: Experience with Jenkins, Git, Chef & Linux. Hands-on development background and ready to deliver as required. Security Cleared or eligible for Security Clearance. Strong stakeholder management. Designing resilient and scalable systems. Automated development using declarative CI/CD pipelines. Secure-by-design architecture. Approach and implementation in Data Science and AI industry trends.
If you are someone who is seeking that next challenge, and you have the experience and skills required, then please send me your CV. Our Recruitment Delivery Team are committed to offering an inclusive recruitment experience to all candidates. If you require any accommodations or adjustments as a result of disability, impairment, or health condition, please do not hesitate to let me know.
02/07/2024
Full time
If you are a Backend Scala consultant and you are available now, then I have a great opportunity for you. The role is for 6 months+ and the position is completely remote.
Job Title: Backend Scala Engineer
Job Description: As a Backend Scala Engineer, you will be responsible for designing, developing, and maintaining microservices-based applications with a strong focus on data handling and cloud integration. Your primary language will be Scala, and you will leverage AWS services to ensure scalable and efficient solutions.
Key Responsibilities:
Microservices Development: Design, develop, and maintain Back End microservices using Scala. Ensure high performance and scalability of the microservices architecture.
Cloud Integration: Utilize AWS services for deploying, managing, and scaling applications. Implement best practices for cloud-native development.
Data Handling: Work with various data storage solutions, optimizing data retrieval and storage. Ensure efficient data processing within the Back End system.
Must-Have Skills:
Scala: Strong proficiency in Scala programming. Experience with functional programming paradigms.
Microservices: In-depth knowledge of microservices architecture and design patterns. Proven experience in building and deploying microservices in production.
AWS: Proficiency with AWS services (e.g. EC2, S3, Lambda, RDS). Experience with cloud-based architecture and best practices.
Nice-to-Have Skills:
Redis: Knowledge of Redis for caching and in-memory data storage. Experience in integrating Redis with Back End applications.
DynamoDB: Experience with DynamoDB or other NoSQL databases. Understanding of designing scalable and performant data models in DynamoDB.
Join our team as a Backend Scala Engineer and use your expertise in Scala, microservices, and AWS to develop robust and scalable solutions. Your skills in Redis and DynamoDB will be a valuable asset in our innovative and dynamic environment.
Apply now to contribute to our Back End infrastructure and data handling capabilities! Darwin Recruitment is acting as an Employment Business in relation to this vacancy.
01/07/2024
Project-based
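The Redis nice-to-have in the listing above typically means the cache-aside pattern: check the cache, fall back to the store on a miss, then populate the cache with a TTL. A language-neutral sketch of the pattern in Python, with a plain dict standing in for both the Redis client and the source-of-truth store; all names are hypothetical:

```python
import time

class CacheAside:
    # Cache-aside: read through the cache, fall back to the store on a
    # miss, then cache the value with a TTL. A real service would use
    # redis-py against Redis and a client library against DynamoDB.

    def __init__(self, store, ttl_seconds=60.0):
        self.store = store      # slow source of truth (e.g. a database)
        self.ttl = ttl_seconds
        self.cache = {}         # stand-in for Redis: key -> (expiry, value)
        self.misses = 0

    def get(self, key):
        entry = self.cache.get(key)
        if entry is not None and entry[0] > time.monotonic():
            return entry[1]     # cache hit: the store is never touched
        self.misses += 1
        value = self.store[key]
        self.cache[key] = (time.monotonic() + self.ttl, value)
        return value

db = {"user:1": {"name": "Ada"}}
svc = CacheAside(db)
first = svc.get("user:1")   # miss: reads the store and caches the result
second = svc.get("user:1")  # hit: served from the cache
```

The TTL is what keeps the cache from serving stale data forever; choosing it is a trade-off between read load on the store and tolerance for staleness.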
*REMOTE WITHIN SWITZERLAND* If open to relocating, please note that only EU Citizens or valid Swiss Work Permit holders can be considered at this stage.
Job Title: Senior DevOps Engineer
Description: You will be part of a team of engineers working on cutting-edge technologies, combining DevOps and software engineering skills to build a foundation for healthcare solutions. The team believes individuals can make a difference and values innovation.
The perfect candidate: In addition, interpersonal skills are really important. As a distributed team working together and remotely, communication is essential.
Tasks & Responsibilities: Create and maintain command line and web/REST applications (design, document, develop, test and deploy). Create scripts, customizations and templates to ensure speed of delivery. AWS Serverless Architecture. Work with DevOps and DevSecOps teams. Champion automation and evangelize lean development. Improve CI/CD Release Automation.
Must-Haves: Proven experience, 3-5 years, in a similar role (*). AWS (EC2, EKS, Lambda, S3, API Gateway) (*). Infrastructure provisioning and testing (Terraform) (*). CI/CD (GitLab) (*). Scripting and cloud automation with Python/Shell (*). Kubernetes (*). Fluent English.
Nice to Have: Logging and monitoring (Prometheus and Grafana). Certifications (AWS/K8s).
Skills: AWS, EC2, EKS, Lambda, S3, API Gateway, Terraform, CI/CD, GitLab, Python, Shell, cloud automation, scripting, Kubernetes, Prometheus, Grafana, logging.
Job Title: Senior DevOps Engineer. Location: Basel, Switzerland. Job Type: Contract.
TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands.
If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and, as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice and to commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield or the Swiss-U.S. Privacy Shield. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request.
01/07/2024
Project-based
Principal Python Data Engineer (Architecture Programmer Developer Java Python Software Engineer Data Enterprise Engineering Developer Programmer AWS GCP Python Athena Glue Airflow Ignite JavaScript Agile Pandas NumPy SciPy Spark Dremio Apache Iceburg Iceberg PySpark MWAA Arrow DBT gRPC protobuf Snowflake TypeScript Manager Finance Trading Front Office Investment Banking Asset Manager Financial Services FX Fixed Income Equities Commodities Derivatives Hedge Fund) required by my asset management client in London.
You MUST have the following: Advanced ability as a Python Solutions Architect/Principal Engineer/Engineering Manager. Leadership experience: you must have led small teams on the delivery of projects. AWS (EC2, ECS, EKS, Glue). Java. SQL. Spark. MWAA/Airflow. Agile.
The following is DESIRABLE, not essential: Iceberg. DBT. Trading, Front Office finance.
Role: You will join a number of teams that are responsible for the core engineering of a large amount of financial trading data. The data is currently ingested into, and stored in, an AWS data lake, though this is being migrated to a data mesh architecture. You will lead a team of 4-5 engineers, in a very hands-on role, contributing towards this migration and working with AWS Glue, Athena, Python, Java, Iceberg, DBT, Arrow and Dremio.
20-30% of the role will be spent mentoring members of the team, conducting architectural and code reviews, implementing best practices, reporting to senior management and contributing towards technical strategy. They have a very flexible hybrid working set-up of 1-2 days/month in the office. Salary: £90k - £125k + 15% Guaranteed Bonus + 10% Pension
01/07/2024
Full time
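Glue crawlers and Athena both discover data through Hive-style partition paths in S3, which is the kind of layout the data-lake work above involves. A minimal sketch of grouping records by partition path before writing; the bucket name and record fields are invented for illustration:

```python
from collections import defaultdict
from datetime import date

# Hypothetical lake location, not a real bucket.
PREFIX = "s3://example-lake/trades"

def partition_key(record):
    # Hive-style partition path of the kind Glue crawlers register and
    # Athena uses for partition pruning.
    d = record["trade_date"]
    return f"{PREFIX}/year={d.year}/month={d.month:02d}/day={d.day:02d}/"

def bucket_records(records):
    # Group records by partition path before writing, so each output
    # object lands in exactly one partition.
    groups = defaultdict(list)
    for rec in records:
        groups[partition_key(rec)].append(rec)
    return dict(groups)

records = [
    {"trade_date": date(2024, 7, 1), "symbol": "EURUSD"},
    {"trade_date": date(2024, 7, 1), "symbol": "GBPUSD"},
    {"trade_date": date(2024, 6, 30), "symbol": "EURUSD"},
]
layout = bucket_records(records)
```

Keeping one partition per output object is what lets Athena skip irrelevant data at query time, and table formats like Iceberg build their manifest-based pruning on the same idea.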
Principal Python Data Engineer (Architecture Programmer Developer Java Python Software Engineer Data Enterprise Engineering Developer Programmer AWS GCP Python Athena Glue Airflow Ignite JavaScript Agile Pandas NumPy SciPy Spark Dremio Apache Iceburg Iceberg PySpark MWAA Arrow DBT gRPC protobuf Snowflake TypeScript Manager Finance Trading Front Office Investment Banking Asset Manager Financial Services FX Fixed Income Equities Commodities Derivatives Hedge Fund) required by my asset management client in London.
You MUST have the following: Advanced ability as a Python Solutions Architect/Principal Engineer/Engineering Manager. Leadership experience: you must have led small teams on the delivery of projects. AWS (EC2, ECS, EKS, Glue). Java. SQL. Spark. MWAA/Airflow. Agile.
The following is DESIRABLE, not essential: Iceberg. DBT. Trading, Front Office finance.
Role: You will join a number of teams that are responsible for the core engineering of a large amount of financial trading data. The data is currently ingested into, and stored in, an AWS data lake, though this is being migrated to a data mesh architecture. You will lead a team of 4-5 engineers, in a very hands-on role, contributing towards this migration and working with AWS Glue, Athena, Python, Java, Iceberg, DBT, Arrow and Dremio.
20-30% of the role will be spent mentoring members of the team, conducting architectural and code reviews, implementing best practices, reporting to senior management and contributing towards technical strategy. They have a very flexible hybrid working set-up of 1-2 days/month in the office. Salary: £125k - £155k + 15% Guaranteed Bonus + 10% Pension
01/07/2024
Full time
Principal Java Data Engineer (Architecture Programmer Developer Java Python Software Engineer Data Enterprise Engineering Developer Programmer AWS GCP Python Athena Glue Airflow Ignite JavaScript Agile Pandas NumPy SciPy Spark Dremio Apache Iceburg Iceberg PySpark MWAA Arrow DBT gRPC protobuf Snowflake TypeScript Manager Finance Trading Front Office Investment Banking Asset Manager Financial Services FX Fixed Income Equities Commodities Derivatives Hedge Fund) required by my asset management client in London.
You MUST have the following: Advanced ability as a Java Solutions Architect/Principal Engineer/Engineering Manager. Leadership experience: you must have led small teams on the delivery of projects. AWS (EC2, ECS, EKS, Glue). Python. SQL. Spark. MWAA/Airflow. Agile.
The following is DESIRABLE, not essential: Iceberg. DBT. Trading, Front Office finance.
Role: You will join a number of teams that are responsible for the core engineering of a large amount of financial trading data. The data is currently ingested into, and stored in, an AWS data lake, though this is being migrated to a data mesh architecture. You will lead a team of 4-5 engineers, in a very hands-on role, contributing towards this migration and working with AWS Glue, Athena, Python, Java, Iceberg, DBT, Arrow and Dremio.
20-30% of the role will be spent mentoring members of the team, conducting architectural and code reviews, implementing best practices, reporting to senior management and contributing to technical strategy. They have a very flexible hybrid working set-up of 1-2 days/month in the office. Salary: £125k-£155k + 15% Guaranteed Bonus + 10% Pension
01/07/2024
Full time
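For context on the Glue/Athena data-lake work described in the listing above: Athena typically queries data laid out under Hive-style partitioned object keys. A minimal, purely illustrative sketch in standard-library Python (the table name, date column and key layout are assumptions for the example, not the client's actual schema):

```python
from datetime import date

def partition_key(table: str, ds: date, part: int) -> str:
    """Build a Hive-style partitioned object key of the kind a
    Glue/Athena-queryable data lake scans (illustrative only)."""
    return f"{table}/ds={ds.isoformat()}/part-{part:05d}.parquet"

key = partition_key("trades", date(2024, 7, 1), 3)
# → "trades/ds=2024-07-01/part-00003.parquet"
```

Partitioning on a date column like this lets the query engine prune whole prefixes instead of scanning every object, which is the main cost lever in lake-style storage.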
Your new company
Working with a fast-growing company that utilises cutting-edge technologies to provide companies with insights, with an emphasis on information security. They have continued to push innovation into several well-known companies across the globe.

Your new role
You will be a key member of the team helping to deliver high-quality software solutions. This includes allocating and prioritising tickets related to incidents, problems, projects, or software changes. This role extends beyond technical tasks: the work you complete will help enhance customer performance and experience, directly contributing to the team's success. If you're passionate about problem-solving, collaboration and creating something truly brilliant, this could be the role for you.

What you'll need to succeed
- Proficiency in common web application languages such as PHP and TypeScript/JavaScript.
- Currency with modern development practices, including PWA, SPA, and component-based architectural patterns.
- Experience with frameworks like Laravel, React, .NET and Angular, as well as knowledge of data streaming tools, databases (MySQL, MS SQL, NoSQL), and RESTful APIs, is essential.
- Familiarity with DevOps, CI/CD pipelines, and cloud platforms (such as AWS and Google Cloud) is also desired.
- Experience with using Git for version control in fast-moving team environments.
- Ideally, some familiarity with GraphQL and an understanding of API security best practices.
- Passion for staying at the forefront of web development and working in a dynamic environment.

What you'll get in return
- Up to £60,000 DOE
- Company pension
- Gym membership
- Health and Wellbeing programme
- On-site parking
- Private medical insurance

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and as an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found on our website.
01/07/2024
Full time
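The web role above lists RESTful API experience alongside PHP/TypeScript frameworks. As a dependency-free illustration of the request-dispatch idea behind a REST endpoint (resource name, store shape and responses are invented for the example, not the employer's API):

```python
import json

def handle(method: str, path: str, store: dict) -> tuple[int, str]:
    """Dispatch a RESTful request against an in-memory store and
    return (status_code, JSON body). Illustrative sketch only."""
    parts = [p for p in path.split("/") if p]
    if method == "GET" and len(parts) == 2 and parts[0] == "users":
        user = store.get(parts[1])
        if user is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(user)
    return 405, json.dumps({"error": "unsupported"})

status, body = handle("GET", "/users/42", {"42": {"name": "Ada"}})
# → status 200, body '{"name": "Ada"}'
```

Real frameworks (Laravel, Express, ASP.NET) generalise exactly this mapping of method + path to a handler, plus middleware for the API-security concerns the ad mentions.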
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role*

A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming platform. They will need expert-level knowledge of Kafka and Flink, and a strong Java application development background. This architect will work on streaming across both on-prem and AWS cloud environments.

Responsibilities:
- Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
- Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
- Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
- Drive implementation of best practices for efficient data serialization, compression, and network communication.
- Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
- Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
- Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
- Bachelor's or Master's degree in an engineering discipline
- 10+ years of experience architecting mission-critical cloud and on-prem Real Time data streaming and event-driven architectures
- 10+ years of experience with Java
- 5+ years of specific Kafka and Flink experience
- 5+ years of Kubernetes experience
- Expert-level knowledge of Kafka
- Expert-level knowledge of Flink
- Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink
- Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines
24/06/2024
Full time
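For readers unfamiliar with the windowed-aggregation work the Kafka/Flink role above involves, here is a standard-library sketch of the tumbling-window counting pattern that Flink's windowed streams implement. It is illustrative only and uses none of Flink's actual APIs; the key names and timestamps are invented:

```python
from collections import defaultdict

def tumbling_counts(events, window_ms):
    """Count events per key in fixed-size (tumbling) windows,
    keyed by (key, window_start). A stdlib sketch of the pattern
    Flink's keyed tumbling windows implement -- not Flink itself."""
    windows = defaultdict(int)
    for key, ts in events:
        window_start = (ts // window_ms) * window_ms  # floor to window boundary
        windows[(key, window_start)] += 1
    return dict(windows)

events = [("orders", 1_000), ("orders", 1_500), ("orders", 2_200)]
counts = tumbling_counts(events, 1_000)
# → {("orders", 1000): 2, ("orders", 2000): 1}
```

A production Flink job adds what this sketch omits and the ad emphasises: checkpointed state for fault tolerance, event-time watermarks for late data, and parallel keyed partitions for scalability.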