Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

Prestigious enterprise firm is currently seeking a Senior Cloud Hosting Services Engineer with strong Azure experience. The candidate will be the IT Infrastructure team's cloud SME and will demonstrate knowledge and hands-on experience with Azure cloud tools and technologies. The Sr. Cloud Engineer reports to the Manager of Infrastructure Services. The ideal candidate will have extensive experience with Microsoft Azure, including PaaS and IaaS, and a strong understanding of Azure security best practices. Proficiency in creating and managing ARM and Bicep templates and supporting DevOps teams is essential. This role involves designing, implementing, and maintaining cloud-based solutions to meet our business needs.

Responsibilities:
- Design, implement, and manage scalable, secure, and reliable cloud infrastructure on Microsoft Azure.
- Develop and maintain Infrastructure as Code (IaC) using ARM and Bicep templates.
- Develop a skill set across three domains (Infrastructure, Containers, and Networking) to better support the client.
- Ensure adherence to Azure security best practices and compliance requirements.
- Collaborate with DevOps teams to support CI/CD pipelines and automation processes.
- Monitor and optimize cloud resources for performance, cost, and scalability.
- Troubleshoot and resolve issues related to cloud infrastructure and services.
- Implement and manage Azure PaaS and IaaS solutions, including on-premises server, storage, and database systems, among others.
- Independently and collaboratively lead client engagement workstreams focused on process improvement, optimization, and transformation.
- Communicate effectively and appropriately with both technical and non-technical audiences.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8+ years of professional experience with multiple cloud platforms and technologies, including PaaS and IaaS.
- Ability to provide technical guidance and support to DevOps teams.
- Extensive experience with ARM/Bicep templates and Azure security best practices.
- Experience with distributed computing, complex architecture design, leading large-scale projects, and mentoring junior team members.
- Familiarity with managed Kubernetes services such as Azure Kubernetes Service (AKS) and Azure Container Apps.
- Skilled in creating and managing Azure hub-and-spoke networks, using Network Virtual Appliances (NVAs) and route tables (UDRs), applying Network Security Groups (NSGs), and working with ExpressRoute circuits.
- Proven experience with the Microsoft Cloud Adoption Framework and Azure Well-Architected Framework.
- Experience managing Infrastructure as Code (IaC) templates through Azure DevOps (or a similar toolset), including use of Git, work items, branching, pull requests, and pipelines.
- Experience with monitoring, optimizing cloud resources, and creating custom Azure Policies.

Preferred Qualifications:
- Master's degree in Computer Science, Engineering, or a related field.
- Experience supporting AI/data-related projects with DevOps teams.
- Demonstrated experience with security and IT governance, including federated Identity and Access Management (IAM) solutions with MFA, as well as Cloud Access Security Broker tools, to govern secure access to resources.
- Familiarity with other Infrastructure as Code (IaC) tools such as Terraform or Ansible.
- Other public cloud experience (AWS, Google, etc.).
- Azure VMware Solution (AVS) experience.
12/03/2025
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

Prestigious financial institution is currently seeking a Manager of DevOps Engineering Support with AWS experience. The candidate will provide subject matter expertise for ongoing support of custom applications and third-party infrastructure deployed in on-prem and AWS environments, identify areas for improvement, allocate resources, and hire where appropriate to ensure successful support of these environments.

Responsibilities:
- Team leadership: lead and mentor a team of DevOps engineers, providing guidance and support to ensure the team's success and professional growth.
- Translate middle and senior management directives into workable policies.
- Monitor project status and take remedial action on projects behind schedule and/or over budget.
- Run the incident and problem management process for non-production environments.
- Manage L1/L2 support for non-production and production environments.
- Resolve complex support issues in production and non-production environments.
- Maintain an understanding of cloud-native applications running on Kubernetes within AWS.
- Assist production support and development staff in debugging environment defects.
- Create procedural and troubleshooting documentation related to cloud-native applications.
- Oversee development of complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform.

Supervisory Responsibilities:
- Direct the work of staff and conduct appropriate personnel actions (hiring, promotions, terminations, etc.) when necessary.
- Communicate and advise staff on administrative policies and procedures, technical problems, priorities and methods, and software change issues.
- Prepare and conduct employee reviews and monitor progress toward employee and management goals.
- Promote employee development by conducting career-planning sessions and scheduling employee training classes, conferences, and seminars.
- Ensure that proper funding is added to the department budget.

Qualifications:
- Scripting and coding.
- Network technologies.
- CI/CD tools such as Artifactory, Jenkins, and Git.
- Cloud-native applications, including Terraform experience.
- Technologies used to support microservices.
- Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in infrastructure-as-code tools such as Terraform or CloudFormation.
- Strong programming skills in Java or Python, and experience with containerization technologies such as Docker or Kubernetes.
- Experience with Kafka MRC.
- Understanding of software development methodologies and Agile practices.
- Experience with middleware technologies.
- Excellent analytical and problem-solving skills, with the ability to troubleshoot and identify the root cause of issues.
- Excellent verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.
- Bachelor's degree in a related area.
- 10-15 years of overall IT experience.
- Minimum 10 years' experience working in a distributed multi-platform environment.
- Minimum 3 years' management experience.
- Cloud certification a plus.
11/03/2025
Full time
Manager, Software Engineering - DevOps
Salary: Open + bonus
Location: Chicago, IL or Dallas, TX
Hybrid: 3 days onsite, 2 days remote

*We are unable to provide sponsorship for this role*

Qualifications:
- Bachelor's degree.
- 10-15 years of overall IT experience.
- Minimum 10 years' experience working in a distributed multi-platform environment.
- Minimum 3 years' management experience.

Technical:
- CI/CD tools such as Artifactory, Jenkins, and Git.
- Technologies used to support microservices.
- Experience with AWS cloud-based systems, including expertise in infrastructure-as-code tools such as Terraform or CloudFormation.
- Strong programming skills in Java or Python, and experience with containerization technologies such as Docker or Kubernetes.
- Experience with Kafka MRC.

Responsibilities:
- Direct the work of staff and conduct appropriate personnel actions (hiring, promotions, terminations, etc.) when necessary.
- Communicate and advise staff on administrative policies and procedures, technical problems, priorities and methods, and software change issues.
- Lead and mentor a team of DevOps engineers, providing guidance and support to ensure the team's success and professional growth.
- Translate middle and senior management directives into workable policies.
- Monitor project status and take remedial action on projects behind schedule and/or over budget.
- Run the incident and problem management process for non-production environments.
- Manage L1/L2 support for non-production and production environments.
- Resolve complex support issues in production and non-production environments.
11/03/2025
Full time
Senior Cybersecurity & IAM Architect (ForgeRock & PingOne)
Location: Edinburgh (preferred)/London | Hybrid (3 days onsite)
Work Type: Permanent

Are you an IAM expert looking for your next big challenge? We're working with a leading financial services company, and they need a Senior Cybersecurity & IAM Architect to help design and deliver top-notch identity solutions. If you have 10+ years of hands-on experience with ForgeRock products (AM, IDM, IG, DS) and deep knowledge of PingOne Advanced Identity Cloud, we'd love to hear from you!

What you'll be doing:
- Designing and building scalable IAM solutions that make security seamless.
- Integrating PingOne Advanced Identity Cloud with existing apps and infrastructure.
- Leading IAM operations and being the go-to person for L3/L4 support.
- Working with modern authentication protocols like OAuth 2.0, OpenID, SAML, and Kerberos.
- Collaborating on system integrations and high-availability environments.

What we're looking for:
- Solid experience with ForgeRock (AM, IDM, IG, DS) and Ping Identity.
- Strong skills in Java, JavaScript, Groovy scripting, REST APIs, and LDAP.
- Familiarity with CI/CD pipelines, DevOps tools, and cloud-based IAM.
- Some knowledge of web development (JavaScript, TypeScript, React, Angular) would be a plus.
- A natural problem solver who can translate business needs into technical solutions.

If you're someone who thrives on creating secure, efficient identity systems and enjoys leading teams and projects, this is the role for you. Ready to take the next step? Apply now or send a copy of your updated CV. We'd love to chat!

Randstad Technologies Ltd is a leading specialist recruitment business for the IT & Engineering industries. Please note that due to a high level of applications, we can only respond to applicants whose skills and qualifications are suitable for this position. No terminology in this advert is intended to discriminate against any of the protected characteristics that fall under the Equality Act 2010. For the purposes of the Conduct Regulations 2003, when advertising permanent vacancies we are acting as an Employment Agency, and when advertising temporary/contract vacancies we are acting as an Employment Business.
11/03/2025
Full time
Engineering Lead
Permanent
Location: Leeds, hybrid working - 2 days a week on site (some flexibility in this)

As the Engineering Lead you will be responsible for working with the architecture, infrastructure, engineering, platform, and information security teams to ensure consistency of engineering best practices. You will be directly responsible for the software and test engineers; in the role you will mentor and coach individuals throughout the engineering function as well as working collaboratively across the delivery and platform teams.

- Lead a team and community of engineers, providing technical guidance and mentorship and ensuring alignment with delivery goals, ways of working, and industry best practices.
- Ensure alignment to Agile delivery team ways of working and best practices: champion Agile engineering best practices, processes, and tools in support of DevOps and platforms.
- Ensure secure code by design: ensure DevSecOps practices are embedded into the software and infrastructure pipelines.
- Review and implement improvements to existing code review practices and tooling.
- Ensure coding practices are aligned to APIM and observability/monitoring tooling.
- Ensure all engineering compliance, risks, and issues are understood and highlighted to senior management as they arise.
- Collaborate with Cloud/DevOps engineers to refactor Azure DevOps (ADO) projects, update CI/CD pipeline templates as necessary, and configure pull request templates to ensure code quality and security.
- Keep abreast of advancements in engineering practices and recommend change where required.
- Detailed/low-level design: participate in design workshops to ensure that engineering needs are met.

What we are looking for:

Experience:
- Strong practical technical background in the following core tech: Azure Cloud, Azure DevOps, observability tooling, Azure Data Platform, .NET/C#, PowerShell, Power Platform, Terraform, YAML, Dynamics 365, SharePoint Online, Office 365.
- Proven experience of leading engineering teams and communities in a commercial environment.
- Experience of using Azure DevOps.
- Experience of working within a variety of project delivery methodologies, including agile and waterfall.

Skills:
- Ability to engage and challenge at all levels; strong influencing skills coupled with tenacity and resilience.
- Good coaching and development skills - the ability to help develop and grow engineering communities.
- Good organisational skills.
- Ability to work to tight deadlines.
- Excellent attention to detail.
- Ability to communicate clearly and effectively.
- Working as part of a team, contributing to achieving team targets.
- The ability to work effectively across several concurrent projects.
- Can operate at both a big-picture and a detail level, with the ability to act as an agent for change for both.

Knowledge and Qualifications:
- Certified Software Engineering Master (SEMC)/Microsoft Certified Solution Developer (MCSD) or equivalent.
- Any relevant Microsoft engineering or developer certifications, such as AZ-400 (Designing and Implementing Microsoft DevOps Solutions) or AZ-104 (Microsoft Azure Administrator).
- Proven web applications development experience - C#, .NET, .NET Core, ASP.NET, MVC, Web API, Blazor, T-SQL, Bootstrap, JavaScript.

If you are interested and looking for your next role, please apply with a copy of your CV or email (see below).
11/03/2025
Full time
Data Engineering Solutions Architect (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City.

You MUST have the following:
- Good experience as a Java solutions architect.
- Excellent design and architecture ability for systems involving large amounts of data.
- Advanced Java.
- Amazon Web Services (AWS) or GCP.
- CI/CD pipelines.
- TDD.
- Enterprise-scale SQL or Oracle.
- Terraform, Kubernetes, Docker.

The following is DESIRABLE, not essential:
- Experience delivering projects in data management, governance, and regulation.
- Python.
- An understanding of data mesh architecture.
- Kafka, Iceberg, Hoodie.

Role: You will be hired as the technical lead of a new team that is being assembled to build a new data management platform on AWS. The greenfield project will include the automation of data catalogue population and the implementation of data governance policies. You will be the solutions architect in a team that has a senior developer, a mid-level developer, and a business lead. You and the business lead will share responsibility for the team: the business lead will be responsible for the interpretation of data regulation, the building of road maps and strategy, and the creation of policies, while you will own the design, architecture, and technical delivery of that strategy and those data policies. Over the course of the next year, you will hire more developers into the team as the workload grows.

The technology is Java on AWS with some Python. You will be very hands-on and, as part of a small team, you will also be involved in DevOps and testing. You will be confident with CI/CD pipelines, IaC, and containerization, and comfortable with enterprise-scale SQL and/or Oracle databases. As the data environment moves from an AWS-based data lake to a data mesh architecture, any understanding of data mesh would also be highly desirable. You will also contribute to the two other teams in the data engineering space within the company - the data platform team, which operates a Hoodie-based data lake, and the team working with Iceberg and Kafka to create the new data mesh architecture - but the data governance programme will be your priority.

Hours are 8.30am - 5.30pm. Hybrid working is 2 days/week in the office. Comp: $320k - $420k + 401k.
11/03/2025
Full time
Lead Data Engineer (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City.

You MUST have the following:
- Good experience as a Lead Data Engineer/Data Engineering Solutions Architect.
- Excellent design and architecture ability for systems involving large amounts of data.
- Advanced Java.
- Amazon Web Services (AWS) or GCP.
- CI/CD pipelines.
- TDD.
- Enterprise-scale SQL or Oracle.
- Terraform, Kubernetes, Docker.

The following is DESIRABLE, not essential:
- Experience delivering projects in data management, governance, and regulation.
- Python.
- An understanding of data mesh architecture.
- Kafka, Iceberg, Hoodie.

Role: You will be hired as the technical lead of a new team that is being assembled to build a new data management platform on AWS. The greenfield project will include the automation of data catalogue population and the implementation of data governance policies. You will be the solutions architect in a team that has a senior developer, a mid-level developer, and a business lead. You and the business lead will share responsibility for the team: the business lead will be responsible for the interpretation of data regulation, the building of road maps and strategy, and the creation of policies, while you will own the design, architecture, and technical delivery of that strategy and those data policies. Over the course of the next year, you will hire more developers into the team as the workload grows.

The technology is Java on AWS with some Python. You will be very hands-on and, as part of a small team, you will also be involved in DevOps and testing. You will be confident with CI/CD pipelines, IaC, and containerization, and comfortable with enterprise-scale SQL and/or Oracle databases. As the data environment moves from an AWS-based data lake to a data mesh architecture, any understanding of data mesh would also be highly desirable. You will also contribute to the two other teams in the data engineering space within the company - the data platform team, which operates a Hoodie-based data lake, and the team working with Iceberg and Kafka to create the new data mesh architecture - but the data governance programme will be your priority.

Hours are 8.30am - 5.30pm. Hybrid working is 2 days/week in the office. Comp: $260k - $340k + 401k.
11/03/2025
Full time
Data Engineering Manager (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City.

You MUST have the following:
- Good experience as a hands-on Data Engineering Manager/Architect/Technical Lead.
- Excellent design and architecture ability for systems involving large amounts of data.
- Advanced Java.
- Amazon Web Services (AWS) or GCP.
- CI/CD pipelines.
- TDD.
- Enterprise-scale SQL or Oracle.
- Terraform, Kubernetes, Docker.

The following is DESIRABLE, not essential:
- Experience delivering projects in data management, governance, and regulation.
- Python.
- An understanding of data mesh architecture.
- Kafka, Iceberg, Hoodie.

Role: You will be hired as the technical lead and co-manager of a new team that is being assembled to build a new data management platform on AWS. The greenfield project will include the automation of data catalogue population and the implementation of data governance policies. You will be the lead engineer/manager/solutions architect in a team that has a senior developer, a mid-level developer, and a business lead. You and the business lead will share responsibility for the team: the business lead will be responsible for the interpretation of data regulation, the building of road maps and strategy, and the creation of policies, while you will own the design, architecture, and technical delivery of that strategy and those data policies. Over the course of the next year, you will hire more developers into the team as the workload grows.

The technology is Java on AWS with some Python. You will be very hands-on and, as part of a small team, you will also be involved in DevOps and testing. You will be confident with CI/CD pipelines, IaC, and containerization, and comfortable with enterprise-scale SQL and/or Oracle databases. As the data environment moves from an AWS-based data lake to a data mesh architecture, any understanding of data mesh would also be highly desirable. You will also contribute to the two other teams in the data engineering space within the company - the data platform team, which operates a Hoodie-based data lake, and the team working with Iceberg and Kafka to create the new data mesh architecture - but the data governance programme will be your priority.

Hours are 8.30am - 5.30pm. Hybrid working is 2 days/week in the office. Comp: $320k - $420k + 401k.
11/03/2025
Full time
About the Role
A leading global manufacturer is undergoing a digital transformation, leveraging SAP S/4HANA and advanced Product Lifecycle Management (PLM) systems to enhance business processes and drive innovation. As a Digital Business Partner, you will align digital strategies with business objectives, ensuring seamless collaboration between Equipment Portfolio & Innovation teams and the Digital & Technology division.

Key Responsibilities
- Partner with senior stakeholders to align business and digital strategies.
- Develop and drive a digital roadmap that enhances PLM, ERP, and advanced design capabilities (e.g., Model-Based Engineering, Digital Twin, IIoT).
- Lead the adoption of digital solutions, measuring their impact and driving continuous improvement.
- Oversee technology implementation, working with SAP, Siemens Teamcenter, SolidWorks, and other key systems.
- Provide thought leadership, leveraging external insights to drive innovation.
- Lead a global, cross-functional team, ensuring talent development and fostering collaboration.

Skills & Experience
- 10+ years of experience in digital transformation, business partnering, or IT leadership in a manufacturing environment.
- Strong background in PLM systems, CAD integrations, and ERP systems (SAP S/4HANA preferred).
- Experience with large-scale technology programs using DevOps and waterfall methodologies.
- Deep understanding of data migration, integration, and advanced design tools.
- Proven leadership in stakeholder management, IT operations (AMS), and change management.
- Fluent in English; French and/or Italian is a plus.
10/03/2025
Project-based
Senior DevOps & CI/CD Engineer position to be filled with our client in the banking sector in St. Gallen/Zurich. Tasks: Support the design and implementation of a new CI/CD standard based on GitLab Automate development and operations processes to improve the efficiency and quality of software delivery Collaborate on the design and implementation of monitoring solutions for applications and infrastructure Develop and optimise self-service mechanisms that let developers manage their environments efficiently Advise and support internal teams in adopting and using new DevOps and automation tools Integrate and manage security and compliance requirements within the CI/CD pipelines Analyse existing processes and identify potential improvements in deployment and release management Troubleshoot and support the development teams with CI/CD and automation issues Your skills: In-depth experience with CI/CD pipelines in GitLab and their architecture Comprehensive knowledge of automation tools and scripting languages (eg Bash, Ansible, Terraform) Experience with container orchestration and cloud technologies, in particular OpenShift/Kubernetes Knowledge of monitoring and logging tools (eg Splunk, Dynatrace, Prometheus, Grafana) Experience in infrastructure management Understanding of modern software development processes and their automation (eg GitOps, DevSecOps) Strong communication skills for working with developers, architects and stakeholders Experience with security and compliance requirements in a CI/CD context is an advantage Location: St. Gallen or Zurich (hybrid) Working model: Hybrid (3 days on-site, 1-2 days home office) Sector: Banking Start: 01.04. or 01.05.2025 Project duration: Until 31.12.2025 with an option to extend Workload: 80-100% REF: 22687 Take the next step and send us your CV along with a telephone number where we can reach you during the day. Due to Swiss employment legislation, we can only consider applications from Swiss citizens, EU citizens and persons holding a Swiss work permit. Ukrainian refugees are warmly welcome, and we will support you along the way. We welcome applications from people of all genders, all working ages, all sexual orientations, all forms of personal expression, all ethnic backgrounds and all religious beliefs. Details of your gender or a photo are therefore not required in your application. Due to client requirements, we do need information on your marital status, nationality, date of birth and a valid Swiss work permit. For applicants with disabilities, we are happy to explore possible solutions together with our end client.
10/03/2025
Project-based
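As an illustration of the self-service mechanisms this listing mentions (not part of the advert itself): developer-facing tooling can drive GitLab through its API. A minimal sketch using the python-gitlab library, with the server URL, token and project path as placeholders:

```python
# Hypothetical self-service helper: trigger a GitLab CI/CD pipeline for a
# chosen environment. URL, token and project path are placeholders.
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="<token>")
project = gl.projects.get("platform/payments-service")  # assumed project path

pipeline = project.pipelines.create({
    "ref": "main",
    "variables": [{"key": "DEPLOY_ENV", "value": "staging"}],
})
print(f"Pipeline #{pipeline.id} started: {pipeline.web_url}")
```

A real implementation would sit behind proper authentication and expose only vetted parameters to developers.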
Senior DevOps Systems Engineer (OpenShift & Red Hat) This is a crucial opportunity for a highly skilled Senior DevOps Systems Engineer with expertise in OpenShift & Red Hat to contribute to large-scale transformation projects that impact millions. As a Senior DevOps Systems Engineer (OpenShift & Red Hat), the responsibilities include managing the OpenShift Container Platform (OCP) and associated tools such as Red Hat Service Mesh, GitHub, Helm, Ansible, and Kafka. The role involves providing IT support for servers, storage systems, and intricate infrastructures, including VMware vSphere, vSAN, and Horizon. Additionally, it requires the administration of servers and operating systems, including UNIX, Linux, and Windows, as well as overseeing backup and restore solutions. To be successful in this role, candidates must possess strong experience in DevOps, Systems Engineering, or a related discipline. Proficiency in managing OpenShift, VMware vSphere, and related tools such as Red Hat Service Mesh, GitHub, Ansible, and Kafka is essential. Hands-on experience with backup solutions, blade systems, Oracle infrastructure, and Kubernetes is also required. While not mandatory, a Red Hat OpenShift I: Containers & Kubernetes certification would be advantageous. This role offers the chance to work on impactful projects that enhance essential services for millions. It is a full-time freelance role with strong long-term prospects, as the project will last multiple years. Please apply today!
10/03/2025
Project-based
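For flavour, and not taken from the advert: day-to-day OCP administration of the kind described is often scripted against the Kubernetes API. A minimal sketch using the official Python client, assuming a valid kubeconfig:

```python
# Hypothetical operational check: list pods that are not Running/Succeeded
# across an OpenShift/Kubernetes cluster. Assumes a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```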
Global Network Operations Manager - Network Automation, SRE & DevOps Seeking a Senior Network Leader to take ownership of Global Network Operations, Site Reliability Engineering & Network Service Design. We are specifically looking for experienced individuals from the world of Finance & Banking, Capital Markets & Live Market Data, ideally from low-latency 'always on' networking environments. To be considered for this role, candidates must have experience working at global enterprise businesses, supporting network operations and SRE, Service Management, Monitoring & Network Automation/DevOps supporting WAN & Edge Connectivity. To be shortlisted, you'll have a strong technical background within Network Engineering & Network Design, both as a Technical Networking SME and as a Services Manager, taking ownership of key vendor relationships and displaying excellent internal & external stakeholder management skills. You must be able to demonstrate excellent team management & leadership skills, be responsible for budget planning & forecasting, and be comfortable facing off to senior leaders within the business, all whilst driving product-aligned, service-focused ways of working in line with agile methodologies, eg Scrum & Kanban.
10/03/2025
Full time
Senior Quality Assurance Engineer (m/f/d) with SAP Know-How - Projekt-Nr.: 44339 We are seeking a Senior Quality Assurance Engineer to join a project in Barcelona. The role involves behavioral testing, automated testing, and working with DevOps tools to ensure the quality of single-page applications and SAP E-commerce systems. Location: Barcelona Project Duration: 07.03.2025 - 31.12.2025 Person-Days: 179 / Workload: 100% Industry: Industry Submission Deadline: ASAP Your tasks Review requirements, specifications, and technical design documents to provide timely and meaningful feedback Create detailed, comprehensive, and well-structured test plans and cases to structure the testing process Provide estimates, prioritize, plan, and coordinate testing activities to align with the development cycle Design, develop, and execute automation scripts using open-source tools Identify, record, and document issues to ensure developers can resolve them effectively Perform thorough regression testing to verify that bugs or fixes are appropriately resolved Take a user-centered approach to ensure requirements and tests cover the right things for usability and compliance Liaise with internal teams to identify system requirements and ensure compliance with quality and cybersecurity standards Track quality assurance metrics, including defect densities and open defect counts Collaborate with cross-functional teams to ensure adherence to SCRUM practices and agile methodologies Must-have competences Experience in behavioral testing of user experiences Proficiency in automated testing tools and automated test script development Experience with DevOps tools such as JIRA, X-Ray, and Confluence Agile mindset and experience working as part of a SCRUM team Experience with testing single-page applications with complex user flows Strong problem-solving skills and critical thinking Excellent communication and interpersonal skills Ability to collaborate effectively with team members and stakeholders Nice-to-have competences Experience with SAP E-commerce, SAP Hybris, or SAP Commerce Cloud Knowledge of integration with other systems Proactive attitude and willingness to learn Self-improvement in tackling problems Transparency and clarity in work processes Additional information The position is based in Barcelona, with a starting date as soon as possible.
10/03/2025
Project-based
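To illustrate the automated-testing side of this role (the advert names no specific framework, so the choice of Selenium, the URL and the locators are all invented for the example): a minimal browser test for a single-page application flow using the open-source Selenium bindings for Python:

```python
# Hypothetical automated check for a single-page application login flow.
# The URL and element locators are illustrative, not from the advert.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def test_login_flow():
    driver = webdriver.Chrome()
    try:
        driver.get("https://shop.example.com")          # assumed SPA under test
        driver.find_element(By.ID, "username").send_keys("qa-user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login").click()
        # SPAs render asynchronously, so wait for a post-login element
        WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.CSS_SELECTOR, ".account-menu"))
        )
    finally:
        driver.quit()
```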
We are looking for a Senior DevOps Application Engineer to join a high-impact team managing real-time critical applications in Belgium. This is a fantastic opportunity to work on a long-term project (60 months), leveraging the latest DevOps automation, cloud technologies, and infrastructure security best practices. You'll play a key role in CI/CD pipeline automation, containerization, and infrastructure optimization, working with Agile development teams to enhance scalability, reliability, and system performance. Responsibilities: Automate CI/CD pipelines using Jenkins, GitLab, and Ansible Manage Kubernetes, Docker, and OpenShift environments Optimize infrastructure security, performance, and availability Collaborate with cross-functional teams on real-time application support Maintain and improve Linux-based systems and cloud environments Implement scripting and automation with Python and Bash Requirements: 8+ years of experience in DevOps engineering Strong expertise in CI/CD tools (Ansible & AWX) Hands-on experience with Kubernetes, Docker, and OpenShift Deep knowledge of Linux administration and infrastructure automation Strong scripting skills (Bash, Python) Experience with networking, security, and high-availability systems If you're a DevOps expert looking for a long-term contract in a mission-critical environment, let's chat!
07/03/2025
Project-based
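As a flavour of the Python scripting this advert asks for (the endpoints are invented, and the advert does not prescribe this design): a minimal post-deployment smoke check that a CI/CD stage could run against real-time services:

```python
# Hypothetical post-deployment smoke check for real-time services.
# Endpoints are placeholders; a failure exits non-zero so a CI/CD job fails.
import sys
import requests

SERVICES = {
    "api":    "https://api.internal.example/health",
    "ingest": "https://ingest.internal.example/health",
}

failed = []
for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        if resp.status_code != 200:
            failed.append(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        failed.append(f"{name}: {exc}")

if failed:
    print("Health check failed:", *failed, sep="\n  ")
    sys.exit(1)
print("All services healthy.")
```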
We are currently looking on behalf of one of our important clients for a Senior Data Engineer. The role is a permanent position based in Zurich Canton & comes with a good home-office allowance. Your role: Hold responsibility for designing, building & maintaining the Data Infrastructure. Contribute to Solution Architecture build & design. Provide valuable expertise to the data science team to develop next-generation monitoring algorithms. Ensure that data pipelines deliver high-quality data in a robust, economical & scalable way. Your Skills & Experience: At least 5 years of relevant professional experience in Cloud Data Engineering, preferably on Azure. Strong experience in Big Data processing frameworks (Apache Spark), CI/CD (GitHub Actions) & DevOps. Skilled & experienced in Python & SQL. Any experience in developing IoT Solutions is considered advantageous. Your Profile: Completed university degree in Computer Science or a similar area. Motivated, dynamic & innovative. Fluent in English (spoken & written). Any German language skills are considered very advantageous.
07/03/2025
Full time
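By way of illustration only (the paths, container names and columns are invented, and the advert does not prescribe this design): a minimal PySpark job of the kind such a role involves, reading raw events, applying quality filters, and writing curated, partitioned output:

```python
# Hypothetical sketch of a curation pipeline: read raw sensor events,
# enforce basic quality rules, and write partitioned Parquet.
# Storage paths and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-curation").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/sensors/")

curated = (
    raw.dropDuplicates(["device_id", "event_ts"])          # de-duplicate events
       .filter(F.col("reading").isNotNull())               # drop empty readings
       .withColumn("event_date", F.to_date("event_ts"))    # derive partition key
)

(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("abfss://curated@examplelake.dfs.core.windows.net/sensors/"))
```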
Knowledge Engineer, Fully Remote, £80,000 - £100,000 per annum My client, a leading AI solutions company, is seeking a Mid-Senior Python back-end engineer with a passion for knowledge graphs and semantic web technologies. In this role, you will own the full back-end development for an RDF-intensive platform - designing and optimising systems around triple stores (AWS Neptune), real-time data processing and validation with SHACL, and advanced query capabilities. You will integrate AI-driven SPARQL generation models (LLMs/NLP) to enable intelligent querying of the knowledge graph. Working in a cross-functional squad of 3-8 team members using a Lean Kanban approach, you'll collaborate closely with product, data scientists, and DevOps to deliver high-quality features in a fast-paced, agile environment. Key Responsibilities: Design and Develop Knowledge Graph Backends: Build robust back-end services to manage RDF data in triple stores (AWS Neptune) and vector embeddings in Milvus. Ensure real-time processing of graph data, including on-the-fly validation with SHACL to maintain data integrity. SPARQL Query Implementation & AI Integration: Create efficient SPARQL queries and endpoints for data retrieval. Integrate NLP/AI models (eg Hugging Face transformers, OpenAI APIs, LlamaIndex AgentFlow) to translate natural language into SPARQL queries, enabling AI-driven query generation and semantic search. API & Microservices Development: Develop and maintain RESTful APIs and GraphQL endpoints (using FastAPI or Flask) to expose knowledge graph data and services. Follow microservices architecture best practices to ensure components are modular, scalable, and easy to maintain. Database & State Management: Manage data storage solutions including PostgreSQL (for application/session state) and caching layers as needed. Use SQLAlchemy or a similar ORM for efficient database interactions and maintain data consistency between the relational and graph data stores. Performance Optimisation & Scalability: Optimise SPARQL queries, data indexing (including vector indices in Milvus), and service architecture for low-latency, real-time responses. Ensure the system scales to handle growing knowledge graph data and high query volumes. DevOps and Deployment: Collaborate with DevOps to containerise and deploy services using Docker and Kubernetes. Implement CI/CD pipelines for automated testing and deployment. Monitor services on cloud platforms (AWS/Azure) for reliability, and participate in performance tuning and troubleshooting as needed. Team Collaboration: Work closely within a small, cross-functional squad (engineers, QA, product, data scientists) to plan and deliver features. Participate in Lean Kanban rituals (eg stand-ups, continuous flow planning) to ensure steady progress. Mentor junior developers when necessary and uphold best practices in code quality, testing, and documentation. Required Skills and Experience: Programming Languages: Strong proficiency in Python (back-end development focus). Solid experience writing and optimising SPARQL queries for RDF data. Knowledge Graph & Semantic Web: Hands-on experience with RDF and triple stores, ideally AWS Neptune or similar graph databases. Familiarity with RDF schemas/ontologies and concepts like triples, graphs, and URIs. SHACL & Data Validation: Experience using SHACL (Shapes Constraint Language) or similar tools for real-time data validation in knowledge graphs. Ability to define and enforce data schemas/constraints to ensure data quality.
Vector Stores: Practical knowledge of vector databases such as Milvus (or alternatives like FAISS, Pinecone) for storing and querying embeddings. Understanding of how to integrate vector similarity search with knowledge graph data for enhanced query results. Frameworks & Libraries: Proficiency with libraries like RDFLib for handling RDF data in Python and PySHACL for running SHACL validations. Experience with SQLAlchemy (or other ORMs) for PostgreSQL. Familiarity with LlamaIndex (AgentFlow) or similar frameworks for connecting language models to data sources. API Development: Proven experience building back-end RESTful APIs (FastAPI, Flask or similar) and/or GraphQL APIs. Knowledge of designing API contracts, versioning, and authentication/authorization mechanisms. Microservices & Architecture: Understanding of microservices architecture and patterns. Ability to design decoupled services and work with message queues or event streams if needed for real-time processing. AI/ML Integration: Experience integrating NLP/LLM models (Hugging Face transformers, OpenAI, etc.) into applications. Specifically, comfort with leveraging AI to generate or optimise queries (eg, natural language to SPARQL translation) and working with frameworks like LlamaIndex to bridge AI and the knowledge graph. Databases: Strong SQL skills and experience with PostgreSQL (for transactional data or session state). Ability to write efficient queries and design relational schemas that complement the knowledge graph. Basic understanding of how relational data can link to graph data. Cloud & DevOps: Experience deploying applications on AWS or Azure. Proficiency with Docker for containerisation and Kubernetes for orchestration. Experience setting up CI/CD pipelines (GitHub Actions, Jenkins, or similar) to automate testing and deployment. Familiarity with cloud services (AWS Neptune, S3, networking, monitoring tools etc.) is a plus. Agile Collaboration: Comfortable working in an Agile/Lean Kanban software development process. Strong collaboration and communication skills to function effectively in a remote or hybrid work environment. Ability to take ownership of tasks and drive them to completion with minimal supervision, while also engaging with the team for feedback and knowledge sharing.
07/03/2025
Full time
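Since this advert explicitly names RDFLib and PySHACL, here is a minimal sketch of the validate-then-query pattern it describes; the data, shapes and query are invented for the example:

```python
# Hypothetical sketch: validate incoming RDF with SHACL (pySHACL), then run
# a SPARQL query over it (RDFLib). Data, shapes and query are illustrative.
from rdflib import Graph
from pyshacl import validate

data = Graph().parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice a ex:Person ; ex:name "Alice" .
""", format="turtle")

shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/> .
    ex:PersonShape a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [ sh:path ex:name ; sh:minCount 1 ] .
""", format="turtle")

# validate() returns (conforms, results_graph, results_text)
conforms, _, report_text = validate(data, shacl_graph=shapes)
print("Conforms:", conforms)

for row in data.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE { ?p a ex:Person ; ex:name ?name . }
"""):
    print(row.name)
```

In production the same pattern would run against the triple store rather than an in-memory graph, with the SPARQL generated upstream by the LLM layer the advert mentions.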
Data Engineering Manager (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City. You MUST have the following: Good experience as a hands-on Data Engineering Manager/Architect/Technical Lead Excellent design and architecture ability for systems involving large amounts of data Advanced Java Amazon Web Services (AWS) or GCP CI/CD pipelines TDD Enterprise-scale SQL or Oracle Terraform, Kubernetes, Docker The following is DESIRABLE, not essential: Experience delivering projects in data management, governance and regulation Python An understanding of data mesh architecture Kafka, Iceberg, Hoodie Role: Data Engineering Manager (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City. You will be hired to be the technical lead and co-manager of a new team that is being assembled to build a new data management platform on AWS. The greenfield project will include the automation of data catalogue population and the implementation of data governance policies. You will be the lead engineer/manager/solutions architect in a team that has a senior developer, a mid-level developer and a business lead. You and the business lead will share responsibility for the team. He will be responsible for the interpretation of data regulation, the building of road maps and strategy, and the creation of policies. You will do the design, architecture and technical delivery of this strategy and his data policies. Over the course of the next year, you will hire more developers into the team as the workload grows. The technology is Java on AWS with some Python. You will be very hands-on and, as part of a small team, you will also be involved in DevOps and testing. You will be confident with CI/CD pipelines, IaC and containerization. You will also be comfortable with enterprise-scale SQL and/or Oracle databases. As the data environment moves from an AWS-based data lake to a data mesh architecture, any understanding of data mesh would also be highly desirable. You will also contribute to the two other teams in the data engineering space within the company (the data platform team, which operates a Hoodie-based data lake, and the team working with Iceberg and Kafka to create the new data mesh architecture), but the data governance programme will be your priority. Hours are 8.30am - 5.30pm. Salary: $220k - $260k + 25% Bonus + $25k Share Options
07/03/2025
Full time
Lead Data Engineer (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City. You MUST have the following: Good experience as a Lead Data Engineer/Data Engineering Solutions Architect Excellent design and architecture ability for systems involving large amounts of data Advanced Java Amazon Web Services (AWS) or GCP CI/CD pipelines TDD Enterprise-scale SQL or Oracle Terraform, Kubernetes, Docker The following is DESIRABLE, not essential: Experience delivering projects in data management, governance and regulation Python An understanding of data mesh architecture Kafka, Iceberg, Hoodie Role: Lead Data Engineer (Architecture Architect Solutions Java Python Automation Data Lake Datalake Data Mesh CI/CD Big Data AWS SQL Oracle Java Kafka Apache Iceberg Hoodie Finance Trading Financial Services Banking Remote Working Governance Management Regulation) required by our financial services client in Manhattan, New York City. You will be hired to be the technical lead of a new team that is being assembled to build a new data management platform on AWS. The greenfield project will include the automation of data catalogue population and the implementation of data governance policies. You will be the solutions architect in a team that has a senior developer, a mid-level developer and a business lead. You and the business lead will share responsibility for the team. He will be responsible for the interpretation of data regulation, the building of road maps and strategy, and the creation of policies. You will do the design, architecture and technical delivery of this strategy and his data policies. Over the course of the next year, you will hire more developers into the team as the workload grows. The technology is Java on AWS with some Python. You will be very hands-on and, as part of a small team, you will also be involved in DevOps and testing. You will be confident with CI/CD pipelines, IaC and containerization. You will also be comfortable with enterprise-scale SQL and/or Oracle databases. As the data environment moves from an AWS-based data lake to a data mesh architecture, any understanding of data mesh would also be highly desirable. You will also contribute to the two other teams in the data engineering space within the company (the data platform team, which operates a Hoodie-based data lake, and the team working with Iceberg and Kafka to create the new data mesh architecture), but the data governance programme will be your priority. Hours are 8.30am - 5.30pm. Salary: $190k - $220k + 25% Bonus + $25k Share Options
07/03/2025
Full time