This job ad is no longer available. Below are some job offers similar to the one you searched for.

52 jobs available

Current search
ci cd solutions engineer
Request Technology - Craig Johnson
Principal Technology Platform Engineer
Request Technology - Craig Johnson Dallas, Texas
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

A prestigious financial company is currently seeking a Principal Technology Platform Engineer. The candidate will be a senior member of the team, responsible for collaborating with stakeholders, partner teams, and solutions architects to research and engineer available technologies as part of a comprehensive, requirements-driven solution design. The candidate will develop technology engineering requirements and lead proof-of-concept and laboratory testing efforts using modern approaches to process and automation.

Responsibilities:
* Key contributor to the technology platform design, testing, and implementation process, introducing new technology and improving existing technology initiatives for hybrid cloud and on-premises infrastructure within the data centers.
* Define and implement testing and success criteria for platforms, products, and technologies to ensure alignment with business, security, and architecture objectives.
* Lead and participate in exploratory proof-of-concept engagements and technology stress testing to determine solution feasibility and stability.
* Collaborate with partner teams across technology, security, and business to provide technical consultation as part of projects and daily business activities.
* Create technical knowledge and guideline documentation for new and existing technologies to assist partner teams with knowledge transfer for execution and operations.
* Develop and maintain scalable DevSecOps pipelines, including CI/CD, Infrastructure as Code, and automated security scanning.
* No direct supervision, but the candidate will provide mentorship to members of the team.

Qualifications:
* Excellent oral and written communication.
* Ability to think strategically and map architectural decisions and recommendations to business needs.
* Ability to work independently and collaboratively with local and remote employees, vendors, and consultants.
* Must possess core values including (but not limited to) collaboration, credibility, trust, adaptability, and a commitment to doing the right thing.
* Proven track record of collaborating cross-functionally and delivering impactful technical solutions.

Technical Skills:
* Experience developing CI/CD workflows using tools such as GitHub Actions, Jenkins, Azure DevOps Pipelines, AWS CodePipeline, etc.
* Familiarity with GitOps-driven deployment tooling such as ArgoCD, FluxCD, etc.
* In-depth knowledge of observability and enterprise-level monitoring, logging, and alerting solutions using tools such as Prometheus, Elasticsearch, Grafana, etc.
* Experience with cloud-native technologies such as Kubernetes, ECS, or Azure Container Instances.
* Understanding of enterprise-grade networking technologies, including routers, switches, firewalls, and load balancers.
* Knowledge of network security protocols and certificate-based authentication.
* Deep experience with Infrastructure as Code tools such as Terraform, OpenTofu, or Pulumi.
* Experience with authentication protocols and suites (LDAP, Kerberos, SAML, etc.), multi-factor authentication and passwordless platforms, role-based access control and entitlements, etc.
* Solid understanding of common database technologies on-premises and in the cloud (PostgreSQL, MongoDB, Redis, MSSQL, etc.), data field hardening and encryption, access controls, high availability, etc.
* Understanding of governance frameworks and standards such as COBIT and NIST CSF is a plus.
* Experience with regulatory frameworks such as Reg SCI and CFTC 99.18 is a plus.

Education and/or Experience:
* [Preferred] 10+ years of progressive experience as a senior/lead engineer in a DevOps, SRE, or infrastructure-focused role
* [Required] Deep expertise in cloud computing platforms (AWS, Azure, Google Cloud Platform, etc.) and Infrastructure as Code using tools such as Terraform, Ansible, etc.
* [Required] Strong background in designing and maintaining CI/CD pipelines, with experience integrating security testing and compliance
* [Required] Proficiency in scripting and programming languages such as Python, Bash, or Go
* [Required] Understanding of traditional on-premises data center technologies and hybrid cloud architecture
* [Preferred] Bachelor's degree or higher in a technical field
30/06/2025
Full time
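The DevSecOps pipeline duties in the posting above (CI/CD, Infrastructure as Code, automated security scanning) can be hard to picture from a bullet list alone. Below is a minimal, hypothetical Python sketch of the kind of pipeline gate such a role might own: it shells out to "terraform plan" and fails the build if the plan errors or if a placeholder security-scanner report contains findings. The scan_report.json path and the infra/ directory are illustrative assumptions, not details from the ad.

```python
# Hypothetical CI gate: run "terraform plan" and check a security-scan report, failing fast on problems.
# Assumes the Terraform CLI is on PATH and that some scanner wrote scan_report.json (both placeholders).
import json
import subprocess
import sys


def terraform_plan_ok(workdir: str) -> bool:
    """Return True if the plan succeeds; -detailed-exitcode yields 0 (no changes) or 2 (changes)."""
    result = subprocess.run(
        ["terraform", f"-chdir={workdir}", "plan", "-detailed-exitcode", "-input=false"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode in (0, 2)


def security_findings(report_path: str) -> int:
    """Count findings in a scanner report; the JSON shape used here is a placeholder."""
    with open(report_path, encoding="utf-8") as fh:
        report = json.load(fh)
    return len(report.get("findings", []))


if __name__ == "__main__":
    if not terraform_plan_ok("infra/"):
        sys.exit("terraform plan failed")
    if security_findings("scan_report.json") > 0:
        sys.exit("security scan reported findings")
    print("pipeline gate passed")
```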
Request Technology - Craig Johnson
Director of Java Kafka Software Development
Request Technology - Craig Johnson Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

A prestigious financial institution is currently seeking a Director of Software Development with strong Java and Kafka experience. The candidate will be responsible for leading a team of skilled software engineers designing and delivering scalable, resilient hybrid and cloud-based applications and data solutions supporting critical financial market clearing and risk activities; helping to drive the strategy of transforming the enterprise into a data-driven organization; and leading through innovative strategic thinking in building data solutions.

Responsibilities:
* Manage, lead, and mentor the software development team
* Serve as technical product owner, fleshing out detailed business, architectural, and design requirements
* Develop solutions to complex technical challenges while coding, testing, troubleshooting, and documenting the systems you and your team develop
* Recommend architectural changes and new technologies and tools that improve the efficiency and quality of OCC's systems and development processes
* Lead efforts to optimize application performance and resilience through analysis, code refactoring, and systems tuning
* Collaborate with others to deliver complex projects involving integration with multiple systems
* Work closely with internal and external business and technology partners
* Build and manage a team of skilled software engineers

Qualifications:
* 8+ years of experience leading software development teams
* Experience with Java
* Experience with distributed message brokers and stream-processing frameworks such as Flink, Spark, Kafka Streams, etc.
* Experience with Agile development processes for enterprise software solutions
* Experience with software testing methodologies and automated testing frameworks
* Strong leadership skills
* Ability to manage project teams with different timelines and focus
* Knowledge of industry trends, best practices, and change management
* Strong communication skills, with the ability to interact with engineers and business stakeholders
* Team player: self-driven, motivated, and able to work under pressure

Technical Skills:
* 8-10 years of experience building high-performance, large-scale data solutions
* Experience managing a team of professionals: driving their work, providing mentoring for growth, and delivering constructive feedback or course correction where necessary
* 8+ years of solutions design and architecture experience
* Hands-on development experience with multiple programming languages such as Python and Java
* Experience with big data processing technologies and frameworks such as Presto, Hadoop, MapReduce, and Spark
* Hands-on experience designing and implementing RESTful APIs
* Knowledge and understanding of DevOps tools and technologies such as Terraform, Git, Jenkins, Docker, Harness, Nexus/Artifactory, and CI/CD pipelines
* Knowledge of SQL, data warehousing design concepts, various data management systems (structured and semi-structured), and integration with various database technologies (relational, NoSQL)
* Experience working with cloud ecosystems (AWS, Azure, Google Cloud Platform)
* Experience with stream processing technologies and frameworks such as Kafka, Spark Streaming, Flink
* Familiarity with monitoring-related tools and frameworks such as Splunk, Elasticsearch, SignalFx, and AppDynamics
* Good understanding of data integration patterns, technologies, and tools

Education/Certification:
* BS degree in Computer Science, a similar technical field, or equivalent practical experience; Master's degree preferred
* OCP Java Programmer Certification (preferred)
* AWS Certified Solutions Architect (preferred)
30/06/2025
Full time
Request Technology - Robyn Honquest
Lead Middleware Kafka Administration/DevOps
Request Technology - Robyn Honquest Coppell, Texas
Lead Middleware Kafka Administration/DevOps/IaC

The key to this role is current Kafka administration experience (minimum eight years) plus five years with Terraform, Ansible, and Infrastructure as Code (IaC). Nice to have: Kubernetes, Rancher, GitHub, Artifactory.

LOCATION: Dallas, TX - hybrid, 3 days onsite. Open to H-1B.

Looking for 10 years of Kafka administration, Infrastructure as Code, cloud automation, container orchestration, and CI/CD pipelines. Keywords: Kafka, Ansible, Terraform, Bash, Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, automated cloud provisioning, cluster management, performance tuning, and security.

We are seeking a highly skilled and experienced Infrastructure Middleware Engineer with deep expertise in Kafka administration, Infrastructure as Code (IaC), cloud automation, container orchestration, and CI/CD pipelines. The ideal candidate will be responsible for designing, implementing, and maintaining robust and scalable middleware solutions, ensuring high availability, performance, and security.

Primary Duties and Responsibilities (to perform this job successfully, an individual must be able to perform each primary duty satisfactorily):
* Design, implement, and manage highly available and scalable Kafka clusters.
* Monitor Kafka performance, troubleshoot issues, and optimize configurations.
* Develop and maintain IaC using Ansible and Terraform for infrastructure provisioning and configuration management.
* Create and maintain reusable IaC modules.
* Design and implement cloud-based infrastructure solutions on AWS and Azure.
* Automate cloud resource provisioning, scaling, and management using cloud-native tools and services.
* Deploy and manage containerized applications using Kubernetes and Rancher.
* Troubleshoot container-related issues and optimize container performance.
* Design, implement, and maintain CI/CD pipelines using tools such as GitHub, Artifactory, Harness, and Jenkins.
* Automate the build, test, and deployment of middleware components.
* Integrate IaC and container technologies into CI/CD pipelines.
* Document all processes and procedures.
* Work with development teams to ensure smooth deployments.

Qualifications (the requirements listed are representative of the knowledge, skill, and/or ability required; reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions):
* Strong proficiency in IaC tools, specifically Ansible, Terraform, and Bash scripting.
* Extensive experience with cloud automation and provisioning on AWS and Azure.
* Proficiency in CI/CD tools, including GitHub, Artifactory, Harness, and Jenkins.
* Strong scripting skills in languages such as Python and Bash.
* Excellent troubleshooting and problem-solving skills.
* Understanding of networking principles.
* Experience with monitoring tools such as Splunk, Splunk OTel, Prometheus, and Grafana.

Technical Skills: Kafka, Ansible, Terraform, Bash, Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, automated cloud provisioning

Education and/or Experience:
* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
* 10+ years of experience in infrastructure middleware administration
* In-depth expertise in Kafka administration, including cluster management, performance tuning, and security

Certificates or Licenses: AWS Solutions Architect, CKAD, or CKA certifications preferred.
30/06/2025
Full time
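As a rough illustration of the Kafka administration work described in the posting above (cluster and topic management with explicit partitioning and replication), here is a short Python sketch using the widely used kafka-python package to create a topic with a retention policy. The broker address, topic name, and settings are assumptions for the example, not requirements from the ad.

```python
# Illustrative Kafka admin task: create a topic with explicit partitioning, replication, and retention.
# Requires the kafka-python package; the broker address and topic settings are placeholders.
from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError

admin = KafkaAdminClient(
    bootstrap_servers="kafka-broker-1:9092",  # placeholder broker
    client_id="middleware-admin-example",
)

topic = NewTopic(
    name="orders.events",        # placeholder topic name
    num_partitions=12,
    replication_factor=3,
    topic_configs={"retention.ms": str(7 * 24 * 60 * 60 * 1000)},  # keep messages for 7 days
)

try:
    admin.create_topics([topic])
    print("topic created")
except TopicAlreadyExistsError:
    print("topic already exists; nothing to do")
finally:
    admin.close()
```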
Request Technology
Director, Software Engineering (Java, Data)
Request Technology Chicago, Illinois
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent full-time role*

A prestigious company is looking for a Director, Java Software Engineering. This Director will lead a software development team working with Java, Python, Flink, Spark, Kafka, big data processing, DevOps tools, data warehousing/management, etc.

Responsibilities:
* Manage, lead, build, and mentor the software development team
* Serve as technical product owner, fleshing out detailed business, architectural, and design requirements
* Develop solutions to complex technical challenges while coding, testing, troubleshooting, and documenting the systems you and your team develop
* Recommend architectural changes and new technologies and tools that improve the efficiency and quality of company systems and development processes
* Lead efforts to optimize application performance and resilience through analysis, code refactoring, and systems tuning

Qualifications:
* BS degree in Computer Science, a similar technical field, or equivalent practical experience; Master's degree preferred
* 8-10 years of experience building high-performance, large-scale data solutions
* Hands-on development experience with multiple programming languages such as Python and Java
* Experience with distributed message brokers and stream-processing frameworks such as Flink, Spark, Kafka Streams, etc.
* Experience with Agile development processes for enterprise software solutions
* Experience with software testing methodologies and automated testing frameworks
* Experience with big data processing technologies and frameworks such as Presto, Hadoop, MapReduce, and Spark
* Hands-on experience designing and implementing RESTful APIs
* Knowledge and understanding of DevOps tools and technologies such as Terraform, Git, Jenkins, Docker, Harness, Nexus/Artifactory, and CI/CD pipelines
* Knowledge of SQL, data warehousing design concepts, various data management systems (structured and semi-structured), and integration with various database technologies (relational, NoSQL)
* Experience working with cloud ecosystems (AWS, Azure, GCP)
* Experience with stream processing technologies and frameworks such as Kafka, Spark Streaming, Flink
* Experience with cloud technologies and migrations on a public cloud vendor, preferably using foundational services such as AWS VPCs, security groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI, IAM, etc.
* Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc.
* Experience working with various types of databases: relational, NoSQL, object-based, graph
* Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
* Familiarity with monitoring-related tools and frameworks such as Splunk, Elasticsearch, Prometheus, and AppDynamics
30/06/2025
Full time
Global Enterprise Partners
Data Domain Lead - Finance - 6+ Months - Freelance
Global Enterprise Partners
Global Enterprise Partners is looking for a Data Domain Lead to drive back-end data strategy and engineering within the Finance domain. This is a hands-on and functional role focused on shaping reusable data models and deploying governance frameworks.

Requirements for the Data Domain Lead role:
* 6+ years of experience in data engineering, ideally within the Finance or Supply Chain domains.
* Experience in data integration, modelling (preferably in Azure), and back-end data architecture.
* Proficiency in SQL, Python, and big data/ETL technologies.
* Experience deploying (not designing) data governance and quality frameworks.
* Familiarity with DevOps, CI/CD, and Agile methodologies.
* Strategic mindset with the ability to define a data vision, build roadmaps, and align technical solutions with business needs.

Details for the Data Domain Lead role:
* Start date: ASAP
* Duration: 6 months+
* Location: Remote (1 week per month travelling within the EU)
* Contract type: Freelance/independent contractor
* Rate: Open

Interested? If you or someone you know is interested in the Data Domain Lead position, please get in touch with Angelos Gkelmpesis of Global Enterprise Partners.

Important: job fraud. Unfortunately, job fraud is becoming more common. Beware of such scams:
* We will never ask for personal information (such as a copy of your ID, bank details, or social security number) via WhatsApp or during a video call.
* If you're unsure whether a vacancy or contact person is legitimate, please reach out to us directly using the official contact details on our website.
30/06/2025
Project-based
VIQU Ltd
Data Engineering Manager
VIQU Ltd Leeds, Yorkshire
Data Engineering Manager
Location: Leeds (Mostly Remote - 1 Day On-Site)
Salary: Up to £75,000 per annum

Are you an experienced Senior Data Engineer ready to step into a leadership role? We're looking for a Lead Data Engineer to join our team based in Leeds, working mostly remotely with just one day on-site per week. You'll lead the design and delivery of scalable, cloud-based data solutions using Databricks, Python, and SQL, while mentoring a team and driving engineering best practices.

About You
You might currently be a Senior Data Engineer ready to grow your leadership skills. You're passionate about building robust, efficient data pipelines and shaping cloud data architecture in an agile environment.

Key Responsibilities
* Lead development of data pipelines and solutions using Databricks, Python, and SQL
* Design and maintain data models supporting analytics and business intelligence
* Build and optimise ELT/ETL processes on AWS or Azure
* Collaborate closely with analysts, architects, and stakeholders to deliver high-quality data products
* Champion best practices in testing, CI/CD, version control, and infrastructure as code
* Mentor and support your team, taking ownership of technical delivery and decisions
* Drive continuous improvements in platform performance, cost, and reliability

Key Requirements
* Hands-on experience with Databricks or similar data engineering platforms
* Strong Python and SQL skills in data engineering contexts
* Expertise in data modelling and building analytics-ready datasets
* Experience with AWS or Azure cloud data services
* Proven leadership or mentorship experience
* Excellent communication and stakeholder management
* Agile delivery and DevOps tooling knowledge

Desirable
* Experience with infrastructure-as-code (Terraform, CloudFormation)
* Familiarity with CI/CD pipelines and orchestration tools
* Knowledge of data governance and quality controls
* Experience in regulated or large-scale environments
30/06/2025
Full time
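To make the "data pipelines using Databricks, Python, and SQL" responsibility above more concrete, here is a minimal PySpark sketch of one ELT step: read raw CSV, apply light cleaning, and write a partitioned Parquet output. The paths, column names, and the choice of Parquet (rather than, say, Delta tables) are assumptions for illustration only, not details taken from the vacancy.

```python
# Minimal ELT sketch in PySpark: raw CSV in, de-duplicated and partitioned Parquet out.
# Paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-elt-example").getOrCreate()

# Extract: read raw files with headers and inferred types.
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/orders/")          # placeholder input path
)

# Transform: drop duplicate orders, derive a date column, keep only positive amounts.
cleaned = (
    raw.dropDuplicates(["order_id"])  # placeholder key column
    .withColumn("order_date", F.to_date("order_timestamp"))
    .filter(F.col("amount") > 0)
)

# Load: write an analytics-ready dataset partitioned by date.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/orders/")  # placeholder output path
)

spark.stop()
```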
Robert Walters
Python DB Engineer (on-prem)
Robert Walters Glasgow, Lanarkshire
The Opportunity
A global financial services firm is seeking an experienced Python Database Engineer to play a key role in shaping and delivering next-generation data infrastructure. This position focuses on building and supporting a highly scalable, resilient OLTP platform using SQL architecture. You'll be part of a global team driving automation, security, and scalability across cloud-based containerised environments. This is a unique opportunity to work in a modern engineering culture that values clean code, DevOps principles, and deep technical expertise.

Key Responsibilities
* Design and deploy secure, compliant infrastructure in an internal cloud environment.
* Contribute to the architecture, development, and roll-out of a SQL platform in a DBaaS model.
* Collaborate closely with InfoSec to integrate required access and compliance controls.
* Drive automation of infrastructure and containerised services (Kubernetes).
* Benchmark and run proof-of-concept evaluations of database technologies.
* Document and optimise operational processes around the PostgreSQL platform.
* Deliver production-ready solutions using modern CI/CD and infrastructure-as-code practices.

Required Skills & Experience
* Strong Python development skills (must-have).
* Advanced knowledge of PostgreSQL and ANSI SQL.
* Deep understanding of Kubernetes for container orchestration and managing stateful services.
* Proficiency in Linux systems and shell scripting.
* Hands-on experience with Terraform, Helm, and CI/CD pipelines.
* Proven experience building or supporting highly available, mission-critical systems.
* Strong grasp of authentication and security concepts (OAuth, SAML, OpenID, SCIM, Kerberos).
* Familiarity with monitoring tools, agent-based architectures, alerting, and dashboard creation.
* Experience working in Agile and DevOps-oriented teams.

Preferred Experience
* Experience running PostgreSQL or NewSQL databases at scale in production environments.
* Prior involvement in infrastructure benchmarking or database product evaluations.
* Exposure to DBaaS architecture and deployment models.

Why Apply?
* Be part of a world-class technology organisation with a deep engineering culture.
* Gain exposure to cutting-edge technologies in the cloud-native data infrastructure space.
* Join a collaborative team working across global offices.
* Access ongoing training, progression, and mentorship in a complex, high-impact environment.
* Work in a centrally located office with excellent on-site amenities, including a gym and restaurant.

Interested in delivering high-impact infrastructure at scale? Apply now and help shape the future of data services in a global enterprise environment.

We are committed to creating an inclusive recruitment experience. If you have a disability or long-term health condition and require adjustments to the recruitment process, our Adjustment Concierge Service is here to support you. Please reach out to us at (see below) to discuss further.
30/06/2025
Project-based
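The posting above centres on Python plus PostgreSQL operations (platform health, monitoring, automation). As a hedged example of that combination, the sketch below uses psycopg2 to run a simple health probe against a Postgres instance: verify connectivity and report the number of active backends from pg_stat_activity. The DSN and the warning threshold are placeholders, not details from the role.

```python
# Illustrative Postgres health probe: connectivity check plus a connection-count metric.
# Requires psycopg2; the DSN and warning threshold below are placeholders.
import sys

import psycopg2

DSN = "host=db.internal port=5432 dbname=postgres user=monitor password=secret"  # placeholder
MAX_CONNECTIONS_WARN = 200  # placeholder threshold


def count_backends(dsn: str) -> int:
    """Return the number of current backends, raising OperationalError if unreachable."""
    with psycopg2.connect(dsn, connect_timeout=5) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM pg_stat_activity;")
            (backends,) = cur.fetchone()
    return backends


if __name__ == "__main__":
    try:
        count = count_backends(DSN)
    except psycopg2.OperationalError as exc:
        sys.exit(f"postgres unreachable: {exc}")
    status = "WARN" if count > MAX_CONNECTIONS_WARN else "OK"
    print(f"{status}: {count} active backends")
```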
Scope AT Limited
GLASGOW BASED- Database Engineer - Python, Postgres, Kubernetes, CI/CD, Terraform, Linux
Scope AT Limited Glasgow, Lanarkshire
Contract Type: Contingent/Contract (PAYE engagement)
Location: Glasgow, United Kingdom (Hybrid - 3 days in office)
Contract Duration: 12 months

About the Role
We are seeking an experienced Database Engineer to join a globally distributed technology team within a leading financial services environment. The successful candidate will contribute to the implementation and support of scalable, resilient NewSQL platforms within an internal cloud-based DBaaS infrastructure. This is a hands-on engineering role, requiring a mix of database product expertise and development skills, particularly in Python. You will play a key part in building high-performance, highly available data solutions with a strong emphasis on automation, security, and operational efficiency.

Key Responsibilities
* Design and deploy secure, compliant infrastructure integrated with organizational controls.
* Build and maintain scalable database platforms using Postgres in a containerized environment.
* Collaborate with global teams and security stakeholders to deliver robust DBaaS solutions.
* Automate deployment and operational processes using CI/CD and Infrastructure as Code tools.
* Provide architecture input for highly available, production-grade systems.
* Document and optimize support processes for Postgres-based services.
* Develop monitoring and alerting systems to ensure high availability and performance.

Key Skills & Experience
* Strong Python development skills (essential)
* Hands-on experience with Postgres in production environments
* Expertise in Kubernetes and container orchestration
* Solid Linux system administration skills
* Experience with CI/CD pipelines and tools such as Terraform and Helm
* Proficiency in ANSI SQL
* Experience working with secure authentication and authorization standards (SAML, SCIM, OAuth, OpenID, Kerberos)
* Strong understanding of DevOps practices and Agile methodologies
* Ability to develop and manage system monitoring tools and dashboards
* Experience with large-scale, high-velocity OLTP systems and NewSQL architecture

Additional Information
* This role is offered on a PAYE basis, excluding holiday pay accrual.
* Candidates must have access to their own device for remote work.
* Ensure accurate candidate details for internal system access processing.
30/06/2025
Project-based
GlobalLogic UK&I
Endur Technical Architect
GlobalLogic UK&I
Job Title: Endur Technical Architect
Location: Hybrid - 3 days per week on-site in Canary Wharf
Start Date: ASAP
Contract Duration: Until 31st December 2025 (with potential extension)
Contract - Inside IR35

Join GlobalLogic as an Endur Technical Architect
We are seeking a highly experienced Endur Technical Architect to join GlobalLogic on an exciting project with one of our large enterprise clients in the energy trading domain. This is a unique opportunity to shape the architecture of a next-generation trading platform, leveraging modern technologies and driving forward digital transformation in the industry. In this role, you will work closely with subject matter experts, product owners, technical leads, designers, and fellow architects to re-architect and design innovative solutions aligned with both business and technology strategies. A deep understanding of energy trading and risk management processes across front, middle, and back office functions, particularly within physical trading, is essential.

Your Responsibilities:
* Design and document the architecture of bespoke, modern energy trading systems.
* Lead functional and technical requirement gathering, analysis, and architectural design.
* Serve as a thought leader and advisor throughout delivery and design reviews.
* Communicate effectively across stakeholders, business functions, vendors, and consulting teams.
* Manage business change and stakeholder expectations with clarity and confidence.
* Contribute to the architectural evolution of a trading platform supporting complex portfolios and optimization challenges.

Must-Have Technical Skills:
* Microservices architecture and technologies
* Cloud hosting (AWS)
* Infrastructure as Code (Terraform)
* DevOps toolchain (CI/CD, Git, Ansible)

Must-Have Functional Experience:
* Full life cycle experience in physical energy trading
* Exposure to Power and/or Gas markets

Key Skills & Experience Required:

Functional expertise:
* Strong understanding of physical energy trading (preferably Gas and/or Power).
* Experience with complex contract optionality, portfolio management, and schedule optimization.
* Knowledge of deal life cycles and options modelling.
* Familiarity with dependency graphs in trading environments.

Architectural and technical competencies - proven experience designing architectures across:
* Data architecture - transactional and analytical data modelling, real-time reporting, MongoDB, data migration, reconciliation.
* Technical architecture - hands-on with C#/Java, microservices, containerization (Docker, OpenShift, Kubernetes), React, AWS, Terraform.
* Integration - expertise in real-time messaging (e.g. AMQ), API design (JSON, Swagger), and batch processes.
* Infrastructure and operations - DevOps practices, CI/CD pipelines, Git, Ansible, cloud elasticity, cost optimization, and grid computing.

Delivery approach:
* Deep familiarity with Agile methodologies.
* Capable of working autonomously or as part of a small, high-performing team.
* Strong analytical mindset with a proactive approach to problem-solving.
* Effective communicator, both written and verbal, with an ability to explain complex concepts to diverse audiences.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a leader in digital engineering and product development services. We partner with top-tier clients across industries, including finance, telecoms, healthcare, and automotive, to design and build innovative digital platforms and experiences. Our teams combine deep technical expertise with seamless delivery to solve complex challenges, modernise legacy systems, and accelerate digital transformation. With a strong focus on cloud, data, AI, and embedded technologies, GlobalLogic UK&I offers a dynamic environment where engineers, architects, and consultants collaborate on cutting-edge projects that make a real-world impact. Join us to shape the future of digital innovation, right here in the UK and beyond.
30/06/2025
Project-based
Opus Recruitment Solutions Ltd
System Integration Lead
Opus Recruitment Solutions Ltd
Looking for a System Integration Lead to take on an initial 3-month contract, outside IR35, with on-site requirements in London.

Skills/Requirements:
* Degree in Computer Science, Engineering, or a related field.
* 5+ years of experience in IT service delivery with a focus on SAP BTP and API technologies.
* Strong hands-on experience with:
  * SAP Integration Suite/CPI
  * API protocols (REST, SOAP, gRPC)
  * .NET, Java, or Python
  * Cloud platforms (AWS or Azure)
  * CI/CD tools (e.g. Jenkins, CircleCI)
  * Data integration tools (e.g. SSIS, Azure Data Factory)
* Experience with microservices, Docker, Kubernetes, and messaging queues (Kafka, ActiveMQ).

Please note all applicants must have full right to work and live in the UK. If interested, please apply or send your CV over to (see below) and I'll reach out!
30/06/2025
Project-based
Levy Associates Ltd
Engineering Manager
Levy Associates Ltd Amsterdam, Noord-Holland
ABOUT YOU You're a hands-on leader with a solid background in software engineering and a passion for building scalable platforms from the ground up. You thrive in dynamic environments and take pride in owning both the product and its technical roadmap. With strong communication skills and a collaborative mindset, you're ready to manage a team and guide them toward technical and personal growth. You enjoy engaging with stakeholders and are not afraid to get involved in coding and architectural decisions. WHAT ARE YOU GOING TO DO Take ownership of one of two future development teams Manage 3-5 direct reports, providing mentorship and growth opportunities Align regularly with business and product stakeholders Contribute to the codebase, technical architecture, and strategic planning Maintain and evolve a diverse portfolio of existing products Work with technologies such as Kotlin (Spring Boot), Python, AWS, Terraform, AWS CDK, and Angular 16+ AN IDEAL PROFILE WOULD BE Bachelor's or Master's degree in Computer Science or related field 7+ years of relevant work experience post-graduation 1+ year in a lead or engineering management role Solid experience in Python or a JVM language (Kotlin/Java) Skilled in cloud computing, preferably AWS, with Infrastructure as Code experience Strong knowledge of SQL and query performance Experienced in clean, scalable code practices and large-scale projects Familiarity with the Spring Framework and Typescript is a plus Comfortable working closely with business stakeholders and gathering requirements Tick all the boxes? Then smash that button and let's have a chat. Contact - (see below) ABOUT US: Levy is an international IT staffing organization providing recruitment and project resourcing services to companies ranging from start-ups to well-established global players across the UK, Holland, Germany, Belgium, and the USA. By partnering with our clients, we provide tailored interim and permanent IT staffing solutions to help them deliver their initiatives across applications and infrastructure, touching areas such as Digital, Data, Cloud, Cybersecurity and ERP.
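One of the listed skills is "SQL and query performance"; as a minimal, self-contained illustration (using an in-memory SQLite database purely for demonstration), the sketch below compares the query plan for a lookup before and after adding an index.

```python
# Minimal sketch: show how an index changes the query plan for a lookup.
# Uses SQLite in memory; table and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT COUNT(*) FROM orders WHERE customer_id = 42"

# Before the index: SQLite reports a full table scan.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchall())

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After the index: the same query is answered from the index.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}").fetchall())
```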
30/06/2025
Project-based
IT Talent Solutions Ltd
Junior Full-stack .NET Developer On-Site
IT Talent Solutions Ltd Enfield, Middlesex
Junior Full-Stack .NET Developer - Cybersecurity Tools £30,000 to £35,000 | Onsite (Enfield area) | Graduate to Junior Level If you're early in your software career and want to build experience in secure application development, this could be a great next step. You'll be working closely with two experienced developers in a tight-knit team, building tools that help protect users and systems across a wide range of platforms. The company is growing steadily and focuses on authentication services and identity verification tools used by organisations worldwide. This is a full-stack role where you'll get hands-on with everything from Back End logic to Front End interfaces. What you'll be doing: Building secure, scalable .NET applications Developing responsive, user-friendly web interfaces Working on authentication features like login flows and token management Writing clean, maintainable code and improving system performance Collaborating with senior engineers and learning best practices Tech you'll use: C# and .NET Core or Framework SQL Server HTML, CSS, JavaScript REST APIs, JSON, HTTP Any Front End framework like React, Angular, or Blazor Nice to have: Interest in authentication protocols like OAuth2, SAML, or OpenID Connect Familiarity with Azure AD or Active Directory Experience with Git, CI/CD pipelines, or general DevOps tooling Location: Based onsite in the Enfield area for the first 6 months, with flexibility considered after that Why apply: Work directly with experienced developers and get regular mentoring Gain practical experience building security-focused tools Join a stable, growing company with plenty of scope for technical development
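Since the role centres on login flows and token management, here is a minimal Python sketch of issuing and validating a signed token with the PyJWT library; the secret, claim names, and 15-minute lifetime are illustrative choices, not the company's actual design.

```python
# Minimal sketch: issue and validate a short-lived JWT with PyJWT.
# Secret, claims, and lifetime are illustrative assumptions.
import datetime
import jwt  # pip install PyJWT

SECRET = "dev-only-secret"  # in production this would come from a secrets store

def issue_token(user_id: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {"sub": user_id, "iat": now, "exp": now + datetime.timedelta(minutes=15)}
    return jwt.encode(payload, SECRET, algorithm="HS256")

def validate_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("alice")
print(validate_token(token)["sub"])  # -> "alice"
```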
30/06/2025
Full time
DGH Recruitment Ltd.
.Net Azure GenAI Principal Engineer/Technical Lead
DGH Recruitment Ltd. City, London
.Net Azure GenAI Principal Engineer/Technical Lead - £100-120k London/Hybrid. My prestigious global legal client requires a Development Tech Lead/Principal Engineer to join their innovation function, working across a wide range of initiatives and projects. You will be the technical lead for a dispersed development team, with experience of coaching and mentoring team members. Core Skills: - .Net - Azure - GenAI Other Skills: - IaC - Azure DevOps - CI/CD You will be confident and credible when liaising with senior management to interrogate requirements and devise appropriate solutions. You will be adept at working with vendors and across the wider business to build solutions and tackle problems. .Net Azure GenAI Principal Engineer/Technical Lead - £100-120k London/Hybrid. In accordance with the Employment Agencies and Employment Businesses Regulations 2003, this position is advertised based upon DGH Recruitment Limited having first sought approval of its client to find candidates for this position. DGH Recruitment Limited acts as both an Employment Agency and Employment Business.
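As a loose illustration of the Azure GenAI side of this role, here is a minimal Python sketch calling an Azure OpenAI chat deployment; the endpoint, key, API version, and deployment name are placeholder assumptions that would come from the client's own Azure resource.

```python
# Minimal sketch: one chat completion against an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical resource
    api_key="REPLACE_ME",
    api_version="2024-02-01",  # assumed version; use whatever the tenant supports
)

response = client.chat.completions.create(
    model="gpt-4o",  # the deployment name configured in Azure, assumed here
    messages=[{"role": "user", "content": "Summarise this clause in plain English: ..."}],
)
print(response.choices[0].message.content)
```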
30/06/2025
Full time
Nicoll Curtin Technology
Cloud Network Security Engineer
Nicoll Curtin Technology
We are looking for an experienced Cloud Network Security Engineer to join our Security & Data Protection team in Zürich on a 6-month contract. In this role, you will support an established network security team as it transitions to the cloud, contributing your expertise to deliver secure, automated, and innovative network security solutions in a hybrid infrastructure. This position is based in Switzerland and available for immediate start. You will work in a dynamic, international environment, collaborating with multiple stakeholders and integrating AWS network security services with internal ITSM and CI/CD tools. Your work will help secure communication channels for business-critical applications and support core security principles such as authentication, integrity, and confidentiality. You should have solid experience in AWS network security, strong technical knowledge of networking protocols and cloud architectures, and a passion for automation and security in hybrid environments. Proficiency with Terraform and ITSM tools like ServiceNow is essential. Fluency in English is required. If you're ready to help shape secure network infrastructure in a fast-evolving cloud environment, we'd love to hear from you!
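To give a flavour of the automated network-security checks this role implies, here is a minimal Python sketch using boto3 to flag AWS security groups that allow inbound traffic from 0.0.0.0/0; the region is an illustrative choice and credentials are assumed to be configured locally.

```python
# Minimal sketch: list security groups with world-open ingress rules.
# Region is illustrative; AWS credentials are assumed to be set up.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(
                    f"Open ingress: {sg['GroupId']} ({sg.get('GroupName')}) "
                    f"ports {rule.get('FromPort')}-{rule.get('ToPort')}"
                )
```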
30/06/2025
Project-based
Request Technology - Robyn Honquest
Software Engineer - Java/C# - IoT
Request Technology - Robyn Honquest Oak Brook, Illinois
NO SPONSORSHIP SOFTWARE ENGINEER PLATFORM ENGINEER - Java/C#.NET SALARY: $97k-$184k plus 15% bonus LOCATION: Oak Brook, IL, hybrid 3 days onsite. Java & C# .NET developer who can take Java technology and redesign it in .NET. They want to move away from Java totally and eventually do all .NET (Back End development/Middleware enhancements). Any product development is a plus. Internet of Things (IoT). Looking for a candidate to architect and enhance the core Middleware that powers cloud IoT platform design, development, and delivery. Key technologies: ISO, Java, .NET/C#, Azure, Kafka, RabbitMQ, AWS, infrastructure as code (IaC), Terraform, CI/CD, Jenkins, GitHub, microservices, containerization, Docker, Kubernetes, AWS multi-cloud. Key Responsibilities: Act as a technical authority and key driver in the design, development, and delivery of innovative features, collaborating with product owners, Front End, Middleware, DevOps, and firmware teams to align technical solutions with business goals. Lead technical assessments, scope changes, and oversee the management of the codebase for critical business requirements, high-impact product enhancements, and complex change requests across multiple initiatives. Architect and implement scalable, efficient, and robust software designs for high-complexity projects, working closely with solution architects and senior engineering leaders to ensure alignment with platform and business strategies. Champion Agile methodologies, such as Scrum, to enable efficient development cycles, continuous integration, and high-quality deliverables in Middleware development. Facilitate and lead strategic technical discussions, including architecture reviews, design meetings, and pull requests, fostering a culture of engineering excellence and collaboration. Drive adherence to best practices, coding standards, and platform design principles to deliver high-quality, reusable, and maintainable code. Develop deep domain expertise in platform-specific frameworks, features, and Middleware components, acting as a subject-matter expert and advisor across teams. Mentor and coach engineers across the organization, building technical capability, fostering innovation, and cultivating leadership within the engineering team. Collaborate with cross-functional domain experts including infrastructure, database, security, and Front End teams to drive cohesive solutions and seamless integration. Provide technical leadership approaches to elevate the myQ platform's technical capabilities and market competitiveness. ISO 27001 standards. Job Requirements: Bachelor's Degree. An advanced degree in a directly relevant area of study may substitute for up to two (2) years of job-related experience. 8+ years of experience in software engineering, design, development, and deployment of large-scale systems Extensive experience in creating technical documentation, including design specifications, architecture diagrams, and deployment guides. Deep understanding of Agile methodologies and Scrum processes Proficiency with Java, .NET, C#, Azure, SQL, and Visual Studio. Hands-on experience with GIT, NoSQL databases, and messaging systems such as Kafka, RabbitMQ, or similar technologies. Advanced knowledge of AWS services, including but not limited to EC2, S3, Lambda, API Gateway, RDS, DynamoDB, and CloudFront. Strong expertise in Infrastructure as Code (IaC) using Terraform for automated provisioning and management of cloud resources. 
Proficiency with CI/CD tools such as Jenkins, GitHub Actions, or AWS CodePipeline, and experience with automated testing and deployment frameworks. Experience with Docker and Kubernetes. Ability to travel domestically and internationally up to 10%. Knowledge, Skills, and Abilities: In-depth understanding of software development and design principles, with a focus on building scalable, secure, and maintainable systems. Comprehensive expertise in cloud-based development and architecture, with a strong focus on AWS and multi-cloud solutions. Exceptional ability to lead, collaborate, and provide clear technical direction to multiple development teams across diverse geographies. Deep knowledge of CI/CD practices, tools, and deployment processes, enabling efficient and reliable software delivery. Proven ability to debug, troubleshoot, and resolve complex technical issues in distributed systems and cloud environments. Proficiency in estimating work, supporting project planning efforts, and reporting progress to stakeholders at a platform and organizational level. Strong understanding of security best practices in cloud environments, including IAM roles, encryption, and network security. Demonstrated ability to leverage cloud monitoring and logging tools such as AWS CloudWatch, Elastic Stack, or Datadog for performance optimization and incident resolution. Experience with automated testing frameworks and ensuring high-quality software delivery through robust test pipelines.
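Since the role centres on Middleware that moves IoT events through Kafka, here is a minimal Python sketch publishing a device event with the kafka-python client; the broker address, topic name, and event schema are illustrative assumptions rather than the actual platform's contract.

```python
# Minimal sketch: publish an IoT device event to Kafka as JSON.
# Broker, topic, and event fields are placeholders for illustration.
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"deviceId": "garage-door-123", "state": "OPEN", "ts": 1727700000}
producer.send("device-events", value=event)
producer.flush()  # block until the broker has acknowledged the message
```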
28/06/2025
Full time
Request Technology - Craig Johnson
Senior .NET Java Software Engineer
Request Technology - Craig Johnson Oak Brook, Illinois
*We are unable to sponsor for this permanent, Full time role* *Position is bonus eligible* Prestigious Enterprise Company is currently seeking a Senior Software Engineer with both .NET and Java experience. Candidate will play a key part in designing, developing, and optimizing a connected product ecosystem, working on cutting-edge IoT solutions, cloud services, and mobile applications. You will collaborate closely with cross-functional teams to build high-performance, scalable, and secure software solutions, ensuring seamless connectivity and integration across our platform. The ideal candidate has a strong background in software development, cloud computing and IoT protocols, along with a passion for building next-generation smart access technologies. Responsibilities: Work using Agile methodologies such as Scrum to develop Middleware Serve as primary point person and scrum team representative for interactions with product owner, Front End, Middleware, DevOps, and firmware functional teams Participate in technical assessment, scoping and management of changes to the code-base on new business requirements, product enhancements and other change requests Analyze requirements, collaborate with architects and senior engineers to produce thoughtful software designs of moderate scope and complexity Maintain domain specific software knowledge of key software application features, frameworks, or components in Middleware Lead and contribute to technical discussions in community of practice, design review, pull request, or other technical meeting forums Collaborate with other Chamberlain domain experts, such as infrastructure, database, and Front End, as the team develops features and platform enhancements Lead offshore teams to design and develop features, and burn down technical debt Ensure adherence to coding standards and other best practices to create reusable code Provide mentoring and coaching to junior engineers to increase software capability of the Middleware development team. Responsible for complying with the security requirements set forth by the Information Security team and the established ISO 27001 Security Roles, Responsibilities, and Authorities Document found in the ISMS Document Library Comply with health and safety guidelines and rules; managers should also ensure compliance across their teams. Protect Chamberlain Group's reputation by keeping information confidential. Maintain professional and technical knowledge by attending educational workshops, reading professional publications, establishing personal networks, and participating in professional societies. Contribute to the team effort by accomplishing related results and participating on projects as needed. Qualifications: Bachelor's Degree in Computer Science, related technical field or equivalent practical experience An advanced degree in directly applicable area of study may substitute for up to two (2) years of job-related experience 5+ years of job-related experience as defined in the Essential Duties and Responsibilities Deep understanding of Agile methodologies and Scrum is required Experience in creating technical documentation is required Experience with Microsoft technology stack, including .NET, C#, Azure, AWS, SQL, Visual Studio Experience with GIT, NoSQL databases, messaging systems, Distributed Architecture. 
Experience in creating technical documentation Thorough understanding of OOP, SOLID, RESTful services, dependency injection and cloud development Ability to work well with others and provide clear direction to a development team Strong analytical and problem-solving skills Understanding of CI/CD (continuous integration/continuous delivery) tools, frameworks and deployment processes is required Ability to interface with Product Owners and Scrum Masters for ticket/issue management Ability to lead junior and senior engineers on projects Ability to debug, troubleshoot, and self-diagnose issues in software development Working experience in a cloud platform (Azure or AWS) is a must.
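The qualifications above call out SOLID and dependency injection; here is a minimal Python sketch of constructor-based dependency injection, with the notifier classes invented purely for illustration and no particular framework implied.

```python
# Minimal sketch: constructor-based dependency injection.
# The services and classes here are invented for illustration.
from typing import Protocol

class Notifier(Protocol):
    def send(self, message: str) -> None: ...

class EmailNotifier:
    def send(self, message: str) -> None:
        print(f"email: {message}")

class SmsNotifier:
    def send(self, message: str) -> None:
        print(f"sms: {message}")

class OrderService:
    # The dependency is injected, so the service can be tested with a fake
    # notifier and implementations can be swapped without code changes.
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, order_id: str) -> None:
        self._notifier.send(f"order {order_id} placed")

OrderService(EmailNotifier()).place_order("A-100")
OrderService(SmsNotifier()).place_order("A-101")
```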
27/06/2025
Full time
Request Technology - Craig Johnson
Lead Kafka Middleware Engineer
Request Technology - Craig Johnson Dallas, Texas
*Position is bonus eligible* Prestigious Financial Institution is currently seeking a Lead Kafka Middleware Engineer with deep expertise in Kafka administration, infrastructure as code (IaC), cloud automation, container orchestration and CI/CD pipelines. The ideal candidate will be responsible for designing, implementing, and maintaining robust and scalable Middleware solutions, ensuring high availability, performance, and security. Candidate will play a crucial role in automating infrastructure provisioning, deployments, and operations, enabling our organization to rapidly deliver and scale applications. Responsibilities: Design, implement and manage highly available and scalable Kafka clusters. Monitor Kafka performance, troubleshoot issues and optimize configurations. Develop and maintain IaC using Ansible and Terraform for infrastructure provisioning and configuration management. Create and maintain reusable IaC modules. Design and implement cloud-based infrastructure solutions on AWS and Azure. Automate cloud resource provisioning, scaling and management using cloud-native tools and services. Deploy and manage containerized applications using Kubernetes and Rancher. Troubleshoot container-related issues and optimize container performance. Design, implement and maintain CI/CD pipelines using tools like GitHub, Artifactory, Harness and Jenkins. Automate the build, test, and deployment of Middleware components. Integrate IaC and container technologies into CI/CD pipelines. Document all processes and procedures. Work with development teams to ensure smooth deployments. Qualifications: Strong proficiency in IaC tools, specifically Ansible, Terraform and Bash Scripting. Extensive experience with cloud automation and provisioning on AWS and Azure. Proficiency in CI/CD tools, including GitHub, Artifactory, Harness and Jenkins. Strong Scripting skills in languages like Python and Bash. Excellent troubleshooting and problem-solving skills. Understanding of networking principles. Experience with monitoring tools like Splunk, Splunk OTEL, Prometheus and Grafana. Kafka, Ansible, Terraform, Bash, Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, Automated Cloud Provisioning. Bachelor's degree in Computer Science, Engineering or a related field (or equivalent experience) 10+ years of experience in infrastructure Middleware administration. In-depth expertise in Kafka administration, including cluster management, performance tuning, and security. AWS Solutions Architect, CKAD or CKA certifications preferred.
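As a small taste of the routine Kafka administration described above, here is a minimal Python sketch using kafka-python's admin client to create a topic with explicit partition and replication settings; the broker address and topic parameters are illustrative.

```python
# Minimal sketch: create a Kafka topic via the kafka-python admin client.
# Broker address, topic name, and sizing are placeholders.
from kafka.admin import KafkaAdminClient, NewTopic  # pip install kafka-python

admin = KafkaAdminClient(bootstrap_servers="localhost:9092", client_id="mw-admin")

topic = NewTopic(name="orders.v1", num_partitions=6, replication_factor=3)
admin.create_topics(new_topics=[topic], validate_only=False)

# Show what the cluster currently knows about.
print(admin.list_topics())
admin.close()
```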
27/06/2025
Full time
infeurope S.A.
Chaos and Kraken Expert for a multinational Institution in Strasbourg
infeurope S.A. Strasbourg, Bas-Rhin
We are looking for a Chaos and Kraken Expert to work for a client project in Strasbourg. Location: 80% on-site work in Strasbourg and 20% off-site work. Start date: immediately. Duration: 12 months. Preliminary Requirements: Candidates must be citizens of a European Union member state (European Union nationality) and should be able to obtain an extract of their criminal record. Role and Tasks description: The Chaos and Kraken Expert will be in charge of managing, defining, executing, and supporting Chaos and Kraken scenarios on distributed systems. He/she will be responsible for creating best practices for using these tools to automate crash scenario testing and communicating them to the teams. He/she will have to be able to draw up procedures and white paper recommendations; these procedures will then be used by the testing teams to carry out resilience and service level tests. He/she will have to demonstrate a high degree of autonomy, be open-minded and know how to transfer knowledge to the testing teams. The candidate will need to have experience of managing Chaos and Kraken tools in an OpenShift microservice environment and a virtualised platform. Main skills required: Chaos Experiment Design: Planning and designing experiments that simulate system failures, such as service outages, network latency, packet loss, etc. Experiment Execution: Implementing and executing these experiments in test or controlled production environments. Monitoring and Analysis: Monitoring system behaviour during experiments and analysing results to identify weak points. Documentation and Reporting: Documenting findings and providing detailed reports with recommendations to improve system resilience. Automation: Developing scripts and tools to automate the execution of chaos experiments. Collaboration: Working with testing, development, operations, and security teams to implement improvements based on experiment results. Key skills: University degree in Computer Science: Master or equivalent; Knowledge of Distributed Systems: Understanding how distributed systems work and potential failures that can occur. Programming and Scripting: Skills in languages such as Python, Go, and other Scripting languages. Chaos Engineering Tools: Familiarity with tools like Chaos Monkey, Gremlin, Litmus, and others. Data Analysis: Ability to analyse experiment results and draw useful conclusions. Communication: Ability to communicate findings and recommendations to technical and non-technical teams. Problem Solving: Ability to identify and solve complex problems in distributed systems. OpenShift/Kubernetes (corporate microservices platform used); VMware/Linux/Windows/Shell; DevOps - GitHub/Ansible/Helm/ArgoCD/Jenkins. Very good English speaking & writing skills; Experience of, and willingness to work in, an international/multicultural environment. infeurope is a Luxembourg-based IT service provider, designing, developing and managing multilingual information and documentary systems in many application areas and business sectors. For more than 40 years we have delivered IT systems and solutions.
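To illustrate the kind of automated chaos experiment the role describes, here is a minimal Python sketch that deletes a random pod in a Kubernetes/OpenShift namespace using the official Python client; the namespace and label selector are hypothetical, and such a script should only ever target test environments.

```python
# Minimal sketch: a pod-kill chaos experiment with the Kubernetes client.
# Namespace and label selector are placeholders; test environments only.
import random
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

NAMESPACE = "demo-app"             # hypothetical target namespace
SELECTOR = "app=checkout-service"  # hypothetical label selector

pods = v1.list_namespaced_pod(NAMESPACE, label_selector=SELECTOR).items
if pods:
    victim = random.choice(pods)
    print(f"Deleting pod {victim.metadata.name} to observe self-healing")
    v1.delete_namespaced_pod(name=victim.metadata.name, namespace=NAMESPACE)
```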
27/06/2025
Project-based
Request Technology - Craig Johnson
Senior IBM MQ Middleware Engineer
Request Technology - Craig Johnson Chicago, Illinois
*Position is bonus eligible* Prestigious Financial Institution is currently seeking a Senior IBM MQ Middleware Engineer. Candidate will be responsible for the end-to-end administration, maintenance, and support of our critical IBM MQ messaging infrastructure. This role involves ensuring the reliability, availability, performance, and security of the MQ environment, supporting business applications, and actively participating in strategic initiatives such as cloud migration, containerization, and system modernization. Responsibilities: Install, configure, and maintain IBM MQ software (Queue Managers, Clients) on RHEL. Perform version upgrades, apply fix packs, and manage patching cycles according to best practices and security requirements. Configure MQ objects, including Queue Managers, Queues (Local, Remote, Alias, Model), Channels, Listeners, Clusters, Topics, and Subscriptions. Monitor MQ system performance metrics, message throughput, and latency. Identify performance bottlenecks and implement tuning adjustments at the Queue Manager, channel, and queue levels. Analyze MQ logs and trace data to optimize configurations. Provide expert-level troubleshooting for MQ-related issues, including connectivity problems, message delivery failures, security errors, and performance degradation. Act as a primary point of contact for application teams regarding MQ connectivity and messaging issues. Respond to incidents, diagnose root causes, and implement corrective actions, participating in on-call rotation if applicable. Monitor resource utilization (CPU, memory, disk space, message depths) and forecast future capacity needs. Design and implement scalable MQ solutions, including clustering and distributed queuing, to meet growing business demands. Develop, implement, test, and maintain disaster recovery (DR) procedures for the MQ environment. Configure and manage high availability (HA) solutions, potentially including multi-instance Queue Managers or clustering. Participate in regular DR testing exercises. Implement and manage MQ security configurations, including TLS/SSL for channels, Channel Authentication Rules (CHLAUTH), and Object Authority Manager (OAM). Work with security teams to ensure compliance with security policies and standards. Manage certificates and keystores for secure communication. Interface with IBM support (raising PMRs/Cases) for problem resolution, technical guidance, and product information. Stay informed about IBM MQ product roadmaps, new features, and end-of-support timelines. Participate in the planning, design, and execution of migrating IBM MQ workloads to AWS. Contribute to initiatives involving the deployment and management of IBM MQ in containerized environments (eg, Docker, Kubernetes, OpenShift), utilizing MQ container images and operators. Actively participate in infrastructure modernization projects related to messaging. Develop and maintain scripts (eg, Shell, Python, Perl) or utilize automation tools (eg, Ansible) to streamline routine MQ administration tasks, deployments, and configuration management. Create and maintain comprehensive documentation for MQ architecture, configurations, standards, and operational procedures. Collaborate effectively with application developers, system administrators, network engineers, database administrators, and project managers. Qualifications: Strong understanding of core IBM MQ concepts (Queue Managers, Queues, Channels, Clustering, Publish/Subscribe, Security). 
Proficiency in administering IBM MQ on [Specify primary OS, eg, Linux, Windows]. Proven troubleshooting skills in diagnosing and resolving complex MQ issues. Experience with Scripting languages (eg, Bash, Python, Perl) for automation. Understanding of networking concepts (TCP/IP, DNS, Firewalls, load balancers) as they relate to MQ. Excellent communication and interpersonal skills. Ability to work independently and as part of a collaborative team. Experience with IBM MQ on RHEL and containers in cloud. Experience with advanced MQ features like MQ Appliances, Advanced Message Security (AMS), Managed File Transfer (MFT). Experience migrating or managing IBM MQ in AWS. Hands-on experience with containerization technologies (Docker, Kubernetes, OpenShift) and managing MQ in containers. Experience with infrastructure-as-code (IaC) tools like Ansible or Terraform. Experience with enterprise monitoring tools (eg, Instana, Dynatrace, Splunk, Nagios, Prometheus). Familiarity with other messaging technologies like Kafka. Experience working in Agile/DevOps environments. IBM MQ, Ansible, Terraform, Bash, Docker Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, Automated Cloud Provisioning Education and/or Experience: Bachelor's degree in Computer Science, Information Technology, or a related field, OR equivalent practical experience. 10+ years of hands-on experience administering IBM MQ in a complex enterprise environment. Certificates or Licenses: AWS Solutions Architect, CKAD or CKA certifications preferred.
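For a quick feel of the day-to-day MQ work described above, here is a minimal Python sketch that puts a test message on a queue using pymqi (the Python bindings for IBM MQ); the queue manager, channel, host, and queue names are illustrative developer defaults, not this institution's configuration.

```python
# Minimal sketch: put a test message on an IBM MQ queue with pymqi.
# Queue manager, channel, host, and queue names are placeholders.
import pymqi  # pip install pymqi (requires the IBM MQ client libraries)

queue_manager = "QM1"
channel = "DEV.APP.SVRCONN"
conn_info = "mq.example.com(1414)"  # hypothetical host and listener port
queue_name = "DEV.QUEUE.1"

qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(b"connectivity check from middleware engineering")
queue.close()
qmgr.disconnect()
```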
26/06/2025
Full time
Request Technology - Craig Johnson
Senior IBM MQ Middleware Engineer
Request Technology - Craig Johnson Dallas, Texas
*Position is bonus eligible* Prestigious Financial Institution is currently seeking a Senior IBM MQ Middleware Engineer. Candidate will be responsible for the end-to-end administration, maintenance, and support of our critical IBM MQ messaging infrastructure. This role involves ensuring the reliability, availability, performance, and security of the MQ environment, supporting business applications, and actively participating in strategic initiatives such as cloud migration, containerization, and system modernization. Responsibilities: Install, configure, and maintain IBM MQ software (Queue Managers, Clients) on RHEL. Perform version upgrades, apply fix packs, and manage patching cycles according to best practices and security requirements. Configure MQ objects, including Queue Managers, Queues (Local, Remote, Alias, Model), Channels, Listeners, Clusters, Topics, and Subscriptions. Monitor MQ system performance metrics, message throughput, and latency. Identify performance bottlenecks and implement tuning adjustments at the Queue Manager, channel, and queue levels. Analyze MQ logs and trace data to optimize configurations. Provide expert-level troubleshooting for MQ-related issues, including connectivity problems, message delivery failures, security errors, and performance degradation. Act as a primary point of contact for application teams regarding MQ connectivity and messaging issues. Respond to incidents, diagnose root causes, and implement corrective actions, participating in on-call rotation if applicable. Monitor resource utilization (CPU, memory, disk space, message depths) and forecast future capacity needs. Design and implement scalable MQ solutions, including clustering and distributed queuing, to meet growing business demands. Develop, implement, test, and maintain disaster recovery (DR) procedures for the MQ environment. Configure and manage high availability (HA) solutions, potentially including multi-instance Queue Managers or clustering. Participate in regular DR testing exercises. Implement and manage MQ security configurations, including TLS/SSL for channels, Channel Authentication Rules (CHLAUTH), and Object Authority Manager (OAM). Work with security teams to ensure compliance with security policies and standards. Manage certificates and keystores for secure communication. Interface with IBM support (raising PMRs/Cases) for problem resolution, technical guidance, and product information. Stay informed about IBM MQ product roadmaps, new features, and end-of-support timelines. Participate in the planning, design, and execution of migrating IBM MQ workloads to AWS. Contribute to initiatives involving the deployment and management of IBM MQ in containerized environments (eg, Docker, Kubernetes, OpenShift), utilizing MQ container images and operators. Actively participate in infrastructure modernization projects related to messaging. Develop and maintain scripts (eg, Shell, Python, Perl) or utilize automation tools (eg, Ansible) to streamline routine MQ administration tasks, deployments, and configuration management. Create and maintain comprehensive documentation for MQ architecture, configurations, standards, and operational procedures. Collaborate effectively with application developers, system administrators, network engineers, database administrators, and project managers. Qualifications: Strong understanding of core IBM MQ concepts (Queue Managers, Queues, Channels, Clustering, Publish/Subscribe, Security). 
Qualifications:
  • Strong understanding of core IBM MQ concepts (Queue Managers, Queues, Channels, Clustering, Publish/Subscribe, Security).
  • Proficiency in administering IBM MQ on Linux (RHEL).
  • Proven troubleshooting skills in diagnosing and resolving complex MQ issues.
  • Experience with scripting languages (e.g., Bash, Python, Perl) for automation (see the queue-depth monitoring sketch after this section).
  • Understanding of networking concepts (TCP/IP, DNS, Firewalls, load balancers) as they relate to MQ.
  • Excellent communication and interpersonal skills.
  • Ability to work independently and as part of a collaborative team.
  • Experience with IBM MQ on RHEL and with containers in the cloud.
  • Experience with advanced MQ features such as MQ Appliances, Advanced Message Security (AMS), and Managed File Transfer (MFT).
  • Experience migrating or managing IBM MQ in AWS.
  • Hands-on experience with containerization technologies (Docker, Kubernetes, OpenShift) and managing MQ in containers.
  • Experience with infrastructure-as-code (IaC) tools such as Ansible or Terraform.
  • Experience with enterprise monitoring tools (e.g., Instana, Dynatrace, Splunk, Nagios, Prometheus).
  • Familiarity with other messaging technologies such as Kafka.
  • Experience working in Agile/DevOps environments.
Technology environment: IBM MQ, Ansible, Terraform, Bash, Docker, Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, automated cloud provisioning.
Education and/or Experience: Bachelor's degree in Computer Science, Information Technology, or a related field, OR equivalent practical experience; 10+ years of hands-on experience administering IBM MQ in a complex enterprise environment.
Certificates or Licenses: AWS Solutions Architect, CKAD, or CKA certifications preferred.
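A minimal sketch of the scripting-plus-monitoring side of the role, assuming the third-party pymqi client library and hypothetical connection details (queue manager QM1, channel MON.SVRCONN, endpoint mqhost(1414)) and queue names; a production script would add TLS, credential handling, and export to a monitoring stack such as Prometheus:

#!/usr/bin/env python3
"""Illustrative sketch only: report current depth for a few queues using
the pymqi client library. Connection details and queue names are
hypothetical placeholders."""
import pymqi

QMGR = "QM1"                # hypothetical queue manager
CHANNEL = "MON.SVRCONN"     # hypothetical monitoring SVRCONN channel
CONN_INFO = "mqhost(1414)"  # hypothetical host(port)
QUEUES = ["APP.REQUEST.QUEUE", "APP.REPLY.QUEUE"]  # hypothetical queues

def report_depths() -> None:
    # Connect as an MQ client over the given channel and endpoint.
    qmgr = pymqi.connect(QMGR, CHANNEL, CONN_INFO)
    try:
        for name in QUEUES:
            # Open each queue for inquiry and read its current depth.
            queue = pymqi.Queue(qmgr, name, pymqi.CMQC.MQOO_INQUIRE)
            depth = queue.inquire(pymqi.CMQC.MQIA_CURRENT_Q_DEPTH)
            queue.close()
            print(f"{name}: current depth = {depth}")
    finally:
        qmgr.disconnect()

if __name__ == "__main__":
    report_depths()

The same check could equally be driven through runmqsc (DISPLAY QLOCAL('...') CURDEPTH) or an existing monitoring agent; the point is only that message-depth checks are straightforward to automate in the scripting languages listed above.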
26/06/2025
Full time
