Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*
A prestigious financial institution is seeking a strong Application Security Engineer. The candidate will work closely with other members of the Security Services and IT development teams to support application and software security initiatives, projects, and operations.

Responsibilities (Application Security/Secure SDLC):
* Build and optimize our security tooling stack, including SAST, DAST, SCA, and IaC scanning.
* Implement DevSecOps principles and integrate tools into CI/CD pipelines and developer workflows.
* Define and improve secure SDLC processes, designing and implementing a developer-friendly secure SDLC framework tailored to the delivery model.
* Automate security checks in CI/CD pipelines and developer tools to ensure continuous visibility and successful delivery.
* Build out the threat modelling and secure design review processes.
* Implement security for the software supply chain, AI/ML applications, open source, etc.
* Review testing reports and conduct security risk assessments of the vulnerabilities found.
* Use and maintain cloud and self-managed security scanning tools, and perform manual source code reviews and manual penetration assessments.
* Conduct IT/Security code review meetings to eliminate false positives and encourage collaboration between Security and IT development teams.
* Assist with application security vulnerability management, including implementation of new vulnerability management tools.
* Debrief users and provide a remediation strategy for findings.
* Ensure alignment of security controls in the testing program, supporting services, and related policies and procedures with applicable regulations and industry best practices.
* Perform ongoing reviews of application releases to ensure only secure and reviewed code is pushed to production, with automation tasks as necessary.
* Develop scripts/automation to help development teams interpret results from pipeline vulnerability verification reports and facilitate remediation.

Qualifications:
* Experience with CI/CD pipelines and software development/coding: Docker, Jenkins, GitHub, SVN, Terraform, and others.
* Exceptional analytical, problem-solving, and troubleshooting skills with the ability to exercise good judgment while developing creative solutions.
* Strong familiarity with enterprise technologies; strong technical background and understanding of security-related technologies; operational experience as an administrator, engineer, or developer and direct experience testing in commercial cloud environments (AWS, Azure, Google Cloud Platform; IaaS/PaaS/SaaS) preferred.
* Good working knowledge of policy and procedure development, systems analysis, Information Assurance (IA) policy, vulnerability management, and risk management.
* Good understanding of regulatory standards including CSF, NIST, PCI, SSAE 16, SAS 70, HIPAA, FIPS 199, COBIT 5, and others as needed.
* Strong knowledge of cryptography (symmetric, asymmetric, hashing) and its various applications.
* Strong knowledge of common enterprise infrastructure technology stacks and network configurations.
* Ability to understand and modify code in a diverse range of programming languages and frameworks; must have direct practical experience with one or more high-level programming languages.

Technical Skills:
* Deep knowledge of common web, API, and cloud vulnerabilities (e.g., OWASP Top 10, CWE, authentication flaws).
* Deep understanding of vulnerabilities, reachability, and exploitability, and how they affect applications.
* Familiarity with secure coding principles across multiple languages (e.g., Python, Java, JavaScript).
* Knowledge of how security fits into platform engineering and cloud-native stacks.
* Deep understanding of application-layer attacks and defense mechanisms (XSS, CSRF, SQLi, XXE, SSRF, broken access control, etc.).
* Familiarity with API security (REST and GraphQL), Postman, and the OWASP Top 10.
* Proficiency with artifact repositories and implementing security controls around component ingestion.
* Knowledge of shift-left strategies and embedding controls early in the development life cycle.
* Familiarity with Kubernetes security, container scanning, and cloud infrastructure as code.
* Ability to triage and prioritize vulnerabilities based on exploitability, impact, and business context.
* Strong proficiency in application security and vulnerability management.
* Strong experience with custom scripting (Python, C++, PowerShell, Bash, etc.) and process automation.
* Some proficiency with common penetration testing tools (Kali, Armitage, Metasploit, Cobalt Strike, Nmap, Qualys, Nessus, Burp Suite, Wireshark, etc.).
* Experience with mainframe, Windows, Unix, macOS, and Cisco platforms and controls.
* Experience with dedicated document management tools (e.g., DMS, PolicyTech) a plus.
* Familiarity with application frameworks and their built-in security services and APIs (e.g., Sun J2EE, MS .NET, OMG CORBA, Spring).
* Knowledge of security architecture design and principles, including confidentiality, integrity, and availability.
* Knowledge of automated code scanning tools and development pipeline tools.
* Understanding of security concepts and practices, including authentication, authorization, access control, and auditing, as well as best practices (e.g., OWASP).
* Familiarity with application authentication and authorization systems (e.g., CA SiteMinder, RSA SecurID/ACE, Active Directory, LDAP).
* Fundamental understanding of network and data communications technologies.
* Knowledge of cloud security concepts, best practices, and environments (AWS, Azure, Google Cloud Platform).
* Knowledge of Secure DevOps concepts.

Education and/or Experience:
* BS in Computer Science, Information Management, Information Security, or another comparable technical degree from an accredited college/university desired.
* 5+ years of experience in an Application Security or Information Security environment.
* Experience writing scripts and working with containers in a CI/CD pipeline.
* Exposure to security architecture design through application development, or knowledge of security concepts/best practices.

Certificates or Licenses:
* Security-related certifications (CISSP, CISA, CRISC, ISSAP, GSLC, OSCP, OSCE, GPEN, GXPN, etc.) highly desired.
* Professional network and/or security certifications a plus (e.g., GIAC, CISSP, CISA, CISM, CRISC).
* Cloud security automation certifications a plus (e.g., GCSA).
* Penetration testing certifications a plus (e.g., OSCP, GWAPT).
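By way of illustration of the scripting responsibility above (interpreting pipeline vulnerability reports and gating releases), here is a minimal Python sketch of a CI gate. It assumes a hypothetical scanner that writes a JSON report with a `findings` list containing `severity` and `title` fields; the file name and schema are placeholders, not tied to any specific tool.

```python
#!/usr/bin/env python3
"""Minimal CI gate: fail the pipeline when a scan report contains blocking findings.

Assumes a hypothetical JSON report of the form:
  {"findings": [{"title": "...", "severity": "high"}, ...]}
"""
import json
import sys

BLOCKING = {"critical", "high"}  # severities that should break the build


def main(report_path: str) -> int:
    with open(report_path, encoding="utf-8") as fh:
        report = json.load(fh)

    blocking = [f for f in report.get("findings", [])
                if f.get("severity", "").lower() in BLOCKING]

    for finding in blocking:
        print(f"[BLOCKER] {finding.get('severity')}: {finding.get('title')}")

    # A non-zero exit code makes Jenkins or GitHub Actions mark the stage as failed.
    return 1 if blocking else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```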
01/07/2025
Full time
NO SPONSORSHIP
Associate Principal, Security Engineering (Application Security)
Salary: $160k-$170k plus 15% bonus
Location: Dallas, TX (on site 3 days a week)
Focus: application security, web applications, network applications.

This position works closely with other members of the Security Services and IT development teams to support application and software security initiatives, projects, and operations. The candidate will perform network, application, and web application security work; create custom scripts and automation; perform security assessments of both legacy on-prem and cloud environments; and identify, document, and communicate vulnerabilities.

Primary Duties and Responsibilities (Application Security/Secure SDLC):
* Build and optimize our security tooling stack, including SAST, DAST, SCA, and IaC scanning.
* Implement DevSecOps principles and integrate tools into CI/CD pipelines and developer workflows.
* Define and improve secure SDLC processes, designing and implementing a developer-friendly secure SDLC framework tailored to the delivery model.
* Automate security checks in CI/CD pipelines and developer tools to ensure continuous visibility and successful delivery.
* Build out the threat modelling and secure design review processes.
* Implement security for the software supply chain, AI/ML applications, open source, etc.
* Review testing reports and conduct security risk assessments of the vulnerabilities found.
* Use and maintain cloud and self-managed security scanning tools, and perform manual source code reviews and manual penetration assessments.
* Conduct IT/Security code review meetings to eliminate false positives and encourage collaboration between Security and IT development teams.
* Assist with application security vulnerability management, including implementation of new vulnerability management tools.

Qualifications:
The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
* Experience with CI/CD pipelines and software development/coding: Docker, Jenkins, GitHub, SVN, Terraform, and others.
* Exceptional analytical, problem-solving, and troubleshooting skills with the ability to exercise good judgment while developing creative solutions.
* Strong familiarity with enterprise technologies; strong technical background and understanding of security-related technologies; operational experience as an administrator, engineer, or developer and direct experience testing in commercial cloud environments (AWS, Azure, GCP; IaaS/PaaS/SaaS) preferred.
* Good working knowledge of policy and procedure development, systems analysis, Information Assurance (IA) policy, vulnerability management, and risk management.
* Good understanding of regulatory standards including CSF, NIST, PCI, SSAE 16, SAS 70, HIPAA, FIPS 199, COBIT 5, and others as needed.
* Strong knowledge of cryptography (symmetric, asymmetric, hashing) and its various applications.
* Strong knowledge of common enterprise infrastructure technology stacks and network configurations.
* Ability to understand and modify code in a diverse range of programming languages and frameworks; must have direct practical experience with one or more high-level programming languages.

Technical Skills:
* Deep knowledge of common web, API, and cloud vulnerabilities (e.g., OWASP Top 10, CWE, authentication flaws).
* Deep understanding of vulnerabilities, reachability, and exploitability, and how they affect applications.
* Familiarity with secure coding principles across multiple languages (e.g., Python, Java, JavaScript).
* Knowledge of how security fits into platform engineering and cloud-native stacks.
* Deep understanding of application-layer attacks and defense mechanisms (XSS, CSRF, SQLi, XXE, SSRF, broken access control, etc.).
* Familiarity with API security (REST and GraphQL), Postman, and the OWASP Top 10.
* Proficiency with artifact repositories and implementing security controls around component ingestion.
* Knowledge of shift-left strategies and embedding controls early in the development life cycle.
* Familiarity with Kubernetes security, container scanning, and cloud infrastructure as code.
* Ability to triage and prioritize vulnerabilities based on exploitability, impact, and business context.
* Strong proficiency in application security and vulnerability management.
* Strong experience with custom scripting (Python, C++, PowerShell, Bash, etc.) and process automation.
* Some proficiency with common penetration testing tools (Kali, Armitage, Metasploit, Cobalt Strike, Nmap, Qualys, Nessus, Burp Suite, Wireshark, etc.).
* Experience with mainframe, Windows, Unix, macOS, and Cisco platforms and controls.
* Experience with dedicated document management tools (e.g., DMS, PolicyTech) a plus.
* Familiarity with application frameworks and their built-in security services and APIs (e.g., Sun J2EE, MS .NET, OMG CORBA, Spring).
* Knowledge of security architecture design and principles, including confidentiality, integrity, and availability.
* Knowledge of automated code scanning tools and development pipeline tools.
* Understanding of security concepts and practices, including authentication, authorization, access control, and auditing, as well as best practices (e.g., OWASP).
* Familiarity with application authentication and authorization systems (e.g., CA SiteMinder, RSA SecurID/ACE, Active Directory, LDAP).
* Fundamental understanding of network and data communications technologies.
* Knowledge of cloud security concepts, best practices, and environments (AWS, Azure, GCP).
* Knowledge of Secure DevOps concepts.

Education and/or Experience:
* BS in Computer Science, Information Management, Information Security, or another comparable technical degree from an accredited college/university desired.
* 5+ years of experience in an Application Security or Information Security environment.
* Experience writing scripts and working with containers in a CI/CD pipeline.
* Exposure to security architecture design through application development, or knowledge of security concepts/best practices.

Certificates or Licenses:
* Security-related certifications (CISSP, CISA, CRISC, ISSAP, GSLC, OSCP, OSCE, GPEN, GXPN, etc.) highly desired.
* Professional network and/or security certifications a plus (e.g., GIAC, CISSP, CISA, CISM, CRISC).
* Cloud security automation certifications a plus (e.g., GCSA).
* Penetration testing certifications a plus (e.g., OSCP, GWAPT).
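To illustrate the triage skill listed above (prioritizing vulnerabilities by exploitability, impact, and business context), here is a small, hypothetical Python scoring sketch. The weighting scheme, field names, and example findings are assumptions made for illustration, not an organizational standard.

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    name: str
    exploitability: float     # 0-1, e.g. informed by exploit availability or EPSS
    impact: float             # 0-1, technical impact if exploited
    asset_criticality: float  # 0-1, business context of the affected system


def priority(v: Vulnerability) -> float:
    # Hypothetical weighting: exploitability and business context dominate.
    return 0.4 * v.exploitability + 0.3 * v.impact + 0.3 * v.asset_criticality


backlog = [
    Vulnerability("SQL injection in payments API", 0.9, 0.9, 1.0),
    Vulnerability("Verbose error page on internal tool", 0.3, 0.2, 0.4),
    Vulnerability("Outdated library, no known exploit", 0.1, 0.6, 0.7),
]

# Highest-priority items first.
for v in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(v):.2f}  {v.name}")
```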
01/07/2025
Full time
Director, Software Engineering
Salary: Open + bonus
Location: Chicago, IL (hybrid: 3 days onsite, 2 days remote)
*We are unable to provide sponsorship for this position*

Qualifications:
* Bachelor's degree
* 8+ years of software development experience
* 8-10 years of experience building high-performance, large-scale data solutions
* 8+ years of solutions design and architecture experience
* Hands-on development experience with multiple programming languages such as Python and Java
* Experience with Big Data processing technologies and frameworks such as Presto, Hadoop, MapReduce, and Spark
* Hands-on experience designing and implementing RESTful APIs
* Knowledge and understanding of DevOps tools and technologies such as Terraform, Git, Jenkins, Docker, Harness, Nexus/Artifactory, and CI/CD pipelines
* Knowledge of SQL, data warehousing design concepts, various data management systems (structured and semi-structured), and integration with various database technologies (relational, NoSQL)
* Experience working with cloud ecosystems (AWS, Azure, GCP)
* Experience with stream processing technologies and frameworks such as Kafka, Spark Streaming, and Flink
* Familiarity with monitoring tools and frameworks such as Splunk, Elasticsearch, SignalFx, and AppDynamics

Responsibilities:
* Manage, lead, and mentor a software development team
* Serve as technical product owner, fleshing out detailed business, architectural, and design requirements
* Develop solutions to complex technical challenges while coding, testing, troubleshooting, and documenting the systems you and your team develop
* Recommend architectural changes and new technologies and tools that improve the efficiency and quality of company systems and development processes
* Lead efforts to optimize application performance and resilience through analysis, code refactoring, and systems tuning
* Collaborate with others to deliver complex projects involving integration with multiple systems
* Work closely with internal and external business and technology partners
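As a rough sketch of the kind of large-scale data processing referenced above (Spark over semi-structured data), the Python snippet below shows a minimal PySpark batch aggregation. The storage paths and column names are invented for illustration and do not reflect any actual system.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical batch job: count daily events per account from Parquet files.
spark = SparkSession.builder.appName("daily-event-volume").getOrCreate()

events = spark.read.parquet("s3a://example-bucket/events/")  # illustrative path

daily = (
    events
    .withColumn("event_date", F.to_date("event_time"))
    .groupBy("account_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/aggregates/daily_event_volume/"
)

spark.stop()
```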
01/07/2025
Full time
Performance/Failover Testing Engineer
Salary: Open + bonus
Location: Chicago, IL (hybrid: 3 days onsite, 2 days remote)
*We are unable to provide sponsorship for this role*

Qualifications:
* Bachelor's degree
* 2-3 years in performance/failover testing, architecture, or DevOps
* Experience in application, hardware, and network testing
* Experience with Kafka or other messaging systems
* Hands-on experience building test automation and failover test suites

Responsibilities:
* Analyze business requirements, understand end-user expectations, review with business analysts/SMEs, and design test scenarios.
* Work closely with development, QA, and operations teams to understand application architecture and performance requirements.
* Design and develop performance test scripts and scenarios using appropriate tools and frameworks.
* Monitor system behavior during performance, load, stress, and failover tests and ensure systems meet recovery time objectives.
* Conduct failover tests by simulating hardware, software, and network failures.
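To illustrate the performance-testing work described above, here is a minimal latency-sampling sketch using only the Python standard library. The target URL and sample count are placeholders; a real load test would normally use dedicated tooling and far higher concurrency.

```python
import statistics
import time
import urllib.request

TARGET = "http://localhost:8080/health"  # placeholder endpoint
SAMPLES = 200

latencies_ms = []
errors = 0

for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET, timeout=5) as resp:
            resp.read()
    except Exception:
        errors += 1
        continue
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"requests: {SAMPLES}, errors: {errors}")
if latencies_ms:
    latencies_ms.sort()
    p95 = latencies_ms[max(int(len(latencies_ms) * 0.95) - 1, 0)]
    print(f"median: {statistics.median(latencies_ms):.1f} ms, p95: {p95:.1f} ms")
```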
01/07/2025
Full time
As a Cloud Infrastructure Engineer, you manage the cloud workloads and understand the needs of the DevOps teams like no other. You are jointly responsible for the implementation of our cloud infrastructure and you build the fundamental components that allow the DevOps teams to work quickly. Your professional expertise supports the DevOps teams: you help with configuration and with choosing the right Azure resources, and you provide guidance in migrating to Azure and in building and managing infrastructure. You therefore work closely with the various development teams, the DevOps System team, and the group cloud team. In short, you contribute to the expansion of our Azure catalogue of standard offerings. You act as a change agent who further shapes the maturity of our current cloud environment and defines the technical direction. You drive the acceptance and awareness of continuous delivery and evangelize the benefits of CI/CD together with the DevOps System team lead.

Infrastructure knowledge & experience:
* Deep knowledge and hands-on experience in Azure cloud engineering and infrastructure design
* Experience with Infrastructure as Code, with a focus on Terraform
* Knowledge of Azure resources and networking: firewalls (WAF), VNets, subnets, service endpoints and private endpoints, private DNS zones, and hub-and-spoke architecture
* Knowledge of Azure Container Apps, including scalability, Dapr integration, and microservices
* Experience with Azure Functions (serverless) and Azure App Services, including deployment slots, autoscaling, and App Service Environment (ASE)
* Experience in designing and building SaaS/PaaS cloud components
* Experience working in an enterprise environment
* Able to quickly gain insight into a complex landscape
* Not afraid to simplify aggressively where possible or necessary

The Cloud Engineer we are looking for holds at least one Microsoft certification; extras are certainly welcome. Bonus points for being Azure certified (or willing to obtain certification). Certifications in scope: AZ-104, AZ-400, AZ-500, AZ-700, SC-300. Knowledge and/or experience with DevOps and CI/CD tools is a plus.
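As a small illustration of working with Azure resources programmatically (complementing, not replacing, the Terraform-based Infrastructure as Code mentioned above), the sketch below lists resource groups in a subscription. It assumes the azure-identity and azure-mgmt-resource Python packages and a subscription ID supplied via an environment variable; the tag name is a placeholder, and the whole snippet is a sketch rather than a reference implementation.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# DefaultAzureCredential works with az login, managed identity, or a service principal.
credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed to be set

client = ResourceManagementClient(credential, subscription_id)

# Print each resource group with its location and an (assumed) "environment" tag.
for rg in client.resource_groups.list():
    env_tag = (rg.tags or {}).get("environment", "<untagged>")
    print(f"{rg.name:40s} {rg.location:15s} environment={env_tag}")
```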
01/07/2025
Project-based
You and your Job
As a DevOps Engineer in our team, you will design, build, test, and maintain automation solutions that improve our Oracle database infrastructure's security and compliance. You'll collaborate with colleagues across DevOps teams, always seeking smarter ways to work.

Your main responsibilities are:
* Engineer solutions to secure the infrastructure, ensure compliance with regulatory requirements, and integrate these solutions into the Rabobank infrastructure.
* Ensure high quality, continuity, security, and stability of our environment.
* Collaborate closely with other DevOps teams and the Product Owner.
* Act as an enthusiastic and inspirational team member, helping and coaching the squad to improve their development and Ansible skills.

What You Bring
You are a developer with strong expertise in Ansible and a passion for automation. You are skilled in developing and managing workflows for the provisioning, configuration, and monitoring of Linux-based infrastructures. Ideally, you also have experience with Python. You bring:
* Extensive knowledge of automating deployments and of developing and testing code.
* Enthusiasm for automation tools and technologies, such as Azure DevOps, APIs, and REST interfaces, even when it comes to automating smaller, simpler tasks.
* Willingness to share your knowledge and experience.
* The ability to keep the big picture in mind while diving into the details to solve complex integration challenges.
* A collaborative spirit, open to feedback, and focused on continuous improvement.
* Proficiency in English (B2 level).

Must-have skills:
* Automation experience in Ansible, including use of tools like VS Code
* Experience working with LDAP, EUS, and preferably OUD or OID
* In-depth experience with data-at-rest protection (e.g., Transparent Data Encryption (TDE)), data-in-transit protection (TLS), and message security
* Oracle Linux knowledge

Nice-to-have or willing to learn:
* Knowledge and experience of Oracle FMW and WebLogic
* Experience coding in Python
* SCIM and IAM knowledge
* DBA practices and procedures
* Knowledge of OEM, Splunk, and CyberArk

How You Work:
* You have an agile, experimental mindset, always looking for better ways to build and improve.
* You're familiar with the ITIL framework.
* Risk, security, and compliance are second nature to you.
* You believe in clear documentation and knowledge sharing.
* You can translate requirements into smart, scalable designs.
* You write and automate tests to guarantee quality.

Your Mindset:
* You take ownership and aim for results.
* You have effective communication and collaboration skills.
* You are a true team player, comfortable with DevOps and Scrum ways of working.

What We Offer:
* Work in a highly skilled and motivated DevOps Oracle team.
* A top-tier and interesting technical environment.
* A dynamic organization undergoing rapid changes.
* Flexible working hours and the possibility to work from home.
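As a standard-library-only illustration of the data-in-transit (TLS) focus above, the Python sketch below checks how many days remain before a server certificate expires. The host and port are placeholders (an Oracle listener configured for TCPS would typically use a different port), so treat this as a sketch of the idea rather than an operational check.

```python
import datetime
import socket
import ssl

HOST = "db.example.internal"  # placeholder host name
PORT = 443                    # placeholder TLS port

context = ssl.create_default_context()

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# "notAfter" is a string like "Jun 26 21:41:46 2026 GMT"; convert to a datetime.
not_after = datetime.datetime.fromtimestamp(
    ssl.cert_time_to_seconds(cert["notAfter"]), tz=datetime.timezone.utc
)
days_left = (not_after - datetime.datetime.now(tz=datetime.timezone.utc)).days
print(f"{HOST}:{PORT} certificate expires {not_after:%Y-%m-%d} ({days_left} days left)")
```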
01/07/2025
Project-based
Senior Golang Engineer
An exciting opportunity for Senior Go Engineers to join a Cloud Platform team focused on delivering and scaling a Cassandra-as-a-Service (KaaS) solution. You'll own and evolve internal APIs that automate and secure data infrastructure across critical systems.

The Role
We're looking for a hands-on Go expert with solid experience in Kubernetes (OpenShift 4) and DevOps automation. This is a greenfield opportunity to build and maintain the API platform powering Cassandra provisioning.

What You'll Do
* Design, build, and maintain APIs in Go for managing Cassandra clusters
* Automate datastore operations (setup, scaling, patching, upgrades)
* Build and manage CI/CD pipelines using Azure DevOps and Ansible
* Write clear technical documentation and contribute to platform security
* Collaborate closely with platform, infra, and support teams
* Participate in Agile development with a strong DevOps mindset

Must-Haves
* 3+ years' experience in Go (Golang)
* Strong production knowledge of OpenShift 4/Kubernetes/Helm
* Expertise in CI/CD tooling, especially Azure DevOps and Ansible
* Familiarity with software security (e.g., OWASP Top 10)
* Excellent collaboration and communication skills
* Experience in complex, high-performance IT environments

Nice to Have
* Linux (RHEL 8/9)
* Background in PaaS or DBaaS environments
01/07/2025
Project-based
Role Overview
As Head of AI, you will be the primary technical driver of all AI/ML initiatives. You'll report directly to the CEO/CTO and own the full life cycle of our AI roadmap, from research and proof-of-concept to scalable production. We're looking for a doer who can rapidly prototype models, optimize for performance, and mentor junior engineers, all while helping define product strategy.

In this role, you will:
* Lead AI strategy and execution in a high-ambiguity environment.
* Build, train, and deploy state-of-the-art models (e.g., deep learning, NLP, computer vision, reinforcement learning, or relevant domain-specific architectures).
* Design infrastructure for data ingestion, annotation, experimentation, model versioning, and monitoring.
* Collaborate closely with product, design, and DevOps to integrate AI features into our platform.
* Continuously evaluate new research, open-source tools, and emerging frameworks to keep us at the forefront.
* Recruit, mentor, and grow an AI/ML team as we scale beyond our seed round.

Key Responsibilities
1. Architecture & Hands-On Development: Define and implement end-to-end AI pipelines: data collection/cleaning, feature engineering, model training, validation, and inference. Rapidly prototype novel models (e.g., neural networks, probabilistic models) using PyTorch, TensorFlow, JAX, or equivalent. Productionize models in cloud/on-prem environments (AWS/GCP/Azure) with containerization (Docker/Kubernetes) and ensure low-latency, high-availability inference.
2. Strategic Leadership: Develop a multi-quarter AI roadmap aligned with product and fundraising milestones. Identify and evaluate opportunities for AI-driven competitive advantages (e.g., proprietary data, unique model architectures, transfer/few-shot learning). Collaborate with business stakeholders to translate big problems into technically feasible AI solutions.
3. Data & Infrastructure: Oversee the creation and maintenance of scalable data pipelines (ETL/ELT) and data lakes/warehouses. Establish best practices for data labeling, versioning, and governance to ensure high data quality. Implement MLOps processes: CI/CD for model training, automated testing, model-drift detection, and continuous monitoring.
4. Team Building & Mentorship: Hire and mentor AI/ML engineers, data scientists, and research interns. Set coding standards, model-development guidelines, and rigor around reproducible experiments (e.g., a clear Git workflow and experiment tracking). Conduct regular code/model reviews and foster a culture of learning by doing and iterative improvement.
5. Research & Innovation: Stay abreast of state-of-the-art AI research (e.g., pre-training, fine-tuning, generative methods) and evaluate its applicability. Publish or present whitepapers/prototype demos where appropriate (keeping stealth constraints in mind). Forge partnerships with academic labs or open-source communities to accelerate innovation.

Minimum Qualifications
Experience (7+ years total; 3+ years in a senior/lead role):
* Demonstrated track record of shipping AI/ML products end-to-end (from prototype to production).
* Hands-on expertise building and deploying deep learning models (e.g., CNNs, Transformers, graph neural networks) in real-world applications.
* Proficiency in Python and core ML libraries (PyTorch, TensorFlow, scikit-learn, Hugging Face, etc.).
* Strong software engineering background: data structures, algorithms, distributed systems, and version control (Git).
* Experience designing scalable ML infrastructure on cloud platforms (AWS SageMaker, GCP AI Platform, Azure ML, or equivalent).
* Solid understanding of data-engineering concepts: SQL/NoSQL, data pipelines (Airflow, Prefect, or similar), and batch/streaming frameworks (Spark, Kafka).

Leadership & Communication:
* Proven ability to lead cross-functional teams in ambiguous startup settings.
* Exceptional written and verbal communication skills; able to explain complex concepts to both technical and non-technical stakeholders.
* Experience recruiting and mentoring engineers or data scientists in a fast-paced environment.

Education:
* Bachelor's or Master's in Computer Science, AI/ML, Electrical Engineering, Statistics, or a related field. (A Ph.D. in AI/ML is a plus but not required if hands-on experience is extensive.)

Preferred (Nice-to-Have)
* Prior experience in a stealth-mode or early-stage startup, ideally taking an AI product from 0 to 1.
* Background in a relevant domain (e.g., healthcare AI, autonomous systems, finance, robotics, computer vision, or NLP).
* Hands-on experience with large language models (LLMs) and prompt engineering (e.g., the GPT, BERT, T5 families).
* Familiarity with on-device or edge-AI deployments (e.g., TensorFlow Lite, ONNX, mobile/embedded inference).
* Knowledge of MLOps tooling (MLflow, Weights & Biases, Kubeflow, or similar) for experiment tracking and model governance.
* Open-source contributions or published papers in top-tier AI/ML conferences (NeurIPS, ICML, CVPR, ACL, etc.).

Soft Skills & Cultural Fit
* Doer Mindset: You thrive in scrappy, ambiguous environments. You'll roll up your sleeves to write production code, prototype research ideas, and iterate quickly.
* Bias for Action: You favor shipping an MVP quickly, measuring impact, and iterating over striving for perfect academic proofs that never see production.
* Ownership Mentality: You treat the startup as your own; you take responsibility for system uptime, data integrity, and feature adoption, not just model accuracy.
* Collaborative Attitude: You value cross-functional teamwork and can pivot between researcher mode and software engineer mode depending on the task at hand.
* Growth-Oriented: You continually learn new algorithms, architectures, and engineering best practices, and you encourage team members to do the same.

What We Offer
* Equity Package: Meaningful ownership stake, commensurate with an early leadership role.
* Competitive Compensation: Salary aligned with early-stage startup benchmarks; a large portion of the upside is in equity.
* Autonomy & Impact: You'll shape the technical direction of our AI stack and lay the groundwork for a market-leading product.
* Flexible Work Environment: Remote-friendly with occasional in-person retreats or team meetups.
* Learning Budget: Funds for conferences, courses, or publications to ensure you stay at the bleeding edge.
* Fast-Track Growth: As our first AI hire and eventual team leader, you'll rapidly expand your responsibilities, and the team you build, within months.

How to Apply
Please send your resume/CV and a brief cover letter with the subject line: Head of AI Application - [Your Name]. In your cover letter, highlight:
1. A recent project where you built and deployed an AI/ML system end-to-end (include the technical stack and impact).
2. Any leadership or mentoring experience guiding other engineers or data scientists.
3. Why you're excited to join a stealth startup and move quickly from prototype to production.
We will review applications on a rolling basis and aim to schedule initial calls within two weeks of receipt. Equal Opportunity: We are committed to building a diverse team and welcome applicants of all backgrounds. We celebrate differences and encourage individuals who thrive in a fast-paced, collaborative, and impact-driven culture to apply. Ready to build world-class AI from day one? Come join us and help shape the future.
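Since the role above calls for rapid hands-on prototyping in PyTorch or similar frameworks, here is a deliberately minimal training-loop sketch on synthetic data, indicating the expected level of fluency; the model, data, and hyperparameters are invented for illustration only.

```python
import torch
from torch import nn

# Synthetic regression data: y = 3x + 0.5 plus noise.
torch.manual_seed(0)
x = torch.randn(256, 1)
y = 3.0 * x + 0.5 + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  loss {loss.item():.4f}")
```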
01/07/2025
Full time
Role Overview As Head of AI, you will be the primary technical driver of all AI/ML initiatives. You'll report directly to the CEO/CTO and own the full life cycle of our AI roadmap-from research and proof-of-concept to scalable production. We're looking for a doer who can rapidly prototype models, optimize for performance, and mentor junior engineers, all while helping define product strategy. In this role, you will: Lead AI strategy and execution in a high-ambiguity environment. Build, train, and deploy state-of-the-art models (eg, deep learning, NLP, computer vision, reinforcement learning, or relevant domain-specific architectures). Design infrastructure for data ingestion, annotation, experimentation, model versioning, and monitoring. Collaborate closely with product, design, and DevOps to integrate AI features into our platform. Continuously evaluate new research, open-source tools, and emerging frameworks to keep us at the forefront. Recruit, mentor, and grow an AI/ML team as we scale beyond our seed round. Key Responsibilities 1. Architecture & Hands-On Development Define and implement end-to-end AI pipelines: data collection/cleaning, feature engineering, model training, validation, and inference. Rapidly prototype novel models (eg, neural networks, probabilistic models) using PyTorch, TensorFlow, JAX, or equivalent. Productionize models in cloud/on-prem environments (AWS/GCP/Azure) with containerization (Docker/Kubernetes) and ensure low-latency, high-availability inference. 2. Strategic Leadership Develop a multi-quarter AI roadmap aligned with product milestones and fundraising milestones. Identify and evaluate opportunities for AI-driven competitive advantages (eg, proprietary data, unique model architectures, transfer/few-shot learning). Collaborate with business stakeholders to translate big problems into technically feasible AI solutions. 3. Data & Infrastructure Oversee the creation and maintenance of scalable data pipelines (ETL/ELT) and data lakes/warehouses. Establish best practices for data labeling, versioning, and governance to ensure high data quality. Implement ML Ops processes: CI/CD for model training, automated testing, model-drift detection, and continuous monitoring. 4. Team Building & Mentorship Hire and mentor AI/ML engineers, data scientists, and research interns. Set coding standards, model-development guidelines, and rigor around reproducible experiments (eg, clear Git workflow, experiment tracking). Conduct regular code/model reviews and foster a culture of learn by doing and iterative improvement. 5. Research & Innovation Stay abreast of state-of-the-art AI research (eg, pre-training, fine-tuning, generative methods) and evaluate applicability. Publish or present whitepapers/prototype demos if appropriate (keeping stealth constraints in mind). Forge partnerships with academic labs or open-source communities to accelerate innovation. Minimum Qualifications Experience (7 + years total; 3 + years in senior/lead role): Demonstrated track record of shipping AI/ML products end-to-end (from prototype to production). Hands-on expertise building and deploying deep learning models (eg, CNNs, Transformers, graph neural networks) in real-world applications. Proficiency in Python and core ML libraries (PyTorch, TensorFlow, scikit-learn, Hugging Face, etc.). Strong software engineering background: data structures, algorithms, distributed systems, and version control (Git). 
Experience designing scalable ML infrastructure on cloud platforms (AWS SageMaker, GCP AI Platform, Azure ML, or equivalent). Solid understanding of data-engineering concepts: SQL/noSQL, data pipelines (Airflow, Prefect, or similar), and batch/streaming frameworks (Spark, Kafka). Leadership & Communication: Proven ability to lead cross-functional teams in ambiguous startup settings. Exceptional written and verbal communication skills-able to explain complex concepts to both technical and non-technical stakeholders. Experience recruiting and mentoring engineers or data scientists in a fast-paced environment. Education: Bachelor's or Master's in Computer Science, AI/ML, Electrical Engineering, Statistics, or a related field. (Ph.D. in AI/ML is a plus but not required if hands-on experience is extensive.) Preferred (Nice-to-Have) Prior experience in a stealth-mode or early-stage startup, ideally taking an AI product from 0 - 1. Background in a relevant domain (eg, healthcare AI, autonomous systems, finance, robotics, computer vision, or NLP). Hands-on experience with large-scale language models (LLMs) and prompt engineering (eg, GPT, BERT, T5 family). Familiarity with on-device or edge-AI deployments (eg, TensorFlow Lite, ONNX, mobile/Embedded inference). Knowledge of MLOps tooling (MLflow, Weights & Biases, Kubeflow, or similar) for experiment tracking and model governance. Open-source contributions or published papers in top-tier AI/ML conferences (NeurIPS, ICML, CVPR, ACL, etc.). Soft Skills & Cultural Fit Doer Mindset: You thrive in scrappy, ambiguous environments. You'll roll up your sleeves to write production code, prototype research ideas, and iterate quickly. Bias for Action: You favor shipping an MVP quickly, measuring impact, and iterating-over striving for perfect academic proofs that never see production. Ownership Mentality: You treat the startup as your own: you take responsibility for system uptime, data integrity, and feature adoption, not just model accuracy. Collaborative Attitude: You value cross-functional teamwork and can pivot between researcher mode and software engineer mode depending on the task at hand. Growth-Oriented: You continually learn new algorithms, architectures, and engineering best practices; you encourage team members to do the same. What We Offer Equity Package: Meaningful ownership stake, commensurate with an early leadership role. Competitive Compensation: Salary aligned with early-stage startup benchmarks; a large portion of the upside is in equity. Autonomy & Impact: You'll shape the technical direction of our AI stack and lay the groundwork for a market-leading product. Flexible Work Environment: Remote-friendly with occasional in-person retreats or team meetups. Learning Budget: Funds for conferences, courses, or publications to ensure you stay at the bleeding edge. Fast-Track Growth: As our first AI hire and eventual team leader, you'll rapidly expand your responsibilities-and the team you build-within months. How to Apply Please send your resume/CV and a brief cover letter with the subject line: Head of AI Application - [Your Name] In your cover letter, highlight: 1. A recent project where you built and deployed an AI/ML system end-to-end (include technical stack and impact). 2. Any leadership or mentoring experience guiding other engineers or data scientists. 3. Why you're excited to join a stealth startup and move quickly from prototype to production. 
We will review applications on a rolling basis and aim to schedule initial calls within two weeks of receipt. Equal Opportunity: We are committed to building a diverse team and welcome applicants of all backgrounds. We celebrate differences and encourage individuals who thrive in a fast-paced, collaborative, and impact-driven culture to apply. Ready to build world-class AI from day one? Come join us and help shape the future.
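The prototyping sketch referenced above: a minimal train-and-checkpoint loop in PyTorch. The toy dataset, model size, and file name are assumptions chosen purely for illustration, not details from the listing.

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Hypothetical toy data standing in for a real ingestion/annotation pipeline.
    X = torch.randn(1024, 32)
    y = (X.sum(dim=1) > 0).long()
    loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

    # Small feed-forward prototype; a production model would be swapped in here.
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch} loss {loss.item():.4f}")

    # Version the trained weights so experiments stay reproducible.
    torch.save(model.state_dict(), "model_v1.pt")

A real pipeline would replace the toy tensors with the ingestion and annotation stack described in the listing and log each run to an experiment tracker.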
We are looking for a DevOps Engineer to work on a client project in Strasbourg. Location: 80% on-site work in Strasbourg, 20% off-site. Start date: Immediately Mission duration: 1 year, renewable Preliminary Requirements: Candidates must be citizens of an EU member state (EU nationality) and must be able to provide an extract of their criminal record. Role description: In charge of the deployment of an institutional application in the customer datacenter. Task description: Importing source code from external providers and verifying that it is correctly pushed to GitHub/Jenkins, using Ansible Tower as the orchestrator. Building the application (as microservices) and verifying correct pipeline execution. Checking SonarQube and Fortify SCA results and the built images. Generating the deployment package using the orchestrator. Importing the deployment package using the orchestrator and verifying the pushed Helm charts, containers, etc. Executing the deployment on the target environment using the orchestrator. Verifying the deployment before the testing phase. Participating in the improvement of deployment processes. Job requirements: University degree (master or equivalent) in Computer Science/Security & Resource Management; Minimum 5 years of professional IT experience; Strong knowledge of infrastructure, specifically OpenShift, Kubernetes, and Docker; Proven and excellent experience with, at least, the following technologies: GitHub, Jenkins, Maven, Buildah, Ansible Tower, JFrog Artifactory, SonarQube, Fortify SCA, Helm charts, ArgoCD; CI/CD experience; Service Mesh (Istio), Kafka, Linux; Good troubleshooting skills; Very good English speaking & writing skills; Experience with, and willingness to work in, an international/multicultural environment. infeurope is a Luxembourg-based IT service provider, designing, developing and managing multilingual information and documentary systems in many application areas and business sectors. For more than 40 years we have delivered IT systems and solutions.
01/07/2025
Project-based
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Director of Software Development with strong Java and Kafka experience. Candidate will be responsible for leading a team of skilled software engineers designing and delivering scalable and resilient hybrid and Cloud-based applications and data solutions supporting critical financial market clearing and risk activities; helping to drive the strategy of transforming the enterprise into a data-driven organization; and leading through innovative strategic thinking in building data solutions. Responsibilities: Manage, lead, and mentor the software development team. Serve as technical product owner, fleshing out detailed business, architectural, and design requirements. Develop solutions to complex technical challenges while coding, testing, troubleshooting and documenting the systems you and your team develop. Recommend architectural changes and new technologies and tools that improve the efficiency and quality of OCC's systems and development processes. Lead the efforts to optimize application performance and resilience through analysis, code refactoring, and systems tuning. Collaborate with others to deliver complex projects involving integration with multiple systems. Work closely with internal and external business and technology partners. Build and manage a team of skilled software engineers. Qualifications: 8+ years of experience leading software development teams Experience with Java Experience with distributed stream-processing technologies and message brokers like Flink, Spark, Kafka Streams, etc. Experience with Agile development processes for enterprise software solutions Experience with software testing methodologies and automated testing frameworks Strong leadership skills Ability to manage project teams with different timelines and focus Knowledge of industry trends, best practices, and change management Strong communication skills with the ability to communicate and interact with engineers and business stakeholders Team player, self-driven, motivated, and able to work under pressure Technical Skills: 8-10 years of experience in building high performance, large scale data solutions Experience managing a team of professionals to drive their work, providing mentoring for growth, and delivering constructive feedback or course correction where necessary 8+ years of solutions design and architecture experience Hands-on development experience with multiple programming languages such as Python and Java Experience with Big Data processing technologies and frameworks such as Presto, Hadoop, MapReduce, and Spark Hands-on experience designing and implementing RESTful APIs Knowledge and understanding of DevOps tools and technologies such as Terraform, Git, Jenkins, Docker, Harness, Nexus/Artifactory, and CI/CD pipelines Knowledge of SQL, data warehousing design concepts, various data management systems (structured and semi-structured) and integrating with various database technologies (Relational, NoSQL) Experience working with Cloud ecosystems (AWS, Azure, Google Cloud Platform) Experience with stream processing technologies and frameworks such as Kafka, Spark Streaming, and Flink (see the sketch following this listing) Familiarity with monitoring related tools and frameworks like Splunk, Elasticsearch, SignalFX, and AppDynamics Good understanding of data integration patterns, technologies, and tools Education/Certification: BS degree in Computer Science, similar technical field, or equivalent practical experience.
Master's degree preferred. OCP Java Programmer Certification (preferred). AWS Certified Solutions Architect (preferred).
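The sketch referenced above: a minimal Kafka produce/consume round trip using the kafka-python client. The broker address, topic name, and payload fields are illustrative assumptions, not details from the posting.

    import json
    from kafka import KafkaProducer, KafkaConsumer

    BROKER = "localhost:9092"   # hypothetical broker
    TOPIC = "trade-events"      # hypothetical topic

    # Publish a small batch of JSON events.
    producer = KafkaProducer(bootstrap_servers=BROKER,
                             value_serializer=lambda v: json.dumps(v).encode("utf-8"))
    for i in range(3):
        producer.send(TOPIC, {"trade_id": i, "qty": 100 + i})
    producer.flush()

    # Read them back from the beginning of the topic.
    consumer = KafkaConsumer(TOPIC,
                             bootstrap_servers=BROKER,
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=5000,
                             value_deserializer=lambda v: json.loads(v.decode("utf-8")))
    for record in consumer:
        print(record.value)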
30/06/2025
Full time
Lead Middleware Kafka Administration/DevOps/IaC The key to this role is CURRENT Kafka administration (minimum eight years) and five years in Terraform, Ansible, and Infrastructure as Code (IaC). Nice to haves: Kubernetes, Rancher, GitHub, Artifactory. LOCATION: Dallas, TX Hybrid 3 days onsite Open to H-1B Looking for 10 years of Kafka administration, infrastructure as code, cloud automation, container orchestration, and CI/CD pipelines: Kafka, Ansible, Terraform, Bash, Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, automated cloud provisioning, cluster management, performance tuning, and security. We are seeking a highly skilled and experienced Infrastructure Middleware Engineer with deep expertise in Kafka administration, infrastructure as code (IaC), cloud automation, container orchestration and CI/CD pipelines. The ideal candidate will be responsible for designing, implementing, and maintaining robust and scalable middleware solutions, ensuring high availability, performance, and security. Primary Duties and Responsibilities: To perform this job successfully, an individual must be able to perform each primary duty satisfactorily. Design, implement and manage highly available and scalable Kafka clusters. Monitor Kafka performance, troubleshoot issues and optimize configurations. Develop and maintain IaC using Ansible and Terraform for infrastructure provisioning and configuration management. Create and maintain reusable IaC modules. Design and implement cloud-based infrastructure solutions on AWS and Azure. Automate cloud resource provisioning, scaling and management using cloud-native tools and services. Deploy and manage containerized applications using Kubernetes and Rancher. Troubleshoot container-related issues and optimize container performance. Design, implement and maintain CI/CD pipelines using tools like GitHub, Artifactory, Harness and Jenkins. Automate the build, test and deployment of middleware components. Integrate IaC and container technologies into CI/CD pipelines. Document all processes and procedures. Work with development teams to ensure smooth deployments. Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions. Strong proficiency in IaC tools, specifically Ansible, Terraform and Bash scripting. Extensive experience with cloud automation and provisioning on AWS and Azure. Proficiency in CI/CD tools, including GitHub, Artifactory, Harness and Jenkins. Strong scripting skills in languages like Python and Bash. Excellent troubleshooting and problem-solving skills. Understanding of networking principles. Experience with monitoring tools like Splunk, Splunk OTEL, Prometheus and Grafana. Technical Skills: Kafka, Ansible, Terraform, Bash, Kubernetes, Rancher, GitHub, Artifactory, Harness, Jenkins, AWS, Azure, CI/CD, IaC, Automated Cloud Provisioning Education and/or Experience: Bachelor's degree in Computer Science, Engineering or a related field (or equivalent experience) 10+ years of experience in infrastructure middleware administration. In-depth expertise in Kafka administration, including cluster management, performance tuning, and security. Certificates or Licenses: AWS Solutions Architect, CKAD or CKA certifications preferred.
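As a small, hedged illustration of the routine cluster administration this role describes, the sketch below creates a topic with an explicit partition and replication layout via kafka-python's admin client. The broker address and topic settings are placeholders; in this environment such changes would normally flow through reviewed Ansible/Terraform definitions rather than ad hoc scripts.

    from kafka.admin import KafkaAdminClient, NewTopic
    from kafka.errors import TopicAlreadyExistsError

    BROKER = "kafka-1.example.org:9092"   # hypothetical bootstrap broker

    admin = KafkaAdminClient(bootstrap_servers=BROKER, client_id="ops-automation")

    # Create a topic with an explicit partition count and replication factor;
    # these values are placeholders and would come from a reviewed IaC definition.
    topic = NewTopic(name="payments.events", num_partitions=12, replication_factor=3)
    try:
        admin.create_topics([topic])
        print("topic created")
    except TopicAlreadyExistsError:
        print("topic already exists, nothing to do")

    # Quick sanity check that the cluster sees the topic.
    print(sorted(admin.list_topics()))
    admin.close()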
30/06/2025
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Director, Java Software Engineering. This Director will lead a software development team working with Java, Python, Flink, Spark, Kafka, big data processing, DevOps tools, data warehousing/management, etc. Responsibilities: Manage, lead, build, and mentor the software development team. Serve as technical product owner, fleshing out detailed business, architectural, and design requirements. Develop solutions to complex technical challenges while coding, testing, troubleshooting and documenting the systems you and your team develop. Recommend architectural changes and new technologies and tools that improve the efficiency and quality of company systems and development processes. Lead the efforts to optimize application performance and resilience through analysis, code refactoring, and systems tuning. Qualifications: BS degree in Computer Science, similar technical field, or equivalent practical experience. Master's degree preferred 8-10 years of experience in building high performance, large scale data solutions Hands-on development experience with multiple programming languages such as Python and Java Experience with distributed stream-processing technologies and message brokers like Flink, Spark, Kafka Streams, etc. Experience with Agile development processes for enterprise software solutions Experience with software testing methodologies and automated testing frameworks Experience with Big Data processing technologies and frameworks such as Presto, Hadoop, MapReduce, and Spark Hands-on experience designing and implementing RESTful APIs Knowledge and understanding of DevOps tools and technologies such as Terraform, Git, Jenkins, Docker, Harness, Nexus/Artifactory, and CI/CD pipelines Knowledge of SQL, data warehousing design concepts, various data management systems (structured and semi-structured) and integrating with various database technologies (Relational, NoSQL) Experience working with Cloud ecosystems (AWS, Azure, GCP) Experience with stream processing technologies and frameworks such as Kafka, Spark Streaming, and Flink (see the sketch below) Experience with cloud technologies and migrations to a public cloud vendor, preferably using foundational cloud services such as AWS VPCs, security groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI, and IAM Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, and Apache Flink Experience working with various types of databases like Relational, NoSQL, Object-based, and Graph Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines Familiarity with monitoring related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics
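One possible shape of the Kafka-plus-Spark-Streaming pipelines this posting refers to, sketched with PySpark Structured Streaming. The broker, topic, and checkpoint path are assumptions, and the job would need the Spark Kafka connector package available on the classpath.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, count, window

    spark = (SparkSession.builder
             .appName("clearing-event-counts")     # hypothetical job name
             .getOrCreate())

    # Read a stream of events from Kafka (broker/topic are placeholders).
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "clearing-events")
              .load())

    # Count events per 1-minute window keyed on the Kafka message key.
    counts = (events
              .withColumn("key", col("key").cast("string"))
              .groupBy(window(col("timestamp"), "1 minute"), col("key"))
              .agg(count("*").alias("events")))

    # Write running aggregates to the console; a real job would target a sink
    # such as a warehouse table or another topic, with a durable checkpoint.
    query = (counts.writeStream
             .outputMode("update")
             .format("console")
             .option("checkpointLocation", "/tmp/checkpoints/clearing-event-counts")
             .start())
    query.awaitTermination()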
30/06/2025
Full time
The Opportunity A global financial services firm is seeking an experienced Python Database Engineer to play a key role in shaping and delivering next-generation data infrastructure. This position focuses on building and supporting a highly scalable, resilient OLTP platform built on a SQL architecture. You'll be part of a global team driving automation, security, and scalability across cloud-based containerised environments. This is a unique opportunity to work in a modern engineering culture that values clean code, DevOps principles, and deep technical expertise. Key Responsibilities * Design and deploy secure, compliant infrastructure in an internal cloud environment. * Contribute to the architecture, development, and roll-out of a SQL platform in a DBaaS model. * Collaborate closely with InfoSec to integrate required access and compliance controls. * Drive automation of infrastructure and containerised services (Kubernetes). * Benchmark and run proof-of-concept evaluations of database technologies. * Document and optimise operational processes around the PostgreSQL platform. * Deliver production-ready solutions using modern CI/CD and infrastructure-as-code practices. Required Skills & Experience * Strong Python development skills (must-have). * Advanced knowledge of PostgreSQL and ANSI SQL. * Deep understanding of Kubernetes for container orchestration and managing stateful services. * Proficiency in Linux systems and shell scripting. * Hands-on experience with Terraform, Helm, and CI/CD pipelines. * Proven experience building or supporting high-availability, mission-critical systems. * Strong grasp of authentication and security concepts (OAuth, SAML, OpenID, SCIM, Kerberos). * Familiarity with monitoring tools, agent-based architectures, alerting, and dashboard creation. * Experience working in Agile and DevOps-oriented teams. Preferred Experience * Experience running PostgreSQL or NewSQL databases at scale in production environments. * Prior involvement in infrastructure benchmarking or database product evaluations. * Exposure to DBaaS architecture and deployment models. Why Apply? * Be part of a world-class technology organisation with a deep engineering culture. * Gain exposure to cutting-edge technologies in the cloud-native data infrastructure space. * Join a collaborative team working across global offices. * Access ongoing training, progression, and mentorship in a complex, high-impact environment. * Work in a centrally located office with excellent on-site amenities, including a gym and restaurant. Interested in delivering high-impact infrastructure at scale? Apply now and help shape the future of data services in a global enterprise environment. We are committed to creating an inclusive recruitment experience. If you have a disability or long-term health condition and require adjustments to the recruitment process, our Adjustment Concierge Service is here to support you. Please reach out to us at (see below) to discuss further.
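A minimal example of the kind of operational tooling this role implies: a PostgreSQL health probe in Python using psycopg2. The DSN and the specific checks are illustrative assumptions rather than details from the posting.

    import psycopg2

    # Hypothetical connection settings; in production these come from a secret store.
    DSN = "host=pg-primary.internal dbname=appdb user=monitor password=***"

    conn = psycopg2.connect(DSN)
    conn.autocommit = True
    with conn.cursor() as cur:
        # Basic liveness and role check.
        cur.execute("SELECT pg_is_in_recovery();")
        in_recovery = cur.fetchone()[0]
        print("role:", "replica" if in_recovery else "primary")

        # Rough connection-pressure signal for alerting dashboards.
        cur.execute("SELECT count(*) FROM pg_stat_activity;")
        print("active backends:", cur.fetchone()[0])

        # Oldest transaction age, a common early warning for bloat/locking issues.
        cur.execute("""
            SELECT coalesce(max(now() - xact_start), interval '0')
            FROM pg_stat_activity WHERE state <> 'idle';
        """)
        print("longest open transaction:", cur.fetchone()[0])
    conn.close()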
30/06/2025
Project-based
Contract Type: Contingent/Contract (PAYE engagement) Location: Glasgow, United Kingdom (Hybrid - 3 days in office) Contract Duration: 12 months About the Role We are seeking an experienced Database Engineer to join a globally distributed technology team within a leading financial services environment. The successful candidate will contribute to the implementation and support of scalable, resilient NewSQL platforms within an internal cloud-based DBaaS infrastructure. This is a hands-on engineering role, requiring a mix of database product expertise and development skills, particularly in Python. You will play a key part in building high-performance, highly available data solutions with a strong emphasis on automation, security, and operational efficiency. Key Responsibilities Design and deploy secure, compliant infrastructure integrated with organizational controls. Build and maintain scalable database platforms using Postgres in a containerized environment. Collaborate with global teams and security stakeholders to deliver robust DBaaS solutions. Automate deployment and operational processes using CI/CD and Infrastructure as Code tools. Provide architecture input for highly available, production-grade systems. Document and optimize support processes for Postgres-based services. Develop monitoring and alerting systems to ensure high availability and performance. Key Skills & Experience Strong Python development skills (essential) Hands-on experience with Postgres in production environments Expertise in Kubernetes and container orchestration Solid Linux system administration skills Experience with CI/CD pipelines and tools such as Terraform and Helm Proficiency in ANSI SQL Experience working with secure authentication and authorization standards (SAML, SCIM, OAuth, OpenID, Kerberos) Strong understanding of DevOps practices and Agile methodologies Ability to develop and manage system monitoring tools and dashboards Experience with large-scale, high-velocity OLTP systems and NewSQL architecture Additional Information This role is offered on a PAYE basis, excluding holiday pay accrual. Candidates must have access to their own device for remote work. Please ensure accurate candidate details are provided for internal system access processing.
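The monitoring-and-alerting responsibility above could, for example, take the form of a small Prometheus exporter. The sketch below samples an active-connection count from Postgres and exposes it as a gauge; the DSN, metric name, and port are chosen purely for illustration, assuming prometheus_client and psycopg2 are available.

    import time
    import psycopg2
    from prometheus_client import Gauge, start_http_server

    # Hypothetical DSN and metric name; both would follow local conventions.
    DSN = "host=pg-glasgow.internal dbname=appdb user=monitor password=***"
    BACKENDS = Gauge("postgres_active_backends", "Active backend connections")

    def sample() -> None:
        # One short-lived connection per scrape keeps the probe simple and safe.
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM pg_stat_activity WHERE state = 'active';")
            BACKENDS.set(cur.fetchone()[0])

    if __name__ == "__main__":
        start_http_server(9187)   # exposes /metrics for Prometheus to scrape
        while True:
            sample()
            time.sleep(15)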
30/06/2025
Project-based
Job Title: Endur Technical Architect Location: Hybrid - 3 days per week on-site in Canary Wharf Start Date: ASAP Contract Duration: Until 31st December 2025 (with potential extension) Contract - Inside IR35 Join GlobalLogic as an Endur Technical Architect We are seeking a highly experienced Endur Technical Architect to join GlobalLogic on an exciting project with one of our large enterprise clients in the energy trading domain. This is a unique opportunity to shape the architecture of a next-generation trading platform, leveraging modern technologies and driving forward digital transformation in the industry. In this role, you will work closely with subject matter experts, product owners, technical leads, designers, and fellow architects to re-architect and design innovative solutions aligned with both business and technology strategies. A deep understanding of energy trading and risk management processes across front, middle, and back office functions - particularly within physical trading - is essential. Your Responsibilities: Design and document the architecture of bespoke, modern energy trading systems. Lead functional and technical requirement gathering, analysis, and architectural design. Serve as a thought leader and advisor throughout delivery and design reviews. Communicate effectively across stakeholders, business functions, vendors, and consulting teams. Manage business change and stakeholder expectations with clarity and confidence. Contribute to the architectural evolution of a trading platform supporting complex portfolios and optimization challenges. Must-Have Technical Skills: Microservices architecture & technologies Cloud hosting (AWS) Infrastructure as Code (Terraform) DevOps toolchain (CI/CD, Git, Ansible) Must-Have Functional Experience: Full life cycle experience in physical energy trading Exposure to Power and/or Gas markets Key Skills & Experience Required: Functional Expertise: Strong understanding of physical energy trading (preferably Gas and/or Power). Experience with complex contract optionality, portfolio management, and schedule optimization. Knowledge of deal life cycle and options modelling. Familiarity with dependency graphs in trading environments. Architectural & Technical Competencies: Proven experience designing architectures across: Data Architecture - Transactional and analytical data modelling, real-time reporting, MongoDB, data migration, reconciliation. Technical Architecture - Hands-on with C#/Java, microservices, containerization (Docker, OpenShift, Kubernetes), React, AWS, Terraform. Integration - Expertise in real-time messaging (eg, AMQ), API design (JSON, Swagger), and batch processes (a small messaging sketch follows this listing). Infrastructure & Operations - DevOps practices, CI/CD pipelines, Git, Ansible, cloud elasticity, cost optimization, and grid computing. Delivery Approach: Deep familiarity with Agile methodologies. Capable of working autonomously or as part of a small, high-performing team. Strong analytical mindset with a proactive approach to problem-solving. Effective communicator, both written and verbal, with an ability to explain complex concepts to diverse audiences. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a leader in digital engineering and product development services. We partner with top-tier clients across industries - including finance, telecoms, healthcare, and automotive - to design and build innovative digital platforms and experiences.
Our teams combine deep technical expertise with seamless delivery to solve complex challenges, modernise legacy systems, and accelerate digital transformation. With a strong focus on cloud, data, AI, and embedded technologies, GlobalLogic UK&I offers a dynamic environment where engineers, architects, and consultants collaborate on cutting-edge projects that make a real-world impact. Join us to shape the future of digital innovation - right here in the UK and beyond.
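The messaging sketch referenced in the integration bullet above: publishing a deal lifecycle event to an ActiveMQ queue over STOMP using the stomp.py client. Host, queue, credentials, and payload fields are hypothetical, and since the platform itself is described as C#/Java, this Python snippet only illustrates the pattern.

    import json
    import stomp

    # Hypothetical broker and queue; credentials would come from a vault.
    conn = stomp.Connection([("amq.example.internal", 61613)])
    conn.connect("app-user", "app-pass", wait=True)

    # A deal lifecycle event as a small JSON payload (fields are illustrative).
    event = {"deal_id": "D-1001", "status": "CONFIRMED", "commodity": "POWER"}
    conn.send(destination="/queue/deal.events",
              body=json.dumps(event),
              headers={"persistent": "true"})

    conn.disconnect()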
30/06/2025
Project-based
AI Engineer. Hybrid Working. PyTorch, Python, SQL About Us Our client is a fast-growing startup building intelligent systems that transform how data is processed and understood. As they expand their AI capabilities, they are looking for a talented AI Engineer to join the team and help bring cutting-edge models into production. The Role As an AI Engineer, you'll be responsible for designing, training, and deploying machine learning models that power real-world applications. You'll work closely with product and engineering teams to integrate AI features into the platform, with a strong focus on performance, scalability, and innovation. Key Responsibilities Model Development: Build and fine-tune deep learning models (eg, CNNs, Transformers, GNNs) using PyTorch, TensorFlow, or JAX. Infrastructure & Deployment: Deploy models in production using Docker/Kubernetes on AWS/GCP/Azure, ensuring low-latency and high-availability inference. Data Engineering & MLOps Collaboration: Work cross-functionally with product, design, and DevOps teams to deliver AI-powered features. Experimentation: Rapidly prototype and iterate on new ideas, leveraging the latest in generative AI, NLP, and computer vision. Tech Stack Languages: Python (primary), SQL, Bash Frameworks: PyTorch, TensorFlow, Hugging Face, scikit-learn Cloud & DevOps: AWS, GCP, Docker, Kubernetes Data Tools: Airflow, Spark, Kafka, Prefect MLOps: MLflow, Weights & Biases, Kubeflow What We're Looking For 3+ years of experience in AI/ML engineering, with a strong portfolio of deployed models. Solid understanding of software engineering principles and distributed systems. Experience with cloud-based ML infrastructure and containerization. Strong communication skills and a collaborative mindset. Bonus: Experience with LLMs, edge AI, or open-source contributions.
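A bare-bones sketch of the low-latency inference service the deployment responsibility above describes, using FastAPI with a PyTorch model loaded once at startup. The model file, input width, and route are assumptions made for illustration.

    import torch
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Load the model once at startup so each request only pays for inference.
    # "model_v1.pt" and the 32-feature input are placeholders for the real model.
    model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2))
    model.load_state_dict(torch.load("model_v1.pt", map_location="cpu"))
    model.eval()

    class Features(BaseModel):
        values: list[float]   # expects 32 floats

    @app.post("/predict")
    def predict(payload: Features) -> dict:
        with torch.no_grad():
            logits = model(torch.tensor(payload.values).unsqueeze(0))
            return {"class": int(logits.argmax(dim=1).item())}

In practice such a service would be containerised and served behind the Kubernetes setup listed in the stack, eg via uvicorn.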
30/06/2025
Full time
Request Technology - Robyn Honquest
Oak Brook, Illinois
NO SPONSORSHIP SOFTWARE ENGINEER PLATFORM ENGINEER - Java/C#.NET SALARY: $97k-$184k plus 15% bonus LOCATION: Oak Brook, IL hybrid 3 days onsite Java & C# .NET developer who can take Java technology and redesign it in .NET. They want to move away from Java entirely and eventually do everything in .NET (Back End development/Middleware enhancements). Any product development is a plus. Internet of Things (IoT): looking for a candidate to architect and enhance the core Middleware that powers cloud IoT platform design, development and delivery. Keywords: ISO, Java, .NET C#, Azure, Kafka, RabbitMQ, AWS, infrastructure as code (IaC), Terraform, CI/CD, Jenkins, GitHub, microservices, containerization, Docker, Kubernetes, AWS multi-cloud. Key Responsibilities: Act as a technical authority and key driver in the design, development, and delivery of innovative features, collaborating with product owners, Front End, Middleware, DevOps, and firmware teams to align technical solutions with business goals. Lead technical assessments, scope changes, and oversee the management of the codebase for critical business requirements, high-impact product enhancements, and complex change requests across multiple initiatives. Architect and implement scalable, efficient, and robust software designs for high-complexity projects, working closely with solution architects and senior engineering leaders to ensure alignment with platform and business strategies. Champion Agile methodologies, such as Scrum, to enable efficient development cycles, continuous integration, and high-quality deliverables in Middleware development. Facilitate and lead strategic technical discussions, including architecture reviews, design meetings, and pull requests, fostering a culture of engineering excellence and collaboration. Drive adherence to best practices, coding standards, and platform design principles to deliver high-quality, reusable, and maintainable code. Develop deep domain expertise in platform-specific frameworks, features, and Middleware components, acting as a subject-matter expert and advisor across teams. Mentor and coach engineers across the organization, building technical capability, fostering innovation, and cultivating leadership within the engineering team. Collaborate with cross-functional domain experts including infrastructure, database, security, and Front End teams to drive cohesive solutions and seamless integration. Provide technical leadership to elevate the myQ platform's technical capabilities and market competitiveness. Work to ISO 27001 standards. Job Requirements: Bachelor's degree. An advanced degree in a directly relevant area of study may substitute for up to two (2) years of job-related experience. 8+ years of experience in software engineering, design, development, and deployment of large-scale systems Extensive experience in creating technical documentation, including design specifications, architecture diagrams, and deployment guides. Deep understanding of Agile methodologies and Scrum processes Proficiency with Java, .NET, C#, Azure, SQL, and Visual Studio. Hands-on experience with Git, NoSQL databases, and messaging systems such as Kafka, RabbitMQ, or similar technologies (a small messaging sketch follows this listing). Advanced knowledge of AWS services, including but not limited to EC2, S3, Lambda, API Gateway, RDS, DynamoDB, and CloudFront. Strong expertise in Infrastructure as Code (IaC) using Terraform for automated provisioning and management of cloud resources.
Proficiency with CI/CD tools such as Jenkins, GitHub Actions, or AWS CodePipeline, and experience with automated testing and deployment frameworks. Experience with Docker and Kubernetes. Ability to travel domestically and internationally up to 10%. Knowledge, Skills, and Abilities: In-depth understanding of software development and design principles, with a focus on building scalable, secure, and maintainable systems. Comprehensive expertise in cloud-based development and architecture, with a strong focus on AWS and multi-cloud solutions. Exceptional ability to lead, collaborate, and provide clear technical direction to multiple development teams across diverse geographies. Deep knowledge of CI/CD practices, tools, and deployment processes, enabling efficient and reliable software delivery. Proven ability to debug, troubleshoot, and resolve complex technical issues in distributed systems and cloud environments. Proficiency in estimating work, supporting project planning efforts, and reporting progress to stakeholders at a platform and organizational level. Strong understanding of security best practices in cloud environments, including IAM roles, encryption, and network security. Demonstrated ability to leverage cloud monitoring and logging tools such as AWS CloudWatch, Elastic Stack, or Datadog for performance optimization and incident resolution. Experience with automated testing frameworks and ensuring high-quality software delivery through robust test pipelines.
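The messaging sketch referenced above: publishing a device event to a durable RabbitMQ queue with the pika client. The broker host, queue name, and payload are placeholders, and the production middleware here is Java/C#, so this Python snippet only illustrates the messaging pattern.

    import json
    import pika

    # Hypothetical broker and queue for device telemetry.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq.internal"))
    channel = connection.channel()
    channel.queue_declare(queue="device.telemetry", durable=True)

    event = {"device_id": "garage-0042", "event": "door_opened"}
    channel.basic_publish(
        exchange="",
        routing_key="device.telemetry",
        body=json.dumps(event),
        properties=pika.BasicProperties(delivery_mode=2),  # persist across broker restarts
    )
    connection.close()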
28/06/2025
Full time
Software Engineering Manager - Golang/Python - Malaga - €140,000 Overview A global fintech company is seeking an experienced Software Engineering Manager with expertise in Golang to lead one of its high-impact engineering clusters. This is a unique opportunity to join a mission-driven team focused on delivering world-class data integration capabilities to millions of users worldwide. Based in Málaga, this hybrid role offers both strategic leadership and hands-on technical influence across a widely used financial analysis platform. Russian language skills are required. Role and Responsibilities Lead a cross-functional engineering team working on the ingestion and integration of global market data Drive the development of scalable distributed systems that power real-time data accessibility Own the technical vision and roadmap for a new Decentralized Exchanges (DEX) initiative Collaborate with other engineering leaders to define and implement next-generation data pipelines Optimize CI/CD pipelines to support fast, safe, and reliable software delivery Mentor engineers, foster a culture of excellence, and ensure timely project delivery Tech Stack Languages: Golang, Python Infrastructure: Kubernetes, Docker, AWS/Azure/GCP CI/CD: Modern continuous delivery pipelines and DevOps best practices Data: Real-time data streaming, distributed systems Tools: Git, modern monitoring and observability platforms Skills and Experience Proven experience as a software engineering leader, with strong architectural and development skills Proficient in Golang Solid background in building and managing distributed systems Familiarity with modern cloud infrastructure (AWS, GCP, or Azure) and Kubernetes Product-oriented mindset with a strong focus on delivering user value Excellent communication, leadership, and project management abilities Experience with CI/CD processes and agile delivery Bonus: Financial domain experience or familiarity with trading platforms Bonus: Experience with real-time data pipelines or streaming technologies Package Salary up to €140,000 18-25% bonus Relocation support including visa, travel, and accommodation assistance for Málaga Private health insurance Regular team events and retreats Ongoing professional development and mentoring opportunities
26/06/2025
Full time
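The listing above names Python and real-time data streaming in its tech stack without specifying a streaming technology. Purely as an illustration, the sketch below shows a minimal streaming consumer using kafka-python; the topic, broker address, and message fields are hypothetical.

    # Illustrative sketch only: consuming a real-time market-data stream with
    # kafka-python. The broker, topic, and message shape are hypothetical;
    # the posting does not name a specific streaming technology.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "market-data",                       # hypothetical topic
        bootstrap_servers="localhost:9092",  # hypothetical broker
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="latest",
    )

    # Print each tick as it arrives.
    for message in consumer:
        tick = message.value
        print(tick.get("symbol"), tick.get("price"))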
We are currently looking, on behalf of one of our important clients, for a German-speaking Azure Cloud/DevOps Engineer (Medical Device Sector). The role is a permanent position based in Bern Canton with a good home-office allowance.
Your Role:
- Plan & implement hybrid (on-prem/cloud) scenarios & develop future-proof cloud strategies.
- Deploy, manage & optimize Azure Landing Zones (compute, storage, virtual networks, etc.).
- Ensure availability, performance & security (security best practices, monitoring, incident handling) in the cloud infrastructure.
- Automate deployments with Infrastructure as Code (e.g., Terraform, Bicep) & scripting (PowerShell, Azure CLI) (see the sketch after this listing).
- Build & maintain CI/CD pipelines (Azure DevOps).
- Work closely with development teams on cloud architectures.
- Create & maintain technical documentation.
- Provide 2nd & 3rd level support for Azure-related requests, including troubleshooting & performance optimization.
Your Skills:
- At least 3 years of professional experience in cloud engineering, managing Azure environments & DevOps.
- Sound knowledge of Identity and Access Management (IAM).
- Skilled & experienced in most of the following areas: Azure Landing Zones, Infrastructure as Code (e.g., Terraform, Bicep), scripting (PowerShell, Azure CLI) & CI/CD pipelines (Azure DevOps).
- Ideally experienced in technically leading experienced colleagues.
Your Profile:
- Completed higher education/university degree in Computer Science, ideally with a focus on Systems Engineering or IT Security (or similar).
- Ideally Microsoft certified, for example as an Azure Administrator Associate or Azure Solutions Architect.
- Fluent in English & good German language skills (B2 level or higher).
26/06/2025
Full time
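The role above centres on Terraform/Bicep, PowerShell, and the Azure CLI. As an analogous illustration in Python (the language used for the examples in this document), the sketch below enumerates resource groups with the Azure SDK for Python; the subscription ID is a placeholder and nothing here is specific to the client's environment.

    # Illustrative sketch only: a small Azure automation task using the
    # Azure SDK for Python (azure-identity, azure-mgmt-resource), shown
    # as a stand-in for the PowerShell/Azure CLI scripting named above.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    credential = DefaultAzureCredential()
    subscription_id = "<subscription-id>"  # placeholder

    client = ResourceManagementClient(credential, subscription_id)

    # Enumerate resource groups, e.g. to audit what exists in a landing zone.
    for group in client.resource_groups.list():
        print(group.name, group.location)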
Azure Platform Engineer
We are looking for an Azure Platform Engineer to join a collaborative and diverse team that is passionate about building analytics platforms that make a difference. We are seeking someone who excels at turning ideas into solutions with Azure and Databricks and is motivated by a team-oriented environment.
Outcomes of the project: The successful candidate will be at the forefront of shaping and maintaining a cutting-edge analytics platform, making data accessible, actionable, and impactful for stakeholders across the client's organization.
You will:
- Strategic Design: Architect scalable, secure, and efficient data solutions using Azure Databricks and other Azure services.
- Collaboration: Partner with stakeholders to understand their goals and translate them into innovative, practical technical solutions.
- Integration: Ensure seamless data integration across systems and optimize workflows for performance and efficiency.
- Automation: Develop and maintain Terraform scripts to automate deployments and manage infrastructure.
- Performance Excellence: Monitor and tune databases and data pipelines for optimal performance and resource utilization.
- Continuous Improvement: Stay ahead of industry trends and Azure technologies, continuously enhancing the platform's capabilities.
Who are you?
Experience:
- 3-5 years in a Platform Engineer or similar role, focusing on large-scale analytics platforms.
- Proficiency in Azure Databricks and data architecture, with Python experience as a bonus (see the sketch after this listing).
- Hands-on experience with Infrastructure as Code (Terraform preferred) and CI/CD pipelines (Azure DevOps preferred).
- Familiarity with containerization (Docker, Kubernetes) for deploying and managing data services.
- Microsoft Azure certifications or equivalent are required.
Profile: A proactive learner with excellent problem-solving skills and a team-first attitude. Thrives on solving complex challenges in a collaborative environment and values growth as much as delivering results.
About Levy Professionals: Since 2000, we have provided professional solutions to organizations ranging from tech start-ups to global players. From our offices in Amsterdam and London, we have built an international and local network of skilled employed professionals and contractors, fuelled by our passion for connecting skills with projects. Over the years we have fulfilled over 1,700 requirements, and today we consistently have 250+ professionals, recruited and relocated from 14 countries, allocated to various projects. Our strength is the way we see and treat people, and this will always be a key factor in our strategy for years to come.
26/06/2025
Project-based
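As a small illustration of the kind of Databricks platform automation described above, the sketch below lists workspace clusters with the Databricks SDK for Python. Credentials are assumed to come from the environment (e.g. DATABRICKS_HOST / DATABRICKS_TOKEN), and nothing here reflects the client's actual workspace.

    # Illustrative sketch only: a small platform-automation task using the
    # Databricks SDK for Python. Authentication is assumed to be configured
    # in the environment; workspace contents are entirely hypothetical.
    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()  # picks up credentials from environment/config

    # List clusters in the workspace, e.g. as part of a cost or health check.
    for cluster in w.clusters.list():
        print(cluster.cluster_name, cluster.state)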