Role: Platform Engineering Manager
Salary: £60,000 - £70,000 + exceptional pension + package
Location: Remote with occasional UK travel

I am working with a fantastic organisation who are in a period of growth. They are looking to hire a Platform Engineering Manager with a broad Azure background. Heading up a team of 10 Platform Engineers, you will play a vital role in shaping the organisation's Azure platform. This role will see you working with the latest Microsoft technologies, delivering platforms and services that make a real difference. The organisation is truly people-centric, values its staff and offers a genuine work/life balance. It makes a really positive contribution to the UK, and this role offers the opportunity to make a real difference within it.

Key Responsibilities
- Lead a team of Platform Engineers to design, build, test and maintain the cloud application infrastructure and CI/CD pipelines that underpin all internal and external digital services.
- Ensure security, stability and capacity are embedded in the development and deployment of services.
- Champion a Platform Engineering culture by building close collaboration and working practices between the product, engineering and operational business services teams, supported by the appropriate use of automation tools.
- Manage and develop the Platform Engineering capability by providing technical leadership for the in-house team and external suppliers.
- Contribute to business case development, articulating benefits and return on investment.
- Ensure solutions are delivered to time, cost and quality requirements.
- Design solutions and services with security controls embedded, specifically engineered as mitigation against security threats as a core part of the solutions and services.
- Lead the teams in the support, design and implementation of infrastructure technologies and solutions such as: compute, storage, networking, physical infrastructure, databases, software, commercial off-the-shelf (COTS) and open-source packages and solutions, and virtual and cloud services including IaaS, PaaS and SaaS.

Required Skills/Experience
- Demonstrable experience managing highly skilled Platform Engineers, DevOps Engineers or Software Engineers, including mentoring and driving best practice.
- Strong technical background, from either software engineering or infrastructure engineering, with strong experience in Platform Engineering tooling and techniques.
- Experience driving efficiencies through automation and process design and implementation, in particular in the automation of application deployment methodologies.
- Experience building and optimising deployment pipelines and deployment strategies on popular CI/CD tools such as Jenkins.
- Experience designing, securing, scaling and administering cloud platforms such as Microsoft Azure.
- Experience managing complex, multi-server services in a high-availability production environment.
- Solid understanding of containerisation, ideally having implemented Docker containers in production environments.
- Experience of Agile tools and processes, eg Azure DevOps.
- Knowledge and understanding of the latest trends in DevOps/Platform Engineering methodologies, processes and tools, as well as emerging solutions, and the ability to apply them when appropriate.

If this role would be of interest, then please apply to this advert and I will be in contact to give you more detail. Fruition are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
16/09/2024
Full time
Full Stack Developer

My client is actively seeking an experienced Full Stack Developer to join their expanding team. This opportunity is with an innovative startup dedicated to making a sustainable impact in the trading industry. The ideal candidate will be instrumental in designing and building high-quality software solutions that align with the company's mission and growth objectives.

Key responsibilities:
- Design and develop scalable Back End services using Python and build responsive Front End applications with TypeScript and React.
- Manage and optimise PostgreSQL databases to ensure efficient data storage and retrieval.
- Implement and maintain CI/CD pipelines to streamline the deployment process.
- Use cloud platforms to deploy and manage services.
- Develop and maintain secure, scalable, and well-documented APIs.
- Monitor cloud-based applications using cloud monitoring tools to ensure high availability and optimal performance.
- Implement security best practices across the technology stack, ensuring data protection and compliance with industry standards.
- Collaborate with cross-functional teams, including data engineers and product managers, to deliver high-quality products.

Key requirements:
- Extensive experience in Back End development with Python and Front End development utilising TypeScript and React.
- Strong proficiency in SQL and PostgreSQL or other databases.
- Expertise in one or more major cloud platforms (AWS, GCP, Azure) and familiarity with managed services and serverless architecture.
- Experience with the full software development life cycle, from conception to deployment.
- Experience in API development and integration.
- Knowledge of Infrastructure-as-Code tools like Terraform.
- Familiarity with cloud monitoring tools and best practices.
- Experience with DevOps practices, including CI/CD pipelines and cloud deployments.
- Proficiency with GitHub for version control and collaboration.
- Familiarity with testing frameworks and practices, including unit testing, integration testing, and end-to-end testing.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).

If you are an experienced Full Stack Developer and this opportunity aligns with your interests, we invite you to apply for immediate consideration.
16/09/2024
Full time
Methods Business and Digital Technology Limited

Methods is a £100M+ IT Services Consultancy that has partnered with a range of central government departments and agencies to transform the way the public sector operates in the UK. Established over 30 years ago and UK-based, we apply our skills in transformation, delivery, and collaboration from across the Methods Group to create end-to-end business and technical solutions that are people-centred, safe, and designed for the future. Our human touch sets us apart from other consultancies, system integrators and software houses - with people, technology, and data at the heart of who we are, we believe in creating value and sustainability through everything we do for our clients, staff, communities, and the planet. We support our clients in the success of their projects while working collaboratively to share skill sets and solve problems. At Methods we have fun while working hard; we are not afraid of making mistakes and learning from them. Predominantly focused on the public sector, Methods is now building a significant private sector client portfolio. Methods was acquired by the Alten Group in early 2022.

Methods is currently recruiting for a DevSecOps Engineer (Cyber) Consultant to join our team on a permanent basis. This role will be based on-site.

Requirements
- Specialised in cloud management of platforms, applications, data and supporting infrastructure, in the capacity of a system administrator of either the AWS or Azure platform.
- Developing automation to support continuous delivery of changes using technologies on the Azure platform.
- Developing infrastructure-as-a-service configuration to automate the creation of infrastructure and platforms to host test and production systems.
- Building and setting up new development tools and infrastructure.
- Understanding the needs of stakeholders and conveying this to developers.
- Working on ways to automate and improve development and release processes.
- Testing and examining code written by others and analysing results.
- Ensuring that systems are safe and secure against cybersecurity threats.
- Familiar with the NCSC secure design principles.
- Familiar with managing the security of cloud platforms, including administration of secrets, tokens and certificates.
- Working with Architects, Data Engineers and Software Engineers to ensure that development follows established processes and works as intended.
- Planning out projects and being involved in project management decisions.
- Responsible for the design, security, and maintenance of cloud infrastructure.
- Making and guiding effective decisions, explaining clearly how the decision has been reached, with the ability to understand and resolve technical disputes across varying levels of complexity and risk.
- Communicating effectively across organisational, technical and political boundaries, understanding the context and making complex, technical information and language simple and accessible for non-technical audiences.
- Understanding of how to expose data from systems (for example, through APIs), link data from multiple systems and deliver streaming services.
- Ensuring that risks associated with deployment are adequately understood and documented.

Ideal Candidates will demonstrate:
- Experience working across cyber security teams (beneficial).
- Solid infrastructure design experience for both on-prem and cloud, to implement or migrate applications and databases to Azure.
- Solid experience in a range of technologies, with the ability to assess what is best to use for the projects and the organisation, as well as to suggest and develop innovative approaches within constrained projects and environments.
- Strong experience in software development, change/release management processes, and technical governance, to fully understand the typical life cycle and maintenance of live systems.
- Ability to work with containerisation platforms such as Kubernetes, PKS and Docker; cloud provisioning software, including Ansible, Terraform, Azure Blueprints and ARM templates; and application performance analysis and monitoring.
- Experience of functional and non-functional testing, including automated deployment of applications and databases.
- Understanding of the Government Digital Service manual and standards across Discovery/Alpha/Beta/Live phases.
- Understanding of SaaS, PaaS and IaaS technologies and the implications of their use compared with bespoke development.
- Ability to provide training, support and mentoring to the wider business.
- Knowledge of how to ensure that risks associated with deployment are adequately understood and documented.

Desirable Skills & Experience:
- Worked as part of a system support team, managing live systems and triaging incidents through to resolution, including management of known defects and issues.
- Worked as part of a multi-disciplinary project team.
- Experience with Terraform to deploy cloud infrastructure in Azure.
- Experience with Azure DevOps and GitHub Actions to automate the build and deployment of containerised applications.
- Experience implementing effective instrumentation to monitor applications.
- Experience implementing SAST and DAST tooling, such as Trivy and SonarQube, in deployment pipelines.
- Experience of both AWS and Azure DevOps tooling.

This role will require you to have or be willing to go through Security Clearance. As part of the onboarding process, candidates will be asked to complete a Baseline Personnel Security Standard check; details of the evidence required to apply may be found on the government website, Gov.UK.
If you are unable to meet this and any associated criteria, then your employment may be delayed or rejected. Details of this will be discussed with you at interview.

Benefits
Methods is passionate about its people; we want our colleagues to develop the things they are good at and enjoy. By joining us you can expect:
- Autonomy to develop and grow your skills and experience
- Exciting project work that is making a difference in society
- Strong, inspiring and thought-provoking leadership
- A supportive and collaborative environment
- Development - access to LinkedIn Learning, a management development programme, and training
- Wellness - 24/7 confidential employee assistance programme
- Flexible Working - including home working and part time
- Social - office parties, breakfast Tuesdays, monthly pizza Thursdays, Thirsty Thursdays, and commitment to charitable causes
- Time Off - 25 days of annual leave a year, plus bank holidays, with the option to buy 5 extra days each year
- Volunteering - 2 paid days per year to volunteer in our local communities or within a charity organisation
- Pension - Salary Exchange Scheme with 4% employer contribution and 5% employee contribution
- Discretionary Company Bonus - based on company and individual performance
- Life Assurance - of 4 times base salary
- Private Medical Insurance - which is non-contributory (spouse and dependants included)
- Worldwide Travel Insurance - which is non-contributory (spouse and dependants included)
- Enhanced Maternity and Paternity Pay
- Travel - season ticket loan, cycle to work scheme

For a full list of benefits please visit our website.
16/09/2024
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible*

Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. The candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms, such as Kafka, Flink and Kubernetes, in a hybrid environment.

Responsibilities:
- Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
- Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
- Collaborate with DevOps teams to define deployment strategies and manage scalability.
- Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
- Drive implementation of best practices for efficient data serialization, compression, and network communication.
- Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
- Maintain vendor relationships and participate in escalation sessions and postmortems.
- Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
- Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
- [Required] Effective communication skills to collaborate and evangelize best practices with technical stakeholders.
- [Required] Advanced problem-solving skills and a logical approach to solving problems.
- [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
- [Required] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.

Technical Skills:
- Expert-level knowledge of Kafka
- Expert-level knowledge of Flink
- In-depth knowledge of on-premises networking, as well as hybrid connectivity to AWS and/or Azure
- Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes

Education and/or Experience:
- [Preferred] Bachelor's or Master's degree in an engineering discipline
- [Required] 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures
- [Required] 10+ years of experience with Java
- [Required] 5+ years of specific Kafka and Flink experience
- [Preferred] 5+ years of Kubernetes experience

Certificates or Licenses:
- [Preferred] Confluent Certified Developer for Apache Kafka
- [Preferred] AWS certifications (eg Solutions Architect Associate)
- [Preferred] Certified Kubernetes Application Developer
13/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
13/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*

A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role focuses on engineering and maintaining lab environments in public cloud and data centers using Infrastructure as Code (IaC) techniques. Candidates will need experience with DevOps tools such as Terraform, Ansible, Jenkins, Kubernetes, and AWS, as well as experience developing tools and automating tasks in languages such as Python, PowerShell, or Bash.

Responsibilities:
- Engineer and maintain lab environments in public cloud and data centers using Infrastructure as Code techniques
- Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate proofs of concept for company infrastructure, applications, and services that impact the Technology Roadmap
- Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT
- Apply expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems
- Find opportunities to improve existing infrastructure architecture for performance, support, scalability, reliability, and security
- Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
- Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
- Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments, which will be used to validate assumptions within high-level solution designs

Qualifications:
- Bachelor's or Master's degree in computer science or a related field, or equivalent experience
- 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment
- 3+ years of experience contributing to the architecture of cloud and on-premises solutions
- Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, or Ruby
- Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
- Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, or Amazon Kinesis
- In-depth knowledge of on-premises, cloud, and hybrid networking concepts
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
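Among the duties listed above is developing automation scripts for routine tasks such as backup and recovery. As a minimal, illustrative sketch in Python (not taken from the advert; the file names and retention policy are invented for the example), a timestamped backup-rotation helper might look like this:

```python
import shutil
import tempfile
import time
from pathlib import Path

def rotate_backups(src: Path, backup_dir: Path, keep: int = 3) -> list[Path]:
    """Copy src into backup_dir under a timestamped name, pruning all but the newest `keep`."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.stem}.{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    # Timestamps sort lexicographically, so reverse name order is newest-first.
    backups = sorted(backup_dir.glob(f"{src.stem}.*{src.suffix}"), reverse=True)
    for old in backups[keep:]:
        old.unlink()
    return backups[:keep]

# Exercise the helper against a throwaway config file.
workdir = Path(tempfile.mkdtemp())
config = workdir / "app.conf"
config.write_text("retries=3\n")
kept = rotate_backups(config, workdir / "backups", keep=2)
```

A production version would typically add logging, locking against concurrent runs, and verification of the copied file, but the idempotent copy-then-prune shape is the core of most rotation scripts.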
13/09/2024
Full time
Cloud Security Engineer

Akkodis are currently working in partnership with a leading service provider to recruit an experienced Cloud Security Engineer to provide security expertise for the cloud infrastructure. You will collaborate with DevOps and engineering teams to design, build, and maintain security services, ensuring compliance with relevant regulations and industry standards. Please note this is a hybrid role with flexibility around working from home.

The Role
As a Cloud Security Engineer you will improve security monitoring and automation across AWS and Azure infrastructure and support ongoing security operations. You will also proactively assess systems for vulnerabilities and work with stakeholders to embed security standards and best practices.

The Responsibilities
- Take responsibility for the continued development and improvement of cloud security posture by providing security expertise and guidance on cloud infrastructure
- Work with the Cloud Infrastructure team (AWS and Azure) to ensure secure practices across the AWS Organization and Azure cloud tenants
- Conduct periodic assessments and technical audits challenging the security posture
- Assist in the investigation of and response to cloud security incidents and events as required
- Work with cross-functional teams to respond to incidents, whether an escalated security event or a critical vulnerability needing remediation
- Contribute to the establishment and maintenance of the IT Security knowledge base, documenting clear instructions and known fixes
- Work on IT security projects as assigned and contribute to projects on the security technical roadmap via security and continuous improvement initiatives
- Work with the rest of the Security team and cross-functional teams to manage cloud security risks and remediate vulnerabilities
- Help raise awareness and promote a security-conscious culture through security guidance and training for staff members when required
- Create and maintain documentation and diagrams of internal security solutions
- Collaborate and build relationships with a diverse set of teams including Platform Ops, Data Engineering, Architecture, Development, and Operations
- Work closely with stakeholders to embed standards and tools and drive the adoption of security best practices
- Operate and maintain cloud security tools, solutions, and processes

The Requirements
- Proven experience in a cloud administration or security administration role within security or engineering
- Proven experience securing and administering AWS and Azure cloud network and storage infrastructure, including deploying and maintaining cloud security policies, products, and controls
- Relevant Azure/AWS certifications are desirable, especially AWS Cloud Practitioner (Foundational), AWS Security (Specialty), SC-200, AZ-500, SC-900
- Cloud-native security solutions such as GuardDuty and the Microsoft Defender suite of products
- Content Delivery Networks and Web Application Firewalls
- Experience with vulnerability management
- Broad technical knowledge of server, endpoint, and networking hardware and related security configurations
- Strong technical knowledge of modern cloud offerings and a good understanding of cloud architecture frameworks

If you are looking for an exciting new challenge and want to join a leading team, please apply now.

Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provide a variety of international solutions that connect clients to the best talent in the world. For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers.
By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement which explains how we will use your information is available on the Modis website.
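The role above centres on assessing cloud infrastructure for weaknesses and maintaining cloud security policies. As an illustrative sketch only (not from the advert; the rule format is a simplified, hypothetical stand-in for a real AWS/Azure security-group model), a basic audit of ingress rules might look like this:

```python
# Flag ingress rules that expose sensitive ports to the whole internet.
SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def audit_ingress(rules: list[dict]) -> list[dict]:
    """Return findings for rules that open a sensitive port to 0.0.0.0/0."""
    findings = []
    for rule in rules:
        open_world = "0.0.0.0/0" in rule.get("cidrs", [])
        exposed = SENSITIVE_PORTS.intersection(
            range(rule["from_port"], rule["to_port"] + 1)
        )
        if open_world and exposed:
            findings.append({"rule": rule["id"], "ports": sorted(exposed)})
    return findings

# Hypothetical rules: public HTTPS is fine, world-open SSH is not.
rules = [
    {"id": "sg-web", "from_port": 443, "to_port": 443, "cidrs": ["0.0.0.0/0"]},
    {"id": "sg-ssh", "from_port": 22, "to_port": 22, "cidrs": ["0.0.0.0/0"]},
    {"id": "sg-db", "from_port": 5432, "to_port": 5432, "cidrs": ["10.0.0.0/8"]},
]
findings = audit_ingress(rules)
```

Managed services such as GuardDuty or Defender for Cloud perform far richer checks, but a small script of this shape is a common first pass when embedding security standards into CI pipelines.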
13/09/2024
Full time
Location: UK hybrid, 3 days on site in Europe

Key Skills: DevOps Engineer, DevOps, IAM, GitOps, Kubernetes, integration topologies, domains, security zones, automation enablers, OT knowledge, Identity Providers, directory services (Keycloak, OpenLDAP), identity protocols (OIDC, OAuth2, SAML), secrets management (e.g. HashiCorp Vault), Infrastructure as Code and automation (Terraform, Ansible, Helm)

My client is urgently searching for a DevOps Engineer with hands-on IAM, GitOps, Kubernetes, integration topology, automation, and OT experience to work on a critical hybrid control role in Copenhagen, Denmark.

A successful candidate will have excellent experience in:
- DevOps - expert level
- IAM - expert level
- GitOps - expert level
- Integration topologies across various domains and security zones - expert level
- Automation enablers across various domains and security zones - expert level
- Kubernetes - strong hands-on experience
- OT knowledge/experience (preferred)
- Identity Providers and directory services
- Identity protocols
- Secrets management
- Infrastructure as Code and automation (Terraform, Ansible, Helm)

On-site presence in Copenhagen, Denmark is required at least 3 days a week (possibly mid-week). Good collaboration skills are essential: the candidate will join an existing agile team. Start ASAP. Duration: 12 months. Please send your CV in Word format ASAP for immediate and confidential interviews.
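The identity protocols named above (OIDC, OAuth2, SAML) exchange claims in JWTs. As a small illustrative sketch with the standard library only (not from the advert; the issuer and subject values are invented, and this decodes claims for inspection without verifying the signature, which a real IAM integration must always do via a JOSE library):

```python
import base64
import json
import time

def b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def read_claims(token: str) -> dict:
    """Decode the payload of a JWT for inspection only -- no signature check."""
    _header, payload, _signature = token.split(".")
    return json.loads(b64url_decode(payload))

# Build a dummy token locally so the sketch is self-contained.
claims = {
    "sub": "svc-deploy",
    "iss": "https://idp.example/realms/ops",  # hypothetical Keycloak-style issuer
    "exp": int(time.time()) + 300,
}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJSUzI1NiJ9.{body}.fake-signature"
decoded = read_claims(token)
```

Inspecting `sub`, `iss`, and `exp` this way is handy when debugging an OIDC flow; production validation belongs to the identity provider's published keys and a proper verification library.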
13/09/2024
Project-based
Location: Bern, Worblaufen and remote (60% on-site)
Rate: CHF 100-120/hour
Length: 12 months

Job Description: The ART LPV (Logistics, Production and Sales) division is responsible for the digital development of the supply chain processes of the Infrastructure division. This includes products in the SAP Supply Chain, Manufacturing, Sales, and Service areas. We work in an agile and networked manner and are looking for an experienced consultant to support us in the warehouse environment.

Responsibilities:
- Design the S/4 requirements in the warehouse/logistics environment in the Explore phase
- Implement the S/4 requirements in the Build phases
- Support the creation of the required artifacts in SAP Focused Build (Functional Specification, Collaboration Diagrams, and Configuration Design Specification)
- Coordinate the technical requirements as a link between the work organization in Switzerland and our development partner abroad
- Ensure communication with the relevant stakeholders in the work organization, such as the Product Owner, Scrum Master, and Test Manager
- Help shape the future integration into the S/4 HANA solution (including Customizing) within a BizDevOps team

Qualifications:
- A team- and customer-oriented personality with very good analytical and conceptual skills and a pronounced agile mindset (SAFe/Scrum)
- Very good knowledge of the SAP logistics modules, in particular SAP MM/EWM/WM
- The ability to explain and present complex topics
- Good knowledge of Confluence, Jira, and O365
- Very good German and good English language skills
- Several years of experience in process analysis, development, and operation of solutions in the logistics environment according to the SAP Activate method
- Very good experience in the creation of functional specifications and in Customizing, as well as integrative knowledge of the adjacent SAP modules such as FI/CO

Additional Skills:
- A communicative, open, and resilient team player who works across disciplines rather than focusing only on their own field of work
- Ideally, experience in Transport Management
13/09/2024
Project-based
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

A prestigious financial company is currently seeking a Network Engineer. The candidate will be responsible for supporting a team performing network design and analysis of new architectures, routing/switching configuration and design, and network-related business continuity approaches; developing systems specifications and technical implementation plans; and coordinating complex installation projects with clients and vendors without supervision. The role supports the team performing network design and support of new network and security architectures for on-premises and cloud networks.

Responsibilities:
- Support the team members who design/architect new routing, switching, and connectivity solutions
- Support the team members who design/architect new cloud network solutions
- Support the team members who design, architect, and plan network changes and new infrastructure
- Work with vendors, common carriers, and network engineering to identify and resolve complex network problems
- Ensure IT/Security Governance needs are met (NIST-CSF and COBIT)
- Ensure network performance and network security standards are met
- Provide tactical and strategic input on overall network planning and related projects
- Provide on-call support according to the assigned schedule
- Document network changes, policies, procedures, and drawings
- Support large projects and schedules
- Perform other duties as assigned

Qualifications:
- Experience directing the use of tools such as Ansible, Terraform, Jenkins, Python, and GitHub (or industry equivalents)
- Experience delivering Infrastructure as Code
- Experience building cloud infrastructure in environments such as AWS (preferred), Azure, Google Cloud, or a similar service
- Experience using Agile methodology
- Knowledge of layer 3 routing and switching
- Experience with IOS, NX-OS, and IOS XR
- Advanced experience architecting, designing, deploying, and operating network elements such as DNS/IPAM, firewalls, Network Access Control (NAC) solutions, load balancing, DDoS mitigation, tapping/sniffing infrastructures, and NTP
- Excellent physical-layer troubleshooting skills using cabling and signalling analyzers, packet capture, and analysis
- Advanced WAN, LAN, TCP/IP, VPN, and Ethernet skills
- Extensive EIGRP, BGP4, RIP, and OSPF routing protocol knowledge

Certifications:
- Relevant industry certifications such as Microsoft Azure or Google Cloud
- Cisco Certified Network Professional (CCNP or CCDP) or equivalent field experience
- AWS certifications such as DevOps Engineer, Solutions Architect, Advanced Networking, or Security, or equivalent field experience
- Cisco Certified Internetwork Expert (CCIE) certification

Education:
- Bachelor's degree, preferably in a technical discipline (Computer Science, Mathematics, etc.), or an equivalent combination of education and experience
- 1+ years of experience in IT systems installation, operations, administration, and maintenance of virtualized Servers/cloud systems
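The posting above pairs layer 3 routing design with scripting tools such as Python. As a small illustrative sketch (not from the advert; the address plan is invented), a common automation task in such a role is checking a proposed plan for overlapping CIDR blocks, which the standard library handles directly:

```python
import ipaddress
from itertools import combinations

def overlapping_subnets(cidrs: list[str]) -> list[tuple[str, str]]:
    """Return pairs of CIDR blocks that overlap, a common source of routing conflicts."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [
        (str(a), str(b))
        for a, b in combinations(nets, 2)
        if a.overlaps(b)  # True when the two blocks share any addresses
    ]

# Hypothetical address plan: the /17 sits inside the /16 and should be flagged.
plan = ["10.10.0.0/16", "10.10.128.0/17", "172.16.0.0/24"]
conflicts = overlapping_subnets(plan)
```

Running a check like this in CI before applying IaC changes is one way the "Infrastructure as Code" and routing-knowledge requirements of this role intersect in day-to-day work.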
12/09/2024
Full time
Associate Principal, Software Engineering - Quantitative Risk Management Area - Automating Risk Models
On site 3 days a week. Salary: $185K - $195K + bonus.

We are looking for a hardcore developer who works within the quantitative risk management area and can develop applications and solutions for the QRM team. You will not build models; you will automate models, and you will develop hardcore applications. You will need to come from a financial institution, trading company, exchange, etc. You will need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++.

Responsibilities:
* Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
* Develop CI/CD pipelines.
* Contribute to development of QRM's databases and ETLs.
* Integrate model prototypes, model library, and model testing tools using best industry practices and innovations.
* Create unit and integration tests; build and enhance test automation tools.
* Participate in code reviews and demo accomplishments.
* Write technical documentation and user manuals.
* Provide production support and perform troubleshooting.

Qualifications:
* Strong programming skills: able to read and/or write code using a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills, including in the cloud environment.
* Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
* Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
* Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
* DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
* Experience in containerized deployment in cloud environments.
* Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes).

Education and/or Experience:
* Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
* 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
12/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent full-time role*

A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming platform. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments.

Responsibilities:
* Collaborate with cross-functional teams to design, create, and review software application architectures specifically tailored for streaming use cases.
* Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
* Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
* Drive implementation of best practices for efficient data serialization, compression, and network communication.
* Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
* Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
* Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
* Bachelor's or Master's degree in an engineering discipline
* 10+ years of experience architecting mission-critical cloud and on-prem Real Time data streaming and event-driven architectures
* 10+ years of experience with Java
* 5+ years of specific Kafka and Flink experience
* 5+ years of Kubernetes experience
* Expert-level knowledge of Kafka
* Expert-level knowledge of Flink
* Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink
* Experience with DevOps tools, eg, Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
12/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent full-time role*

A prestigious financial firm is looking for a Principal Software Engineer. This engineer will build software solutions to test systems for financial products and will need heavy experience with Java, Python, Terraform, CI/CD, DevOps, and containerization. The ideal candidate will have experience working in a highly regulated financial environment.

Responsibilities:
* Develop and maintain software and environments used to implement and test systems for pricing, margin risk, and stress testing of financial products and derivatives.
* Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
* Develop CI/CD pipelines.
* Configure, execute, and monitor execution pipelines for model testing, backtesting, and monitoring.
* Contribute to development of QRM's databases and ETLs.
* Integrate model prototypes, model library, and model testing tools using best industry practices and innovations.
* Create unit and integration tests; build and enhance test automation tools.
* Participate in code reviews and demo accomplishments.
* Write technical documentation and user manuals.
* Provide production support and perform troubleshooting.

Qualifications:
* Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
* 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
* Strong programming skills: able to read and/or write code using a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills.
* Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in the cloud environment.
* Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
* DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
* Experience in containerized deployment in cloud environments.
* Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes).
* Experience with logging, profiling, monitoring, and telemetry (eg, Splunk, OpenTelemetry).
* Good command of database technology and query languages (SQL), non-relational DBs, and other Big Data technology, including efficient storage and serialization protocols (eg, Parquet, Avro, Protocol Buffers).
* Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest).
* Experience with productivity tools such as Jira, Confluence, and MS Office.
* Experience with scripting languages such as Python is a plus.
12/09/2024
Full time
Job Title: Linux DevOps Engineer
Location: Almere, Netherlands
Salary/Rate: €75,000 - €80,000
Start Date: 04/11/2024
Job Type: Permanent

Job Summary: Join a dedicated team of specialists in complex IT infrastructures, where you'll play a key role in connecting various components for optimal performance. You'll have access to a state-of-the-art lab environment, including IBM LinuxONE and Mainframe systems, where you can experiment, learn, and grow professionally. We value both your personal and professional development, ensuring you remain challenged and engaged. We seek an IT professional experienced in Technical Application Management or in cloud solutions development and management.

Responsibilities:
* Work with Linux environments, with a preference for Red Hat.
* Utilise infrastructure automation tools to streamline processes.
* Manage and develop cloud solutions (AWS, Azure, GCE), including private/hybrid or on-premise setups.
* Oversee application monitoring and log management.
* Use container engines and orchestrators for efficient deployments.
* Write shell scripts and work with various programming languages.
* Implement security measures across all aspects of your work.
* Apply agile methodologies to manage projects effectively.

Requirements:
* Extensive experience with Linux, preferably Red Hat.
* Proficiency with infrastructure automation tools.
* Experience with cloud platforms (AWS, Azure, GCE) in private, hybrid, or on-premise environments.
* Knowledge of application monitoring, log management, and container orchestration.
* Strong skills in shell scripting and programming languages.
* Understanding of security practices and how to integrate them into your work.
* Practical experience with agile working methodologies.
* HBO-level education and relevant experience in an enterprise environment.
* Flexible attitude, team player, and ability to engage others with your solutions.
* Proficiency in Dutch and English (spoken and written).

What We Offer:
* Lease car or transportation arrangement.
* Pension plan.
* 26 vacation days plus an additional day off on your birthday.

If you are interested in this opportunity, please apply now with your updated CV in Microsoft Word/PDF format.

Disclaimer: We consider candidates from various experience levels if they demonstrate the necessary skills and competencies. We are an equal opportunities employer and embrace diversity in our workforce. Please see our website for our full diversity statement.
12/09/2024
Full time
Job Title: Tech Lead

Job Summary: We are seeking a highly skilled Tech Lead to design, develop, and maintain serverless applications using Python and AWS technologies. The ideal candidate will have extensive experience in building scalable, high-performance Back End systems and a deep understanding of AWS serverless services such as Lambda, DynamoDB, SNS, SQS, S3, and others. This role requires a strong technical leader who can guide teams, architect solutions, and contribute to the overall success of our fintech products.

Key Responsibilities:
* Architect and Develop Solutions: Design and implement robust, scalable, and secure Back End services using Python and AWS serverless technologies.
* Serverless Application Development: Build and maintain serverless applications leveraging AWS Lambda, DynamoDB, API Gateway, S3, SNS, SQS, and other AWS services.
* Leadership: Provide technical leadership and mentorship to a team of engineers, promoting best practices in software development, testing, and DevOps.
* Collaboration: Work closely with cross-functional teams, including Front End developers, product managers, and DevOps engineers, to deliver high-quality solutions that meet business needs.
* Automation and CI/CD: Implement and manage CI/CD pipelines, automated testing, and monitoring to ensure high availability and rapid deployment of services.
* Performance Optimization: Optimize Back End services for performance, scalability, and cost-effectiveness, ensuring the efficient use of AWS resources.
* Security: Ensure that all solutions adhere to industry best practices for security, including data protection, access controls, and encryption.
* Documentation: Create and maintain comprehensive technical documentation, including architecture diagrams, API documentation, and deployment guides.
* Problem Solving: Diagnose and resolve complex technical issues in production environments, ensuring minimal downtime and disruption.
* Continuous Improvement: Stay updated with the latest trends and best practices in Python, AWS serverless technologies, and fintech/banking technology stacks, and apply this knowledge to improve our systems.

Qualifications:
* Experience: Minimum of 10 years of experience in Back End software development, with at least 6 years of hands-on experience in Python. Extensive experience with AWS serverless technologies, including Lambda, DynamoDB, API Gateway, SNS, SQS, S3, ECS, EKS, and other related services. Proven experience in leading technical teams and delivering complex, scalable cloud-based solutions in the fintech or banking sectors.
* Technical Skills: Strong proficiency in Python and related frameworks (eg, Flask, Django). Deep understanding of AWS serverless architecture and best practices. Experience with infrastructure as code (IaC) tools such as AWS CloudFormation or Terraform. Familiarity with RESTful APIs, microservices architecture, and event-driven systems. Knowledge of DevOps practices, including CI/CD pipelines, automated testing, and monitoring using AWS services (eg, CodePipeline, CloudWatch, X-Ray).
* Leadership: Demonstrated ability to lead and mentor engineering teams, fostering a culture of collaboration, innovation, and continuous improvement.
* Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve complex technical issues in a fast-paced environment.
* Communication: Excellent verbal and written communication skills, with the ability to effectively convey technical concepts to both technical and non-technical stakeholders.

Preferred Qualifications:
* Experience with other cloud platforms (eg, Azure, GCP) and containerization technologies like Docker and Kubernetes.
* Familiarity with financial services industry regulations and compliance requirements.
* Relevant certifications such as AWS Certified Solutions Architect, AWS Certified Developer, or similar.
12/09/2024
Full time
NO SPONSORSHIP Software Engineering - Python, Java, Terraform, DevOps, Containerization Understanding of industry They do not necessarily have to work within a QRM portal. But they have to understand the industry and come from a highly regulated background, preferably financial Looking for a hard core developer who can work within quantitative risk management and they develop applications and solutions for the QRM team They do not build models, they automate models Develop hardcore applications These people will have masters in mathematics, statistics, physics, or computer science *They may even have a PhD They need to have CICD pipelines, Infrastructure as a Code, Kubernetes, Terraform, etc. Preferably having Java, Python, C++ Develops and maintains risk models for managing clearing fund and stress testing risk model software in production AWS develop CICD pipelines JAVA C# Python Agile Scrum financial products a plus understand markets financial derivatives equities interest rates commodity products Java preferred cicd infrastructure as a code Kubernetes terraform splunk open telemetry SQL big data Scripting in python Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. 
Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in the cloud environment. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.). Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/Scrum or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure as code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes). Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (eg, Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest, etc.). Experience with high-performance and distributed computing.
Experience with productivity tools such as Jira, Confluence and MS Office. Experience with scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus. Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
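As a rough illustration of the model-backtesting automation this posting describes, a VaR backtest can be reduced to counting the days on which realized losses breach the model's VaR estimate. The pass/fail threshold and the figures below are hypothetical, not from the posting.

```python
def count_var_breaches(pnl, var_estimates):
    """A breach occurs when the realized daily loss exceeds that day's
    VaR estimate (losses are negative P&L values)."""
    return sum(1 for p, var in zip(pnl, var_estimates) if -p > var)

def backtest_passes(pnl, var_estimates, max_breaches):
    """Flag the model if breaches exceed an assumed tolerance."""
    return count_var_breaches(pnl, var_estimates) <= max_breaches

pnl = [5.0, -12.0, 3.0, -1.0, -15.0, 2.0]   # daily P&L (illustrative)
var = [10.0] * 6                            # 99% VaR estimate per day

print(count_var_breaches(pnl, var))              # 2 (days -12.0 and -15.0)
print(backtest_passes(pnl, var, max_breaches=1)) # False
```

A production pipeline of the kind described would run this check on a schedule, pull P&L and VaR series from the team's databases, and raise an alert when the backtest fails.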
11/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Sr. Director, Network Reliability Engineering. This Sr. Director will focus on improving network services, operations, reliability, engineering automation, an API-driven approach, etc. Responsibilities: Lead the organization in building an API-driven approach to network services, enabling seamless integration of network tools with other network-related services and easy consumption of network tools services by other teams. Perform automated, regular network infrastructure audits to ensure continuous compliance with best practices and industry standards. Lead the development and/or integration of self-service tools for other teams to troubleshoot and resolve network-related issues. Collaborate with other teams to design and implement tools that will help automate end-to-end processes within network infrastructure. Develop automated test suites and maintain clear documentation of solutions developed. Build and lead the sustainability and reliability network engineering function that owns infrastructure availability and performance. Build tools that lead through automation and proactive/predictive alerting, backed by strong data analytics to identify areas of improvement. Implement comprehensive network service monitoring to ensure uptime and performance, including synthetic, real-user, system and application performance monitoring, dashboards, etc. Define, measure, and meet key Service Level Objectives covering availability, performance, incidents and chronic problems. Stand up a capacity-planning framework to regularly measure performance and capacity, ensuring there is no downtime due to capacity. Own end-to-end availability and performance of critical services and build automation to prevent problem recurrence; eventually automate the response to all non-exceptional service conditions.
Build a DevOps culture that provides high quality, continuous operations, and ongoing support, ensuring critical service level metrics, customer requirements and financial objectives are met. Qualifications: 15+ years of directly related professional experience. College or advanced degree and/or a minimum of 12+ years of relevant IT and management experience. Proven professional experience with operational and organizational management, leadership of teams, and enterprise-wide technology strategy. Good interpersonal and collaboration skills, with the ability to communicate effectively with small and large groups of business partners and senior leadership.
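The Service Level Objective work described in this posting rests on simple error-budget arithmetic: the availability target fixes how much downtime a period can absorb. A minimal sketch, assuming a 30-day period and illustrative figures:

```python
def error_budget_minutes(slo, period_minutes=30 * 24 * 60):
    """Allowed downtime for the period under the given availability SLO."""
    return (1 - slo) * period_minutes

def budget_remaining(slo, downtime_minutes, period_minutes=30 * 24 * 60):
    """How much of the error budget is left after observed downtime."""
    return error_budget_minutes(slo, period_minutes) - downtime_minutes

# A 99.9% monthly SLO allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 30.0), 1))  # 13.2
```

Tracking the remaining budget is what lets a team decide objectively when to pause feature work in favor of reliability work.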
11/09/2024
Full time
The Kubernetes Atlassian SecDevOps Engineer architects and implements sustainable hybrid cloud architectures, ensuring lean, automated and secure maintenance. He/she collaborates with senior stakeholders across areas including Digital, OT, Enterprise Architecture, Quality Assurance and IT Security. The Kubernetes Atlassian SecDevOps Engineer is also responsible for introducing containerization and associated concepts, such as orchestration, central automation and infrastructure as code, and for driving the DevOps culture within the organization. The ideal candidate is a thought leader, a consensus builder, and an integrator of people and processes, as well as a sound subject matter expert in setting up cloud services (incl. people, organization, technical aspects, cost effectiveness, high reliability, security, etc.). Responsible for designing and maintaining containerized solutions: design and implement cloud or on-premise infrastructure for IT and OT, together with the IT and OT infrastructure representatives, that meets the highest levels of GxP requirements, IT security controls and reliability. Define cloud standards and frameworks to manage the entire infrastructure life cycle with DevOps techniques and technologies such as: hardware provisioning (infrastructure as code), CI/CD pipelines, testing automation frameworks, artifact repositories (releases, images), log management and aggregation, event/metric management and aggregation, alerting, application building and deployment, and authentication and security frameworks. Responsible for proper design according to IT Security and QA standards and procedures, conducting reviews against architecture standards. Ensure that forward-looking architectures, including new applications, move to cloud-native setups in order to provide a single pane of glass for IT and OT management, strengthen business resilience, etc.
Define, drive and implement new working methodologies using a modern agile approach in collaboration with relevant stakeholders. Look for improvements and enhancements to infrastructure systems that will ultimately provide more efficient services to the business. Skills: cloud, Kubernetes, Atlassian, infrastructure as code, hardware, CI/CD, test automation, alerting, IT security, hybrid cloud implementation, collaboration, stakeholder management, English, pharmaceutical. Job Title: Kubernetes Atlassian SecDevOps Engineer. Location: Basel, Switzerland. Job Type: Contract. TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us.
To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request, in line with our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield.
11/09/2024
Project-based
*Cloud DevOps Engineer - HYBRID POSITION (1 DAY ON SITE) - 12 MONTHS+* For our international client based in the Netherlands, RED is currently looking for a Cloud DevOps Engineer to start on a new project for an initial 12-month contract, with excellent extension opportunities. This is a hybrid role which will require travel to Utrecht 1 day per week. Desired skills: Hands-on mentality. Experience mentoring between teams and providing solutions within the SRE chapter. Proficiency in Infrastructure as Code technologies such as Ansible. Proficiency in Kubernetes. Proficiency in Helm. Proficiency in Docker. Proficiency in Git and GitLab CI. Deep understanding of security best practices in (private) cloud development. Dutch language skills (reading and writing). If you are interested in this position, please apply or send your updated CV to (see below) for immediate consideration.
11/09/2024
Project-based
NO SPONSORSHIP Associate Principal, Software Engineering - Quantitative Risk Management Area - Automating Risk Models. Chicago - on site 3 days a week. Salary: $185K-$195K + Bonus. Looking for a hardcore developer who can work within quantitative risk management and develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. You will need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python or C++. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills, including in the cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure as code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
10/09/2024
Full time