*Hybrid, 3 days onsite, 2 days remote*

A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role focuses on engineering and maintaining lab environments in public cloud and data centers using Infrastructure as Code (IaC) techniques. Candidates will need experience with DevOps tools such as Terraform, Ansible, Jenkins, Kubernetes, and AWS, as well as experience developing tools and automating tasks in languages such as Python, PowerShell, and Bash.

Responsibilities:
- Engineer and maintain lab environments in public cloud and data centers using Infrastructure as Code techniques
- Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate proofs of concept for company infrastructure, applications and services that impact the Technology Roadmap
- Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT
- Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems
- Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security
- Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
- Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
- Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments, which will be used to validate assumptions within high-level solution designs

Qualifications:
- Bachelor's or master's degree in computer science or a related field, or equivalent experience
- 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience in a hybrid technology environment
- 3+ years of experience contributing to the architecture of cloud and on-prem solutions
- Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, or Ruby
- Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
- Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, or Amazon Kinesis
- In-depth knowledge of on-premises, cloud and hybrid networking concepts
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
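As a minimal, hypothetical sketch of the kind of automation scripting this role calls for (streamlining routine tasks such as backup and recovery), the following Python example creates a timestamped archive and prunes old ones to a retention limit. All paths, names, and the retention count are illustrative, not part of the posting:

```python
import tarfile
import time
from pathlib import Path

def create_backup(source_dir: str, backup_dir: str) -> Path:
    """Archive source_dir into a timestamped .tar.gz under backup_dir."""
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=Path(source_dir).name)
    return archive

def prune_backups(backup_dir: str, keep: int = 5) -> list:
    """Delete all but the `keep` newest backup archives; return what was removed."""
    archives = sorted(Path(backup_dir).glob("backup-*.tar.gz"),
                      key=lambda p: p.stat().st_mtime, reverse=True)
    removed = archives[keep:]
    for old in removed:
        old.unlink()
    return removed
```

In practice such a script would be wired into a scheduler (cron, systemd timers, or a pipeline job) rather than run by hand.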
28/09/2024
Full time
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

A prestigious financial institution is currently seeking a Principal Financial IT Infrastructure Architect. The candidate will be part of a small Innovation team of architects that collaborates with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation and continuous improvement of solutions running on the core real-time data streaming and compute infrastructure platforms, such as Kafka, Flink and Kubernetes, in a hybrid environment.

Responsibilities:
- Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases
- Ensure fault tolerance, scalability, and low-latency processing in streaming applications
- Collaborate with DevOps teams to define deployment strategies and manage scalability
- Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks
- Drive implementation of best practices for efficient data serialization, compression, and network communication
- Create and maintain architecture documentation, including system diagrams, data flow, and component interactions
- Maintain vendor relationships and participate in escalation sessions and postmortems
- Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems
- Stay informed about industry trends related to Kafka, Flink, and Kubernetes

Qualifications:
[Required] Effective communication skills to collaborate with technical stakeholders and evangelize best practices
[Required] Advanced problem-solving skills and a logical approach to solving problems
[Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink
[Required] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines

Technical Skills:
- Expert-level knowledge of Kafka
- Expert-level knowledge of Flink
- In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure
- Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes

Education and/or Experience:
[Preferred] Bachelor's or master's degree in an engineering discipline
[Required] 10+ years of experience architecting mission-critical cloud and on-prem real-time data streaming and event-driven architectures
[Required] 10+ years of experience with Java
[Required] 5+ years of specific Kafka and Flink experience
[Preferred] 5+ years of Kubernetes experience

Certificates or Licenses:
[Preferred] Confluent Certified Developer for Apache Kafka
[Preferred] AWS certifications (e.g. Solutions Architect Associate)
[Preferred] Certified Kubernetes Application Developer
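The streaming concepts this role centers on, such as the tumbling-window aggregations Flink provides, can be illustrated without any Kafka or Flink dependency. The sketch below is a plain-Python stand-in for that idea: events carrying a timestamp and a key are grouped into fixed, non-overlapping time windows and counted per key. The event shape and window size are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """Group (timestamp_ms, key) events into fixed, non-overlapping windows
    and count occurrences of each key within each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # align to window boundary
        windows[window_start][key] += 1
    # Return windows in time order as plain dicts
    return {start: dict(counts) for start, counts in sorted(windows.items())}
```

A real Flink job would additionally handle out-of-order events, watermarks, and state checkpointing, which is exactly where expert-level framework knowledge comes in.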
27/09/2024
Full time
Role Overview

We have an exciting opportunity for a Support Developer to join our growing Managed Services team. Ideally you will have worked in a support development role previously; however, if you're a great developer with a passion for service and quality, then you will be a great fit. You will work alongside our analysts, architects, developers and technical consultants to support, maintain and deliver high-quality cloud-based solutions that help our customers become more successful.

Responsibilities
- Develop tools and automation capabilities for our customers' internal and client-facing applications to improve their overall supportability and empower our clients' Service Desk agents to perform more first-time fixes
- Diagnose and fix issues in complex technical environments following DevOps principles and best practices
- Carry out maintenance and improvement tasks on the solutions and platforms we support
- Assist with the transition from development into production by identifying and creating any documentation and tooling required to support deployment

About you
- Proficiency in developing and deploying .NET applications, or supporting .NET applications
- Expertise in developing or supporting applications in Microsoft Azure
- An understanding across the following areas: front-end web development and support using C#; working with SQL databases; Infrastructure as Code tools such as ARM, Bicep or Terraform; configuring CI/CD pipelines; ALM tools such as Azure DevOps or Jira; proactive monitoring, including performance monitoring of applications

Knowledge and experience of the following would be advantageous:
- Azure DevOps Engineer (AZ-400) certification
- Blazor, React or Angular JavaScript frameworks
- NoSQL databases such as Cosmos DB
- CI/CD tools such as Jenkins or GitHub Actions
- Cloud services from Azure or AWS
- SharePoint On-Premises and Online
- Windows PowerShell
- Modern JavaScript libraries like Vue.js
- Docker and Kubernetes
- S.O.L.I.D design principles and modern design patterns

What we look for in our people
- Strong alignment with FSP values and ethos
- Commitment to teamwork, quality and mutual success
- Proactivity with an ability to operate with pace and energy
- Strong communication and interpersonal skills
- Excellent planning and organisational skills
- Dedication to excellence and quality

Who are FSP?

Founded in 2012, FSP Consulting Services (FSP) is a best-in-class digital transformation and cyber consultancy specialist combining real-world experience in business strategy, change and adoption, and digital solution delivery with a strong culture and social purpose. As a long-standing Microsoft Solutions Partner, our portfolio of modern workplace, cloud, data, and cyber security offerings, alongside trusted managed services delivery, is driving change for high-profile clients in both the public and private sector. Our work is founded on the commitment to deliver positive impact for both organisations and their people. As an employee-first organisation, FSP is committed to creating a culture of True Belonging, Excellence Everywhere, and Creating Opportunity. We are proud to have been recognised by Best Companies as a 3-star 'World Class' workplace (their highest level of accreditation) in 2024, 2023 and 2022. We were also awarded No.1 Best Company to Work For in the UK, in the Technology sector and in the South East (Regional League Table) in 2023. We have also been recognised three times as No.1 Best Workplace in the UK by Great Place to Work®. Find out more about our awards via our website.

Why work for FSP?

At FSP, we are committed to providing:
- A collaborative and supportive environment in which you can grow and develop your career
- The tools and opportunity to do work you can be proud of
- A chance to work alongside some of the best people in the industry, who always seek to share their knowledge and experience
- Hybrid working: we empower you to make smart choices about when and where to work to achieve great results
- Industry-leading coaching and mentoring
- Plus the excellent benefits package we offer
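As a hypothetical illustration of the design principles this posting lists, here is a minimal Python sketch of dependency inversion (the "D" in SOLID): the high-level report generator depends on an abstract writer rather than a concrete one, so storage backends can be swapped without touching the business logic. All class and method names are invented for illustration; the same shape applies equally in C#/.NET:

```python
from abc import ABC, abstractmethod

class ReportWriter(ABC):
    """Abstraction the high-level code depends on."""
    @abstractmethod
    def write(self, text: str) -> None: ...

class InMemoryWriter(ReportWriter):
    """One concrete backend; a file or blob-storage writer could replace it."""
    def __init__(self):
        self.lines = []
    def write(self, text: str) -> None:
        self.lines.append(text)

class ReportGenerator:
    """High-level policy: depends only on the ReportWriter interface."""
    def __init__(self, writer: ReportWriter):
        self.writer = writer
    def generate(self, items):
        self.writer.write(f"Report ({len(items)} items)")
        for item in items:
            self.writer.write(f"- {item}")
```

Injecting the writer also makes the generator trivially testable, which matters in a support role where diagnosing issues often starts with reproducing them in isolation.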
27/09/2024
Full time
Graduate/Junior DevOps Engineer - FinTech

Oliver Bernard are currently working with a FinTech company, based in London, who are looking to take on a Graduate/Junior DevOps Engineer as part of current expansion plans across tech for the development and growth of their platform. With a client base that is consistently growing, the IT team is scaling across the board, and they now have an opportunity for a graduate or junior engineer (3-12 months' experience) looking to push their career in the DevOps direction.

The incoming engineer will join the team and be mentored by DevOps Engineers and Solution Architects within the DevOps/Platform team, while being given the opportunity to develop and upskill with a variety of cloud, container and Infrastructure as Code technologies, predominantly AWS, Kubernetes and Terraform.

The ideal candidate will be looking for their first opportunity out of university, or have anywhere from 3-12 months' experience in a relevant role/industry, and be keen to work with the following:
- AWS Cloud
- Docker and Kubernetes
- Terraform for IaC
- CI/CD pipelines
- Python, PowerShell, Go and more
- Linux and Windows environments

This position can offer £25-45K for the right profile, and presents a great opportunity to break into a DevOps role with minimal experience: this role will give you the chance to learn and work your way into being an established DevOps Engineer. For this position, you must be comfortable being office-based 3 days per week.
27/09/2024
Full time
Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation - (RL7086)

Location: Gloucestershire/Bristol
Salary: £65-£95K per annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance. We welcome applications from full- or part-time candidates.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships, or for a dynamic and seasoned technologist looking for new and exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.

The Role: This is a brand-new opportunity for a bright, driven, customer-focused professional to join our client's 'Cloud Delivery' team and work alongside their Enterprise Cloud specialists to drive forward the design, deployment and operations of cloud infrastructure, automation and containerisation projects for the end client. The delivery team help deliver to valued clients the most effective cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support clients in extracting maximum business benefit from cloud solutions, leveraging best-in-class internal and partner technologies to create relevant and engaging experiences.

Duties:
- Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and producing high-level and low-level design documentation
- Provide cloud engineering capability to leverage public cloud platforms using automated build processes deployed with Infrastructure as Code
- Provide technical challenge and assurance throughout development and delivery of work
- Develop reusable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership
- Work independently and/or within a team using a DevOps way of working

Required Technical Skills & Experience:
- Experienced in cloud-native technologies in AWS
- Experienced in deploying IaaS/PaaS in multi-cloud environments
- Experienced in cloud and infrastructure engineering: building and testing new capabilities, and supporting the development of new solutions and common templates
- Able to act as a bridge from the infrastructure through to user-facing systems

Desirable Technical Skills & Experience:
- Experienced with Kubernetes containers
- Experienced in the use of automation tools, e.g. Terraform, Ansible, Foreman, Puppet and Python
- Experienced with different flavours of Linux platforms and services

To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience.

Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
27/09/2024
Full time
Associate Principal, Software Programming - Quantitative Risk Management Area - Associate Principal, Software Engineering - Automating Risk Models On site 3 days a week Salary - $185 - $195K + Bonus Looking for a hard core developer who works within the quantitative risk management and cab develop applications and solutions for the QRM team. You will not build models, you will automate models You will need to come from a financial institute, trading company, exchange, etc. Develop hardcore applications You will need to have CICD pipelines, Infrastructure as a Code, Kubernetes, Terraform, etc. Preferably having Java, Python, C++ Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. 
DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
26/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming computing. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: Bachelor's or Master's degree in an engineering discipline. 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. 10+ years of experience with Java. 5+ years of specific Kafka and Flink experience. 5+ years of Kubernetes experience. Expert-level knowledge of Kafka and Flink. Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
26/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious financial firm is looking for a Principal Software Engineer. This engineer will build software solutions to test systems for financial products. This engineer will need heavy experience using Java, Python, Terraform, CI/CD, DevOps, and containerization. The ideal candidate will have experience working in a highly regulated financial environment. Responsibilities: Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Qualifications: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics. 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas. Strong programming skills: able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in the cloud environment.
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) and non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, Junit, TestNG, PyTest, etc.). Experience with productivity tools such as Jira, Confluence, MS Office. Experience with Scripting languages such as Python is a plus.
26/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role is focused on engineering and maintaining lab environments in public cloud and data centers using IaC techniques. This person will need experience with DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, AWS, etc. This person will also need experience developing tools and automating tasks using languages such as Python, PowerShell, and Bash. Responsibilities: Engineer and maintain Lab environments in Public Cloud and Data Centers using Infrastructure as Code techniques. Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for company infrastructure, applications and services that impact the Technology Roadmap. Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT. Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems. Find opportunities to improve existing infrastructure architecture to enhance performance, support, scalability, reliability, and security. Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection. Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery. Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high-level Solution Designs. Qualifications: Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience. 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment. 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions. Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc. Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines. Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. In-depth knowledge of on-premises, cloud and hybrid networking concepts. Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager. Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
26/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Senior Java Software Engineer. Candidate will support and work collaboratively with business analysts, team leads and development team. A contributor in developing scalable and resilient hybrid and Cloud-based data solutions supporting critical financial market clearing and risk activities; collaborate with other developers, architects and product owners to support enterprise transformation into a data-driven organization. The Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment. Responsibilities: Support the application development of Real Time and batch applications for business requirements in agreed architecture framework and Agile environment Thoroughly analyzes requirements, develops, tests, and documents software quality to ensure proper implementation Follows agreed upon SDLC procedures to ensure that all information system products and services meet: both explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, security rules are upheld, and external facing reporting is properly represented Performs application and project risk analysis and recommends quality improvements Assists Production Support by providing advice on system functionality and fixes as required Communicates in a clear and concise manner all time delays or defects in the software immediately to appropriate team members and management Experience with resolving security vulnerabilities Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions. 
[Required] 3+ years of experience in building high speed, Real Time and batch solutions [Required] 3+ years of experience in Java [Preferred] Experience with high speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc [Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. [Preferred] Experience with cloud technologies and migrations. Experience preferred with AWS foundational services like VPCs, Security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc [Preferred] Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google [Required] Experience writing unit and integration tests with testing frameworks like Junit, Citrus [Required] Experience working with various types of databases like Relational, NoSQL [Required] Experience working with Git [Preferred] Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines [Preferred] Familiarity with monitoring-related tools and frameworks like Splunk, Elasticsearch, Prometheus, AppDynamics [Required] Hands-on experience with Java version 8 onwards, Spring, Spring Boot, REST APIs Technical Skills: [Required] Java-based software development experience, including deep understanding of Java fundamentals like data structures, concurrency and multithreading [Required] Experience in object-oriented design and software design patterns Education and/or Experience: [Required] BS degree in Computer Science or a similar technical field required
25/09/2024
Full time
NO SPONSORSHIP Software Engineering - Python, Java, Terraform, DevOps, Containerization. Candidates do not necessarily have to have worked within a QRM portal, but they have to understand the industry and come from a highly regulated background, preferably financial. Looking for a hardcore developer who can work within quantitative risk management and develop applications and solutions for the QRM team. They do not build models; they automate models. These people will have a master's degree in mathematics, statistics, physics, or computer science, and may even have a PhD. They need to have experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++. The team develops and maintains risk model software in production for managing the clearing fund and stress testing, on AWS. Key technologies and skills: CI/CD pipelines, Java, C#, Python, Agile/Scrum, Kubernetes, Terraform, Splunk, OpenTelemetry, SQL, Big Data, and Scripting in Python; financial products knowledge is a plus, including an understanding of markets and financial derivatives in equities, interest rates, and commodity products. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills.
Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise level software, including in the cloud environment. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.) Strong problem-solving skills: Be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) and non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, Junit, TestNG, PyTest, etc.). Experience with high performance and distributed computing. 
Experience with productivity tools such as Jira, Confluence, MS Office. Experience with Scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus. Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
25/09/2024
Full time
NO SPONSORSHIP Software Engineering - Python, Java, Terraform, DevOps, Containerization Understanding of industry They do not necessarily have to work within a QRM portal. But they have to understand the industry and come from a highly regulated background, preferably financial Looking for a hard core developer who can work within quantitative risk management and they develop applications and solutions for the QRM team They do not build models, they automate models Develop hardcore applications These people will have masters in mathematics, statistics, physics, or computer science *They may even have a PhD They need to have CICD pipelines, Infrastructure as a Code, Kubernetes, Terraform, etc. Preferably having Java, Python, C++ Develops and maintains risk models for managing clearing fund and stress testing risk model software in production AWS develop CICD pipelines JAVA C# Python Agile Scrum financial products a plus understand markets financial derivatives equities interest rates commodity products Java preferred cicd infrastructure as a code Kubernetes terraform splunk open telemetry SQL big data Scripting in python Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. 
Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise level software, including in the cloud environment. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.) Strong problem-solving skills: Be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) and non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, Junit, TestNG, PyTest, etc.). Experience with high performance and distributed computing. 
Experience with productivity tools such as Jira, Confluence, MS Office. Experience with Scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus. Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
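As a flavor of the "create unit and integration tests; build and enhance test automation tools" work the posting above describes, here is a minimal Python `unittest` sketch. The `margin` function and its values are hypothetical stand-ins for illustration, not anything from the role:

```python
import unittest

def margin(position: float, rate: float) -> float:
    """Toy margin calculation -- a hypothetical stand-in used only to
    illustrate test structure, not a real QRM model."""
    if rate < 0:
        raise ValueError("rate must be non-negative")
    return position * rate

class MarginTest(unittest.TestCase):
    def test_basic_margin(self):
        # Happy path: 5% of a 1000 position is 50.
        self.assertAlmostEqual(margin(1000.0, 0.05), 50.0)

    def test_negative_rate_rejected(self):
        # Invalid input must raise rather than return garbage.
        with self.assertRaises(ValueError):
            margin(1000.0, -0.1)

# Run the suite programmatically (avoids unittest.main()'s sys.exit),
# the same pattern a CI/CD pipeline step would drive.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MarginTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a pipeline, the same tests would typically run via a test runner stage (pytest, Gradle test, etc.) rather than inline as here.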
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is Bonus eligible* Prestigious Financial Company is currently seeking a Cloud DevOps Kubernetes Engineer with AWS and Kubernetes experience. Candidate will be joining the DevOps team and supporting development efforts for a team that strives to refine and automate the processes needed to support the Software Development Life Cycle (SDLC). Responsibilities: Enable development teams to self-service build and deployment processes through process automation. Design and implement process improvements across the build, deployment, and monitoring of Clearing applications. Support the maintenance and configuration of development environments in Kubernetes and AWS. Develop and support Terraform, Ansible, Harness, and Jenkins jobs used to instantiate and manage development environments. Qualifications: Understanding of Kanban and/or Agile methodologies Able to succeed in a fast-paced environment with frequent changes Comfortable communicating with both technical and non-technical audiences Self-starter - takes the initiative to research, learn and deliver Team player - humble, collaborative, and focused on making sure the entire team succeeds Programming/Scripting experience in languages like Java, Bash, Python or Go is required. Knowledge and experience with Continuous Integration and Continuous Delivery (CI/CD) tools (examples - GitHub, Jenkins, Artifactory, Docker, Compose, K8s). Java development experience is desirable. Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Experience with cloud technologies and migrations is required. Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google, etc.
Familiarity with monitoring related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics. Working knowledge of DevOps tools (eg Terraform, Ansible, Jenkins, Kubernetes, Helm) and CI/CD pipelines is required. Experience with RESTful APIs and JSON RPC. Experience following Git workflows. BS degree in Computer Science, similar technical field, or equivalent experience 1+ years of experience in building large scale, data-centric solutions 3+ years of experience (recent) participating on a DevOps team or as product owner for DevOps team
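The posting above asks for experience with RESTful APIs and JSON RPC. A minimal Python sketch of JSON-RPC 2.0 envelope handling follows; the `env.status` method name and parameters are hypothetical, purely for illustration:

```python
import json
from itertools import count

_ids = count(1)  # monotonically increasing request ids

def jsonrpc_request(method: str, params) -> str:
    """Build a JSON-RPC 2.0 request envelope as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": next(_ids),
    })

def jsonrpc_result(raw: str):
    """Parse a JSON-RPC 2.0 response; raise on an error envelope."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "RPC error"))
    return msg["result"]
```

Transport (HTTP POST, a socket, a message queue) is deliberately left out; JSON-RPC only specifies the envelope shape shown here.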
24/09/2024
Full time
NO SPONSORSHIP AWS Cloud Engineer SALARY: $115K-$120K and a 10% Bonus LOCATION: Chicago, IL Hybrid: 2 days remote and 3 days onsite SELLING POINTS: Bash, Python Scripting, AWS, Kubernetes, CI/CD, GitHub, Jenkins, Artifactory, Docker, Compose, K8s, Kafka, RabbitMQ, Amazon Kinesis, Terraform, Ansible, Helm, Linux, Linux Shell Scripting, Splunk, Infrastructure as Code (IaC) Qualifications: Programming/Scripting experience in languages like Java, Bash, Python or Go Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (examples - GitHub, Jenkins, Artifactory, Docker, Compose, K8s) Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Experience with cloud technologies and migrations Working knowledge of DevOps tools (eg Terraform, Ansible, Jenkins, Kubernetes, Helm) and CI/CD pipelines Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google, etc. Familiarity with monitoring related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics Experience with RESTful APIs and JSON RPC Experience following Git workflows Technical Skills: Experience with Linux and Linux Shell Scripting. Jenkins job setup and execution analysis - including Splunk log review for Root Cause Analysis (RCA). Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io. Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks. Experience with automation, configuration management and orchestration, infrastructure as code. Experience with Golang or Python is a plus.
BS degree in Computer Science, similar technical field, or equivalent experience 1+ years of experience in building large scale, data-centric solutions 3+ years of experience (recent) participating on a DevOps team or as product owner for DevOps team
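The "Splunk log review for Root Cause Analysis (RCA)" skill in the posting above often starts with simple triage: counting errors per component to find where to dig first. A Python sketch under an assumed `LEVEL component - message` log layout (real Splunk events carry structured fields, so the regex here is illustrative only):

```python
import re
from collections import Counter

# Assumed layout: "TIMESTAMP LEVEL component - message".
# Adjust the pattern to match the actual log format in use.
ERROR_RE = re.compile(r"\bERROR\s+(?P<component>\S+)\s+-\s+")

def summarize_errors(lines):
    """First-pass RCA triage: count ERROR lines per component so the
    noisiest component can be investigated first."""
    counts = Counter()
    for line in lines:
        m = ERROR_RE.search(line)
        if m:
            counts[m.group("component")] += 1
    return counts
```

In practice a Splunk query (`... | stats count by component`) does this server-side; the script form is useful when reviewing raw Jenkins console output or exported logs.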
24/09/2024
Full time
NO SPONSORSHIP Associate Principal, Software Programming Quantitative Risk Management Area Associate Principal, Software Engineering Automating Risk Models Chicago - On site 3 days a week Salary - $185K-$195K + Bonus Looking for a hardcore developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models, you will automate models. You will need to come from a financial institute, trading company, exchange, etc. Develop hardcore applications. You will need to have CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc. Preferably having Java, Python, C++. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills, including in the cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
24/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* *NO CONTRACTORS OR CONSULTANTS* A prestigious company is looking for an Associate Principal, Backend Java Developer. This company needs someone with 7-10 years of experience focused on Back End Java development, Java 11, Kafka, Golang, Multithreading, AWS, etc. They will be working in a Real Time and highly regulated financial environment. Responsibilities: Actively participates in design of highly performing, scalable, secure, reliable and cost optimized solutions. Primary responsibility is application design and development of next gen clearing applications for business requirements in an agreed architecture framework and Agile environment. Thoroughly analyzes requirements, develops, tests, and documents software quality to ensure proper implementation. Follows agreed upon SDLC procedures to ensure that all information system products and services meet both explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, and audit requirements; that security rules are upheld; and that external facing reporting is properly represented. Participates in code reviews based on high engineering standards. Writes unit and integration tests based on chosen test frameworks. Assists Production Support by providing advice on system functionality and fixes as required. Qualifications: BS degree in Computer Science, similar technical field required. Masters preferred. 7-10 years of experience in building large scale, compute and event-driven solutions. Experience (including internal workings of Java) in Java 11+ is required. Experience with app development in Golang. Experience developing software using Object Oriented Designs, advanced patterns (like AOP) and multi-threading is required. Experience with distributed message brokers like Kafka, IBM MQ, Amazon Kinesis, etc. is desirable.
Experience with cloud technologies and migrations. Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Must be able to write good quality code with 80% or above unit and integration test coverage. Experience with testing frameworks like JUnit, Citrus is desirable. Experience working with various types of databases like Relational, NoSQL, Object-based, Graph. Experience following Git workflows is required. Familiarity with DevOps tools (eg Terraform, Ansible, Jenkins, Kubernetes, Docker, Helm) and CI/CD pipelines is a plus. Experience with performance optimization, profiling, and memory management.
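The posting above pairs multithreading with distributed message brokers (Kafka, IBM MQ, Kinesis). The core pattern behind both is producer/consumer decoupling through a queue; a minimal in-process Python sketch follows (the message contents and `.upper()` "processing" are hypothetical; a real system would use a broker client, not `queue.Queue`):

```python
import queue
import threading

def run_pipeline(messages):
    """In-process producer/consumer sketch of the broker pattern.
    A thread-safe queue decouples the producing and consuming threads,
    just as Kafka or MQ decouples producing and consuming services."""
    q: "queue.Queue" = queue.Queue()
    results = []

    def producer():
        for m in messages:
            q.put(m)
        q.put(None)  # sentinel: signals end of stream to the consumer

    def consumer():
        while True:
            m = q.get()
            if m is None:
                break
            results.append(m.upper())  # stand-in for real processing

    t_prod = threading.Thread(target=producer)
    t_cons = threading.Thread(target=consumer)
    t_prod.start()
    t_cons.start()
    t_prod.join()
    t_cons.join()
    return results
```

With a single consumer reading from a FIFO queue, processing order matches publish order; scaling out consumers (as with broker partitions) trades that ordering guarantee for throughput.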
24/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* A prestigious company is on the search for a Senior Associate, Cloud Engineer. This company is looking for a cloud engineer with 3+ years of experience with Bash, Python, AWS, Kubernetes, CI/CD, Ansible, Terraform, Linux Shell, IaC, etc. Responsibilities: Enable development teams to self-service build and deployment processes through process automation. Assist in designing process improvements across the build, deployment, and monitoring of Clearing applications. Support the maintenance and configuration of development environments in Kubernetes and AWS. Support Terraform, Ansible, Harness, and Jenkins jobs used to instantiate and manage development environments. Qualifications: BS degree in Computer Science, similar technical field, or equivalent experience 1+ years of experience in building large scale, data-centric solutions 3+ years of experience (recent) participating on a DevOps team or as product owner for DevOps team Programming/Scripting experience in languages like Java, Bash, Python or Go Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (examples - GitHub, Jenkins, Artifactory, Docker, Compose, K8s) Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Working knowledge of DevOps tools (eg Terraform, Ansible, Jenkins, Kubernetes, Helm) and CI/CD pipelines Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google, etc. Familiarity with monitoring related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics Experience with RESTful APIs and JSON RPC Experience following Git workflows Experience with Linux and Linux Shell Scripting. Jenkins job setup and execution analysis - including Splunk log review for Root Cause Analysis (RCA).
Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io. Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks. Experience with automation, configuration management and orchestration, infrastructure as code.
24/09/2024
Full time
Senior Engineer, Cloud/Infrastructure Security Salary: Open + bonus Location: Chicago, IL Hybrid: 3 days onsite, 2 days remote *We are unable to provide sponsorship for this role* Qualifications Bachelor's degree in computer science or a related field 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc. In-depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes Preferred: Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Familiarity with security standards such as the NIST CSF Related certifications Responsibilities: Engineer and maintain Lab environments in Public Cloud and Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for company infrastructure, applications and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
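To illustrate the "automation scripts ... such as scaling, patching, backup, and recovery" responsibility above, here is a minimal Python backup sketch using only the standard library. The function name, archive naming scheme, and directory layout are assumptions for illustration, not the team's actual tooling:

```python
import tarfile
from datetime import datetime, timezone
from pathlib import Path

def backup_dir(src: Path, dest_dir: Path) -> Path:
    """Create a timestamped .tar.gz archive of src inside dest_dir.

    The UTC timestamp in the filename makes archives sortable and
    avoids overwriting a previous run's backup."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = dest_dir / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)  # store paths relative to src's parent
    return archive
```

A production version would add retention (pruning old archives), integrity checks, and upload to durable storage such as S3; this sketch only covers the archive step.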
24/09/2024
Full time
Tiro Partners are seeking a highly skilled and experienced Digital Architect with expertise in Microservices Architecture and API Development to join one of our clients based in Amsterdam. The ideal candidate will be responsible for designing, developing, and implementing scalable and efficient digital solutions. As a Digital Architect, you will work closely with cross-functional teams to ensure that the architecture aligns with the business's needs, focusing on improving agility, performance, and innovation through modern digital technologies. Key Responsibilities: Architect and Design Solutions: Lead the design and implementation of digital architectures using microservices and API-driven models to ensure scalability, flexibility, and efficiency. Microservices Development: Break down monolithic applications into microservices-based systems, ensuring proper inter-service communication, data management, and resilience. API Strategy: Define and implement an enterprise-wide API strategy, including the design and development of RESTful APIs, ensuring best practices and security standards. System Integration: Ensure seamless integration between internal and external systems using APIs and other modern integration patterns. Technical Leadership: Provide technical guidance to engineering teams, reviewing designs, conducting code reviews, and ensuring adherence to architectural standards. Cloud Architecture: Collaborate on cloud-native architecture design, focusing on services such as containers, orchestration (Kubernetes, Docker), and serverless computing. Security and Compliance: Ensure that architectural designs and implementations adhere to security, compliance, and performance standards. Collaboration: Work closely with product managers, business stakeholders, and developers to ensure that the architecture aligns with business goals and provides a seamless user experience.
Continuous Improvement: Evaluate and recommend new technologies and tools to improve the efficiency and effectiveness of the digital architecture. Documentation: Maintain thorough and up-to-date documentation of architecture designs, system specifications, and coding standards. Qualifications: Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Experience: 7+ years of experience in software architecture, with a strong focus on microservices and API development. Technical Expertise: Deep understanding of Microservices Architecture and associated patterns (eg, CQRS, Event-Driven Architecture). Proven experience in designing, building, and managing RESTful APIs. Familiarity with API gateways, management, and security protocols (OAuth2, JWT). Strong proficiency in programming languages like Java, Python, Node.js, or Go. Experience with cloud platforms (AWS, Azure, GCP), containers, and orchestration technologies (Docker, Kubernetes). Familiarity with DevOps and CI/CD pipelines, including tools like Jenkins, Git, and Terraform. Problem-Solving: Strong analytical and problem-solving skills with a focus on high availability, performance, and scalability. Leadership: Experience leading technical teams and mentoring junior developers. Communication: Excellent verbal and written communication skills, with the ability to present complex technical ideas to both technical and non-technical stakeholders. Why Join Us? Innovative Environment: Work with cutting-edge technologies in a company that encourages innovation and continuous learning. Impactful Work: Play a pivotal role in transforming our digital landscape and shaping the future of our technology stack. Collaborative Culture: Work alongside talented professionals in a supportive and collaborative environment.
If you are passionate about building modern digital architectures and have deep expertise in microservices and APIs, we would love to hear from you! If interested, please get in touch, sharing your updated CV and your best point of contact.
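The qualifications above call for familiarity with API security protocols such as OAuth2 and JWT. As a rough illustration of the JWT mechanics that experience implies, here is a minimal HS256 sign-and-verify sketch using only the Python standard library. The claim names and secret are hypothetical, and a production system would use a vetted library (e.g., PyJWT) plus expiry and audience checks; this only shows the compact header.payload.signature format:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per RFC 7515."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Create a compact HS256 JWT: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the HMAC over header.payload and compare in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

# Hypothetical claims, just for demonstration.
token = sign_jwt({"sub": "svc-gateway", "scope": "read"}, "demo-secret")
assert verify_jwt(token, "demo-secret")
assert not verify_jwt(token, "wrong-secret")
```

The same shape underlies what an API gateway validates on every request: the gateway re-derives the signature with the shared secret (or a public key for RS256) and rejects tokens that fail the comparison.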
24/09/2024
Full time
*We are unable to sponsor for this permanent, full-time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. The candidate will be part of a small Innovation team of Engineers that collaborates with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive, requirements-driven solution design. The candidate will develop technology engineering requirements, work on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation, and build, deploy, document, and manage Lab environments within On-Prem/Cloud data centers to be used for Proofs of Concept and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap.
Responsibilities:
Engineer and maintain Lab environments in the Public Cloud and the Data Centers using Infrastructure as Code techniques
Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proofs of Concept for infrastructure, applications, and services that impact the Technology Roadmap
Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT
Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems
Identify opportunities to improve the performance, supportability, scalability, reliability, and security of the existing infrastructure architecture
Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high-level Solution Designs
Qualifications:
Ability to think strategically and map architectural decisions/recommendations to business needs
Advanced problem-solving skills and a logical approach to solving problems
[Required] Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc.
[Preferred] Experience with DevOps tools, e.g., Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
[Preferred] Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Technical Skills:
In-depth knowledge of on-premises, cloud, and hybrid networking concepts
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
[Preferred] Familiarity with security standards such as the NIST CSF
Education and/or Experience:
[Preferred] Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience
[Required] 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment
[Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem solutions
Certificates or Licenses:
[Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator, or similar
[Desired] Technical security certifications such as AWS Certified Security, Microsoft Azure Security Engineer, or similar
[Desired] CCNA, Network+, or other relevant networking certifications
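The responsibilities above include developing automation scripts to streamline routine tasks such as backup and recovery. As a rough illustration of the kind of tooling meant, here is a minimal backup-with-retention sketch using only the Python standard library. The naming scheme, retention count, and function name are illustrative assumptions, not anything specified in the posting; a real lab runbook would target cloud snapshots or storage APIs rather than local archives:

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path

def take_backup(source: Path, backup_dir: Path, keep: int = 3) -> Path:
    """Archive `source` into a timestamped .tar.gz and prune older copies.

    Keeps only the newest `keep` archives, so repeated runs (e.g. from a
    scheduler) never grow the backup directory without bound.
    """
    backup_dir.mkdir(parents=True, exist_ok=True)
    # Zero-padded UTC timestamp (with microseconds) so lexicographic order
    # of filenames matches chronological order.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S%f")
    archive = shutil.make_archive(
        str(backup_dir / f"backup-{stamp}"), "gztar", root_dir=str(source)
    )
    # Retention: delete everything except the newest `keep` archives.
    archives = sorted(backup_dir.glob("backup-*.tar.gz"))
    for old in archives[:-keep]:
        old.unlink()
    return Path(archive)
```

Run from cron or a CI job, this gives the "scaling, patching, backup, and recovery" loop a predictable, self-pruning artifact trail; recovery is then just extracting the newest archive.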
23/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois