NO SPONSORSHIP
Associate Principal, Software Engineering - Automating Risk Models (Quantitative Risk Management Area)
Chicago - On site 3 days a week
Salary: $185K - $195K + bonus

Looking for a hardcore developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. You will need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc. Java, Python, or C++ preferred.

Responsibilities:
- Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Contribute to development of QRM's databases and ETLs.
- Integrate model prototypes, the model library, and model testing tools using industry best practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and/or write code in a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and cloud environment manipulation skills.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (e.g., Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes).

Education and/or Experience:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 7+ years of experience as a software developer with exposure to cloud or high-performance computing.
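For a sense of what "automating models rather than building them" can look like in practice, here is a minimal sketch of a batch harness that runs an existing model over named input scenarios and captures per-scenario failures. All names (`run_scenarios`, `RunResult`, the margin-style lambda in the usage note) are hypothetical illustrations, not taken from the posting.

```python
from dataclasses import dataclass
from typing import Callable, Mapping, Optional

@dataclass
class RunResult:
    scenario: str
    ok: bool
    value: Optional[float]
    error: Optional[str]

def run_scenarios(model: Callable[[Mapping], float],
                  scenarios: Mapping[str, Mapping]) -> list:
    """Run a model callable over named input scenarios, capturing
    failures per scenario instead of aborting the whole batch."""
    results = []
    for name, inputs in scenarios.items():
        try:
            results.append(RunResult(name, True, model(inputs), None))
        except Exception as exc:  # record the failure and keep going
            results.append(RunResult(name, False, None, str(exc)))
    return results
```

A harness like this is the kind of thing a test-automation tool or CI job would wrap: the model stays a black box, and the automation layer handles scheduling, error capture, and reporting.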
25/06/2024
Full time
AWS Cloud-Based Performance Testing
Chicago - Hybrid, 3 days on site - Long-term contract role (C2C or W2)

Must be AWS certified, with heavy cloud experience in the setup and maintenance of a cloud-based performance system to automate and troubleshoot environmental issues. Performance testing, automation testing, and financial experience strongly preferred. Python scripting (converting Java to Python). Candidates don't need to be application developers so much as have DevOps and containerization skills. Splunk, Confluence, Jira, API testing, UC4 or similar. This role is all about a cloud testing system: they are migrating from an old system to a new system. Kafka is a HUGE plus.

WORK TO BE PERFORMED:
- Performance testing with open-source tools like JMeter and Gatling; Perl scripting, PowerShell scripting, solid Python scripting, and Java.
- Set up parallel testing environments that will be used to compare existing system business processes and data against a new cloud-based system/platform. The goal is to ensure that the new system produces correct results and performs as expected before it can become the official system of record.
- Take raw data, mask it, and create algorithms and solutions that increase the data load feeding into the new Clearing System, with no duplicates or other data issues that would cause it to be rejected.
- Assist in the setup and maintenance of cloud-based performance and functional test environments in AWS, and define the steps to automate the process for continuous testing and iteration of cycles.

SKILL AND EXPERIENCE REQUIRED:
- Python scripting: familiarity with creating modules that multiply transactional data, and other data-multiplier strategies used in test cycles of the Real-Time Clearing System.
- SDET automation testing skills/QA automation engineering.
- Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor.
- Solid utility building with Python, Perl, and PowerShell.
- Test automation using CI/CD concepts.
- AWS Certified SysOps Administrator or Certified Developer (required).

Languages/Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP web services (JAX-WS), RESTful web services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics.

Software Tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, real-time and historical monitoring tools on-prem and in the cloud. Web server/app server/container experience.

Database Technologies: DB2, PostgreSQL. Operating systems experience. Methodologies: Agile, Iterative Waterfall.
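The "multiply transactional data and mask it" requirement above can be sketched roughly as follows. This is a hedged illustration only: the field names (`trade_id`, `account`), the `ACCT` prefix, and the salted-hash masking scheme are hypothetical choices, not details from the posting.

```python
import hashlib
import itertools

def mask_account(account_id: str, salt: str = "test-salt") -> str:
    """Deterministically pseudonymize an account id so masked data
    stays referentially consistent across generated test files."""
    digest = hashlib.sha256((salt + account_id).encode()).hexdigest()
    return "ACCT" + digest[:12].upper()

def multiply_trades(trades, factor):
    """Clone each trade `factor` times with unique trade ids so the
    enlarged load contains no duplicate keys to be rejected downstream."""
    seq = itertools.count(1)  # global counter guarantees id uniqueness
    out = []
    for trade in trades:
        for _ in range(factor):
            clone = dict(trade)
            clone["trade_id"] = f"{trade['trade_id']}-{next(seq):06d}"
            clone["account"] = mask_account(trade["account"])
            out.append(clone)
    return out
```

Determinism in the masking step matters: the same source account must map to the same masked id in every multiplied file, or referential checks in the target system will fail.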
25/06/2024
Project-based
ASSOCIATE PRINCIPAL, SOFTWARE ENGINEERING (JAVA)
SALARY: $160K - $170K plus 15% bonus
LOCATION: Chicago, IL - Hybrid, 3 days onsite and 2 days remote
NO SPONSORSHIP

Looking for a candidate with 5+ years of Back End Java development (version 8 or above); financial experience is a big plus. Must have event-driven systems experience with cloud-based AWS data solutions; any DevOps (Terraform, Ansible, Jenkins) is a big plus, as are the Java memory model, data structures, concurrency and multithreading, strong testing, Flink, Apache Spark, Kafka Streams, etc.

Screening questions:
- Java: Do you understand multithreading? What is your level of experience with Spring?
- Kafka: Can you answer basic user/developer questions?
- Flink: Do you have any experience?
- Do you have any understanding of Big-O notation?

This role supports and works collaboratively with business analysts, team leads, and the development team. A contributor in developing scalable and resilient hybrid and cloud-based data solutions supporting critical financial market clearing and risk activities; collaborate with other developers, architects, and product owners to support the enterprise's transformation into a data-driven organization. The Specialist, Application Developer will be a team player and work well with business, technical, and non-technical professionals in a project environment.

Primary Duties and Responsibilities: To perform this job successfully, an individual must be able to perform each primary duty satisfactorily.
- Support the development of big data applications for business requirements in the agreed architecture framework and Agile environment.
- Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation.
- Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented.
- Perform application and project risk analysis and recommend quality improvements.
- Assist Production Support by providing advice on system functionality and fixes as required.
- Communicate all time delays or defects in the software clearly, concisely, and immediately to appropriate team members and management.
- Experience with resolving security vulnerabilities.

Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
- 5+ years of experience in building high-speed, data-centric solutions.
- 5+ years of experience in Java.
- Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
- Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- Experience with cloud technologies and migrations.
- Experience developing and delivering technical solutions using public cloud service providers like Amazon and Google.
- Experience writing unit and integration tests with testing frameworks like JUnit and Citrus.
- Experience following Git workflows.
- Working knowledge of DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
- Familiarity with monitoring-related tools and frameworks like Splunk, Elasticsearch, Prometheus, and AppDynamics.

Technical Skills:
- Java-based software development experience, including multithreading.
- Fluent in object-oriented design.
- Strong testing experience.
- Experience working with two or more of the following: Unix/Linux environments, event-driven systems, transaction processing systems, distributed and parallel systems, large software system development, security software development, public-cloud platforms.
- Hands-on experience with Java version 8 onwards, Spring, Spring Boot, microservices, REST APIs.
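The event-driven and multithreading themes in this posting boil down to one pattern: consumers pulling events off a queue and updating shared state safely. A minimal sketch of that consume-process pattern, written in Python for brevity even though the role is Java (the same structure maps onto `ExecutorService` and `BlockingQueue`); the event shape and function name are invented for illustration.

```python
import queue
import threading

def run_pipeline(events, worker_count=4):
    """Fan events out to worker threads and collect per-key counts,
    illustrating the consumer side of an event-driven system."""
    q = queue.Queue()
    counts = {}
    lock = threading.Lock()

    def worker():
        while True:
            event = q.get()
            if event is None:  # poison pill: shut this worker down
                q.task_done()
                return
            with lock:  # guard shared state across threads
                counts[event["key"]] = counts.get(event["key"], 0) + 1
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for event in events:
        q.put(event)
    for _ in threads:          # one poison pill per worker
        q.put(None)
    q.join()                   # wait until every queued item is processed
    for t in threads:
        t.join()
    return counts
```

The lock around the shared dictionary is the kind of detail the screening questions above probe: without it, concurrent read-modify-write of the counts would race.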
25/06/2024
Full time
NO SPONSORSHIP
Software Engineering - Quantitative Risk Automation Modelers

Keys are: Python, Java, Terraform, DevOps, containerization, and financial industry experience. Looking for hardcore developers who want to work within quantitative risk management and develop applications and solutions for the QRM team. They do not build models; they automate models. They need to come from an industry company (financial institution, trading company, exchange, etc.). Need to have CI/CD pipelines, IaC, Kubernetes, Terraform.

This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund, and stress testing, with a focus on developing and maintaining risk model software in production, and the environments and infrastructure used in model implementation and testing.

Qualifications:
- Strong programming skills: able to read and/or write code in a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.
- Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools.
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes).
- Experience with scripting languages such as Python.
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 7+ years of experience as a software developer with exposure to cloud or high-performance computing.
25/06/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent full-time role*

A prestigious company is looking for an Associate Principal, Java Software Engineering. This engineer will focus on Back End Java development and must have experience with event-driven architecture, AWS data solutions, Kafka, multithreading, etc.

Responsibilities:
- Support the development of big data applications for business requirements in the agreed architecture framework and Agile environment.
- Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation.
- Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented.
- Perform application and project risk analysis and recommend quality improvements.
- Assist Production Support by providing advice on system functionality and fixes as required.

Qualifications:
- BS degree in Computer Science or a similar technical field required.
- 5+ years of experience in building high-speed, data-centric solutions.
- Java-based software development experience, including a deep understanding of Java fundamentals like the memory model, data structures, concurrency, and multithreading.
- Fluent in object-oriented design, industry best practices, software patterns, and architecture principles.
- Strong testing experience, including developing test plans, automated test cases, and working with test frameworks.
- Experience working with two or more of the following: Unix/Linux environments, event-driven systems, transaction processing systems, distributed and parallel systems, large software system development, security software development, public-cloud platforms.
- Hands-on experience with Java version 8 onwards, Spring, Spring Boot, microservices, REST APIs.
- Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
- Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- Experience with cloud technologies and migrations.
- Experience preferred with AWS foundational services like VPCs, security groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI, and IAM.
- Experience developing and delivering technical solutions using public cloud service providers like Amazon and Google.
- Experience writing unit and integration tests with testing frameworks like JUnit and Citrus.
- Experience working with various types of databases: relational, NoSQL, object-based, graph.
- Experience following Git workflows.
- Working knowledge of DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
25/06/2024
Full time
Job Description: We are seeking a skilled Java Developer with expertise in RESTful and SOAP microservices using Spring Boot to join our innovative team. As a Java Developer, you will be responsible for designing, developing, and maintaining high-quality Java applications that adhere to industry best practices and standards. The successful candidate should preferably have SC/DV clearance, or must be eligible and willing to go through SC/DV clearance.

Responsibilities:
- Design, develop, and deploy RESTful and SOAP microservices using the Spring Boot framework.
- Collaborate with cross-functional teams to analyse requirements, design solutions, and implement software features.
- Write clean, efficient, well-documented code that adheres to SOLID principles, coding standards, and best practices.
- Perform unit testing, integration testing, and troubleshooting to ensure the reliability, scalability, and security of the software.
- Participate in code reviews and provide constructive feedback to peers to improve code quality and maintainability.
- Stay up to date with emerging technologies and industry trends to continuously enhance skills and knowledge.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience as a Java Developer with a focus on developing RESTful and SOAP microservices.
- Proficiency in the Java programming language (Java 8 or higher) and the Spring Boot framework.
- Experience with Test-Driven Development (TDD) and Behaviour-Driven Development (BDD).
- Strong understanding of RESTful API design principles and best practices.
- Experience with SOAP-based web services and related technologies such as WSDL and XML.
- Knowledge of microservices architecture patterns and design principles.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Experience with relational databases (e.g., MySQL, PostgreSQL).
- Experience with containerization technologies such as Docker and Kubernetes.
- Experience with Maven, Gradle, Git, JUnit, Cucumber, Jenkins, CI/CD pipelines, and SonarQube.
- Understanding of Agile methodologies and DevOps practices.
- Experience in documenting low-level design.
- Excellent problem-solving skills and attention to detail.
- Effective communication and collaboration skills.
- Ability to work independently and as part of a team in a fast-paced environment.

Preferred Qualifications:
- Experience with Front End technologies such as HTML, CSS, and JavaScript, and frameworks like Node.js, Angular, or React.
- Creativity and ability to think outside the box while defining sound and practical solutions.
- Experience implementing user authentication and authorisation in a web application utilising Keycloak.
- Certification in Java programming or related technologies is a plus.

How to Apply: If you are passionate about software development, enjoy solving complex problems, and thrive in a collaborative environment, we encourage you to apply. Please submit your resume and a cover letter detailing your relevant experience (see below).

Location: The position may require flexibility in location, with a need to travel to London (M25) and occasional travel to UK sites.

Commitment to Excellence: In this role, you will not only be responsible for software development but also for contributing to the growth and innovation of our company. Your work will directly impact the success of our projects and the satisfaction of our clients, making you a key player in our team.
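For candidates unfamiliar with the TDD requirement above, the workflow is: write a failing test first, then write the minimum code to pass it. A tiny language-agnostic illustration, sketched in Python rather than Java for brevity; the `parse_amount` function and its pence-based contract are invented for this example.

```python
def parse_amount(raw: str) -> int:
    """Parse a currency string like '£1,234.56' into integer pence.
    Written after the tests below, in classic red-green order."""
    cleaned = raw.strip().lstrip("£$").replace(",", "")
    pounds, _, pence = cleaned.partition(".")
    # Pad/truncate the fractional part so '5.5' means 5 pounds 50 pence.
    return int(pounds) * 100 + (int(pence.ljust(2, "0")[:2]) if pence else 0)

# Tests written first (the "red" step), driving the implementation above:
def test_plain():
    assert parse_amount("12.34") == 1234

def test_symbol_and_commas():
    assert parse_amount("£1,234.56") == 123456

def test_whole_number():
    assert parse_amount("7") == 700

def test_short_fraction():
    assert parse_amount("$5.5") == 550

for test in (test_plain, test_symbol_and_commas,
             test_whole_number, test_short_fraction):
    test()
```

In a real Spring Boot project the same loop runs through JUnit (unit level) and Cucumber (BDD level); the point is the order of operations, not the tooling.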
25/06/2024
Full time
NO SPONSORSHIP Senior Linux Server Administration with Kubernetes and Terraform. We need a very senior Linux administrator with 10 years of experience. You will need 4-5 years of Kubernetes, OpenShift, Rancher, Terraform, Ansible for automation, Jenkins, Python, etc. KUBERNETES IS KEY. Enterprise Linux administration, engineering automation, and support. You will also need some DevOps background; key technologies include Kubernetes, Terraform, Ansible, CI/CD, GRUB, PXE boot, Kickstart, yum/RPMs, Satellite Server, SAN/NAS, OpenShift, and AWS Cloud, plus six or more years in a VMware environment.

Required: 10 or more years of experience in Linux systems installation, operations, administration, and maintenance of physical and virtualized servers. Two or more years of experience in DevOps with Kubernetes. Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools. In-depth system administration knowledge and skills for Red Hat Linux; knowledge of Amazon Linux is a plus.

Primary Duties and Responsibilities: Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers. Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform. Perform Linux administration including changes, deletes, disk space management, and application installation and support. Use your infrastructure and networking knowledge to maintain cloud-based infrastructure (predominantly on AWS) involving EC2, S3, RDS, and VPC. Use configuration management tools (Ansible and Terraform) to build and maintain a hybrid infrastructure hosted both at colocation facilities and in the public cloud.

Technical Skills: In-depth system administration knowledge and skills for Red Hat Linux. Experience using GitHub. Experience using configuration management tools such as Puppet, Chef, or Ansible, and container tools such as Docker. Ability to write and maintain automation code, scripts, and Infrastructure as Code, such as Terraform. Experience with DevOps activities and using CI/CD pipeline software to deploy code. Working knowledge of cloud components and services in AWS or Azure. System administration experience and knowledge of VMware and administration of virtual servers. GRUB, PXE boot, Kickstart. Experience with GitHub, Ansible, Jenkins, and Terraform tools/applications. Knowledge or experience with DevOps, OpenShift, AWS cloud, or other similar technologies is desirable.
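As a rough illustration of the automation scripting this posting asks for (Python is listed alongside Ansible, and disk space management is an explicit duty), here is a minimal disk-capacity check of the kind an admin might wire into monitoring. The function names and the 90% threshold are invented for the sketch; only `shutil.disk_usage` is a real stdlib call.

```python
import shutil

def usage_percent(used: int, total: int) -> float:
    """Percentage of capacity consumed; kept pure so it is easy to unit-test."""
    if total <= 0:
        raise ValueError("total must be positive")
    return used / total * 100.0

def over_threshold(path: str = "/", threshold: float = 90.0) -> bool:
    """True when the filesystem holding `path` is above `threshold` percent full."""
    du = shutil.disk_usage(path)  # stdlib: returns (total, used, free) in bytes
    return usage_percent(du.used, du.total) >= threshold
```

Keeping the arithmetic separate from the `shutil` call is what makes a script like this testable in a CI/CD pipeline, which is the DevOps habit the posting is really asking for.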
25/06/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Sr. Linux Administrator/Engineer. This admin/engineer will need 10+ years of heavy Linux experience, along with 4+ years of experience working with DevOps, Kubernetes, OpenShift, Ansible, Terraform, Jenkins, Rancher, and Python.

Responsibilities: Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers. Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform. Perform Linux administration including changes, deletes, disk space management, and application installation and support. Use your infrastructure and networking knowledge to maintain cloud-based infrastructure (predominantly on AWS) involving EC2, S3, RDS, and VPC. Use configuration management tools (Ansible and Terraform) to build and maintain a hybrid infrastructure hosted both at colocation facilities and in the public cloud. Run proof-of-concept projects on early-stage infrastructure improvements to validate the feasibility of an approach, evaluate performance, and spike an implementation. Review and evaluate virtual and physical server performance and capacity. Forecast system demands and recommend upgrades, expansions, and reconfigurations. Perform automated computing environment builds, site setup, user training, hardware/software installation, maintenance and support, and documentation of operating procedures and processes. Support the VMware environment, including changes, adding/removing systems, and disk space management. Troubleshoot hardware and software problems, take appropriate corrective action, and/or interact with IT staff or vendors in performing complex testing, support, server recovery, and troubleshooting functions.

Qualifications: Bachelor's degree in Computer Science or a related discipline, or an equivalent combination of education and work experience. Ten or more years of experience in Linux systems installation, operations, administration, and maintenance of physical and virtualized servers. 4 or more years of experience in DevOps with Kubernetes, Python scripting, and Ansible for automation. Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools. In-depth system administration knowledge and skills for Red Hat Linux; knowledge of Amazon Linux is a plus. Experience using GitHub or other version control tools for source code management. Experience using configuration management tools such as Puppet, Chef, or Ansible, and container tools such as Docker. Ability to write and maintain automation code, scripts, and Infrastructure as Code, such as Terraform. Experience with DevOps activities and using CI/CD pipeline software to deploy code. Working knowledge of cloud components and services in AWS or Azure. System administration experience and knowledge of VMware and administration of virtual servers. Experience with GitHub, Ansible, Jenkins, and Terraform tools/applications.
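The "forecast system demands" duty above often reduces to trend extrapolation over capacity samples. A minimal least-squares sketch follows; the function name, the (day, used_gb) sample format, and the figures in the comments are invented for illustration.

```python
def days_until_full(samples, capacity_gb):
    """Fit a least-squares growth line to (day, used_gb) samples and
    extrapolate how many days remain until capacity_gb is reached."""
    n = len(samples)
    xs = [day for day, _ in samples]
    ys = [used for _, used in samples]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least-squares slope: growth in GB per day.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return float("inf")  # usage flat or shrinking: no exhaustion forecast
    return (capacity_gb - ys[-1]) / slope
```

For example, a volume growing 10 GB/day from 120 GB toward a 200 GB capacity has roughly 8 days of headroom, which is the number a capacity review would turn into an upgrade recommendation.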
24/06/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming computing. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments.

Responsibilities: Collaborate with cross-functional teams to design, create, and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications: Bachelor's or Master's degree in an engineering discipline. 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. 10+ years of experience with Java. 5+ years of specific Kafka and Flink experience. 5+ years of Kubernetes experience. Expert-level knowledge of Kafka. Expert-level knowledge of Flink. Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
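On the "efficient data serialization" responsibility: the trade-off an architect in this role weighs (self-describing text formats versus compact binary encodings such as Avro or Protocol Buffers) can be shown with a stdlib-only comparison. The market-tick record layout below is invented for the example, and Python stands in for Java to keep the sketch self-contained.

```python
import json
import struct

def tick_to_json(symbol: str, price: float, size: int) -> bytes:
    """Self-describing text encoding: easy to debug, expensive on the wire."""
    return json.dumps({"symbol": symbol, "price": price, "size": size}).encode()

def tick_to_binary(symbol: str, price: float, size: int) -> bytes:
    """Compact binary encoding: 1-byte length prefix + symbol + f64 + u32,
    big-endian, in the spirit of schema-based formats like Avro."""
    sym = symbol.encode()
    return struct.pack(f">B{len(sym)}sdI", len(sym), sym, price, size)

def tick_from_binary(buf: bytes):
    """Inverse of tick_to_binary."""
    n = buf[0]
    sym, price, size = struct.unpack(f">{n}sdI", buf[1:])
    return sym.decode(), price, size
```

At streaming volumes the per-message byte savings compound into lower network and broker load, which is why the posting calls serialization out as an architectural concern rather than a detail.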
24/06/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Director, Software Engineering - QRM. This director will manage 6 people and will help develop software applications and solutions for the quantitative risk management platform. This director will need hands-on experience with Java, DevOps, CI/CD, AWS, containers, Terraform, etc.

Responsibilities: Develop and maintain software and environments used to implement and test systems for pricing, margin risk, and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting, and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, the model library, and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Provide hands-on technical leadership and active coordination of tasks and priorities. Provide guidance and support for the team and reporting for management.

Qualifications: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics. 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas. Strong programming skills: able to read and/or write code using a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills. Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, and telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational DBs, and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest). Experience with high-performance and distributed computing. Experience with productivity tools such as Jira, Confluence, and MS Office.
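The model-testing and test-automation duties above pair naturally with property-style checks: automated assertions about invariants a model must satisfy, run on every pipeline execution. A minimal sketch follows; the discount-factor "model" and the check list are stand-ins for illustration, not QRM's actual models or tooling.

```python
import math

def discount_factor(rate: float, t: float) -> float:
    """Continuously compounded discount factor: a stand-in model under test."""
    return math.exp(-rate * t)

def run_model_checks() -> int:
    """Sanity checks of the kind a model-testing pipeline automates;
    returns the number of checks run, raising if any fails."""
    checks = [
        discount_factor(0.05, 0.0) == 1.0,          # no discounting at t = 0
        0.0 < discount_factor(0.05, 10.0) < 1.0,    # bounded for positive rates
        discount_factor(0.05, 2.0) < discount_factor(0.05, 1.0),  # monotone in t
    ]
    if not all(checks):
        raise AssertionError("model check failed")
    return len(checks)
```

Checks like these belong in the CI/CD pipeline the role owns: the point is not to rebuild the model but to catch an integration or environment change that breaks a known invariant before it reaches production.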
21/06/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. The candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation, and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms, such as Kafka, Flink, and K8s, in a Hybrid Environment.

Responsibilities: Collaborate with cross-functional teams to design, create, and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Collaborate with DevOps teams to define deployment strategies and manage scalability. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Maintain vendor relationships and participate in escalation sessions and postmortems. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications: [Required] Effective communication skills to collaborate and evangelize best practices with technical stakeholders. [Required] Advanced problem-solving skills and a logical approach to solving problems. [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. [Required] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.

Technical Skills: Expert-level knowledge of Kafka. Expert-level knowledge of Flink. In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure. Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services. Knowledge of Infrastructure as Code (IaC) such as Terraform, CloudFormation, or Azure Resource Manager. Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.

Education and/or Experience: [Preferred] Bachelor's or Master's degree in an engineering discipline. [Required] 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. [Required] 10+ years of experience with Java. [Required] 5+ years of specific Kafka and Flink experience. [Preferred] 5+ years of Kubernetes experience.

Certificates or Licenses: [Preferred] Confluent Certified Developer for Apache Kafka. [Preferred] AWS certifications (eg Solutions Architect Associate). [Preferred] Certified Kubernetes Application Developer.
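For the "monitoring resource utilization and identifying bottlenecks" responsibility, consumer lag is usually the first signal examined in a Kafka deployment: how far each consumer's committed offset trails the partition's log-end offset. The arithmetic is simple to sketch; the partition numbers and offsets below are invented for the example.

```python
def consumer_lag(end_offsets: dict, committed: dict) -> dict:
    """Per-partition lag: log-end offset minus committed offset, floored at zero.
    A partition with no committed offset is treated as fully lagged."""
    return {
        partition: max(end - committed.get(partition, 0), 0)
        for partition, end in end_offsets.items()
    }

def worst_partition(lag: dict):
    """The partition a bottleneck investigation would look at first."""
    return max(lag, key=lag.get) if lag else None
```

Growing lag on one partition while others stay flat points at skewed keys or a slow consumer instance rather than broker-wide saturation, which is exactly the kind of diagnosis this architect role is asked to drive.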
21/06/2024
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. Candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive architectural vision, implementation and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms such Kafka, Flink and K8s in a Hybrid Environment. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Collaborate with DevOps teams to define deployment strategies and manage scalability. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive Implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Maintain vendor relationships and participate in escalation sessions and postmortems Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: [Required] Effective communication skills to effectively collaborate and evangelize best practices with technical stakeholders. [Required] Advanced problem-solving skills and logical approach to solving problems [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. 
[Required] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. Technical Skills: Expert level knowledge of Kafka Expert level knowledge of Flink In depth knowledge of on-premises networking as well as the hybrid connectivity to AWS and/or Azure Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services Knowledge of Infrastructure as Code (IaC) such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes Education and/or Experience: [Preferred] Bachelor's or Master's degree in an engineering discipline [Required] 10+ years of experience architecting of mission critical Cloud and On-Prem Real Time data streaming and event-driven architectures [Required] 10+ years of experience with Java [Required] 5+ years of specific Kafka and Flink experience [Preferred] 5+ years of Kubernetes experience Certificates or Licenses: [Preferred] Confluent Certified Developer for Apache Kafka [Preferred] AWS certifications (eg Solutions Architect Associate) [Preferred] Certified Kubernetes Application Developer
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. Candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms such as Kafka, Flink and Kubernetes in a hybrid environment. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Collaborate with DevOps teams to define deployment strategies and manage scalability. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Maintain vendor relationships and participate in escalation sessions and postmortems. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: [Required] Effective communication skills to collaborate with technical stakeholders and evangelize best practices. [Required] Advanced problem-solving skills and a logical approach to solving problems. [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
[Required] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines. Technical Skills: Expert-level knowledge of Kafka. Expert-level knowledge of Flink. In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure. Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services. Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager. Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes. Education and/or Experience: [Preferred] Bachelor's or Master's degree in an engineering discipline. [Required] 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. [Required] 10+ years of experience with Java. [Required] 5+ years of specific Kafka and Flink experience. [Preferred] 5+ years of Kubernetes experience. Certificates or Licenses: [Preferred] Confluent Certified Developer for Apache Kafka. [Preferred] AWS certifications (eg Solutions Architect Associate). [Preferred] Certified Kubernetes Application Developer.
21/06/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming platform. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: Bachelor's or Master's degree in an engineering discipline. 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. 10+ years of experience with Java. 5+ years of specific Kafka and Flink experience. 5+ years of Kubernetes experience. Expert-level knowledge of Kafka. Expert-level knowledge of Flink. Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
21/06/2024
Full time
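The streaming work described above centers on keyed, windowed, low-latency aggregation. As a hedged illustration only (plain Python, not actual Flink code; function and key names are invented), the core idea of a per-key tumbling window can be sketched like this:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Toy per-key tumbling-window aggregation: a simplified stand-in for
    a Flink keyBy().window().aggregate() pipeline.
    events: iterable of (timestamp_ms, key) pairs."""
    windows = defaultdict(int)  # (window_start_ms, key) -> event count
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # align to window boundary
        windows[(window_start, key)] += 1
    return dict(windows)

# Hypothetical event stream: three "orders" events and one "quotes" event
events = [(1000, "orders"), (1500, "orders"), (2500, "orders"), (1200, "quotes")]
counts = tumbling_window_counts(events, window_ms=1000)
# window [1000, 2000) holds two "orders" events; [2000, 3000) holds one
```

A real deployment would add what this sketch omits: event-time watermarks, checkpointed state for fault tolerance, and partition-aware parallelism.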
Director, Software Engineering - Quantitative Risk Management Applications SALARY: $200k - $230k flex plus 27% bonus LOCATION: Chicago, IL Hybrid 3 days onsite, 2 days remote You will manage six-plus people and help build the framework within the quantitative risk management platform, developing software applications and solutions. Key technologies: Java, C++, Python, automation, DevOps, CI/CD, AWS, Terraform, Kubernetes, SQL, Docker, Helm; Master's or PhD. This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with the focus on developing and maintaining risk model software in production, and the environments and infrastructure used in model implementation and testing. This role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Qualifications: Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) 
in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise level software, including in the cloud environment. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.) Strong problem-solving skills: Be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) and non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, Junit, TestNG, PyTest, etc.). Experience with high performance and distributed computing. 
Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
20/06/2024
Full time
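Several of the duties above revolve around backtesting pipelines for margin models. As a hedged sketch of the general idea (the function name and data are invented for illustration, not the firm's actual framework), a margin backtest at its simplest counts the days on which the realized loss exceeded the margin held:

```python
def count_margin_breaches(margins, pnl):
    """Minimal margin-model backtest: a breach is any day on which the
    realized loss exceeds the margin requirement held against the position.
    margins[i] is the margin for day i; pnl[i] is that day's profit/loss."""
    if len(margins) != len(pnl):
        raise ValueError("margins and pnl must cover the same days")
    # A loss is a negative P&L; compare its magnitude to the margin held.
    return sum(1 for m, p in zip(margins, pnl) if -p > m)

# Hypothetical three-day backtest: only day 2's loss of 120 exceeds the 100 margin
breaches = count_margin_breaches([100.0, 100.0, 100.0], [-50.0, -120.0, 30.0])
```

Production backtesting would run this comparison across many portfolios and dates inside a scheduled pipeline and feed the breach counts into statistical coverage tests.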
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Director of Risk Management Software Engineering. Candidate will be responsible for functions within Quantitative Risk Management for developing and maintaining risk models for margin, clearing fund and stress testing with the focus on developing and maintaining risk model software in production, and environments and infrastructure used in model implementation and testing. Responsibilities: Collaborate with other developers, quantitative analysts, business users, data & technology staff to expand QRM's technical capabilities for model development, back-testing and monitoring. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, back-testing and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Provide hands-on technical leadership and active coordination of tasks and priorities. Provide guidance and support for the team and reporting for the management. Qualifications: Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. 
Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise level software, including in the cloud environment. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.) Strong problem-solving skills: Be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) and non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, Junit, TestNG, PyTest, etc.). Experience with high performance and distributed computing. 
Experience with productivity tools such as Jira, Confluence, and MS Office. Experience with scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus.
20/06/2024
Full time
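The posting above asks for efficient storage and serialization protocols (Parquet, Avro, Protocol Buffers). Those formats each need their own libraries, but the core trade they make, fixed-width schema-driven binary records instead of self-describing text, can be sketched with the standard-library `struct` module (the record schema here is invented for illustration):

```python
import json
import struct

# Hypothetical trade record schema: (trade_id: uint32, price: float64, qty: int32),
# little-endian, no padding -> a fixed 16-byte record.
RECORD = struct.Struct("<Idi")

def encode(trade_id, price, qty):
    return RECORD.pack(trade_id, price, qty)

def decode(blob):
    return RECORD.unpack(blob)  # returns (trade_id, price, qty)

binary = encode(42, 101.25, 300)
text = json.dumps({"trade_id": 42, "price": 101.25, "qty": 300}).encode()
# the fixed-width binary record is considerably smaller than the JSON form
```

Real columnar and schema formats go further (column layout, compression, schema evolution), but the size and decode-speed advantage starts from this same principle.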
Associate Principal, Software Programming - Quantitative Risk Management Area - Associate Principal, Software Engineering - Automating Risk Models On site 3 days a week Salary - $185 - $195K + Bonus Looking for a hard-core developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. You will need to have CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc. Preferably having Java, Python, C++. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills, including in the cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. 
DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
20/06/2024
Full time
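The infrastructure-as-code requirement above can be made concrete with a small Terraform fragment. This is a hedged, hypothetical sketch: the bucket name and tags are invented for illustration and do not come from the posting.

```hcl
# Hypothetical IaC sketch: an S3 bucket for model artifacts, declared in
# Terraform so the resource is versioned and reviewable like application code.
resource "aws_s3_bucket" "model_artifacts" {
  bucket = "qrm-model-artifacts-example" # hypothetical name

  tags = {
    Team      = "QRM"
    ManagedBy = "terraform"
  }
}
```

Declaring infrastructure this way lets `terraform plan` show the exact change before anything is provisioned, which is the review-before-deploy discipline the posting's CI/CD requirements imply.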
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

Prestigious financial company is currently seeking a Senior Linux DevOps Engineer. The candidate will be responsible for the design and support of core platform engineering automation. This role will drive the strategy for infrastructure automation and be charged with improving application adoption, reducing overall operational support, and increasing end-user usability of our platform services. The candidate will provide the team leadership required to support a large, complex L3 Linux-based computing environment and an increasing transition to Linux infrastructure in AWS, assist in driving an infrastructure-as-code mentality throughout the organization, and demonstrate a passion for automation concepts and tools.

Responsibilities:
- Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers.
- Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform.

Qualifications:
- Hands-on experience with Terraform, Kubernetes, Jenkins, Kafka, GitHub, and configuration management tools such as Ansible.
- Relevant experience with configuration and implementation of IaaS and infrastructure as code (AWS, Azure, etc.).
- Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools at the L3 level.
- In-depth system administration knowledge and skills for Red Hat Linux.

Technical Skills:
- Kubernetes: strong knowledge of Kubernetes deployment frameworks/platforms, including Helm, Docker, Rancher, OpenShift, and EKS.
- Linux: advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers; creating and patching AMIs; performing pull requests; writing automation code using tools such as Ansible and Terraform.
- Cloud: strong knowledge of secure cloud infrastructure design and components, such as servers, operating systems, networks, IAM, and storage. Cloud certifications, specifically AWS certification, preferred.
- Infrastructure automation: expert knowledge of the core automation development toolchain, including Terraform, Ansible, Jenkins, Git, and Harness.
- CI/CD: mastery of CI/CD best practices in a large organization (GitOps/DevOps, secure builds, secure code promotion, deployments with Harness/Argo, automated testing of both applications and infrastructure, integration of policy frameworks, cost optimization, SLSA best practices).
- Resilient design: experience architecting, implementing, and maintaining highly available, mission-critical environments for 24/7 availability.
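The AMI-plus-Terraform workflow the posting describes (build and patch an image, then roll it out as code) can be sketched as a minimal configuration; the AMI name filter, region, instance type, and tags below are all hypothetical:

```hcl
# Illustrative Terraform sketch; AMI name pattern and all values are hypothetical.
provider "aws" {
  region = "us-east-1"
}

# Look up the most recently built (i.e., most recently patched) internal AMI
data "aws_ami" "base" {
  most_recent = true
  owners      = ["self"]
  filter {
    name   = "name"
    values = ["internal-rhel-base-*"]
  }
}

# Launch an instance from that AMI; a re-apply after a new AMI is
# published picks up the freshly patched image
resource "aws_instance" "app" {
  ami           = data.aws_ami.base.id
  instance_type = "t3.medium"
  tags = {
    Name = "platform-automation-example"
  }
}
```

The design point is that patching happens in the image pipeline, not on live hosts; Terraform then replaces instances with the new AMI rather than mutating them in place.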
20/06/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
Senior Linux Server Administrator/Engineer
Salary: $140k-$150k + 15% bonus
Location: Chicago, IL or Dallas, TX
Hybrid: 3 days onsite, 2 days remote

*We are unable to provide sponsorship for this role* *This role is not open for C2C, contract, or contract-to-hire*

Qualifications
- Bachelor's degree in computer science or a related field
- 6+ years of experience in Linux systems installation, operations, administration, and maintenance of physical and virtualized servers
- 2+ years of experience in DevOps and using CI/CD pipeline software to deploy code
- Experience using configuration management tools such as Puppet, Chef, or Ansible, and container tools such as Docker
- Experience with Kubernetes required
- Ability to write and maintain automation code, scripts, and infrastructure as code, such as Terraform
- System administration experience and knowledge of VMware and administration of virtual servers
- Experience with cloud components and services in AWS
- In-depth system administration knowledge and skills for Red Hat Linux; knowledge of Amazon Linux is a plus
- Experience using GitHub or other version control tools for source code management
- GRUB, PXE boot, Kickstart; yum, RPMs, Satellite server
- SVM, LVM, boot from SAN, UFS/ZFS, filesystem configuration
- General working knowledge of NAS, SAN, and networking
- Experience with GitHub, Ansible, Jenkins, and Terraform

Responsibilities
- Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers.
- Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform.
- Perform Linux administration, including changes, deletions, disk space management, and application installation and support.
- Use your infrastructure and networking knowledge to maintain cloud-based infrastructure (predominantly on AWS) involving EC2, S3, RDS, and VPC.
- Use configuration management tools (Ansible and Terraform) to build and maintain a hybrid infrastructure hosted both at colocation facilities and in the public cloud.
- Support the VMware environment, including changes, adding/removing systems, and disk space management.
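As a sketch of the Ansible-driven administration work this posting lists (patching and package management on Red Hat Linux hosts), a minimal playbook might look like the following; the `linux_servers` host group and the package names are hypothetical:

```yaml
# Illustrative Ansible playbook sketch; the 'linux_servers' group and packages are hypothetical.
- name: Routine patching and housekeeping
  hosts: linux_servers
  become: true
  tasks:
    - name: Apply security updates on Red Hat family hosts
      ansible.builtin.yum:
        name: "*"
        security: true
        state: latest
      when: ansible_os_family == "RedHat"

    - name: Ensure common admin utilities are installed
      ansible.builtin.yum:
        name:
          - lvm2
          - git
        state: present
```

Run against an inventory with `ansible-playbook -i inventory patch.yml`; keeping this in version control (GitHub, as the posting asks) is what turns routine patching into reviewable infrastructure-as-code.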
20/06/2024
Full time