NO SPONSORSHIP. Associate Principal, Software Engineering - QRM. SALARY: approximately $135k-$150k plus 15% bonus. LOCATION: Chicago, IL. Hybrid: 3 days onsite, 2 days remote.

SELLING POINTS: develops and maintains risk models for margin, clearing fund and stress testing, with a focus on risk model software in production; AWS; CI/CD pipelines; Java, C#, Python; Agile/Scrum; financial products knowledge a plus (markets and financial derivatives across equities, interest rates and commodity products); Java preferred; infrastructure as code; Kubernetes; Terraform; Splunk; OpenTelemetry; SQL; big data; scripting in Python.

This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production and the environments and infrastructure used in model implementation and testing. The role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring.

Primary Duties and Responsibilities: Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to the development of QRM's databases and ETLs. Integrate model prototypes, the model library and model testing tools using industry best practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting.

Qualifications: Strong programming skills; able to read and/or write code in a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments. Proficiency in technical and/or scientific documentation (eg white papers, user guides). Strong problem-solving skills: able to accurately identify a problem's source, severity and impact to determine possible solutions and needed resources. Experience with Agile/Scrum or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of the CI/CD process and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure as code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization formats (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest). Experience with high-performance and distributed computing. Experience with productivity tools such as Jira, Confluence and MS Office. Experience with scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus.

Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 7+ years of experience as a software developer with exposure to cloud or high-performance computing. Certificates or Licenses:
08/05/2024
Full time
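The QRM listing above asks for unit and integration tests around pricing and risk code (JUnit, TestNG, PyTest). As a rough, purely illustrative sketch of what that looks like in PyTest, here is a hypothetical Black-Scholes call pricer with two tests; the function name, tolerances and reference value are ours, not the employer's.

```python
# Illustrative only: a tiny pricing function plus PyTest-style unit tests,
# in the spirit of the "create unit and integration tests" duty above.
import math

import pytest


def black_scholes_call(spot, strike, rate, vol, maturity):
    """European call price under Black-Scholes with no dividends."""
    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * math.sqrt(maturity))
    d2 = d1 - vol * math.sqrt(maturity)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)


def test_known_value():
    # Textbook reference point: S=100, K=100, r=5%, vol=20%, T=1y -> ~10.45
    assert black_scholes_call(100, 100, 0.05, 0.2, 1.0) == pytest.approx(10.4506, abs=1e-3)


def test_put_call_parity_gives_positive_put():
    s, k, r, v, t = 100, 110, 0.03, 0.25, 0.5
    call = black_scholes_call(s, k, r, v, t)
    put = call - s + k * math.exp(-r * t)  # parity: C - P = S - K*exp(-rT)
    assert put > 0
```

Run with `pytest` in the usual way; in a CI pipeline of the kind described above, a job like this would gate merges on the model library.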
Salesforce DevOps Engineer - FinTech/Banking - Salesforce, AWS, CI/CD, DevSecOps. Oliver Bernard are currently working with a fast-growing FinTech/banking company, headquartered across the UK, who are seeking a strong and experienced Senior Salesforce DevOps Engineer to join their Platform team as part of plans to scale their infrastructure and drive DevSecOps best practices on an existing Salesforce implementation. The incoming engineer will contribute to the company's continued growth as they look to expand into new markets globally, and will work closely with a tight-knit team of 5-6 other Platform Engineers, as well as their broader Salesforce and development teams. To be suitable for this opening, the following expertise is required: a strong Salesforce background, working with Salesforce APIs, CLI tooling, SFDX, etc.; cloud experience with AWS and AWS services; a strong understanding of CI/CD and CI/CD best practices (ideally with GitLab, or equivalent); knowledge of security/DevSecOps practices. Prior experience scripting with Bash or Python is also a massive bonus. This position offers £85-95K, plus a healthy benefits package, and operates a flexible remote/hybrid working model (with office days only required 1-2 times per month). Please apply here to register interest in this opportunity. Salesforce DevOps Engineer - FinTech/Banking - Salesforce, AWS, CI/CD, DevSecOps
07/05/2024
Full time
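As a hedged sketch of the kind of CI helper the Salesforce DevOps listing above implies, here is a small Python wrapper that runs a validation-only (check-only) deploy through the sfdx CLI and fails the pipeline on error. The exact command and flags vary by sfdx/sf CLI version, and the manifest path and org alias are placeholders; treat the invocation as an assumption to verify against the installed tooling.

```python
# Assumed sfdx invocation -- verify flags against your CLI version before use.
import subprocess
import sys


def validate_deploy(manifest: str = "manifest/package.xml", org_alias: str = "ci-org") -> int:
    cmd = [
        "sfdx", "force:source:deploy",
        "--checkonly",                  # validate without committing changes to the org
        "--manifest", manifest,         # metadata components to validate
        "--targetusername", org_alias,  # pre-authenticated CI org alias
        "--wait", "30",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    sys.exit(validate_deploy())
```

In a GitLab CI job this would simply be the script step, with the non-zero exit code marking the stage as failed.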
Development Manager/Delivery Manager/.NET/C#/AWS/Onshore/Offshore. Role: Development Manager. Company: Financial Services. Location: Hybrid - occasional travel to the Stoke-on-Trent office 1-2 days per month. Salary: up to £80,000. Our entire team operates across the UK, so extensive experience in managing remote development teams is essential. The ideal candidate will have a background in software engineering, proficiency in our core stack (C#, React, AWS), and will actively engage in hands-on development to set a precedent for excellence. Responsibilities: Lead and manage our development team (UK/India), a software engineering and DevOps team, guiding the planning, design, and development of next-generation FS solutions. Work closely with product management and delivery teams to prioritise new product development initiatives aligned with our business goals. Coordinate cross-functional engineering resources, including Product Discovery, Delivery, QA, and DevOps. Recruit, mentor, and cultivate a diverse team, nurturing individuals with varying experience levels and skill sets. Oversee the development and execution of the product roadmap, ensuring alignment with strategic objectives. Experience: years managing development teams; working and delivering with offshore development teams; background: .NET, C#, AWS, React. Development Manager/Delivery Manager/.NET/C#/AWS/Onshore/Offshore
07/05/2024
Full time
Infrastructure Engineer - Linux, Docker, AWS, Terraform, Agile. The Company & Opportunity: A specialist technology provider of real-time transportation solutions, built on leading-edge technology, is looking for an experienced Infrastructure Engineer to play a pivotal role in supporting their core products and associated systems. This is a hybrid role with occasional visits on-site, ensuring the on-prem infrastructure and AWS cloud estate are ready and available to support the teams, digital services, and customers. The company offers hybrid working with 1 day per week in their Derby office. The role is split roughly 70% on-prem (with occasional trips to client sites) and 30% cloud-based (AWS/Azure). *Candidates must work within a reasonable commuting distance of Derby. Core technical skills, responsibilities & attributes for the Infrastructure Engineer role: minimum 7+ years of commercial experience as an Infrastructure Engineer; Docker/Kubernetes (containerisation), Git (or similar), AWS, Azure (DevOps pipelines), Terraform; Linux support/administration (strong understanding of the Linux ecosystem); proven experience as an Infrastructure Engineer working with development/QA teams, with a strong understanding of the development life cycle (Sprints/Scrum and/or Agile); commercial experience supporting on-prem applications/tools; MUST HAVE a strong infrastructure engineering background, encompassing server configurations, cabinets, network components, data centres, switches & firewalls. The company offers a hybrid working environment, working from home with 1 day per week in the Derby office, a base salary range of £55-60K depending on experience, and a fantastic benefits package. Please apply now for a comprehensive specification on the position: Infrastructure Engineer - Linux, Docker, AWS, Terraform, Agile.
07/05/2024
Full time
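To make the "AWS cloud estate ready and available" duty in the Infrastructure Engineer listing above concrete, here is a minimal sketch of our own (not from the employer) that lists EC2 instances whose status checks are not passing. It assumes boto3 is installed and AWS credentials are already configured (environment, profile, or instance role); the region is a placeholder.

```python
# Minimal estate-health check: report EC2 instances failing status checks.
import boto3


def failing_instances(region: str = "eu-west-2"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instance_status")
    failing = []
    # IncludeAllInstances=True also returns stopped/pending instances.
    for page in paginator.paginate(IncludeAllInstances=True):
        for status in page["InstanceStatuses"]:
            system_ok = status["SystemStatus"]["Status"] == "ok"
            instance_ok = status["InstanceStatus"]["Status"] == "ok"
            if not (system_ok and instance_ok):
                failing.append(status["InstanceId"])
    return failing


if __name__ == "__main__":
    for instance_id in failing_instances():
        print(f"status check failing: {instance_id}")
```

A script like this would typically run on a schedule (cron, Lambda, or a pipeline job) and feed whatever alerting the team already uses.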
Senior DevOps Engineer - Cloud - Permanent - Poland. Robson Bale are looking for a Senior DevOps Engineer to come on board for a permanent opportunity in Poland. The role can be fully remote from Poland. Permanent, excellent salary.

Responsibilities and must-have technical skills: Leadership: Lead and manage DevOps/infrastructure projects, overseeing the entire development life cycle. Collaborate with cross-functional teams to align project objectives and deliverables. Ensure adherence to timelines, budgets, and quality standards. Mentor and guide team members and interns, fostering a culture of continuous learning. Security and Compliance: Demonstrate a deep understanding of Standard Operating Procedures (SOPs) for security practices. Perform threat modelling and implement encryption, network defense, and web security measures. Champion security best practices in a production environment and address cloud security risks. Integrate identity providers such as OAuth, OIDC, and SAML to enhance security. DevOps/Infrastructure and Cloud Expertise: Drive change, release, and incident management processes to maintain a stable environment. Use extensive DevOps experience to optimize performance, conduct application upgrades, and apply patches. Lead continuous integration and deployment efforts using tools like Jenkins and Ansible. Demonstrate proficiency in coding and automation to streamline operations. Good hands-on knowledge of the AWS/Azure/GCP cloud service providers. Cloud Infrastructure Management: Exhibit strong expertise in AWS/Azure/GCP/OCI cloud services and maintain infrastructure as code (IaC) using Ansible, Terraform, or CloudFormation. Oversee containerization technologies like Docker and Kubernetes to enhance scalability and efficiency. Manage Linux-based systems and network configurations to ensure smooth operations. Security and Access Management: Demonstrate a solid grasp of identity and access management (IAM) principles. Manage security groups (SGs), firewall services, and secrets effectively. Optimize service costs based on resource utilization and scale. Monitoring and Reliability: Ensure ongoing and reliable monitoring of the infrastructure to promptly address issues. Implement performance tuning and optimization strategies to maintain high availability.

Technical Requirements: Proficient in Python/Java/Bash scripting for automation and tooling. Expertise in AWS/Azure/GCP/OCI cloud services such as Azure Kubernetes Service/Elastic Kubernetes Service/Google Kubernetes Engine. Extensive experience with CI/CD pipelines, particularly using Jenkins. Strong familiarity with Docker and Kubernetes for container orchestration. In-depth understanding of networking principles.

Good-to-Have Skillsets: Experience in crafting intuitive and engaging user interfaces (UI) for web applications, mobile apps, or other AI-powered interfaces. Experience with design thinking methodologies. Understanding of data visualization and information architecture. Ability to write clear documentation. Experience with voice user interfaces (VUIs). Knowledge of animation and micro-interactions for enhancing user experience. Experience with design systems and component libraries.

Process Skills: General SDLC processes. Understanding of Agile and Scrum software development methodologies. Attention to detail and commitment to quality.

Behavioral Skills: Work closely with designers, product managers, developers, and data scientists to deliver comprehensive solutions. Communicate effectively and share knowledge with the team. Be open to feedback and continuously learn and adapt to new technologies. Ability to work independently and as part of a team. Ability to work effectively under pressure and meet deadlines. Passion for learning and staying updated on the latest technologies. Good attitude and quick learner.

Certifications (good to have; any one or more cloud service provider preferred): AWS associate certification (eg AWS Certified Solutions Architect, AWS Certified DevOps Engineer). Certified Kubernetes Administrator (CKA). Certified Docker Captain. Azure certifications (eg Azure Fundamentals, Azure Administrator Associate, DevOps Engineer Expert, Azure Security Engineer Associate). GCP certifications (eg Cloud DevOps Engineer, Cloud Network Engineer, Google Workspace Administrator). Networking-related certification.

The role can be fully remote from Poland. Permanent, excellent salary. Senior DevOps Engineer - Cloud - Permanent - Poland
07/05/2024
Full time
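One concrete instance of the "optimize service costs based on resource utilization" item in the Senior DevOps listing above is finding unattached EBS volumes, which accrue cost while doing nothing. The sketch below is our own illustration, not the employer's tooling; the region is an assumption and any clean-up action is deliberately left out.

```python
# Report-only sketch: list EBS volumes that are not attached to any instance.
import boto3


def unattached_volumes(region: str = "eu-central-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_volumes")
    results = []
    # Volumes in the "available" state are detached and still billable.
    for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
        for vol in page["Volumes"]:
            results.append((vol["VolumeId"], vol["Size"]))  # Size is in GiB
    return results


if __name__ == "__main__":
    for volume_id, size_gib in unattached_volumes():
        print(f"{volume_id}: {size_gib} GiB unattached")
```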
Rust Programmer - Remote - 7-8 months+ (Rust, AWS, Lambda, Jenkins, Linux) One of our Blue Chip Clients is urgently looking for a Rust Programmer. For this role you can work remotely. Please find some details below: We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving are highly desirable.

Key Responsibilities: 1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation. 2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs. 3. Manage databases used in production systems, ensuring data integrity, performance, and security. 4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication. 5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows. 6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js. 7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing. 8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment. 9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms. 10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions. 11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members. 12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - Extensive experience in the Rust programming language, with a focus on large-scale data processing applications. - Proficiency in machine learning techniques and GPU acceleration for image processing tasks. - Strong background in Linux environments and shell scripting. - Solid understanding of web development principles, with hands-on experience in React.js. - Experience with code deployment tools such as Jenkins and version control systems like Git. - In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management. - Familiarity with observability tools for monitoring and logging production systems. - Experience with database management systems and website authentication mechanisms. - Excellent problem-solving skills and ability to work effectively in a collaborative team environment. - Strong communication skills and ability to document technical solutions effectively.
Preferred Qualifications: - Certification in AWS or relevant cloud computing technologies. - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes. - Knowledge of DevOps practices and infrastructure as code tools like Terraform. - Understanding of cybersecurity principles and best practices for securing web applications. Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
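The Rust listing above calls out "best practices for data archiving" on S3. Since an archiving policy is configuration rather than application code, here is a hedged illustration in Python (the role itself is Rust-focused) that applies an S3 lifecycle rule moving old objects to Glacier and expiring them later. The bucket name, prefix and day counts are made-up examples.

```python
# Example lifecycle policy: archive processed image data, then expire it.
import boto3


def apply_archive_policy(bucket: str = "example-image-data"):
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-processed-images",
                    "Filter": {"Prefix": "processed/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 1825},  # delete after roughly 5 years
                }
            ]
        },
    )


if __name__ == "__main__":
    apply_archive_policy()
```

Retention periods would in practice come from the regulatory requirements the listing mentions, not from code.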
F5 WAF Engineer. Whitehall Resources are looking for an F5 WAF Engineer. This is an initial 6-month contract, working onsite 2 days per week in Sheffield. *Inside IR35 - You will be required to use an FCSA Accredited Umbrella Company*

Job Description: As an Automation Engineer, you will play a pivotal role in enhancing our IT infrastructure by designing, creating, and maintaining bespoke Continuous Integration/Continuous Deployment (CI/CD) pipelines tailored to specific project needs. This role will have an initial focus on leveraging F5 technologies alongside a broad spectrum of automation and DevOps practices to deliver our automation use cases; once the F5 automation work has completed, work will progress to other WAF platforms and use cases. You will be responsible for integrating CI/CD pipelines with solutions developed by other teams, scripting, and creating Infrastructure as Code (IaC) manifests using tools like Terraform and Ansible. Your expertise in Jenkins, JIRA, GitHub, Python, and other relevant technologies will be essential. You should have a solid background in building CI/CD pipelines and a comprehensive understanding of DevOps practices. The ideal candidate should not only have technical proficiency in data structures, automation technologies, API interactions, and cloud services, but also exhibit a strong drive to research, investigate, and collaborate effectively within the organization.

Key Responsibilities: Developing and Delivering Automation for the F5 WAF Platform: In the first instance, developing and delivering automation solutions specifically for our F5 Web Application Firewall (WAF) platform, aligned with our specific use cases. This involves scripting, configuring, and deploying automation workflows that enhance the security, manageability, and operational efficiency of the F5 WAF environment. CI/CD Pipeline Development: Create, enhance and implement new, customized CI/CD pipelines tailored for specific project use cases, ensuring efficient, automated workflows. Pipeline Maintenance: Regularly update and maintain existing CI/CD pipelines to ensure they are efficient, secure, and up to date with the latest technology standards. Integration of Solutions: Work collaboratively with other teams to integrate their solutions and tools into the CI/CD pipelines effectively, enhancing overall workflow and productivity. IaC Manifests Creation: Develop and maintain Infrastructure as Code (IaC) manifests, predominantly using Terraform, to manage and provision IT infrastructure in a consistent and repeatable manner. Tool Proficiency: Utilize and demonstrate expertise in tools such as Jenkins, JIRA, GitHub, and Python, effectively integrating them into the CI/CD processes. Script Writing: Write and maintain scripts to automate various aspects of the infrastructure and deployment processes, improving efficiency and reducing the potential for human error. Collaboration and Communication: Collaborate with cross-functional teams, including software development, operations, and quality assurance, to ensure seamless integration and implementation of DevOps practices. Proactive Research and Collaboration: Eager to research and utilize company resources like Confluence, find relevant contacts, and reach out to other teams for unknowns. Prepared to independently investigate and resolve challenges.

Required F5 Experience (one or more of these): F5 ASM/AWAF Knowledge & Experience: Understanding and practical experience with F5's Application Security Manager (ASM) and Advanced WAF (AWAF), including configuration, management, and troubleshooting of application security policies and web application firewalls. F5 with API Gateway: Experience integrating F5 solutions with API Gateway technologies, demonstrating the ability to secure and manage APIs effectively; experience using F5 with Kong API Gateway, managing and optimizing API traffic through F5 systems. F5 GTM and Proxy Technologies: Knowledge and experience with F5's Global Traffic Manager (GTM), as well as experience with proxy technologies, including forward and reverse proxies. Basic Certificate Management: Knowledge of SSL/TLS certificate management processes, including issuance, renewal, and deployment, within F5 environments. F5 AS3: Experience with AS3 (Application Services 3 Extension) for declarative automation and orchestration of F5 BIG-IP services; proficiency in automating the deployment and management of F5 configurations using AS3.

Key Experience - Ideal Candidate Profile: Technical Expertise in CI/CD Tools: Proficiency in Continuous Integration and Continuous Deployment tools such as Jenkins, CircleCI, Travis CI, GitLab CI, and Bamboo; ability to configure, manage, and optimize these tools for various project requirements. Proficiency in Scripting Languages: Strong skills in scripting languages such as Python, Bash, and PowerShell; ability to write and maintain scripts to automate routine tasks and deployments. Infrastructure as Code (IaC): Extensive experience in creating and managing infrastructure using code; proficiency in IaC tools like Terraform, Ansible, Chef, or Puppet. Data Structuring and Management: Advanced skills in managing data using formats like JSON, YAML, and XML; capable of parsing, creating, and maintaining complex data structures for configuration and automation purposes. API Integration and Management: Expertise in querying, integrating, and managing APIs; capable of constructing and executing API calls for data retrieval, updates, and inter-service communication. Version Control Systems: In-depth knowledge of version control systems like Git, including branching strategies, repository management, and integration with CI/CD pipelines. Containerization and Orchestration: Experience with containerization tools such as Docker and orchestration platforms like Kubernetes or Docker Swarm; understanding of containerized environments and their integration into CI/CD pipelines. Cloud Platforms: Familiarity with major cloud platforms like AWS, Azure, or GCP; understanding of cloud-specific services and how to integrate them into CI/CD processes. Monitoring and Logging: Knowledge of monitoring and logging tools such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), or Splunk; ability to set up and maintain monitoring and logging for applications and infrastructure. Security Practices in DevOps (DevSecOps): Understanding of security practices in a DevOps environment; familiarity with security scanning tools, implementing secure coding practices, and ensuring compliance with industry standards. Agile and Scrum Methodologies: Experience with Agile and Scrum methodologies; ability to work in fast-paced, iterative development environments and adapt to changing requirements. Networking and Security Fundamentals: Knowledge of networking concepts (eg TCP/IP, DNS, HTTP/S) and basic security concepts (eg firewalls, VPNs, IDS/IPS). Problem-Solving and Analytical Skills: Strong problem-solving skills and the ability to analyze complex systems and workflows to propose effective automation solutions. Collaboration and Communication: Excellent collaboration and communication skills; ability to work effectively in a team and communicate complex technical concepts to both technical and non-technical stakeholders. Project Management Skills: Basic project management skills with the ability to manage timelines, dependencies, and deliverables in a cross-functional environment. Research and Investigative Skills: Motivated to self-educate and explore company resources and external knowledge bases.

All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who values a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
07/05/2024
Project-based
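To illustrate the AS3 item in the F5 listing above, here is a hedged sketch that POSTs a minimal AS3 declaration to a BIG-IP's AS3 endpoint. The declaration is deliberately skeletal, the hostname and credentials are placeholders, and any real declaration should be validated against the published AS3 schema and run with proper TLS verification.

```python
# Skeletal AS3 deploy via the BIG-IP AS3 REST endpoint (placeholders throughout).
import requests

BIG_IP = "https://bigip.example.internal"  # placeholder hostname

declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.36.0",  # assumed schema version; match your AS3 install
        "ExampleTenant": {
            "class": "Tenant",
            "ExampleApp": {
                "class": "Application",
                "service": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.0.0.10"],
                    "pool": "web_pool",
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [
                        {"servicePort": 80, "serverAddresses": ["10.0.1.11", "10.0.1.12"]}
                    ],
                },
            },
        },
    },
}

resp = requests.post(
    f"{BIG_IP}/mgmt/shared/appsvcs/declare",  # AS3 declare endpoint
    json=declaration,
    auth=("admin", "change-me"),              # placeholder credentials
    verify=False,                             # lab only; verify certificates in production
    timeout=60,
)
resp.raise_for_status()
print(resp.json().get("results"))
```

In the pipelines the role describes, a script like this would sit behind a Jenkins stage, with the declaration itself version-controlled as data (JSON/YAML) rather than embedded in code.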
Role: DevOps Engineer. Salary: Up to £50,000 per annum, dependent on experience. Location: Hybrid/Romsey. SC clearance is required for this role. We are looking for an experienced DevOps Engineer with around 2-3 years of experience in software development. You will oversee code releases and deployments, and support operational systems. Skills and experience: Active SC clearance. Experience with cloud technologies, eg AWS or Azure. Programming language experience, eg Java, Python, Node.js or SQL. Data technologies experience, eg PostgreSQL, MongoDB, Kafka, Hadoop. If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
07/05/2024
Full time
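Purely as an illustration of the "data technologies" line in the CBSbutler DevOps listings (Romsey above, Woking below), here is a minimal Kafka consumer using the confluent-kafka Python client. The broker address, topic and group id are placeholders and not taken from the listing.

```python
# Minimal consumer loop: read messages from a topic and print them.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "devops-example",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["deployments"])  # placeholder topic

try:
    while True:
        msg = consumer.poll(1.0)  # wait up to 1 second for a message
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
```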
Role: DevOps Engineer. Salary: Up to £50,000 per annum, dependent on experience. Location: Hybrid/Woking. SC clearance is required for this role. We are looking for an experienced DevOps Engineer with around 2-3 years of experience in software development. You will oversee code releases and deployments, and support operational systems. Skills and experience: Active SC clearance. Experience with cloud technologies, eg AWS or Azure. Programming language experience, eg Java, Python, Node.js or SQL. Data technologies experience, eg PostgreSQL, MongoDB, Kafka, Hadoop. If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
07/05/2024
Full time
Python Programmer - Brussels - English speaking (ML, Machine Learning, Data, Data Wrangling, AWS, Linux, Kubernetes, Argo, Automation) One of our Blue Chip Clients is urgently looking for a Python Programmer. Please find some details below: We are seeking a highly skilled Senior Python Programmer with expertise in machine learning (ML) data wrangling, interfacing, and automation. The ideal candidate will be proficient in building robust data pipelines and automating complex tasks to support ML initiatives. They will have a keen understanding of observability principles and hands-on experience with AWS, Linux, and preferably Kubernetes and Argo. Responsibilities: - Develop and maintain robust data pipelines for ML data wrangling, interfacing, and automation. - Implement automation solutions to streamline data processing and model deployment workflows. - Ensure observability and monitoring of systems, providing insights into performance and reliability. - Utilize AWS services such as S3, Lambda, and networking components for data storage, processing, and permissions management. - Collaborate with DevOps teams to deploy and manage applications in Linux environments. - Support Kubernetes and Argo workflows for scalable and efficient ML model training and deployment. - Manage AWS permissions and network configurations to ensure data security and compliance. - Maintain version control of the codebase using Git and enforce best practices for code documentation and production readiness. - Collaborate with data scientists to develop small UI tools for querying data from databases and AWS S3. Requirements: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - Proficiency in the Python programming language with a focus on ML data wrangling and automation. - Strong experience with AWS services, including S3, Lambda, networking, and permissions management. - Hands-on experience with Linux environments and shell scripting. - Familiarity with Kubernetes and Argo for container orchestration and workflow management (preferred). - Knowledge of Git for version control and collaboration. - Excellent communication skills and ability to work in a collaborative team environment. - Strong problem-solving skills and attention to detail. - Ability to prioritize tasks and work efficiently in a fast-paced environment. Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
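The data-pipeline duties in the Python Programmer role above revolve around pulling data out of AWS S3 and wrangling it for ML workflows. Purely as an illustration of that kind of step, here is a minimal Python sketch; the bucket and prefix names are hypothetical, and it assumes boto3, pandas and a Parquet engine such as pyarrow are available.

    import io
    import boto3
    import pandas as pd

    s3 = boto3.client("s3")

    def load_parquet_from_s3(bucket: str, prefix: str) -> pd.DataFrame:
        # List objects under the prefix and read each Parquet file into a DataFrame.
        paginator = s3.get_paginator("list_objects_v2")
        frames = []
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                if not obj["Key"].endswith(".parquet"):
                    continue
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
                frames.append(pd.read_parquet(io.BytesIO(body)))
        return pd.concat(frames, ignore_index=True)

    if __name__ == "__main__":
        # Hypothetical bucket and prefix.
        df = load_parquet_from_s3("example-ml-bucket", "raw/events/")
        # A simple wrangling step: drop incomplete rows and normalise column names.
        df = df.dropna().rename(columns=str.lower)
        print(df.describe())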
Rust Programmer - Brussels - English speaking (Rust, AWS, Lambda, Jenkins, Linux) One of our Blue Chip Clients is urgently looking for a Rust Programmer. Please find some details below: We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving are highly desirable. Key Responsibilities: 1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation. 2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs. 3. Manage databases used in production systems, ensuring data integrity, performance, and security. 4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication. 5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows. 6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js. 7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing. 8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment. 9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms. 10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions. 11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members. 12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or related field. - Extensive experience in the Rust programming language, with a focus on large-scale data processing applications. - Proficiency in machine learning techniques and GPU acceleration for image processing tasks. - Strong background in Linux environments and Shell Scripting. - Solid understanding of web development principles, with hands-on experience in React.js. - Experience with code deployment tools such as Jenkins and version control systems like Git. - In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management. - Familiarity with observability tools for monitoring and logging production systems. - Experience with database management systems and website authentication mechanisms. - Excellent problem-solving skills and ability to work effectively in a collaborative team environment. - Strong communication skills and ability to document technical solutions effectively.
Preferred Qualifications: - Certification in AWS or relevant cloud computing technologies. - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes. - Knowledge of DevOps practices and infrastructure as code tools like Terraform. - Understanding of cybersecurity principles and best practices for securing web applications. Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
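The Rust Programmer role above centres on Rust itself, but much of the surrounding work is AWS orchestration, for example submitting batch image-processing jobs to an EMR cluster. The sketch below shows one way that step might look; it is written in Python for consistency with the other examples in this document rather than because the client requires it, and the cluster ID, step name and script location are hypothetical.

    import boto3

    # Region is an assumption; credentials are expected to come from the environment.
    emr = boto3.client("emr", region_name="eu-west-1")

    def submit_processing_step(cluster_id: str, script_s3_uri: str) -> str:
        # Add a single step to a running EMR cluster that launches a batch processing job.
        response = emr.add_job_flow_steps(
            JobFlowId=cluster_id,
            Steps=[
                {
                    "Name": "batch-image-processing",  # hypothetical step name
                    "ActionOnFailure": "CONTINUE",
                    "HadoopJarStep": {
                        "Jar": "command-runner.jar",
                        "Args": ["spark-submit", script_s3_uri],
                    },
                }
            ],
        )
        return response["StepIds"][0]

    if __name__ == "__main__":
        # Hypothetical cluster ID and job script location.
        step_id = submit_processing_step("j-EXAMPLECLUSTER", "s3://example-bucket/jobs/process_images.py")
        print(f"Submitted EMR step {step_id}")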
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £100-125k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
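The SRE roles above lean on AWS big-data services such as Glue and Athena, with pipeline monitoring called out as a core responsibility. As a hedged illustration only, the Python sketch below runs a simple Athena data-quality check of the sort an SRE might schedule; the database, table and S3 output location are hypothetical.

    import time
    import boto3

    athena = boto3.client("athena")

    def run_athena_check(query: str, database: str, output_s3: str) -> str:
        # Start an Athena query (eg a daily row-count check on a lake table)
        # and poll until it reaches a terminal state.
        execution = athena.start_query_execution(
            QueryString=query,
            QueryExecutionContext={"Database": database},
            ResultConfiguration={"OutputLocation": output_s3},
        )
        query_id = execution["QueryExecutionId"]
        while True:
            status = athena.get_query_execution(QueryExecutionId=query_id)
            state = status["QueryExecution"]["Status"]["State"]
            if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
                return state
            time.sleep(2)

    if __name__ == "__main__":
        # Database, table and output location are hypothetical.
        result = run_athena_check(
            "SELECT count(*) FROM trades WHERE ds = current_date",
            "analytics",
            "s3://example-athena-results/checks/",
        )
        print(f"Athena check finished with state: {result}")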
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £100-125k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £75-100k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £125-150k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
ASSOCIATE PRINCIPAL, APPIAN SOFTWARE ENGINEERING SALARY: $140k - $145k - $152k plus 15% bonus LOCATION: Chicago, IL Hybrid 3 days onsite, 2 days remote Looking for someone to design, develop, test, and implement Appian software. You will need 5 years of Front End/user experience development, JavaScript, automating workflows inside Appian, AWS, Unix/Linux, Java, Python, Node.js, Angular 2.0 or React.js, and middleware technologies. Working knowledge of DevOps, Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines. A degree and Appian Certified Developer certification are required. Contribute to design, technical direction and architecture, including collaborating with various teams to build fit-for-purpose solutions. Applies expert knowledge of Java, Python, JavaScript, NodeJS, Angular 2.0 or ReactJS and middleware technologies in independently designing and developing key services with a focus on continuous integration and delivery. Participates in code reviews, proactively identifying and mitigating potential issues and defects as well as assisting with continuous improvement. Drives continuous improvement efforts by identifying and championing practical means of reducing time to market while maintaining high quality. Qualifications: 5+ years of Front End/User Experience development (required) 5+ years of experience in JavaScript (required) 3+ years of experience automating workflows inside Appian and in conjunction with integration to other tools (required) 3+ years of experience in React application development (required) 3+ years of hands-on HTML5/CSS3 experience (required) Experience with Java and/or Python (required) Experience with popular JavaScript frameworks such as React, Node.js, Vue, Angular 2.0 (required) Experience working with WebSockets, HTTP 1.1 and HTTP/2 (required) Experience with RESTful APIs and JSON RPC (required) Ability to write clean, bug-free code that is easy to understand and easily maintainable (required) Experience with BDD methodologies & automated acceptance testing (required) Technical Skills: 5+ years of hands-on experience in Java, including a good understanding of Java fundamentals such as Memory Model, Runtime Environment, Concurrency and Multithreading (required) Past/current experience of 3+ years working on a large-scale cloud-native project (platform: Unix/Linux, type of systems: event-driven/transaction processing/high performance computing) as Technical Lead. These experiences should include developing/architecting core libraries or frameworks used by the platform to support fundamental services like storage, alert notifications, security, etc. (required) Appian Process Modeling, Smart Services, Rules and Tempo event services, database, and Web services (required) Experience with cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM, etc. (required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Experience working with various types of databases like Relational, NoSQL, Object-based, Graph (required) Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines (required) Familiarity with monitoring-related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics (required) Education and/or Experience: BS degree in Computer Science or a similar technical field Appian Certified Developer
06/05/2024
Full time
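The Appian role above asks for experience with distributed message brokers using Kafka alongside Java and Python. The posting does not prescribe a client library, so the following is only an assumption-laden sketch using the kafka-python package; the topic, consumer group and broker address are hypothetical.

    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    # Hypothetical topic, consumer group and broker list.
    consumer = KafkaConsumer(
        "workflow-events",
        bootstrap_servers=["localhost:9092"],
        group_id="appian-integration",
        auto_offset_reset="earliest",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # In a real integration this is where the event would be forwarded to an
        # Appian process via its web APIs; here we simply log the payload.
        print(f"partition={message.partition} offset={message.offset} event={event}")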
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation Job Title: Cloud Technical Consultant/Architect Location: Gloucestershire/Bristol Salary: £65 - £95K Per Annum Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits. Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance. The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships - or for a dynamic & seasoned Technologist who is looking for new & exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you. The Role: This is a brand new opportunity for a bright, driven, customer-focussed professional to join our client's 'Cloud Delivery' team and work alongside our Enterprise Cloud specialists to drive forward the design, deployment & operations of Cloud Infrastructure, Automation and Containerisation projects for the end-client. The delivery team helps deliver to valued clients the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support them to exploit maximum business benefit from Cloud solutions, leveraging best in class internal and Partner technologies to create relevant and engaging experiences. Duties: Support the design and development of new capabilities, preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, providing high-level and low-level design documentation. Provide Cloud engineering capability to leverage a Public Cloud platform using automated build processes deployed with Infrastructure as Code. Provide technical challenge and assurance throughout development and delivery of work. Develop reusable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership. Work independently and/or within a team using a DevOps way of working. Required Technical skills & experience: Experienced in Cloud native technologies in AWS. Experienced in deploying IaaS/PaaS in Multi Cloud Environments. Experienced in Cloud and Infrastructure Engineering building and testing new capabilities, and supporting the development of new solutions and common templates. Experienced in acting as a bridge from the infrastructure through to user-facing systems. Desirable Technical Skills & Experience: Experienced in Kubernetes Containers. Experienced in the use of Automation tools eg Terraform, Ansible, Foreman, Puppet and Python. Experienced in different flavours of Linux platform and services. To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV.
Curo Services endeavours to respond to all applications, however this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
06/05/2024
Full time
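A recurring theme in the Cloud Consultant role above is providing technical assurance around Infrastructure-as-Code delivery on AWS. One small, hedged example of such an assurance check is sketched below in Python: it scans running EC2 instances for a set of required tags. The tag keys are hypothetical and boto3 credentials are assumed to be configured in the environment.

    import boto3

    REQUIRED_TAGS = {"Owner", "Environment", "CostCentre"}  # hypothetical tagging policy

    ec2 = boto3.client("ec2")

    def find_untagged_instances() -> list[str]:
        # Return the IDs of running instances missing any of the required tags.
        offenders = []
        paginator = ec2.get_paginator("describe_instances")
        filters = [{"Name": "instance-state-name", "Values": ["running"]}]
        for page in paginator.paginate(Filters=filters):
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"] for t in instance.get("Tags", [])}
                    if not REQUIRED_TAGS.issubset(tags):
                        offenders.append(instance["InstanceId"])
        return offenders

    if __name__ == "__main__":
        for instance_id in find_untagged_instances():
            print(f"Instance {instance_id} is missing one or more required tags")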
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Linux Engineer. This engineer will focus on design, support, engineering, and automation for the Linux operating system. This engineer will need hands-on experience with Terraform, Kubernetes, Jenkins, Ansible, AWS, Docker, CI/CD, DevOps, etc. Responsibilities/Qualifications: Bachelor's degree, preferably in a technical discipline (Computer Science, Mathematics, etc.), or equivalent combination of education and experience required 8+ years' experience in IT systems installation, operations, administration, and maintenance of cloud systems/virtualized Servers Hands-on experience with: Terraform, Kubernetes, Jenkins, Kafka, GitHub, and configuration management tools such as Ansible. Relevant experience with configuration and implementation of IaaS, Infrastructure as code, AWS, Azure, etc. Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools at L3 level In-depth system administration knowledge and skills for Red Hat Linux. Kubernetes Experience - Strong knowledge in Kubernetes deployment frameworks/platforms including Helm, Docker, Rancher, OpenShift, EKS. Provide advanced system administration, operational support and problem resolution for a large complex Linux computing environment, including both virtualized and physical Servers. Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible, Terraform, etc. Strong knowledge of secure cloud infrastructure design and components, such as: Servers, operating systems, networks, IAM, and storage. Cloud certifications, specifically AWS Cloud certification, would be preferred. Expert knowledge in core automation development toolchain including Terraform, Ansible, Jenkins, Git, Harness. Mastery of CI/CD best practices in a large organization. (GitOps/DevOps, secure builds, secure code promotion, deployments (Harness/Argo), automated testing (app and infra), integration of policy frameworks, cost-optimization, SLSA best practices) Experience with architecting, implementing and maintaining highly available mission-critical environments for 24/7 availability.
03/05/2024
Full time
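The Linux Engineer role above includes creating and patching AMIs and writing automation code around that. The Python sketch below is a minimal example of one such step using boto3: it creates an AMI from an already patched instance and waits for the image to become available. The instance ID and naming scheme are hypothetical.

    import datetime
    import boto3

    ec2 = boto3.client("ec2")

    def create_patched_ami(instance_id: str) -> str:
        # Create an AMI from an already-patched instance without rebooting it,
        # then block until the image is available.
        name = f"patched-{instance_id}-{datetime.date.today().isoformat()}"
        image = ec2.create_image(InstanceId=instance_id, Name=name, NoReboot=True)
        image_id = image["ImageId"]
        ec2.get_waiter("image_available").wait(ImageIds=[image_id])
        return image_id

    if __name__ == "__main__":
        # Hypothetical instance ID.
        print(create_patched_ami("i-0123456789abcdef0"))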
Role: Scala Developer Location: Osterley, UK Duration: 6 months (with possible extension) Hybrid work option: Yes (2-3 days from the office; if requested later, the candidate should be flexible to work full time from the office) Years of experience required: 5+ years Job details: Real-time data processing and RESTful microservices in Scala (Typelevel stack, Kafka, Cassandra, Kubernetes, GCP, AWS). Good working knowledge of Akka HTTP and Akka Streams is required to support existing services. Looking into how our personalisation services can evolve with machine learning. Having the freedom to self-organise as part of a cross-functional agile team. Refining the team's processes to continuously integrate and work towards a deliverable application. Championing best practices such as Pair Programming and TDD in order to develop clean, resilient code that performs at serious scale. Coaching and providing feedback to fellow developers. Growing our engineering culture, which is focussed on DevOps and GitOps principles. How will you be doing this? Work in a motivated team, empowered to meet ambitious goals. Collaborate on technical choices, architecture, tools and processes. Review code and give feedback to ensure that the highest standards are maintained. Actively improve overall software quality.
03/05/2024
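The Scala Developer role above champions TDD and pair programming. Those practices are language-agnostic, so purely as an illustration (shown in Python for consistency with the other sketches in this document, not because the client uses it here), this is roughly what a small test-first loop might look like for a hypothetical personalisation ranking helper.

    # test_ranker.py, written first in TDD style against a hypothetical ranker module.
    from ranker import rank_by_recent_views

    def test_most_recently_viewed_items_rank_first():
        # (item, last-viewed timestamp) pairs; the most recent view should rank first.
        history = [("item-a", 10), ("item-b", 30), ("item-c", 20)]
        assert rank_by_recent_views(history) == ["item-b", "item-c", "item-a"]

    def test_empty_history_returns_empty_ranking():
        assert rank_by_recent_views([]) == []

    # ranker.py, the minimal implementation that makes the tests above pass.
    def rank_by_recent_views(history):
        return [item for item, _ in sorted(history, key=lambda pair: pair[1], reverse=True)]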