At RED Global we are currently looking for an expert in application management and DevOps engineering.

Start: immediately
Duration: initial engagement until 31.03.2025 + extension option
Workload: full-time
Location: remote, Vienna
Role: Operations Manager / web servers Tomcat & Apache
Project language: German
Volume: 1,600 hours; largely remote work is possible, but on-site presence is required for meetings on individual days every two weeks.

For a public-sector client we are seeking an application manager / DevOps engineer to support the current two-person team as an additional full-time resource. The following core skills are required:
- Linux/systemd (command-line administration)
- Apache and Tomcat servers (configuration and operation)
- OpenLDAP (configuration, operation, database manipulation)

Desirable, but by arrangement definitely not a prerequisite:
- Experience in the public administration environment
- Certificate handling (SSL and client certificates, OpenSSL)
- SAML and OpenID Connect protocols
- Modules for Online Applications (MOA) as the basis for Handysignatur and ID Austria

The work covers both technical and administrative operations, as well as the ongoing expansion of central portal infrastructures.

If this project appeals to you, I would be delighted to hear back from you. Please send me your current CV, your hourly rate and your phone number. I will get back to you as soon as possible to discuss further details.

Many thanks & best regards
Mike Feustel
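The OpenID Connect skills listed above centre on the authorization-code flow, whose first step is redirecting the browser to the identity provider with a small set of query parameters. A minimal Python sketch of building that request URL follows; the endpoint, client ID and redirect URI are hypothetical placeholders, not values from any real deployment:

```python
from urllib.parse import urlencode

def build_oidc_auth_request(authorize_url, client_id, redirect_uri, state):
    """Construct an OpenID Connect authorization-code request URL."""
    params = {
        "response_type": "code",    # authorization-code flow
        "scope": "openid profile",  # the 'openid' scope is mandatory in OIDC
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,             # CSRF protection, echoed back by the provider
    }
    return authorize_url + "?" + urlencode(params)

url = build_oidc_auth_request(
    "https://idp.example.org/authorize",   # hypothetical IdP endpoint
    "portal-client",
    "https://portal.example.org/callback",
    "xyz123",
)
```

The provider answers the callback with a one-time code, which the portal then exchanges for tokens server-side; this sketch covers only the initial redirect.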
07/05/2024
Project-based
We are currently looking to recruit experienced, self-starting Software Engineers for positions in Filton, Bristol. The roles are 100% on site, condensed into 4 days on site per week. Applicants must be highly proficient in C++ and/or Core Java.

Please note: all projects are UK eyes only and require employees to achieve the appropriate clearance relevant to the role.

Responsibilities:
- In conjunction with the rest of the project team, participate in the design, development and proving activities for a launcher sub-system.
- Undertake a full range of engineering activities in line with relevant company processes and standards.
- Take ownership of features.
- Carry out system integration activities on a virtual test environment and potentially on representative and deliverable hardware.
- Depending on the skills and experience of the engineer, the role could potentially involve the whole development life cycle, from architectural specification of the software product through to testing and verification.
- Support the production of project documentation.

Skillset/experience required:
- A Software Engineer capable of design, development and proving of complex software products.
- The ability to communicate technical issues with other engineers and stakeholders from different skill areas.
- Experience operating as part of a collaborative Agile team.
- Knowledge of software development practices and processes.
- Familiarity with a range of CI/CD/DevOps toolsets; Jira, GitHub and Jenkins in particular.
- Mandatory knowledge of C++ or Core Java; having both is desirable.
- Desirable knowledge includes JavaFX, Qt, MISRA and Google Test.
07/05/2024
Project-based
Rust Programmer - Remote - 7-8 months+ (Rust, AWS, Lambda, Jenkins, Linux)

One of our blue-chip clients is urgently looking for a Rust Programmer. For this role you can work remotely. Please find some details below:

We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable.

Key Responsibilities:
1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation.
2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
3. Manage databases used in production systems, ensuring data integrity, performance, and security.
4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing.
8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms.
10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions.
11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and shell scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
- Strong communication skills and ability to document technical solutions effectively.

Preferred Qualifications:
- Certification in AWS or relevant cloud computing technologies.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of DevOps practices and infrastructure-as-code tools like Terraform.
- Understanding of cybersecurity principles and best practices for securing web applications.
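The core workflow described above is fanning image objects stored in S3 out to processing workers. Although the role itself calls for Rust, the pattern can be sketched compactly in Python for illustration; the event shape follows the documented S3 notification format, and all helper names and the luma formula choice are illustrative, not taken from the client's system:

```python
def extract_keys(s3_event):
    """Pull object keys out of an S3 put-event payload (structure per AWS docs)."""
    return [rec["s3"]["object"]["key"] for rec in s3_event.get("Records", [])]

def chunk_keys(keys, batch_size):
    """Split a list of object keys into fixed-size batches for worker fan-out."""
    return [keys[i:i + batch_size] for i in range(0, len(keys), batch_size)]

def to_grayscale(pixel):
    """One example processing step: ITU-R BT.601 luma for an (r, g, b) pixel."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

In the real system each batch would be handed to a Rust worker (e.g. a Lambda function or an EMR step); the sketch only shows the shape of the dispatch logic.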
Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
Key Essential Skills:
- 5+ years' experience working as a network SME (certifications required)
- Network infrastructure: Cisco Nexus, ACI
- Firewalls: Palo Alto, Fortinet, Juniper firewalls, F5
- Programming: Python scripting experience
- DevOps: 5+ years' experience with automated code deployment tools (Ansible, Puppet)
- SNOW Nanobot
- AVI load balancers
- Experience in host, network and application security
- Integration & automation within AWS (certified)
- Experience working in the financial services sector

Desirable Skills:
- Working knowledge of the Agile framework and experience working with it.
- Excellent written and oral communication skills
- Extensive knowledge of Internet security issues and the threat landscape
- Strong knowledge of web protocols and a good knowledge of Linux/Unix tools and architecture
- Self-driven, enjoys new challenges and quickly adapts to change in a fast-moving environment

Overview:
An exciting opportunity has opened for a Network Automation Engineer to join us with an immediate start. You will be working with our client, a global leader in financial services technology, with a focus on retail and institutional banking, payments, asset and wealth management, risk and compliance, and outsourcing solutions. To be successful in this role you will bring a wealth of knowledge and come from a networking background, holding 10+ years' experience working as a Network Engineer with in-depth knowledge of routing/switching, firewalls and load balancers.

Role & Responsibilities:
- Automate and scale security issue response.
- Plan the design and implementation of changes across multiple network technologies.
- Deliver network and security improvement plans.
- Personally meet customer needs related to security products and services.
- Highlight and suggest improvements in processes, systems and procedures.
- Grow own capabilities by pursuing and investing in personal development opportunities.
- Demonstrable experience in network and/or security services or security principles.
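Much of the automation work described above, whether driven through Ansible, Puppet or plain Python, reduces to rendering per-device configuration from structured inventory data and pushing it out. A minimal, vendor-neutral Python sketch of the rendering step (the template text and device fields are hypothetical examples, not any client's real config):

```python
# Illustrative base-config template; real templates would typically use Jinja2.
TEMPLATE = (
    "hostname {name}\n"
    "interface {mgmt_if}\n"
    " ip address {mgmt_ip} 255.255.255.0\n"
)

def render_configs(inventory):
    """Render one config snippet per device from a list of inventory dicts."""
    return {dev["name"]: TEMPLATE.format(**dev) for dev in inventory}

configs = render_configs([
    {"name": "core-sw1", "mgmt_if": "Vlan10", "mgmt_ip": "10.0.0.1"},
    {"name": "core-sw2", "mgmt_if": "Vlan10", "mgmt_ip": "10.0.0.2"},
])
```

Keeping the template separate from the inventory is what lets a change be planned once and rolled out consistently across many devices, which is the heart of the responsibilities listed above.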
Outline Thebes Group:
Thebes Group is a leading UK-wide IT infrastructure technology consultancy. We are well known for our extensive talent pool of highly competent IT professionals and exclusive Academy programmes, which provide a great opportunity to undertake technical training in core disciplines. Thebes works with a number of leading vendors, government bodies, financial institutions and insurance companies, including investment banks, brokers and hedge funds. Thebes does IT solutions & services differently from most other IT service providers. As an Assured Outcome Provider (AOP) we have spent fifteen years willingly sharing the client's risk with them by focusing on outputs (i.e. quality service & solutions and ROI) rather than inputs (i.e. price lists and headcount). We do this by fitting our skills, solutions & capabilities to needs, augmenting our staff with enthusiastic professionals from our Academy programme and remaining flexible as our clients' needs change.
07/05/2024
Project-based
F5 WAF Engineer

Whitehall Resources are looking for an F5 WAF Engineer. This is an initial 6-month contract, working onsite 2 days per week in Sheffield.

*Inside IR35 - You will be required to use an FCSA Accredited Umbrella Company*

Job Description:
As an Automation Engineer, you will play a pivotal role in enhancing our IT infrastructure by designing, creating, and maintaining bespoke Continuous Integration/Continuous Deployment (CI/CD) pipelines tailored to specific project needs. This role will have an initial focus on leveraging F5 technologies alongside a broad spectrum of automation and DevOps practices to deliver our automation use cases; once the F5 automation work has completed, work will progress to other WAF platforms and use cases. You will be responsible for the integration of CI/CD pipelines with solutions developed by other teams, scripting, and the creation of Infrastructure as Code (IaC) manifests using tools like Terraform and Ansible. Your expertise in Jenkins, JIRA, GitHub, Python, and other relevant technologies will be essential. You should have a solid background in building CI/CD pipelines and a comprehensive understanding of DevOps practices. The ideal candidate should not only have technical proficiency in data structures, automation technologies, API interactions, and cloud services, but also exhibit a strong drive to research, investigate, and collaborate effectively within the organization.

Key Responsibilities:
- Developing and delivering automation for the F5 WAF platform: in the first instance, developing and delivering automation solutions specifically for our F5 Web Application Firewall (WAF) platform, aligned with our specific use cases. This involves scripting, configuring, and deploying automation workflows that enhance the security, manageability, and operational efficiency of the F5 WAF environment.
- CI/CD pipeline development: create, enhance and implement new, customized CI/CD pipelines tailored for specific project use cases, ensuring efficient, automated workflows.
- Pipeline maintenance: regularly update and maintain existing CI/CD pipelines to ensure they are efficient, secure, and up to date with the latest technology standards.
- Integration of solutions: work collaboratively with other teams to integrate their solutions and tools into the CI/CD pipelines effectively, enhancing overall workflow and productivity.
- IaC manifest creation: develop and maintain Infrastructure as Code (IaC) manifests, predominantly using Terraform, to manage and provision IT infrastructure in a consistent and repeatable manner.
- Tool proficiency: utilize and demonstrate expertise in tools such as Jenkins, JIRA, GitHub, and Python, effectively integrating them into the CI/CD processes.
- Script writing: write and maintain scripts to automate various aspects of the infrastructure and deployment processes, improving efficiency and reducing the potential for human error.
- Collaboration and communication: collaborate with cross-functional teams, including software development, operations, and quality assurance, to ensure seamless integration and implementation of DevOps practices.
- Proactive research and collaboration: eager to research and utilize company resources like Confluence, find relevant contacts, and reach out to other teams for unknowns. Prepared to independently investigate and resolve challenges.

Required F5 experience - one or more of these:
- F5 ASM/AWAF knowledge & experience: understanding and practical experience with F5's Application Security Manager (ASM) and Advanced WAF (AWAF), including configuration, management, and troubleshooting of application security policies and web application firewalls.
- F5 with API gateways: experience integrating F5 solutions with API gateway technologies, demonstrating the ability to secure and manage APIs effectively. Experience in using F5 with Kong API Gateway; managing and optimizing API traffic through F5 systems.
- F5 GTM and proxy technologies: knowledge and experience with F5's Global Traffic Manager (GTM) as well as experience with proxy technologies, including forward and reverse proxies.
- Basic certificate management: knowledge of SSL/TLS certificate management processes, including issuance, renewal, and deployment, within F5 environments.
- F5 AS3: experience with AS3 (Application Services 3 Extension) for declarative automation and orchestration of F5 BIG-IP services. Proficiency in automating the deployment and management of F5 configurations using AS3.

Key Experience - Ideal Candidate Profile:
- Technical expertise in CI/CD tools: proficiency in Continuous Integration and Continuous Deployment tools such as Jenkins, CircleCI, Travis CI, GitLab CI, and Bamboo. Ability to configure, manage, and optimize these tools for various project requirements.
- Proficiency in scripting languages: strong skills in scripting languages such as Python, Bash and PowerShell. Ability to write and maintain scripts to automate routine tasks and deployments.
- Infrastructure as Code (IaC): extensive experience in creating and managing infrastructure using code. Proficiency in IaC tools like Terraform, Ansible, Chef, or Puppet.
- Data structuring and management: advanced skills in managing data using formats like JSON, YAML, XML, and others. Capable of parsing, creating, and maintaining complex data structures for configuration and automation purposes.
- API integration and management: expertise in querying, integrating, and managing APIs. Capable of constructing and executing API calls for data retrieval, updates, and inter-service communication.
- Version control systems: in-depth knowledge of version control systems like Git, including branching strategies, repository management, and integration with CI/CD pipelines.
- Containerization and orchestration: experience with containerization tools such as Docker and orchestration platforms like Kubernetes or Docker Swarm. Understanding of containerized environments and their integration into CI/CD pipelines.
- Cloud platforms: familiarity with major cloud platforms like AWS, Azure, or GCP; understanding of cloud-specific services and how to integrate them into CI/CD processes.
- Monitoring and logging: knowledge of monitoring and logging tools such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), or Splunk. Ability to set up and maintain monitoring and logging for applications and infrastructure.
- Security practices in DevOps (DevSecOps): understanding of security practices in a DevOps environment. Familiarity with security scanning tools, implementing secure coding practices, and ensuring compliance with industry standards.
- Agile and Scrum methodologies: experience with Agile and Scrum methodologies. Ability to work in fast-paced, iterative development environments and adapt to changing requirements.
- Networking and security fundamentals: knowledge of networking concepts (e.g. TCP/IP, DNS, HTTP/S) and basic security concepts (e.g. firewalls, VPNs, IDS/IPS).
- Problem-solving and analytical skills: strong problem-solving skills and the ability to analyze complex systems and workflows to propose effective automation solutions.
- Collaboration and communication: excellent collaboration and communication skills. Ability to work effectively in a team and communicate complex technical concepts to both technical and non-technical stakeholders.
- Project management skills: basic project management skills with the ability to manage timelines, dependencies, and deliverables in a cross-functional environment.
- Research and investigative skills: motivated to self-educate and explore company resources and external knowledge bases.
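AS3, mentioned among the required F5 skills, configures BIG-IP declaratively: the desired end state is expressed as a JSON document and POSTed to the AS3 REST endpoint, which makes it a natural fit for CI/CD pipelines. The Python sketch below assembles a skeletal declaration; the tenant and application names are hypothetical, and the exact fields should be checked against the AS3 schema reference for the version in use:

```python
import json

def build_as3_declaration(tenant, app, virtual_ip):
    """Assemble a skeletal AS3 declaration (objects are typed via their 'class' field)."""
    return {
        "class": "AS3",
        "action": "deploy",
        "declaration": {
            "class": "ADC",
            "schemaVersion": "3.0.0",
            tenant: {
                "class": "Tenant",
                app: {
                    "class": "Application",
                    "serviceMain": {
                        "class": "Service_HTTP",
                        "virtualAddresses": [virtual_ip],
                    },
                },
            },
        },
    }

# Serialize for a POST to the BIG-IP's AS3 endpoint (e.g. from a Jenkins stage).
payload = json.dumps(build_as3_declaration("Team_WAF", "portal", "192.0.2.10"))
```

Because the whole configuration lives in one JSON document, it can be version-controlled in Git and promoted through pipeline stages like any other IaC artifact, which is precisely the workflow this role automates.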
All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who value a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
07/05/2024
Project-based
F5 WAF Engineer

Whitehall Resources are looking for an F5 WAF Engineer. This is an initial 6-month contract, working onsite 2 days per week in Sheffield.

*Inside IR35 - You will be required to use an FCSA Accredited Umbrella Company*

Job Description:
As an Automation Engineer, you will play a pivotal role in enhancing our IT infrastructure by designing, creating, and maintaining bespoke Continuous Integration/Continuous Deployment (CI/CD) pipelines tailored to specific project needs. The role has an initial focus on leveraging F5 technologies alongside a broad spectrum of automation and DevOps practices to deliver our automation use cases; once the F5 automation work is complete, the work will progress to other WAF platforms and use cases. You will be responsible for integrating CI/CD pipelines with solutions developed by other teams, scripting, and creating Infrastructure as Code (IaC) manifests using tools like Terraform and Ansible. Your expertise in Jenkins, JIRA, GitHub, Python, and other relevant technologies will be essential. You should have a solid background in building CI/CD pipelines and a comprehensive understanding of DevOps practices. The ideal candidate should not only have technical proficiency in data structures, automation technologies, API interactions, and cloud services, but also a strong drive to research, investigate, and collaborate effectively within the organization.

Key Responsibilities:
- Developing and delivering automation for the F5 WAF platform: in the first instance, developing and delivering automation solutions specifically for our F5 Web Application Firewall (WAF) platform, aligned with our specific use cases. This involves scripting, configuring, and deploying automation workflows that enhance the security, manageability, and operational efficiency of the F5 WAF environment.
- CI/CD pipeline development: create, enhance and implement new, customized CI/CD pipelines tailored for specific project use cases, ensuring efficient, automated workflows.
- Pipeline maintenance: regularly update and maintain existing CI/CD pipelines to ensure they are efficient, secure, and up to date with the latest technology standards.
- Integration of solutions: work collaboratively with other teams to integrate their solutions and tools into the CI/CD pipelines effectively, enhancing overall workflow and productivity.
- IaC manifest creation: develop and maintain Infrastructure as Code (IaC) manifests, predominantly using Terraform, to manage and provision IT infrastructure in a consistent and repeatable manner.
- Tool proficiency: utilize and demonstrate expertise in tools such as Jenkins, JIRA, GitHub, and Python, integrating them effectively into the CI/CD processes.
- Script writing: write and maintain scripts to automate various aspects of the infrastructure and deployment processes, improving efficiency and reducing the potential for human error.
- Collaboration and communication: collaborate with cross-functional teams, including software development, operations, and quality assurance, to ensure seamless integration and implementation of DevOps practices.
- Proactive research and collaboration: eager to research and utilize company resources like Confluence, find relevant contacts, and reach out to other teams for unknowns. Prepared to independently investigate and resolve challenges.

Required F5 Experience - one or more of these:
- F5 ASM/AWAF knowledge and experience: understanding and practical experience with F5's Application Security Manager (ASM) and Advanced WAF (AWAF), including configuration, management, and troubleshooting of application security policies and web application firewalls.
- F5 with API Gateway: experience integrating F5 solutions with API Gateway technologies, demonstrating the ability to secure and manage APIs effectively. Experience in using F5 with Kong API Gateway; managing and optimizing API traffic through F5 systems.
- F5 GTM and proxy technologies: knowledge and experience with F5's Global Traffic Manager (GTM), as well as experience with proxy technologies, including forward and reverse proxies.
- Basic certificate management: knowledge of SSL/TLS certificate management processes, including issuance, renewal, and deployment, within F5 environments.
- F5 AS3: experience with AS3 (Application Services 3 Extension) for declarative automation and orchestration of F5 BIG-IP services; proficiency in automating the deployment and management of F5 configurations using AS3.

Key Experience - Ideal Candidate Profile:
- Technical expertise in CI/CD tools: proficiency in Continuous Integration and Continuous Deployment tools such as Jenkins, CircleCI, Travis CI, GitLab CI, and Bamboo. Ability to configure, manage, and optimize these tools for various project requirements.
- Proficiency in scripting languages: strong skills in scripting languages such as Python, Bash, and PowerShell. Ability to write and maintain scripts to automate routine tasks and deployments.
- Infrastructure as Code (IaC): extensive experience in creating and managing infrastructure using code. Proficiency in IaC tools like Terraform, Ansible, Chef, or Puppet.
- Data structuring and management: advanced skills in managing data using formats like JSON, YAML, and XML. Capable of parsing, creating, and maintaining complex data structures for configuration and automation purposes.
- API integration and management: expertise in querying, integrating, and managing APIs. Capable of constructing and executing API calls for data retrieval, updates, and inter-service communication.
- Version control systems: in-depth knowledge of version control systems like Git, including branching strategies, repository management, and integration with CI/CD pipelines.
- Containerization and orchestration: experience with containerization tools such as Docker and orchestration platforms like Kubernetes or Docker Swarm. Understanding of containerized environments and their integration into CI/CD pipelines.
- Cloud platforms: familiarity with major cloud platforms like AWS, Azure, or GCP; understanding of cloud-specific services and how to integrate them into CI/CD processes.
- Monitoring and logging: knowledge of monitoring and logging tools such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), or Splunk. Ability to set up and maintain monitoring and logging for applications and infrastructure.
- Security practices in DevOps (DevSecOps): understanding of security practices in a DevOps environment. Familiarity with security scanning tools, implementing secure coding practices, and ensuring compliance with industry standards.
- Agile and Scrum methodologies: experience with Agile and Scrum. Ability to work in fast-paced, iterative development environments and adapt to changing requirements.
- Networking and security fundamentals: knowledge of networking concepts (e.g. TCP/IP, DNS, HTTP/S) and basic security concepts (e.g. firewalls, VPNs, IDS/IPS).
- Problem-solving and analytical skills: strong problem-solving skills and the ability to analyze complex systems and workflows to propose effective automation solutions.
- Collaboration and communication: excellent collaboration and communication skills. Ability to work effectively in a team and communicate complex technical concepts to both technical and non-technical stakeholders.
- Project management skills: basic project management skills, with the ability to manage timelines, dependencies, and deliverables in a cross-functional environment.
- Research and investigative skills: motivated to self-educate and explore company resources and external knowledge bases.
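The AS3 extension named in the F5 requirements works by submitting one declarative JSON document that describes the desired BIG-IP application state, which makes it a natural fit for the CI/CD pipelines this role builds. Below is a minimal, hedged sketch of generating such a declaration in Python; the tenant name, application name, and addresses are hypothetical illustration values, not part of this job description.

```python
import json

def build_as3_declaration(tenant, app, virtual_ip, pool_members):
    """Return a minimal AS3 'deploy' payload describing one HTTP virtual
    server and its pool. All names/addresses passed in are examples."""
    return {
        "class": "AS3",
        "action": "deploy",
        "declaration": {
            "class": "ADC",
            "schemaVersion": "3.0.0",
            tenant: {
                "class": "Tenant",
                app: {
                    "class": "Application",
                    "service": {
                        "class": "Service_HTTP",
                        "virtualAddresses": [virtual_ip],
                        "pool": "web_pool",
                    },
                    "web_pool": {
                        "class": "Pool",
                        "members": [
                            {"servicePort": 80,
                             "serverAddresses": pool_members}
                        ],
                    },
                },
            },
        },
    }

decl = build_as3_declaration("TenantA", "AppA", "10.0.1.10",
                             ["10.0.2.10", "10.0.2.11"])
# A pipeline stage would POST this document to the BIG-IP AS3 endpoint,
# e.g. POST https://<big-ip>/mgmt/shared/appsvcs/declare
# (authentication and TLS verification omitted in this sketch).
print(json.dumps(decl)[:40])
```

Because the whole configuration is one JSON document, a pipeline can lint and diff it like any other code artifact before deployment.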
All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who value a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
Role: DevOps Engineer
Salary: Up to £50,000 per annum, dependent on experience
Location: Hybrid/Romsey
SC clearance is required for this role.

We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will be overseeing code releases and deployments, and supporting operational systems.

Skills and experience:
- Active SC clearance
- Experience with cloud technologies, e.g. AWS or Azure
- Programming language experience, e.g. Java, Python, Node.js or SQL
- Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop

If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
07/05/2024
Full time
Role: DevOps Engineer
Salary: Up to £50,000 per annum, dependent on experience
Location: Hybrid/Woking
SC clearance is required for this role.

We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will be overseeing code releases and deployments, and supporting operational systems.

Skills and experience:
- Active SC clearance
- Experience with cloud technologies, e.g. AWS or Azure
- Programming language experience, e.g. Java, Python, Node.js or SQL
- Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop

If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
07/05/2024
Full time
Your opportunity
- To work on our mission to empower every person and every business unit in the group to achieve more thanks to the Microsoft Power Platform
- To support everyone to build great solutions in Microsoft PowerApps, Power Automate and Power BI with high business value
- To work with internal Zurich teams and external IT suppliers on a variety of initiatives and global projects
- Join the experienced Power Platform Center for Enablement of one of the biggest Power Platform consumers in the world

As a Power Platform Solution Architect, your main responsibilities will involve:
- Empowerment Program: identification of teams and individuals interested in learning more about the Power Platform; delivery of tailored Power Platform trainings internally to empower our collaborators to deliver better value to internal and external customers
- Reusability: identification of successful solutions built internally which could be reused across the organization to further increase the related ROI; implementation of improvements on such solutions to support scale and roll-out to a wider population
- Power Pages Governance: assessment of Power Pages technology; definition and implementation of a suitable governance strategy for the organization; identification of a leading use case to implement and showcase the product
- Mentoring lower-level colleagues
- Working in Agile methodology (Scrum, Kanban) using Azure DevOps

Your experience
As a Microsoft 365 Solution Architect, your skills and qualifications will ideally include:
- Deep knowledge of Power Platform technologies, with experience in 3 or more of the following: SharePoint Online, Microsoft Teams, Dynamics 365, Power BI, Power Apps, Power Automate, Dataverse, Power Pages
- Preferably some experience in IT Governance
- Preferably a Software Engineering degree (Informatics and Computer Engineering)
- Good negotiating skills, performance management, good practice and techniques, as well as fluent written and spoken English
- A very good team player who is skilled at building up and managing stakeholder relationships successfully
- Ideally, you already hold Power Platform certifications

Your technical skills
- Power Platform products (PowerApps, Power Automate, AI Builder, etc.)
- Microsoft Office 365 (SharePoint Online, MS Teams, MS Forms, Outlook, etc.)
- Azure Cloud Services

Job Title: Microsoft Power Platform Solution Architect
Location: Zürich, Switzerland
Job Type: Contract

TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice, available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and, as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. We comply with our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield.
07/05/2024
Project-based
Azure DevOps Engineer - AKS, Azure Services, Terraform, Linux, Docker, Azure Certified

Long-term, fully remote contract opportunity for a proven Azure DevOps Engineer to join my leading telecommunications client on their enterprise-wide cloud/infrastructure transformation programme. The role of the Azure DevOps Engineer will be to design, architect, implement, maintain and support automation for monitoring, logging and alerting for Azure cloud infrastructure. You will also install, configure and manage AKS clusters, maintain operating systems, and monitor and test application performance to identify and resolve any issues. To be successful in this role you must be Azure Cloud certified and have previously been a hands-on Azure DevOps Engineer. You must know AKS (Azure Kubernetes Service) and must be familiar with the core Azure services (VMs, Azure Database for PostgreSQL, Storage, Virtual Network, Azure Resource Manager, Key Vault, Monitoring). You must also have experience with Terraform and a background in Linux. Please apply immediately if this sounds like the next role for you!
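The monitoring-and-alerting automation described here ultimately boils down to rules that can be expressed, and unit-tested, as plain code before they are wired into any cloud tooling. As a rough illustration (not the client's actual stack), here is a pure-Python sketch of a "fire when N consecutive samples exceed a threshold" alert rule; the metric name and threshold values are hypothetical.

```python
def should_alert(samples, threshold=90.0, consecutive=3):
    """Return True if `consecutive` samples in a row exceed `threshold`.

    This mirrors the common alert-rule shape "CPU > 90% for 3 intervals";
    keeping the rule as a pure function lets a CI pipeline test it
    independently of the monitoring platform that will evaluate it.
    """
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

# Three consecutive breaches fire the alert; a dip resets the streak.
print(should_alert([95, 96, 97]))       # True
print(should_alert([95, 50, 97, 98]))   # False
```

The same streak-counting logic is what platforms like Azure Monitor implement when you configure an alert with a frequency and a "number of violations" condition.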
07/05/2024
Project-based
Python Programmer - Brussels - English speaking (ML, Machine Learning, Data, Data Wrangling, AWS, Linux, Kubernetes, Argo, Automation)

One of our blue-chip clients is urgently looking for a Python Programmer. Please find some details below:

We are seeking a highly skilled Senior Python Programmer with expertise in machine learning (ML) data wrangling, interfacing, and automation. The ideal candidate will be proficient in building robust data pipelines and automating complex tasks to support ML initiatives. They will have a keen understanding of observability principles and hands-on experience with AWS, Linux, and preferably Kubernetes and Argo.

Responsibilities:
- Develop and maintain robust data pipelines for ML data wrangling, interfacing, and automation.
- Implement automation solutions to streamline data processing and model deployment workflows.
- Ensure observability and monitoring of systems, providing insights into performance and reliability.
- Utilize AWS services such as S3, Lambda, and networking components for data storage, processing, and permissions management.
- Collaborate with DevOps teams to deploy and manage applications in Linux environments.
- Support Kubernetes and Argo workflows for scalable and efficient ML model training and deployment.
- Manage AWS permissions and network configurations to ensure data security and compliance.
- Maintain version control of the codebase using Git and enforce best practices for code documentation and production readiness.
- Collaborate with data scientists to develop small UI tools for querying data from databases and AWS S3.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proficiency in the Python programming language, with a focus on ML data wrangling and automation.
- Strong experience with AWS services, including S3, Lambda, networking, and permissions management.
- Hands-on experience with Linux environments and shell scripting.
- Familiarity with Kubernetes and Argo for container orchestration and workflow management (preferred).
- Knowledge of Git for version control and collaboration.
- Excellent communication skills and ability to work in a collaborative team environment.
- Strong problem-solving skills and attention to detail.
- Ability to prioritize tasks and work efficiently in a fast-paced environment.

Please send a CV for full details and immediate interviews. We are a preferred supplier to the client.
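The "ML data wrangling" at the heart of this role typically means normalizing heterogeneous input records into one consistent schema before they enter a pipeline. A minimal, stdlib-only sketch of that step is below; the field names (`record_id`, `val`, `label`) are hypothetical examples, not taken from the client's data.

```python
import json

def normalize_record(raw):
    """Map differently-named source fields onto one canonical schema.

    Upstream sources might call the key 'id' or 'record_id', send the
    numeric value as a string, or leave the label missing entirely;
    downstream ML code should only ever see the cleaned shape.
    """
    return {
        "id": str(raw.get("id") or raw.get("record_id") or ""),
        "value": float(raw.get("value", raw.get("val", 0.0))),
        "label": (raw.get("label") or "unknown").strip().lower(),
    }

raw_records = [
    {"record_id": 1, "val": "3.5", "label": " Cat "},   # stringly-typed source
    {"id": 2, "value": 7, "label": None},               # missing label
]
clean = [normalize_record(r) for r in raw_records]
print(json.dumps(clean))
```

In a real pipeline this function would sit behind an S3 read (e.g. via boto3) and feed validated records onward, which is exactly the kind of small, testable unit that keeps a data pipeline robust.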
07/05/2024
Project-based
Rust Programmer - Brussels - English speaking (Rust, AWS, Lambda, Jenkins, Linux)

One of our blue-chip clients is urgently looking for a Rust Programmer. Please find some details below:

We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Expertise in Linux environments and web development using React.js is also essential for this role. The candidate should demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable.

Key Responsibilities:
1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation.
2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
3. Manage databases used in production systems, ensuring data integrity, performance, and security.
4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing.
8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms.
10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions.
11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and shell scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
- Strong communication skills and ability to document technical solutions effectively.

Preferred Qualifications:
- Certification in AWS or relevant cloud computing technologies.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of DevOps practices and infrastructure as code tools like Terraform.
- Understanding of cybersecurity principles and best practices for securing web applications.
Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
Client: Global banking sector
PAYE - inside IR35
12 months' contract
Location: London, UK
Hybrid working (50% on site)
At least 12 years' experience needed!

Key Roles & Responsibilities
You will lead a global team of engineers to measure and improve performance and efficiency, driving deliverables through engineering excellence and effective relationship building with supporting technical service teams, up/downstream dependencies, stakeholders and vendors. You will be expected to:
- Lead a global team of engineers.
- Take responsibility for building and maintaining CI/CD pipelines and infrastructure serving several business-critical applications.
- Administer OpenShift and Kubernetes environments.
- Design resilient, scalable application infrastructure solutions for the following components: Elastic Stack, Confluent Kafka, DataStax Cassandra.
- Provide operational support to developers, production services and management teams where necessary.
- Build and maintain relationships with stakeholders, vendors, and technical service teams.
- Adhere to, maintain, and improve the adoption of rigorous compliance measures in all aspects of your role.
- Automate everything through documented, reusable code.
- Have experience of supporting large-scale data processing systems written in Java.
- Plan and execute work using Agile methodologies: JIRA planning and task prioritization, Confluence documentation, and Bitbucket/GHE for all the wonderful code you'll be building.

Who we're looking for:
- OpenShift/Kubernetes and Docker administration. You will have built and managed clusters and understand all aspects of both platform- and application-layer configuration and troubleshooting.
- A development background is preferred.
- A previous DevOps, Sys Admin, SRE or Test Automation role.
- Scripting! We use Ansible, Puppet, Bash, Python, Groovy and Golang.
- CI/CD pipeline development will be your daily bread and butter; if it can be automated, your instinctive DevOps nature will compel you to design one.
Proven experience with Kafka, Cassandra & Elasticsearch will be a significant advantage, as developers love to break these things. Experience working within the finance industry is desirable.
07/05/2024
Project-based
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role:
You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £125-150k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role:
You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £75-100k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role:
You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £100-125k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role:
You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £100-125k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role:
You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £75-100k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role:
You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £125-150k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
eDV DEVOPS ENGINEER

NEW CONTRACT OPPORTUNITY AVAILABLE FOR A DEVOPS ENGINEER WITH eDV IN MANCHESTER WORKING WITH A LEADING CONSULTANCY ON GOVERNMENT PROJECTS
- Contract opportunity in Manchester for a DevOps Engineer to work on mission-critical projects. You must have an active eDV Clearance to start.
- Day rate up to £800 per day INSIDE IR35
- Manchester-based in an easily accessible location - hybrid working model
- To apply please email (see below) or call

WHO WE ARE
We are recruiting DevOps Engineers for a globally leading consultancy in Manchester to work on a portfolio of public and private sector projects. We work across a range of industries, supporting both small and large clients using cutting-edge technology. Our teams are what lead us forward, and we are therefore looking for the best talent to join us as we continue to bring the best to the table. Due to the nature of our clients, you must have an active eDV clearance, which we will sponsor!

WHAT WILL THE DEVOPS ENGINEER BE DOING?
We are looking for talented DevOps Engineers with an interest in public sector work to join our highly skilled engineering teams specialising in agile custom software development. As DevOps Engineer you will be responsible for:
- Deploying APIs and UI components into a Kubernetes cluster.
- Integrating API/UI components with existing data stores and APIs.
- Maintaining and developing existing architectural components including data ingest, data stores and REST APIs.

THE DEVOPS ENGINEER SHOULD HAVE:
- An active eDV Clearance (required)
- Apache NiFi
- Flink
- Java
- Ansible
- Docker
- Kubernetes
- ELK stack
- Linux sys admin for deployed clusters (tens of servers)
- Jenkins pipeline development
- Integration/debugging
- Understanding of complex system architectures
- Technological curiosity and the willingness/ability to tactically upskill in new technologies

It would be nice to have: An interest in National Security and Defence

TO BE CONSIDERED:
Please either apply by clicking online or email me directly at (see below) - I can make myself available outside of normal working hours to suit, from 7am until 10pm. If unavailable, please leave a message and either myself or one of my colleagues will respond. By applying for this role, you give express consent for us to process & submit (subject to required skills) your application to our client in conjunction with this vacancy only. I look forward to hearing from you.

DEVOPS ENGINEERS - MANCHESTER
KEY SKILLS: SOFTWARE DEVELOPER/SOFTWARE ENGINEER/SENIOR SOFTWARE DEVELOPER/SENIOR SOFTWARE ENGINEER/DEVOPS ENGINEER/DEVOPS/APACHE NIFI/FLINK/JAVA/ANSIBLE/DOCKER/KUBERNETES/ELK STACK/TERRAFORM/LINUX/GIT
07/05/2024
Project-based
DEVOPS ENGINEER

NEW PERMANENT OPPORTUNITY AVAILABLE FOR A DEVOPS ENGINEER IN MANCHESTER WORKING WITH A LEADING CONSULTANCY ON GOVERNMENT PROJECTS
- Permanent opportunity in Manchester for a DevOps Engineer to work on mission-critical projects. You must have an active eDV Clearance to start.
- Competitive salary package depending on experience, accompanied by a large training/certification budget
- Manchester-based in an easily accessible location - hybrid working model
- To apply please email (see below) or call

WHO WE ARE
We are recruiting DevOps Engineers for a globally leading consultancy in Manchester to work on a portfolio of public and private sector projects. We work across a range of industries, supporting both small and large clients using cutting-edge technology. Our teams are what lead us forward, and we are therefore looking for the best talent to join us as we continue to bring the best to the table. Due to the nature of our clients, you must have an active eDV clearance, which we will sponsor!

WHAT WILL THE DEVOPS ENGINEER BE DOING?
We are looking for talented DevOps Engineers with an interest in public sector work to join our highly skilled engineering teams specialising in agile custom software development. As DevOps Engineer you will be responsible for:
- Deploying APIs and UI components into a Kubernetes cluster.
- Integrating API/UI components with existing data stores and APIs.
- Maintaining and developing existing architectural components including data ingest, data stores and REST APIs.

You will be working with cutting-edge technology and given the opportunity for multiple paid training opportunities and certifications. Paid certifications can be done during your working week, as the client allows extra days on top of your holidays for this training.

THE DEVOPS ENGINEER SHOULD HAVE:
- An active eDV Clearance (required)
- Apache NiFi
- Flink
- Java
- Ansible
- Docker
- Kubernetes
- ELK stack
- Linux sys admin for deployed clusters (tens of servers)
- Jenkins pipeline development
- Integration/debugging
- Understanding of complex system architectures
- Technological curiosity and the willingness/ability to tactically upskill in new technologies

It would be nice to have: An interest in National Security and Defence

TO BE CONSIDERED:
Please either apply by clicking online or email me directly at (see below) - I can make myself available outside of normal working hours to suit, from 7am until 10pm. If unavailable, please leave a message and either myself or one of my colleagues will respond. By applying for this role, you give express consent for us to process & submit (subject to required skills) your application to our client in conjunction with this vacancy only. I look forward to hearing from you.

DEVOPS ENGINEERS - MANCHESTER
KEY SKILLS: SOFTWARE DEVELOPER/SOFTWARE ENGINEER/SENIOR SOFTWARE DEVELOPER/SENIOR SOFTWARE ENGINEER/DEVOPS ENGINEER/DEVOPS/APACHE NIFI/FLINK/JAVA/ANSIBLE/DOCKER/KUBERNETES/ELK STACK/TERRAFORM/LINUX/GIT
07/05/2024
Full time