Cloud & Infrastructure, AWS, Windows Server Admin, Networking Admin, PowerShell, Terraform, DevOps Engineer, Systems Engineer, Systems Admin - MUST be able to drive. *Must be able to drive to site (own vehicle) 3 days a week; 2 days a week remote.* An exciting financial services firm is looking for an Infrastructure/Cloud Engineer (DevOps/Systems Engineer) with AWS experience.

Responsibilities:
- First point of contact for all internally managed server instances.
- Daily review and maintenance of the AWS estate through monitoring tools.
- Ensuring business continuity and the availability of services running on AWS, including webservers, IaaS and SaaS solutions.
- Performing discovery tasks and driving change and innovation across infrastructure platforms for continuous improvement, with a preference for cloud-native platforms.
- Providing support and best-practice guidance to the development team, from non-production environments through to production, in line with business objectives.
- Exploring technical direction for IaaS and SaaS technologies to improve current processes.

Experience:
- Windows Server 2016+ administration, including AD and DNS.
- Network administration: subnetting, VLANs, WAN and VPN, on-prem and cloud.
- Webserver and IIS administration and configuration.
- Windows clustering for HA/DR.
- Hands-on administration of AWS, including EC2, IAM, EBS, RDS and VPC (a minimal sketch follows this listing).
- PowerShell for Windows.
07/05/2024
Full time
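The posting above calls for a daily review of the AWS estate and hands-on EC2/IAM/VPC administration, scripted in PowerShell. Purely as a hedged illustration of what such a scheduled check might look like (written here in Python with boto3 rather than PowerShell; the region and the Name-tag handling are assumptions of mine, not details from the posting), a minimal sketch:

```python
# Minimal sketch of a "daily AWS estate review" script, assuming boto3 is
# installed and credentials are available via the default AWS credential chain.
# The region and tag handling are illustrative placeholders, not from the posting.
import boto3

def list_instances(region="eu-west-2"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    rows = []
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                name = next((t["Value"] for t in inst.get("Tags", [])
                             if t["Key"] == "Name"), "<unnamed>")
                rows.append((inst["InstanceId"], name, inst["State"]["Name"]))
    return rows

if __name__ == "__main__":
    for instance_id, name, state in list_instances():
        # Flag anything that is not running so it can be reviewed manually.
        marker = "" if state == "running" else "  <-- check"
        print(f"{instance_id}  {name:30s}  {state}{marker}")
```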
Infrastructure Engineer - Linux, Docker, AWS, Terraform, Agile.

The Company & Opportunity: A specialist technology provider of real-time transportation solutions, built on leading-edge technology, is looking for an experienced Infrastructure Engineer to play a pivotal role in supporting its core products and associated systems. Working in a hybrid role with occasional on-site visits, you will ensure the on-prem infrastructure and the AWS cloud estate are ready and available to support the teams, digital services and customers. The company offers hybrid working with 1 day a week in their Derby office. The role is split roughly 70% on-prem (with occasional trips to client sites) and 30% cloud-based (AWS/Azure). *Candidates must live within a reasonable commuting distance of Derby.*

Core technical skills, responsibilities and attributes for the Infrastructure Engineer role:
- A minimum of 7 years' commercial experience as an Infrastructure Engineer.
- Docker/Kubernetes (containerisation), Git (or similar), AWS, Azure (DevOps pipelines), Terraform (see the sketch after this listing).
- Linux support/administration (strong understanding of the Linux ecosystem).
- Proven experience as an Infrastructure Engineer working with development/QA teams, with a strong understanding of the development life cycle (Sprints/Scrum and/or Agile).
- Commercial experience supporting on-prem applications/tools.
- MUST HAVE a strong infrastructure engineering background, encompassing server configurations, cabinets, network components, data centers, switches and firewalls.

The company offers a hybrid working environment (working from home with 1 day per week in the Derby office), a base salary of £55-60K depending on experience, and a fantastic benefits package. Please apply now for a comprehensive specification of the position: Infrastructure Engineer - Linux, Docker, AWS, Terraform, Agile.
07/05/2024
Full time
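Docker and AWS sit at the core of the Derby role above. A rough sketch of the kind of host check an infrastructure engineer might automate, assuming the docker Python SDK (docker-py) is installed and the local Docker socket is reachable; nothing here reflects the client's actual environment:

```python
# Sketch: report the status of containers on a Docker host, assuming the
# `docker` Python SDK is installed and DOCKER_HOST / the local socket is usable.
import docker

def report_containers():
    client = docker.from_env()          # connect via environment / default socket
    for container in client.containers.list(all=True):
        # status is e.g. "running", "exited", "restarting"
        print(f"{container.name:30s} {container.status:12s} {container.short_id}")

if __name__ == "__main__":
    report_containers()
```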
Job Description

Role Overview: Shift patterns: 3 days 8am-4pm, 2 days 12pm-8pm. We're seeking a Senior Azure DevOps Engineer to join our dynamic team. This role requires a blend of technical expertise, leadership and a passion for cloud technologies. You'll be instrumental in automating, optimizing and securing our Azure environments, driving efficiency and innovation across our projects.

Responsibilities:
- Lead Azure infrastructure projects, ensuring best practices in CI/CD pipelines, monitoring and security.
- Collaborate with development teams to implement scalable and secure cloud solutions.
- Drive the adoption of Infrastructure as Code (IaC) within Azure, enhancing automation and consistency.
- Mentor junior DevOps team members, sharing knowledge and fostering a culture of continuous improvement.
- Stay abreast of the latest Azure services and features, evaluating their potential impact on our projects.
- Ensure system reliability and performance, troubleshooting and resolving issues proactively (a small inventory-check sketch follows this listing).

Qualifications:
- Proven track record as an Azure DevOps Engineer, with extensive experience in Azure services.
- Strong background in CI/CD tooling, containerization (Docker, Kubernetes) and IaC (Terraform, ARM templates).
- Azure certifications (e.g. Azure DevOps Engineer Expert, Azure Solutions Architect Expert) highly preferred.
- Excellent problem-solving skills, with the ability to lead projects and teams effectively.
- Familiarity with Agile methodologies and a commitment to best practices in DevOps.
- Outstanding communication skills, capable of working collaboratively across multidisciplinary teams.

What We Offer:
- A key role in a consultancy at the forefront of public sector digital innovation.
- Opportunities for professional growth, with access to training and certifications.
- A competitive salary and benefits package, reflecting our commitment to our team's well-being and development.
- A collaborative, inclusive work environment where your ideas and contributions are valued.

Application Process: Interested in shaping the future of digital public services? Apply by sending your CV and a cover letter detailing your Azure DevOps experience and what motivates you to join Scrumconnect Consulting. At Scrumconnect Consulting, we embrace diversity and encourage applications from all qualified candidates, regardless of background. Join us, and let's innovate together.
07/05/2024
Full time
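For the Azure-focused role above, a minimal, hedged sketch of programmatic access to a subscription, assuming the azure-identity and azure-mgmt-compute packages and an AZURE_SUBSCRIPTION_ID environment variable, none of which are specified by the posting:

```python
# Sketch: enumerate VMs in a subscription, assuming the azure-identity and
# azure-mgmt-compute packages are installed and AZURE_SUBSCRIPTION_ID is set.
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

def list_vms():
    credential = DefaultAzureCredential()                   # CLI / managed identity / env vars
    subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]   # placeholder, set by the operator
    compute = ComputeManagementClient(credential, subscription_id)
    for vm in compute.virtual_machines.list_all():
        print(vm.name, vm.location)

if __name__ == "__main__":
    list_vms()
```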
Senior DevOps Engineer - Cloud - Permanent - Poland

Robson Bale are looking for a Senior DevOps Engineer to come on board for a permanent opportunity in Poland. The role can be fully remote from Poland. Permanent, excellent salary.

Responsibilities - must-have technical skills:

Leadership:
- Lead and manage DevOps/infrastructure projects, overseeing the entire development life cycle.
- Collaborate with cross-functional teams to align project objectives and deliverables.
- Ensure adherence to timelines, budgets and quality standards.
- Mentor and guide team members and interns, fostering a culture of continuous learning.

Security and Compliance:
- Demonstrate a deep understanding of Standard Operating Procedures (SOPs) for security practices.
- Perform threat modelling and implement encryption, network defense and web security measures.
- Champion security best practices in a production environment and address cloud security risks.
- Integrate identity providers such as OAuth, OIDC and SAML to enhance security.

DevOps/Infrastructure and Cloud Expertise:
- Drive change, release and incident management processes to maintain a stable environment.
- Utilize extensive DevOps experience to optimize performance, conduct application upgrades and apply patches.
- Lead continuous integration and deployment efforts using tools like Jenkins and Ansible.
- Demonstrate proficiency in coding and automation to streamline operations.
- Good hands-on knowledge of the AWS/Azure/GCP cloud service providers.

Cloud Infrastructure Management:
- Exhibit strong expertise in AWS/Azure/GCP/OCI cloud services and maintain infrastructure as code (IaC) using Ansible, Terraform or CloudFormation.
- Oversee containerization technologies like Docker and Kubernetes to enhance scalability and efficiency (see the sketch after this listing).
- Manage Linux-based systems and network configurations to ensure smooth operations.

Security and Access Management:
- Demonstrate a solid grasp of identity and access management (IAM) principles.
- Manage Security Groups (SGs), firewall services and secrets effectively.
- Optimize service costs based on resource utilization and scale.

Monitoring and Reliability:
- Ensure ongoing, reliable monitoring of the infrastructure to promptly address issues.
- Implement performance tuning and optimization strategies to maintain high availability.

Technical Requirements:
- Proficient in Python/Java/Bash scripting for automation and tooling.
- Expertise in AWS/Azure/GCP/OCI cloud services such as Azure Kubernetes Service, Elastic Kubernetes Service or Google Kubernetes Engine.
- Extensive experience with CI/CD pipelines, particularly using Jenkins.
- Strong familiarity with Docker and Kubernetes for container orchestration.
- In-depth understanding of networking principles.

Good-to-Have Skills:
- Experience in crafting intuitive and engaging user interfaces (UI) for web applications, mobile apps or other AI-powered interfaces.
- Experience with design thinking methodologies.
- Understanding of data visualization and information architecture.
- Ability to write clear documentation.
- Experience with voice user interfaces (VUIs).
- Knowledge of animation and micro-interactions for enhancing user experience.
- Experience with design systems and component libraries.

Process Skills:
- General SDLC processes.
- Understanding of Agile and Scrum software development methodologies.
- Attention to detail and commitment to quality.

Behavioral Skills:
- Work closely with designers, product managers, developers and data scientists to deliver comprehensive solutions.
- Communicate effectively and share knowledge with the team.
- Be open to feedback and continuously learn and adapt to new technologies.
- Ability to work independently and as part of a team.
- Ability to work effectively under pressure and meet deadlines.
- Passion for learning and staying updated on the latest technologies.
- A good attitude and a quick learner.

Certifications (good to have; one or more from any cloud service provider preferred):
- AWS associate certification (e.g. AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
- Certified Kubernetes Administrator (CKA).
- Certified Docker Captain.
- Azure certifications (e.g. Azure Fundamentals, Azure Administrator Associate, DevOps Engineer Expert, Azure Security Engineer Associate).
- GCP certifications (e.g. Cloud DevOps Engineer, Cloud Network Engineer, Google Workspace Administrator).
- Networking-related certification.

The role can be fully remote from Poland. Permanent, excellent salary. Senior DevOps Engineer - Cloud - Permanent - Poland.
07/05/2024
Full time
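The posting above leans heavily on Kubernetes operations and reliable monitoring. A small illustrative sketch only, assuming the official kubernetes Python client is installed and a kubeconfig is available; the health criterion is a simplification of mine:

```python
# Sketch: list pods that are not in the Running/Succeeded phase, assuming the
# `kubernetes` Python client is installed and ~/.kube/config points at a cluster.
from kubernetes import client, config

def unhealthy_pods():
    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    unhealthy_pods()
```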
Context: The client is on site with GDS setting up an organisational unit. This will include security control policies, AWS configuration and Security Hub, and the work will be very governance/control orientated. They will then be migrating GDS' AWS accounts across to this new organisational unit.

The role is as follows:
- Inside IR35, £550
- Remote day to day, twice a month in the office (Whitehall or Whitechapel)
- Start 15.05.24, end 15.11.24

Essential skills:
- AWS
- Consultancy skills - can you push back on clients/colleagues in a constructive manner?
- Terraform
- Security Cleared

Plus a combination of the below (see the sketch after this listing):
- AWS Control Tower
- Organisational Units
- Splunk
- Ruby, Python, Golang, JavaScript

Desirable: Public Sector Consultancy
07/05/2024
Project-based
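The engagement above is about building an organisational unit and migrating AWS accounts into it, with Terraform named as the delivery tool. Purely to illustrate the AWS Organizations API surface involved (in Python/boto3 rather than Terraform, and with a made-up OU name), a minimal sketch:

```python
# Sketch: inspect the organization and create an OU under the root, assuming
# boto3 is installed and the credentials belong to the management account.
# The OU name below is a hypothetical placeholder chosen for illustration.
import boto3

org = boto3.client("organizations")

def ensure_ou(name="gds-workloads"):   # hypothetical OU name
    root_id = org.list_roots()["Roots"][0]["Id"]
    existing = org.list_organizational_units_for_parent(ParentId=root_id)
    for ou in existing["OrganizationalUnits"]:
        if ou["Name"] == name:
            return ou["Id"]            # already present, nothing to create
    created = org.create_organizational_unit(ParentId=root_id, Name=name)
    return created["OrganizationalUnit"]["Id"]

if __name__ == "__main__":
    print("Organizational unit id:", ensure_ou())
```

In practice this would be expressed as Terraform resources and applied through the pipeline rather than run ad hoc.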
Oracle PL/SQL Dev. Engineer - Long-term - Amsterdam, Hybrid

Levy Professionals is currently looking for a Dev. Engineer with strong experience in Oracle database development for the financial reporting teams at one of the largest financial institutions in the Netherlands. You will ensure that reporting is complete, accurate and timely, and that the company complies globally with the relevant regulations.

Responsibilities - as an Oracle Dev. Engineer, these will be your key responsibilities:
- Design and develop new procedures and functions in the company's RDBMS
- Solve and optimize complex SQL queries to provide data for analytics and reporting
- Build SQL queries to automate procedures, incorporating new frameworks and regulations
- Write PL/SQL packages, procedures, functions and triggers
- Database performance tuning and test automation (see the sketch after this listing)

Who are you?
- 5+ years of experience as a developer, with experience in the banking industry and financial markets transaction reporting
- Expertise in Oracle database development (SQL and PL/SQL)
- Strong understanding of RDBMS and CI/CD concepts
- Strong knowledge of query optimization and performance tuning
- Knowledge of Azure DevOps
- Experience with Tibco BusinessWorks/EMS is a strong advantage

About Levy Professionals: Since 2000, we have been delivering professional solutions to organizations ranging from tech start-ups to global players. From our offices in Amsterdam and London, we have built an international and local network of experienced salaried professionals, driven by our passion for connecting skills with projects. Over the years we have filled over 1,700 positions, and today we have consistently recruited and seconded 250+ professionals from 14 countries who have been deployed on a variety of projects. Our strength is the way we see and treat people, and this will always be an important factor in our strategy for the coming years.
07/05/2024
Project-based
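The Oracle role above combines PL/SQL development with test automation. A hedged sketch of how a test harness might exercise a stored procedure, assuming the python-oracledb driver; the connection details and the rpt_pkg.load_daily_report procedure are hypothetical placeholders, not part of the client's schema:

```python
# Sketch of a test-automation style call into a PL/SQL procedure, assuming the
# python-oracledb driver is installed. The connection details and the
# rpt_pkg.load_daily_report procedure are hypothetical placeholders.
import oracledb

def run_daily_load(report_date: str) -> int:
    conn = oracledb.connect(user="report_user", password="***",
                            dsn="dbhost/ORCLPDB1")           # placeholders
    try:
        with conn.cursor() as cur:
            # OUT bind variable capturing how many rows the procedure loaded.
            rows_loaded = cur.var(int)
            cur.callproc("rpt_pkg.load_daily_report", [report_date, rows_loaded])
            conn.commit()
            return rows_loaded.getvalue()
    finally:
        conn.close()

if __name__ == "__main__":
    print("rows loaded:", run_daily_load("2024-05-07"))
```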
RED Global is currently looking for an expert in application management and DevOps engineering.

Start: immediately
Duration: initial engagement until 31.03.2025, with an option to extend
Workload: full time
Location: remote, Vienna
Role: Operations Manager/Webserver Tomcat & Apache
Project language: German
Volume: 1,600 hours; largely remote work is possible, but on-site presence is required on individual days every two weeks for meetings.

For a public sector client, an application manager/DevOps engineer is sought to support the current two-person team with an additional full-time role. The following core skills are required to support the team:
- Linux/systemd (command-line administration; see the sketch after this listing)
- Apache and Tomcat servers (configuration and operation)
- OpenLDAP (configuration, operation, DB manipulation)

Desirable, but by arrangement definitely not a prerequisite:
- Experience in public administration environments
- Certificate handling (SSL and client certificates, OpenSSL)
- SAML and OpenID Connect protocols
- Modules for Online Applications (MOA) as the basis for Handysignatur and ID-Austria

The work covers both technical and administrative operations, as well as the ongoing expansion of central portal infrastructures. If this project appeals to you, I would be very pleased to hear from you. Please send me your current CV, hourly rate and telephone number, and I will get back to you as soon as possible to discuss further details. Many thanks and best regards, Mike Feustel
07/05/2024
Project-based
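The Vienna role above centres on operating Apache, Tomcat and OpenLDAP on systemd-managed Linux. A minimal sketch of a service check, assuming a systemd host; the unit names are common defaults and may not match the client's configuration:

```python
# Sketch: verify that the web-tier services are active on a systemd-based host.
# The unit names (apache2, tomcat, slapd) are typical defaults, not confirmed
# details of the client's environment.
import subprocess

UNITS = ["apache2", "tomcat", "slapd"]   # Apache httpd, Tomcat, OpenLDAP

def check_units(units=UNITS):
    for unit in units:
        result = subprocess.run(["systemctl", "is-active", unit],
                                capture_output=True, text=True)
        state = result.stdout.strip() or "unknown"
        print(f"{unit:10s} {state}")

if __name__ == "__main__":
    check_units()
```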
Rust Programmer - Remote - 7-8 months+ (Rust, AWS, Lambda, Jenkins, Linux)

One of our blue-chip clients is urgently looking for a Rust Programmer. For this role you can work remotely. Please find some details below:

We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance and support of production systems. Experience in database management, website authentication, HTTPS certificates and adherence to best practices for data archiving is highly desirable.

Key Responsibilities:
1. Collaborate in developing, improving and maintaining high-performance Rust applications for large-scale image data processing and automation.
2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
3. Manage databases used in production systems, ensuring data integrity, performance and security.
4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
7. Deploy, configure and manage production systems on AWS, with a focus on AWS EMR for big data processing.
8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
9. Ensure observability of systems through proper logging, monitoring and alerting mechanisms.
10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations and permissions (a sketch follows this listing).
11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and shell scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
- Strong communication skills and ability to document technical solutions effectively.

Preferred Qualifications:
- Certification in AWS or relevant cloud computing technologies.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of DevOps practices and infrastructure-as-code tools like Terraform.
- Understanding of cybersecurity principles and best practices for securing web applications.

Please send your CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
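The posting above asks for the pipeline itself to be written in Rust. Purely as a language-neutral illustration of the S3-to-Lambda flow it describes, here is a Python/boto3 sketch in which the bucket, prefix and function name are invented placeholders:

```python
# Sketch: feed S3 image keys to a processing Lambda, assuming boto3 is installed.
# The bucket, prefix and function name are hypothetical placeholders (the posting
# itself asks for this kind of pipeline to be written in Rust).
import json
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

def process_prefix(bucket="image-archive-example", prefix="incoming/",
                   function_name="image-processor-example"):
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            payload = {"bucket": bucket, "key": obj["Key"]}
            # Asynchronous invocation; the Lambda does the actual image work.
            lam.invoke(FunctionName=function_name,
                       InvocationType="Event",
                       Payload=json.dumps(payload).encode())
            print("queued", obj["Key"])

if __name__ == "__main__":
    process_prefix()
```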
F5 WAF Engineer

Whitehall Resources are looking for an F5 WAF Engineer. This is an initial 6-month contract, working onsite 2 days per week in Sheffield. *Inside IR35 - you will be required to use an FCSA-accredited umbrella company.*

Job Description: As an Automation Engineer, you will play a pivotal role in enhancing our IT infrastructure by designing, creating and maintaining bespoke Continuous Integration/Continuous Deployment (CI/CD) pipelines tailored to specific project needs. The role will initially focus on leveraging F5 technologies alongside a broad spectrum of automation and DevOps practices to deliver our automation use cases; once the F5 automation work is complete, the work will progress to other WAF platforms and use cases. You will be responsible for integrating CI/CD pipelines with solutions developed by other teams, scripting, and creating Infrastructure as Code (IaC) manifests using tools like Terraform and Ansible. Your expertise in Jenkins, JIRA, GitHub, Python and other relevant technologies will be essential. You should have a solid background in building CI/CD pipelines and a comprehensive understanding of DevOps practices. The ideal candidate should not only have technical proficiency in data structures, automation technologies, API interactions and cloud services, but also exhibit a strong drive to research, investigate and collaborate effectively within the organization.

Key Responsibilities:
- Developing and delivering automation for the F5 WAF platform: in the first instance, developing and delivering automation solutions specifically for our F5 Web Application Firewall (WAF) platform, aligned with our specific use cases. This involves scripting, configuring and deploying automation workflows that enhance the security, manageability and operational efficiency of the F5 WAF environment.
- CI/CD pipeline development: create, enhance and implement new, customized CI/CD pipelines tailored for specific project use cases, ensuring efficient, automated workflows.
- Pipeline maintenance: regularly update and maintain existing CI/CD pipelines to ensure they are efficient, secure and up to date with the latest technology standards.
- Integration of solutions: work collaboratively with other teams to integrate their solutions and tools into the CI/CD pipelines effectively, enhancing overall workflow and productivity.
- IaC manifest creation: develop and maintain Infrastructure as Code (IaC) manifests, predominantly using Terraform, to manage and provision IT infrastructure in a consistent and repeatable manner.
- Tool proficiency: utilize and demonstrate expertise in tools such as Jenkins, JIRA, GitHub and Python, effectively integrating them into the CI/CD processes.
- Script writing: write and maintain scripts to automate various aspects of the infrastructure and deployment processes, improving efficiency and reducing the potential for human error.
- Collaboration and communication: collaborate with cross-functional teams, including software development, operations and quality assurance, to ensure seamless integration and implementation of DevOps practices.
- Proactive research and collaboration: eager to research and utilize company resources like Confluence, find relevant contacts, and reach out to other teams for unknowns; prepared to independently investigate and resolve challenges.

Required F5 experience - one or more of these:
- F5 ASM/AWAF knowledge and experience: understanding and practical experience with F5's Application Security Manager (ASM) and Advanced WAF (AWAF), including configuration, management and troubleshooting of application security policies and web application firewalls.
- F5 with API gateways: experience integrating F5 solutions with API gateway technologies, demonstrating the ability to secure and manage APIs effectively; experience using F5 with Kong API Gateway, managing and optimizing API traffic through F5 systems.
- F5 GTM and proxy technologies: knowledge and experience with F5's Global Traffic Manager (GTM) as well as proxy technologies, including forward and reverse proxies.
- Basic certificate management: knowledge of SSL/TLS certificate management processes, including issuance, renewal and deployment, within F5 environments.
- F5 AS3: experience with AS3 (Application Services 3 Extension) for declarative automation and orchestration of F5 BIG-IP services; proficiency in automating the deployment and management of F5 configurations using AS3 (see the sketch after this listing).

Key Experience - Ideal Candidate Profile:
- Technical expertise in CI/CD tools: proficiency in Continuous Integration and Continuous Deployment tools such as Jenkins, CircleCI, Travis CI, GitLab CI and Bamboo; ability to configure, manage and optimize these tools for various project requirements.
- Proficiency in scripting languages: strong skills in scripting languages such as Python, Bash and PowerShell; ability to write and maintain scripts to automate routine tasks and deployments.
- Infrastructure as Code (IaC): extensive experience in creating and managing infrastructure using code; proficiency in IaC tools like Terraform, Ansible, Chef or Puppet.
- Data structuring and management: advanced skills in managing data using formats like JSON, YAML and XML; capable of parsing, creating and maintaining complex data structures for configuration and automation purposes.
- API integration and management: expertise in querying, integrating and managing APIs; capable of constructing and executing API calls for data retrieval, updates and inter-service communication.
- Version control systems: in-depth knowledge of version control systems like Git, including branching strategies, repository management and integration with CI/CD pipelines.
- Containerization and orchestration: experience with containerization tools such as Docker and orchestration platforms like Kubernetes or Docker Swarm; understanding of containerized environments and their integration into CI/CD pipelines.
- Cloud platforms: familiarity with major cloud platforms like AWS, Azure or GCP; understanding of cloud-specific services and how to integrate them into CI/CD processes.
- Monitoring and logging: knowledge of monitoring and logging tools such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk; ability to set up and maintain monitoring and logging for applications and infrastructure.
- Security practices in DevOps (DevSecOps): understanding of security practices in a DevOps environment; familiarity with security scanning tools, implementing secure coding practices, and ensuring compliance with industry standards.
- Agile and Scrum methodologies: experience with Agile and Scrum methodologies; ability to work in fast-paced, iterative development environments and adapt to changing requirements.
- Networking and security fundamentals: knowledge of networking concepts (e.g. TCP/IP, DNS, HTTP/S) and basic security concepts (e.g. firewalls, VPNs, IDS/IPS).
- Problem-solving and analytical skills: strong problem-solving skills and the ability to analyze complex systems and workflows to propose effective automation solutions.
- Collaboration and communication: excellent collaboration and communication skills; ability to work effectively in a team and communicate complex technical concepts to both technical and non-technical stakeholders.
- Project management skills: basic project management skills with the ability to manage timelines, dependencies and deliverables in a cross-functional environment.
- Research and investigative skills: motivated to self-educate and explore company resources and external knowledge bases.

All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who value a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
07/05/2024
Project-based
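The F5 role above highlights AS3 for declarative automation of BIG-IP services. Below is a hedged sketch of pushing an AS3 declaration over the iControl REST interface; the device address, credentials, tenant and pool members are placeholders, and the declaration is a deliberately small example rather than a production policy:

```python
# Sketch: push an AS3 declaration to a BIG-IP, assuming the AS3 extension is
# installed on the device and the `requests` package is available locally.
# Host, credentials, tenant and pool members are hypothetical placeholders;
# /mgmt/shared/appsvcs/declare is the endpoint documented for AS3.
import requests

BIGIP = "https://192.0.2.10"             # placeholder management address
AUTH = ("admin", "***")                   # placeholder credentials

declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "ExampleTenant": {                # hypothetical tenant
            "class": "Tenant",
            "app": {
                "class": "Application",
                "service": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.0.0.10"],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [{"servicePort": 80,
                                 "serverAddresses": ["10.0.1.10", "10.0.1.11"]}]
                }
            }
        }
    }
}

resp = requests.post(f"{BIGIP}/mgmt/shared/appsvcs/declare",
                     json=declaration, auth=AUTH, verify=False)  # lab only: no TLS verify
resp.raise_for_status()
print(resp.json().get("results"))
```

In practice such a declaration would live in version control and be applied from the CI/CD pipeline the posting describes.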
Role: DevOps Engineer Salary: Up to £50,000 per annum dependent on experience Location: Hybrid/Romsey SC clearance is required for this role We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will oversee code releases and deployments and support operational systems. Skills and experience: Active SC clearance Experience with cloud technologies, e.g. AWS or Azure Programming language experience, e.g. Java, Python, Node.js or SQL Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below) CBSbutler is acting as an employment agency for this role.
07/05/2024
Full time
Role: DevOps Engineer Salary: Up to £50,000 per annum dependent on experience Location: Hybrid/Woking SC clearance is required for this role We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will oversee code releases and deployments and support operational systems. Skills and experience: Active SC clearance Experience with cloud technologies, e.g. AWS or Azure Programming language experience, e.g. Java, Python, Node.js or SQL Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below) CBSbutler is acting as an employment agency for this role.
07/05/2024
Full time
Your opportunity To work on our mission to empower every person and every business unit in the group to achieve more thanks to the Microsoft Power Platform To support everyone to build great solutions in Microsoft PowerApps, Power Automate and PowerBI with a high business value To work with internal Zurich teams and external IT suppliers on a variety of initiatives and global projects Join the experienced Power Platform Center for Enablement of one of the biggest Power Platform consumers in the world As a Power Platform Solution Architect your main responsibilities will involve: Empowerment Program Identification of teams and individuals interested in learning more about the Power Platform Delivery of tailored Power Platform training internally to empower our collaborators to deliver better value to internal and external customers Reusability Identification of successful solutions built internally which could be reused across the organization to further increase the related ROI Implementation of improvements on such solutions to support scale and roll-out to a wider population Power Pages Governance Assessment of Power Pages technology, definition and implementation of a suitable Governance Strategy for the organization Identification of a leading use case to implement and showcase the product Mentors lower-level colleagues Works in Agile methodology (Scrum, Kanban) using Azure DevOps. Your Experience As a Microsoft 365 Solution Architect your skills and qualifications will ideally include: Deep knowledge of Power Platform technologies with experience in 3 or more of the following: SharePoint Online, Microsoft Teams, Dynamics 365, Power BI, Power Apps, Power Automate, Dataverse, Power Pages Preferably some experience in IT Governance Preferably a Software Engineering degree - Informatics and Computer Engineering Good negotiating skills, performance management, good practice and techniques, as well as fluent written and spoken English A very good team player who is skilled at building and managing stakeholder relationships successfully Ideally you already hold Power Platform certifications Your Technical Skills Power Platform products (PowerApps, Power Automate, AI Builder etc.) Microsoft Office 365 (SharePoint Online, MS Teams, MS Forms, Outlook etc.) Azure Cloud Services Job Title: Microsoft Power Platform Solution Architect Location: Zürich, Switzerland Job Type: Contract TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA.
If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. We comply with our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield.
07/05/2024
Project-based
Python Programmer - Brussels - English speaking (ML, Machine Learning, Data, Data Wrangling, AWS, Linux, Kubernetes, Argo, Automation) One of our Blue Chip Clients is urgently looking for a Python Programmer. Please find some details below: We are seeking a highly skilled Senior Python Programmer with expertise in machine learning (ML) data wrangling, interfacing, and automation. The ideal candidate will be proficient in building robust data pipelines and automating complex tasks to support ML initiatives. They will have a keen understanding of observability principles and possess hands-on experience with AWS, Linux, and preferably Kubernetes and Argo. Responsibilities: - Develop and maintain robust data pipelines for ML data wrangling, interfacing, and automation. - Implement automation solutions to streamline data processing and model deployment workflows. - Ensure observability and monitoring of systems, providing insights into performance and reliability. - Utilize AWS services such as S3, Lambda, and networking components for data storage, processing, and permissions management. - Collaborate with DevOps teams to deploy and manage applications in Linux environments. - Support Kubernetes and Argo workflows for scalable and efficient ML model training and deployment. - Manage AWS permissions and network configurations to ensure data security and compliance. - Maintain version control of the codebase using Git and enforce best practices for code documentation and production readiness. - Collaborate with data scientists to develop small UI tools for querying data from databases and AWS S3. Requirements: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - Proficiency in the Python programming language with a focus on ML data wrangling and automation. - Strong experience with AWS services, including S3, Lambda, networking, and permissions management. - Hands-on experience with Linux environments and Shell Scripting. - Familiarity with Kubernetes and Argo for container orchestration and workflow management (preferred). - Knowledge of Git for version control and collaboration. - Excellent communication skills and ability to work in a collaborative team environment. - Strong problem-solving skills and attention to detail. - Ability to prioritize tasks and work efficiently in a fast-paced environment. Please send a CV for full details and immediate interviews. We are a preferred supplier to the client.
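As a hedged illustration of the S3-backed data wrangling this listing describes (not taken from the advert itself), the sketch below uses boto3 and pandas to pull a batch of CSV objects from a bucket prefix into a single DataFrame. The bucket name, prefix, and file format are assumptions made purely for the example.

```python
# Illustrative sketch: read CSV objects under an S3 prefix into one DataFrame.
# Bucket, prefix, and schema are placeholder assumptions.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")
BUCKET = "example-ml-data"          # hypothetical bucket
PREFIX = "raw/events/2024/05/"      # hypothetical prefix

frames = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if not obj["Key"].endswith(".csv"):
            continue
        body = s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read()
        frames.append(pd.read_csv(io.BytesIO(body)))

events = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
print(f"Loaded {len(events)} rows from {len(frames)} objects")
```

A pipeline along these lines would typically run inside the Kubernetes/Argo workflows mentioned in the advert, with IAM permissions scoped to the specific bucket and prefix.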
07/05/2024
Project-based
Rust Programmer - Brussels - English speaking (Rust, AWS, Lambda, Jenkins, Linux) One of our Blue Chip Clients is urgently looking for a Rust Programmer. Please find some details below: We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable. Key Responsibilities: 1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation. 2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs. 3. Manage databases used in production systems, ensuring data integrity, performance, and security. 4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication. 5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows. 6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js. 7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing. 8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment. 9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms. 10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions. 11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members. 12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - Extensive experience in the Rust programming language, with a focus on large-scale data processing applications. - Proficiency in machine learning techniques and GPU acceleration for image processing tasks. - Strong background in Linux environments and Shell Scripting. - Solid understanding of web development principles, with hands-on experience in React.js. - Experience with code deployment tools such as Jenkins and version control systems like Git. - In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management. - Familiarity with observability tools for monitoring and logging production systems. - Experience with database management systems and website authentication mechanisms. - Excellent problem-solving skills and ability to work effectively in a collaborative team environment. - Strong communication skills and ability to document technical solutions effectively.
Preferred Qualifications: - Certification in AWS or relevant cloud computing technologies. - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes. - Knowledge of DevOps practices and infrastructure as code tools like Terraform. - Understanding of cybersecurity principles and best practices for securing web applications. Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise-scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £125-150k + 15% guaranteed bonus + 10% pension
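As a hedged illustration of the kind of resiliency check such an SRE role might involve (not drawn from the advert), the snippet below uses the official Kubernetes Python client to flag pods whose containers have restarted repeatedly across the cluster. The local kubeconfig assumption and the restart threshold are placeholders for the example.

```python
# Illustrative sketch: report pods whose containers have restarted repeatedly.
# Assumes a kubeconfig is available locally; the threshold is arbitrary.
from kubernetes import client, config

RESTART_THRESHOLD = 5  # hypothetical alerting threshold

config.load_kube_config()            # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for status in (pod.status.container_statuses or []):
        if status.restart_count >= RESTART_THRESHOLD:
            print(
                f"{pod.metadata.namespace}/{pod.metadata.name} "
                f"container={status.name} restarts={status.restart_count}"
            )
```

In practice this sort of signal would more likely come from Prometheus/Grafana alerting than an ad-hoc script, but it shows the level of cluster-level scripting the role appears to expect.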
07/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise-scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £75-100k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise-scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £100-125k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise-scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £100-125k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise-scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Grafana, Prometheus Role: Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London. You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint, and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script - Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. Salary: £75-100k + 15% guaranteed bonus + 10% pension
07/05/2024
Full time