RedHat Linux Engineer - 12-Month Contract - Hybrid/Brussels

Hamilton Barnes is currently on the lookout for a RedHat Linux Engineer. The successful candidate will be responsible for a variety of systems operations activities, including the setup, installation, tuning, troubleshooting, monitoring, and maintenance of approximately 600 Linux servers. This role requires installing, configuring, and hardening new RedHat Linux servers, both physical and virtual.

Key Responsibilities:
- Setup, installation, tuning, troubleshooting, monitoring, and maintenance of Linux servers
- Management of firmware patches
- Installing, configuring, and hardening new RedHat Linux servers (physical and virtual)
- Automation of tasks using Ansible playbooks, shell scripting, and Python
- Use of Red Hat Satellite for patch management

Skills/Requirements:
- Proficiency in Red Hat Linux versions 6.x, 7.x, and 8.x
- Extensive experience with Ansible and automation tools
- Strong scripting skills in shell, Ansible playbooks, and Python
- Hands-on experience with Red Hat Satellite for patch management
- Knowledge of DevOps practices and Kubernetes/container technology (e.g., Docker)
- Understanding of Infrastructure as Code (IaC) principles
- Familiarity with cloud (AWS, Azure) and hybrid cloud environments

Contract Details:
- Duration: 12 months (view to extension)
- Day Rate: €460 per day
- Location: Occasional weekly onsite requirements in Belgium
- Start Date: ASAP
17/05/2024
Project-based
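The core task in the posting above, patching and maintaining roughly 600 servers, is usually automated in batches so a bad firmware or OS patch never reaches the whole estate at once. As a hedged illustration only (the wave size and host names are hypothetical, not taken from the posting), a minimal Python sketch of splitting an inventory into patch waves:

```python
# Minimal sketch: split a server inventory into fixed-size patch waves,
# so a faulty patch is caught early instead of hitting every host.
# Host names and the wave size of 50 are illustrative assumptions.

def patch_waves(hosts, wave_size):
    """Yield successive batches of hosts to patch."""
    for i in range(0, len(hosts), wave_size):
        yield hosts[i:i + wave_size]

inventory = [f"rhel-{n:03d}.example.internal" for n in range(600)]
waves = list(patch_waves(inventory, wave_size=50))

print(len(waves))    # 12 waves of 50 servers each
print(waves[0][0])   # rhel-000.example.internal
```

In practice each wave would feed an Ansible playbook run (e.g. via a `--limit` host pattern), with health checks gating progression to the next wave.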
Your new company and role

Hays' client is a public sector organisation looking for a database engineer to join their database development team. The purpose of this role is to accelerate the client's migration from on-prem databases to new AWS-native solutions. Main outcomes and objectives include:
- Major version upgrade of the current on-prem MongoDB estate.
- Major version upgrades to multiple on-prem Postgres databases.
- Support development teams with on-prem migrations to AWS RDS.
- Help migrate the current on-prem Grafana instance to containers deployed on OpenShift.
- Support the database team with ongoing BAU tasks such as upgrading, patching, automation, and monitoring improvements.
- Help fix the support requests raised by stakeholders.

What you'll need to succeed

Significant commercial experience with the following technology:
- MongoDB
- Postgres
- AWS IAM, S3, EC2, RDS
- Ansible
- TypeScript CDK and AWS development tools, including CloudFormation
- SQL
- Monitoring solutions (e.g., CloudWatch, Grafana)

What you need to do now

If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found on our website.
17/05/2024
Project-based
Cloud Operations Engineer

Looking for a DevOps engineer interested in a challenging and rewarding opportunity? My client, based in the South West, is actively hiring a mid-level DevOps professional to join their team.

The Role:
You'll play a key part in an ambitious cloud migration project, transitioning the company's existing AWS infrastructure to Azure over the next 12 months. Working closely with the lead DevOps engineer, you'll be involved in every stage of the migration process, from inception through to implementation.

Your Responsibilities:
- Collaborate with cross-functional teams to plan, design, and execute the migration strategy
- Leverage experience in AWS and Azure to ensure a smooth and efficient transition
- Automate deployment processes and implement DevOps best practices
- Contribute to the continuous improvement of the company's cloud infrastructure

What You'll Bring:
- Proven experience in DevOps principles and methodologies
- Strong proficiency in AWS and Azure cloud platforms
- Expertise in containerisation technologies (Docker, Kubernetes)
- Familiarity with configuration management tools (Ansible, Terraform)
- Excellent problem-solving and analytical skills
- A collaborative and team-oriented mindset

What We Offer:
- Competitive salary with a top banding of £70,000
- Flexible working arrangements, with occasional on-site presence required (1-2 days per month)
- Opportunity to be part of a cutting-edge cloud migration project
- Continuous professional development and growth opportunities
- Collaborative and supportive work environment

If you're excited about this challenge and have experience with this technology, I'd love to hear from you. Please apply below or email me at (see below) with your CV and we can set up a call.
17/05/2024
Full time
Senior Cloud Network Engineer

Permanent, 3 days in office in London

Overview:
The Company is a leading financial services firm. Its technology is being transformed to a Cloud-First, Cloud-Native architectural model, utilising DevSecOps processes and adopting systems-thinking concepts to enhance productivity. The Cloud Network Engineer is responsible for delivering modern end-user solutions that are fully automated through code, ensuring scalability and optimising availability and reliability 24/7.

Responsibilities:
- Engineer and secure core Azure platform services across a global footprint.
- Go deep on cloud network engineering, adopting Zero Trust Architecture principles.
- Engineer and maintain Cloud Secure Web Gateways and Next-Gen CASB solutions.
- Advance the branch/SD-WAN solution to optimise network performance and connectivity.
- Collaborate with other areas of engineering and service operations to ensure the successful integration of SSE/SASE.
- Automate every operational aspect of the infrastructure and systems life cycle.
- Respond to incidents.
- Run infrastructure with Python/PowerShell, Ansible, Terraform, Azure DevOps, CI/CD, and Kubernetes.
- Design, build, and maintain core infrastructure.
- Debug production issues.

Requirements:
- Strong experience with Windows Servers, virtualisation, and containerisation technology on Azure.
- Proficiency in object-oriented programming and developing automated solutions through code.
- Experience with configuration management systems such as Ansible.
- Passion for network security and a desire to protect organisations from cyber threats.
- Keen on open-source development.
- Collaborative and able to communicate effectively and asynchronously.
16/05/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

Prestigious Financial Company is currently seeking an AWS DevOps Software Engineer. The candidate will provide subject matter expertise for ongoing support of applications deployed to non-production AWS environments and of supporting 3rd-party applications, identifying root causes and automating solutions in support of development. The candidate will have a deep understanding of DevOps practices, leadership skills, and expertise in various tools and technologies, and will work in a fast-paced, dynamic environment using cutting-edge tools and cloud technologies, managing day-to-day activities when called upon.

Responsibilities:
- Design, develop, release, and support cloud-native applications running in containers (Kubernetes and Docker) within AWS.
- DevOps strategy: develop and implement DevOps strategies and best practices to enhance development, testing, and deployment processes.
- Apply in-depth knowledge and hands-on experience with DevOps tools and technologies, including but not limited to GitHub, Jenkins, Terraform, Ansible, Kafka, AWS, and Apigee.
- Support the lower environments for incident and problem management.
- Resolve complex support issues in non-production environments.
- Create procedural and troubleshooting documentation related to cloud-native applications.
- Write complex automation scripts using common automation tools such as YAML, JSON, Bash, Groovy, Ansible, Terraform, and Python.
- Perform other duties as assigned.

Qualifications:
- Excellent problem-solving skills.
- Ability to work independently.
- Ability to work with management to prioritize tasks.
- Strong confidence in abilities and knowledge.
- Ability to work well in crisis situations.
- Ability to work under minimal supervision.
- Flexibility to be on call from 5 PM to 7 AM for 3 months per year.
- Good written and oral communication skills.
Technical Skills:
- Expertise in Kubernetes and Docker, including best practices
- Expertise in cloud containerization: design, develop, and troubleshoot
- Strong programming or scripting skills in YAML, Helm charts, JSON, Bash, Groovy, Ansible, Terraform, Python, or Java
- Advanced level in networking technologies
- CI/CD tools such as Artifactory, Jenkins, Git, and SonarQube
- Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in IaC and CaC (Ansible, Terraform)
- Experience with Kafka infrastructure and processes
- Understanding of software development methodologies and Agile practices
- Excellent analytical and problem-solving skills, with the ability to troubleshoot and identify the root cause of issues
- Good verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams
- Familiarity with monitoring and logging tools such as the ELK stack and Splunk
- Familiarity with technologies used to support microservices
- Minimum 7 years' experience working in a distributed multi-platform environment
- Minimum 3 years' experience working with Kubernetes
- Minimum 3 years' experience in scripting or programming
- Bachelor's degree in a related area
- Cloud certification a plus
16/05/2024
Full time
Role: Senior Site Reliability Engineer
Contract Type: Contract
Location: Brussels (Hybrid)
Languages: English

We are currently seeking a Senior Site Reliability Engineer to work for our client in the aviation sector in Brussels.

Responsibilities:
- Be a member of a dynamic team operating and maintaining mission-critical applications.
- Work with the newest, state-of-the-art cloud-native technologies, both in the cloud and on-prem.
- Detect, identify, and analyse faults if they arise, help to fix them, and work on solutions to avoid recurrence.
- Monitor system performance and proactively identify and resolve issues.
- Conduct root cause analysis for production errors and implement preventive measures.
- Collaborate with development teams to integrate SRE best practices into the software life cycle.
- Constantly improve service availability, scalability, performance, monitoring, and overall manageability.
- Work together with security experts, architects, and developers to build and improve a sustainable technical landscape.
- Continuously research and assess new approaches for potential use, and provide recommendations and subject matter expertise regarding trends, technology, tools, and services.

Experience:
- Relevant bachelor's degree or equivalent work experience in computer science or a related field.
- Good understanding of and experience in SRE and platform engineering principles and frameworks.
- Advanced experience in automation, Infrastructure as Code, and CI/CD (Terraform, Ansible, Jenkins).
- Advanced experience in Kubernetes (OpenShift) and Linux system administration (RedHat).
- Advanced experience in operating and automating solutions.
- Good understanding of security principles.

To apply, please send your CV, as interviews will be taking place immediately.
16/05/2024
Project-based
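The availability and monitoring work an SRE role like the one above centres on is usually framed as SLOs with an error budget: the small slice of allowed downtime that fault response and root cause analysis are meant to protect. As a hedged sketch (the 99.9% target and 30-day window are illustrative assumptions, not figures from the posting), the arithmetic in minimal Python:

```python
# Minimal sketch of SLO error-budget arithmetic, a staple of SRE practice.
# The 99.9% target and 30-day window are illustrative assumptions.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (minutes) for a given SLO over the window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(round(error_budget_minutes(0.999), 1))                    # 43.2 minutes per 30 days
print(round(budget_remaining(0.999, downtime_minutes=21.6), 3)) # 0.5
```

A 99.9% monthly SLO therefore leaves about 43 minutes of tolerable downtime; teams often gate risky changes on how much of that budget remains.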
Role responsibilities:
- Interacting with project roles as required, to gain an understanding of the business environment, technical context, and organisational strategic direction.
- Advising our customer on the latest technologies and methodologies, designing and implementing innovative approaches to their problems using automation.
- Understanding security policies and implementing solutions to satisfy security requirements.
- Designing and implementing solutions which are highly available and scalable.

What you will bring to the team:
- Enthusiasm for collaboration and excellent communication skills (written and verbal).
- An interest in keeping up with emerging tools, techniques, and technologies.
- Effective time management and organisational skills.
- A flexible and Agile way of working within a fast-paced and ever-changing environment.
- Attention to detail with a pragmatic and enthusiastic attitude to work.

Desirable Skills and Technologies:
- Experience and knowledge of AWS/Azure and Azure Virtual Desktop.
- Experience with configuration management tools, e.g., Ansible (preferred), Puppet, Chef.
- Familiarity with (or the ability to learn easily) the following languages: Python, Bash scripting, React, Go.
- Experience with deploying, configuring, and managing cloud architecture and technologies in AWS environments.
- Experience with web application services such as NGINX, Apache, JBoss.
- Knowledge of OpenShift containerisation, RHEL 6/7/8, Docker, and Kubernetes.
- Experience with monitoring systems, e.g., ELK, Nagios, New Relic, Datadog, Splunk.
- Working knowledge of digital delivery processes and methodologies.
- Knowledge of the Atlassian toolset.
- Knowledge of JavaScript.
- Understanding of Front End technologies such as HTML5 and CSS3.
- Understanding of the nature of asynchronous programming, its quirks, and workarounds.
- Understanding of database schemas and query languages.
- Knowledge of infrastructure as code and CI/CD pipelines, e.g., Jenkins, Terraform, Bitbucket, Git repositories, Concourse, TeamCity.
- An understanding of how to deploy and configure AWS components to adhere to tight security requirements.
- Awareness of security identity, access management, and authentication using products such as ADFS, SSL/TLS certificates, OIDC, OAuth2, Keycloak, or Red Hat SSO.
15/05/2024
Full time
Role responsibilities: Interacting with project roles as required, to gain an understanding of the business environment, technical context, and organisational strategic direction. Advising our customer on the latest technologies and methodologies, designing and implementing innovative approaches to their problems using automation. Understanding security policies and implementing solutions to satisfy security requirements. Designing and implementing solutions which have high availability and are scalable. What you will bring to the team: Enthusiasm for collaboration and excellent communication skills (written and verbal). An interest in keeping up with emerging tools, techniques, and technologies. Effective time management and organisational skills. A flexible and Agile way of working within a fast paced and everchanging environment. Attention to detail with a pragmatic and enthusiastic attitude to work Desirable Skills and Technologies: Experience and knowledge of AWS/Azure and Azure Virtual Desktop. Experience with configuration management tools, eg, Ansible (preferred), Puppet, Chef. Familiar with (or ability to learn easily) the following languages: Python, bash Scripting, React, Go. Experience with deploying, configuring, and managing cloud architecture and technologies in AWS environments. Experience with web application services such as NGINX, Apache, JBoss. Knowledge of OpenShift Containerisation, RHEL 6,7,8, Docker and Kubernetes. Experience with monitoring systems eg, ELK, Nagios, New Relic, DataDog, Splunk etc. Working knowledge of digital delivery processes and methodologies. Knowledge of Atlassian Toolset. Knowledge of JavaScript Understanding of Front End technologies, such as HTML5, and CSS3. Understanding the nature of asynchronous programming, its quirks and workarounds. Understanding of database schemas and query languages. 
Request Technology - Craig Johnson
Chicago, Illinois
* Position is bonus eligible * Prestigious Financial Institution is currently seeking an Enterprise Monitoring Technical Lead Engineer with strong Splunk experience. The candidate will lead the investigation, planning, and implementation of the enterprise monitoring system, as well as identify areas for improvement, recommend allocation of resources, and work with solution architects to craft appropriate remediations or enhancements for these systems. Responsibilities: Translate middle and senior management strategic directives into workable technical directives Monitor project status and take remedial action on projects behind schedule and/or over budget Provide subject matter expertise for ongoing support of third-party tools like Splunk Provide expert-level technical mentoring to more junior members of the team Resolve complex support issues in non-production and production environments. Understand Cloud Native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they are monitoring Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems.
Qualifications: Expert understanding of: Systems administration and change management practices Enterprise monitoring and reporting tools Experience Scripting and/or coding against APIs In-depth knowledge of commonly used management and monitoring technology Internet/Web-based technologies ITIL best practices Experience with technologies used to support microservices Network technologies AWS log collection such as CloudTrail, CloudWatch, VPC Flow Logs Monitoring and reporting using SNMP CI/CD tools such as Artifactory, Jenkins, and Git Cloud native applications, including Terraform experience Encryption technologies (SSL/TLS, PKI infrastructure management) Security controls as applied to software technologies Bachelor's degree in a related area 10+ years of related experience 10 years' experience working in a distributed multi-platform environment. 3 years' experience working with cloud native applications 3 years' experience managing technical projects Cloud certification in AWS is a plus
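The qualifications above repeatedly mention scripting against APIs and AWS log collection such as CloudTrail. As an illustration only, here is a minimal Python sketch of pulling the alert-relevant fields out of a CloudTrail-style event record; the field names follow CloudTrail's documented record schema, but the event itself is fabricated sample data, not output from any real account.

```python
import json

def summarize_cloudtrail_event(raw: str) -> dict:
    """Extract the fields a monitoring pipeline typically alerts on."""
    event = json.loads(raw)
    return {
        "time": event.get("eventTime"),
        "name": event.get("eventName"),
        "source": event.get("eventSource"),
        "user": event.get("userIdentity", {}).get("arn"),
        "error": event.get("errorCode"),  # only present on failed calls
    }

# Fabricated sample event (CloudTrail-style field names).
sample = """{
  "eventTime": "2024-05-14T09:30:00Z",
  "eventName": "ConsoleLogin",
  "eventSource": "signin.amazonaws.com",
  "userIdentity": {"arn": "arn:aws:iam::123456789012:user/example"},
  "errorCode": "FailedAuthentication"
}"""

summary = summarize_cloudtrail_event(sample)
print(summary["name"], summary["error"])  # prints: ConsoleLogin FailedAuthentication
```

In practice a script like this would sit behind a log-forwarder or a Splunk ingestion pipeline rather than parse strings directly.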
14/05/2024
Full time
NO SPONSORSHIP Principal, Software Engineering Enterprise Cloud Monitoring - Splunk SALARY: $200k-$215k base w/up to 30% bonus LOCATION: Dallas, TX 3 days onsite, 2 days remote It is all about on-premises monitoring and cloud monitoring. The products they are looking for outside of Splunk are Datadog, Dynatrace and New Relic. Heavy cloud, AWS, EC2, automation, application performance monitoring, enterprise monitoring, EMC Patrol, Tivoli, and regulatory experience. Responsibilities Translate middle and senior management strategic directives into workable technical directives Monitor project status and take remedial action on projects behind schedule and/or over budget Provide subject matter expertise for ongoing support of third-party tools like Splunk Provide expert-level technical mentoring to more junior members of the team Resolve complex support issues in non-production and production environments. Understand Cloud Native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they are monitoring Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems Qualifications Systems administration and change management practices Enterprise monitoring and reporting tools Experience Scripting and/or coding against APIs In-depth knowledge of commonly used management and monitoring technology Internet/Web-based technologies ITIL best practices Experience with technologies used to support microservices Network technologies AWS log collection such as CloudTrail, CloudWatch, VPC Flow Logs Monitoring and reporting using SNMP CI/CD tools such as Artifactory, Jenkins, and Git Cloud native applications, including Terraform experience Encryption technologies (SSL/TLS, PKI infrastructure management) Security controls as applied to software technologies Bachelor's degree 10+ years of related experience Minimum 10 years' experience working in a distributed multi-platform environment. Minimum 3 years' experience working with cloud native applications Minimum 3 years' experience managing technical projects
14/05/2024
Full time
NO SPONSORSHIP Principal, Software Engineering Enterprise Monitoring - Splunk SALARY: $200k-$215k base w/up to 30% bonus LOCATION: Chicago, IL 3 days onsite, 2 days remote Looking for a technical team lead for the enterprise Splunk monitoring system. You will be the SME in Splunk monitoring and Cloud Native applications running on Kubernetes within AWS. Responsibilities Translate middle and senior management strategic directives into workable technical directives Monitor project status and take remedial action on projects behind schedule and/or over budget Provide subject matter expertise for ongoing support of third-party tools like Splunk Provide expert-level technical mentoring to more junior members of the team Resolve complex support issues in non-production and production environments. Understand Cloud Native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they are monitoring Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems Qualifications Systems administration and change management practices Enterprise monitoring and reporting tools Experience Scripting and/or coding against APIs In-depth knowledge of commonly used management and monitoring technology Internet/Web-based technologies ITIL best practices Experience with technologies used to support microservices Network technologies AWS log collection such as CloudTrail, CloudWatch, VPC Flow Logs Monitoring and reporting using SNMP CI/CD tools such as Artifactory, Jenkins, and Git Cloud native applications, including Terraform experience Encryption technologies (SSL/TLS, PKI infrastructure management) Security controls as applied to software technologies Bachelor's degree 10+ years of related experience Minimum 10 years' experience working in a distributed multi-platform environment. Minimum 3 years' experience working with cloud native applications Minimum 3 years' experience managing technical projects
14/05/2024
Full time
Contract - UC4 Automation Engineer Rate: Open Location: Chicago, IL Hybrid: 3 days on-site, 2 days remote Qualifications Python Scripting SDET automation testing skills/QA automation engineering Experience with Performance Engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs. Solid utility building with Python, Perl and PowerShell. Test automation using CI/CD concepts. Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API Testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP Web Services (JAX-WS), RESTful Web Services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, Shell Scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics. Software Tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and Real Time and historical monitoring tools on-prem and in the Cloud. Web Servers/App Servers/Containers experience; Database Technologies: DB2, PostgreSQL Responsibilities Performance testing with open-source tools like JMeter and Gatling. Perl Scripting, PowerShell Scripting, solid Python Scripting and Java. Setting up parallel testing environments that will be used to compare existing system business processes and data to a new cloud-based system/platform. The goal is to ensure that the new system is producing correct results and performing as expected before it can become the official system of record.
The ability to take raw data, mask it, and create algorithms and solutions that scale up the data load feeding into our new Clearing System without issues, duplicates, or any other data problems that would cause it to be rejected. Assist in the setup and maintenance of cloud-based performance and functional test environments in the Cloud (AWS), and define the steps to automate the process for continuous testing and iterations of cycles.
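The data-masking requirement above can be made concrete with a minimal Python sketch. The record shape and field names (`trade_id`, `account`) are hypothetical; the design point is that hashing sensitive values, rather than substituting random ones, keeps masked data stable across runs, so duplicate detection in the downstream clearing system still works on the masked feed.

```python
import hashlib

def mask_record(record: dict, sensitive: set) -> dict:
    """Replace sensitive field values with a stable, irreversible token."""
    masked = {}
    for key, value in record.items():
        if key in sensitive:
            # SHA-256 is deterministic, so the same input always maps
            # to the same token across runs and across test data sets.
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:12]
            masked[key] = f"MASKED-{digest}"
        else:
            masked[key] = value
    return masked

# Hypothetical clearing-system record.
trade = {"trade_id": "T-1001", "account": "ACC-99887", "amount": 250000}
masked = mask_record(trade, sensitive={"account"})
```

Because the tokens are stable, running the masker twice over the same source data yields identical output, which is what a parallel-comparison test environment needs.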
14/05/2024
Project-based
Prestigious opportunity for Senior Platform Engineers who are hands-on and possess a deep understanding of the Azure ecosystem to join delivery teams working on some of the most exciting digital programmes within the industry. Working in our hybrid model of up to 2 days a week in office/3 days WFH. Responsibilities Design, build and maintain secure cloud infrastructure using Terraform, Ansible and OWASP guidance, and release pipelines using Git, Jenkins, Azure DevOps Deploy and monitor software and configuration changes with Ansible, JFrog, AppDynamics, Azure Monitor, etc. Understand Microsoft Azure, ideally its Platform as a Service offerings (App Services, Azure SQL, Azure Search, Azure Key Vault etc.), as well as Azure DevOps Use Terraform templates and Scripting languages such as PowerShell or Python. Use Test-Driven Development and associated technologies such as NUnit, XUnit Serve as a coach and mentor to team colleagues and act as an internal consultant and advisor for other technical teams in relation to leveraging automation technologies. Comfortable working in a highly visible and business-facing role where you need to break down complex problems, facilitate capturing requirements and provide creative solutions suitable to the stakeholder's problem statement. Demonstrate a passion for cloud engineering and an eagerness to help shape an agile-minded cloud platform team ready for growth. Deliverables: Terraform provisioning templates that are supportable/maintainable. Building of the Azure Sandbox/Landing zones. Applying NSG/ASG and associated security. Providing an E2E solution for automating the provisioning of code, config, data. Providing documentation to cover the above. In return, you will be rewarded with ongoing training and career development with an excellent benefits package! What you need to do now If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept the T&C's, Privacy Policy and Disclaimers which can be found on our website.
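The "Applying NSG/ASG and associated security" deliverable above often boils down to automated guardrail checks over provisioning templates. A minimal Python sketch follows; the rule fields loosely mirror the attributes of an Azure NSG security rule (direction, access, source address prefix, destination port range) but are an illustrative shape, not the Azure SDK or Terraform schema.

```python
def find_overly_permissive(rules: list) -> list:
    """Flag inbound allow rules open to any source on any port."""
    flagged = []
    for rule in rules:
        if (rule["direction"] == "Inbound"
                and rule["access"] == "Allow"
                and rule["source_address_prefix"] in ("*", "0.0.0.0/0")
                and rule["destination_port_range"] == "*"):
            flagged.append(rule["name"])
    return flagged

# Hypothetical NSG rules as they might appear in a provisioning template.
rules = [
    {"name": "allow-https", "direction": "Inbound", "access": "Allow",
     "source_address_prefix": "*", "destination_port_range": "443"},
    {"name": "allow-all", "direction": "Inbound", "access": "Allow",
     "source_address_prefix": "*", "destination_port_range": "*"},
]
print(find_overly_permissive(rules))  # prints: ['allow-all']
```

A check like this would typically run in the release pipeline before `terraform apply`, failing the build when a wide-open rule is introduced.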
14/05/2024
Full time
Global Enterprise Partners is currently looking for an Oracle DBA for our client in the financial services industry in Utrecht, The Netherlands. As part of a DevOps team you will be supporting, designing, building, and automating the Oracle environment. Products supported within that environment include Oracle databases, Oracle Enterprise Manager, Oracle Internet Directory and Oracle Engineered Systems. ROLE You engineer the solution to be compliant with the regulatory requirements and take care of embedding the solution into the Rabobank infrastructure and code automation environment. Part of the workload will be developing the infrastructure to support our CI/CD pipeline based on products such as Git, Ansible and Python, and making it available to customers. Your day-to-day activities range from actually engineering the pipeline and coding automation scripts to engineering new or adapted solutions for our banking organization. As part of a self-steering service delivery team, you are responsible for the operation of our Oracle database infrastructure, which is mostly delivered on Oracle Exadata, with a small number on commodity hardware. Keeping our environments, standards, and documents up to date is part of the job. Who are you? You are an experienced (at least 5 years) Oracle-focused DevOps engineer with extensive security knowledge who likes to work in a DevOps team in a complex environment. You get enthusiastic when talking about (test) automation options (Ansible, Python, APIs, REST, PyTest, RSpec or others), but even simple task automation will have your full attention. You are willing to share knowledge and experience. You understand that the "Cloud" will be a very large part of any infrastructure solution. You see the big picture but like to get into the details to solve complex integration issues. You are a team player who can cooperate with everybody. You like feedback, as it is used to improve yourself and your team to become even better.
Your English needs to be at a proficient level (B2). What you bring Must have: Good knowledge of Oracle RDBMS technology stack products Good knowledge of Oracle Enterprise Manager Knowledge of Oracle Exadata and ZFS (infrastructure) Knowledge of Oracle Enterprise Linux and virtualization Knowledge of Oracle Internet Directory, LDAP Knowledge of networking Extensive experience with automation (Ansible, Azure DevOps and Python) Preferably working experience with ArcSight, Splunk and Qualys In addition to the technical skills, knowledge and experience in the following areas: Service management (ITIL) Risk, security and compliance Documentation and knowledge sharing Requirements analysis and detailed design Testing And the following competence-related qualities: Experience with DevOps and working in Scrum (sprint planning, review, daily, retro) Customer-centric Takes ownership and result-oriented Collaboration Are you interested in this opportunity and do you meet the requirements? Please contact Marco Eindhoven of Global Enterprise Partners.
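The test-automation interest above (PyTest and the like) applies naturally to compliance checking of database configuration, which this role's regulatory focus requires. A minimal sketch: the parameter names are real Oracle initialization parameters, but the baseline values are illustrative assumptions, not any specific regulatory standard.

```python
def check_db_parameters(params: dict, baseline: dict) -> list:
    """Return a list of parameters deviating from the security baseline."""
    violations = []
    for name, required in baseline.items():
        actual = params.get(name)
        if actual != required:
            violations.append(f"{name}: expected {required!r}, got {actual!r}")
    return violations

# Illustrative baseline; real values come from the bank's hardening standard.
baseline = {
    "audit_trail": "DB,EXTENDED",
    "remote_login_passwordfile": "EXCLUSIVE",
    "sec_case_sensitive_logon": "TRUE",
}
# In practice these would be read from the database; hard-coded here.
current = {
    "audit_trail": "NONE",
    "remote_login_passwordfile": "EXCLUSIVE",
    "sec_case_sensitive_logon": "TRUE",
}
violations = check_db_parameters(current, baseline)
```

Wrapped in a PyTest test (`assert not violations`), a check like this makes compliance drift fail the pipeline instead of surfacing in an audit.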
14/05/2024
Project-based
ASSOCIATE PRINCIPAL, APPIAN SOFTWARE ENGINEERING SALARY: $140k - $145k - $152k plus 15% bonus LOCATION: Chicago, IL Hybrid 3 days onsite, 2 days remote Looking for someone to design development testing and do the implementation of appian software. You will need 5 years Front End user experience, JavaScript automating workflows inside appian aws unix linux Java python node js angular 2.0 or react js and Middleware technologies. Working knowledge of devops terraform ansible Jenkins Kubernetes helm and cicd pipelines. Must have a degree and be apian certified developer required Contribute to design, technical direction and architecture including collaborating with various teams to build fit for purpose solutions. Applies expert knowledge of Java, Python, JavaScript, NodeJS, Angular 2.0 or ReactJS and middle-ware technologies in independently designing and developing key services with a focus on continuous integration and delivery Participates in code reviews, proactively identifying and mitigating potential issues and defects as well as assisting with continuous improvement Drives continuous improvement efforts by identifying and championing practical means of reducing time to market while maintaining high quality Qualifications: 5+ years of Front End, User Experience, development (required) 5+ years of experience in JavaScript skills (required) 3 + years of experience automating workflows inside Appian and in conjunction with integration to other tools (required) 3+ years of experience in React application development (required) 3+ years of hands-on HTML5/CSS3 experience (required) Experience with Java and/or Python (required) Experience with popular Javascript frameworks such as React, Node JS, Vue, Angular 2.0 (required) Experience of working with websockets, HTTP 1.1 and HTTP/2 (required) Experience with RESTful APIs and JSON RPC (required) Ability to write clean, bug-free code that is easy to understand and easily maintainable (required) Experience with BDD 
methodologies & automated acceptance testing (required) Technical Skills: 5+ years of hands-on experience in Java, including a good understanding of Java fundamentals such as the Memory Model, Runtime Environment, Concurrency and Multithreading (required) Past/current experience of 3+ years working on a large-scale cloud-native project (platform: Unix/Linux; type of systems: event-driven/transaction processing/high-performance computing) as Technical Lead. These experiences should include developing/architecting core libraries or frameworks used by the platform to support fundamental services like storage, alert notifications, security, etc. (required) Appian Process Modeling, Smart Services, Rules and Tempo event services, database, and web services (required) Experience with cloud technologies and migrations with a public cloud vendor, preferably using cloud foundational services like AWS's VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM (required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required) Experience working with various types of databases: relational, NoSQL, object-based, graph (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines (required) Familiarity with monitoring-related tools and frameworks like Splunk, Elasticsearch, Prometheus, AppDynamics (required) Education and/or Experience: BS degree in Computer Science or a similar technical field; Appian Certified Developer
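As a purely illustrative sketch of the JSON-RPC requirement in the listing above (the "workflow.start" method name and payload are hypothetical, not part of any Appian or vendor API), a minimal Python round trip for building a JSON-RPC 2.0 request and unpacking its response might look like:

```python
import json
from itertools import count

_ids = count(1)  # monotonically increasing request ids

def jsonrpc_request(method, params):
    """Build a JSON-RPC 2.0 request envelope (fields per the 2.0 spec)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

def jsonrpc_result(raw):
    """Parse a JSON-RPC 2.0 response string, raising on an error object."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(
            f"RPC error {msg['error']['code']}: {msg['error']['message']}")
    return msg["result"]

# Hypothetical round trip; the transport (HTTP POST, websocket) is omitted.
req = jsonrpc_request("workflow.start", {"case": 42})
resp = '{"jsonrpc": "2.0", "id": 1, "result": "started"}'
```

In practice the request would be POSTed over HTTP or a websocket; only the envelope format shown here is fixed by the JSON-RPC 2.0 specification.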
13/05/2024
Full time
- Head of Site Reliability/Infrastructure - Glasgow/Hybrid - Excellent Salary & Benefits Package - Immediate Start. A fantastic new opportunity to join our Glasgow-based Fintech client, specialising in managed Cloud provision. The business is entering a growth phase and is now recruiting a seasoned Head of Site Reliability with an infrastructure background, as they continue to grow their tech team from their newly opened, state-of-the-art tech hub in Glasgow. This is a key hire and the first in this space, as the business begins to build out its new Site Reliability team. The successful candidate will be responsible for building out the function, providing true leadership and co-ordination, whilst having a breadth of technical know-how. This opportunity is truly greenfield in nature and offers a blank canvas to implement plans and procedures with the aim of improving infrastructure reliability, security and functionality, with automation at the forefront. Reporting to the COO, you will be a natural leader of people and teams, with the goal of collaborating on the design, deployment, and maintenance of the global infrastructure and providing system support for the Security, Network Operations and Development teams. The role would ideally suit an experienced, automation-focused individual with comprehensive working infrastructure knowledge of Windows and Linux environments (RHEL, Ubuntu), as well as network operating systems experience. Commercial use of Infrastructure-as-Code (IaC) tooling such as Terraform and Ansible is also beneficial. Candidates who are proactive and dedicated are preferred, as this role is highly visible. You will also be a significant contributor to the team's IT success, supporting and delivering infrastructure and solutions and working directly with data centre, network, software development and project teams alike. 
Key Skills & Experience: Proven experience in a site reliability engineering, DevOps, or similar role, with multiple years in a leadership position. Extensive background in cloud computing services (AWS, Google Cloud or Azure). Container orchestration technology exposure (e.g. Kubernetes). Proficiency in automation. Knowledge of Scripting languages (Python, Shell or Go). Knowledge of Cyber Security principles and best practices. Knowledge of regulatory environments and compliance standards. Exceptional problem-solving skills. Ability to work under pressure in a fast-paced environment. Excellent communication and leadership abilities. Strong track record of building and motivating high-performing teams. Bachelor's or Master's degree in Computer Science, Engineering, or a related field. The above is not exhaustive. Please forward your CV to discuss this requirement in more detail to (see below)
13/05/2024
Full time
Data DevOps Engineer - DevOps, Big Data - Permanent - Gloucestershire Location: Gloucestershire/Bristol (full-time onsite) Salary: £65 - £95K per annum, negotiable DOE Benefits: Flexible working hours, career opportunities, private medical, excellent pension, and social benefits. Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance. The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. The Candidate: We are looking for a bright, driven, customer-focussed professional to join our client's Hybrid Cloud Delivery team and work alongside Enterprise Data Engineering Consultants to accelerate and drive data engineering opportunities. This is a fantastic opportunity for a dynamic individual with big ambitions: an established technologist with both outstanding technical ability and a consultative mindset. This would suit an open-minded, personable self-starter who relishes the fluidity and collaborative nature of consultancy. The Role: This role sits on our client's Advisory and Professional Services delivery team, who provide thought leadership, industry know-how and technical excellence to consultative engagements, helping customers to reap maximum business benefit from their technical investments and leveraging best-in-class Vendor & Partner technologies to create relevant and effective business-valued technical solutions. The Data DevOps Engineer role is all about the detailed development and implementation of scalable, clustered Big Data solutions, with a specific focus on automated dynamic scaling and self-healing systems. 
Duties: Participating in the full life cycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between. Providing technical thought leadership and advisory on technologies and processes at the core of the data domain, as well as data-domain-adjacent technologies. Engaging and collaborating with both internal and external teams, as a confident participant as well as a leader. Assisting with solution improvement activities driven either by the project or the service. Essential Requirements: Excellent knowledge of Linux operating system administration and implementation. Broad understanding of the containerisation domain and adjacent technologies/services, such as Docker, OpenShift, Kubernetes, etc. Infrastructure as Code and CI/CD paradigms and systems such as Ansible, Terraform, Jenkins, Bamboo, Concourse, etc. Monitoring utilising products such as Prometheus, Grafana, ELK, Filebeat, etc. Observability - SRE. Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem. Edge technologies, e.g. NGINX, HAProxy, etc. Excellent knowledge of YAML or similar languages. Desirable Requirements: JupyterHub awareness. MinIO or similar S3 storage technology. Trino/Presto. RabbitMQ or other common queue technology, e.g. ActiveMQ. NiFi. Rego. Familiarity with code development and Shell Scripting in Python, Bash, etc. To apply for this Data DevOps Engineer permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
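The "self-healing systems" focus in the listing above can be sketched, under assumptions, as a generic restart-with-backoff loop; the is_healthy/restart callables are placeholders for whatever probe and restart mechanism the platform actually provides (systemd, a container runtime, a Kubernetes operator), not a real orchestration API:

```python
import time

def supervise(is_healthy, restart, max_restarts=3, backoff=1.0):
    """Minimal self-healing loop: probe a service and restart with backoff.

    Returns the number of restarts performed before the service reported
    healthy, or max_restarts if it never recovered. The callables are
    injected so the policy can wrap any kind of service.
    """
    restarts = 0
    while restarts < max_restarts:
        if is_healthy():
            return restarts              # healthy (or recovered): stop
        restart()                        # attempt remediation
        restarts += 1
        time.sleep(backoff * restarts)   # linear backoff between attempts
    return restarts
```

Real SRE tooling would add jitter, alerting, and an escape hatch to page a human, but the shape of the loop (probe, remediate, back off) is the core idea.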
13/05/2024
Full time
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation Job Title: Cloud Technical Consultant/Architect Location: Gloucestershire/Bristol Salary: £65 - £95K per annum Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits. Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance. The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world. The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships, or for a dynamic and seasoned technologist who is looking for new and exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you. The Role: This is a brand new opportunity for a bright, driven, customer-focussed professional to join our client's Cloud Delivery team and work alongside our Enterprise Cloud specialists to drive forward the design, deployment and operations of Cloud Infrastructure, Automation and Containerisation projects for the end client. The delivery team help deliver to valued clients the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business. 
They support them to exploit maximum business benefit from Cloud solutions, leveraging best-in-class internal and Partner technologies to create relevant and engaging experiences. Duties: Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and providing high-level and low-level design documentation. Provide cloud engineering capability to leverage the Public Cloud platform using automated build processes deployed via Infrastructure as Code. Provide technical challenge and assurance throughout development and delivery of work. Develop reusable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership. Work independently and/or within a team using a DevOps way of working. Required Technical Skills & Experience: Experienced in Cloud-native technologies in AWS. Experienced in deploying IaaS/PaaS in multi-cloud environments. Experienced in Cloud and Infrastructure Engineering, building and testing new capabilities, and supporting the development of new solutions and common templates. Experienced in acting as a bridge from the infrastructure through to user-facing systems. Desirable Technical Skills & Experience: Experienced with Kubernetes containers. Experienced in the use of automation tools, e.g. Terraform, Ansible, Foreman, Puppet and Python. Experienced with different flavours of Linux platforms and services. To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. 
Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
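The Infrastructure-as-Code duties in the role above come down to idempotent reconciliation of desired versus current state. A toy Python sketch of that idea follows; the resource names and config shapes are invented for illustration, and real tooling such as Terraform does far more (dependency graphs, providers, state locking):

```python
def plan(current, desired):
    """Compute an idempotent change plan, Terraform-style.

    Compare the resources we have (current) against the resources we
    want (desired) and emit only the actions needed. Keys are resource
    names; values are their configuration dicts.
    """
    actions = []
    for name, cfg in desired.items():
        if name not in current:
            actions.append(("create", name))     # missing entirely
        elif current[name] != cfg:
            actions.append(("update", name))     # present but drifted
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))    # no longer wanted
    return actions
```

The key property is idempotency: running plan() against an already converged environment yields an empty plan, so repeated applies are safe.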
13/05/2024
Full time
* Position is bonus eligible * A prestigious financial institution is currently seeking an Enterprise Monitoring Technical Lead Engineer with strong Splunk experience. The candidate will lead the investigation, planning, and implementation of the enterprise monitoring system, as well as identify areas for improvement, recommend allocation of resources, and work with solution architects to craft appropriate remediations or enhancements for these systems. Responsibilities: Translate middle and senior management strategic directives into workable technical directives. Monitor project status and take remedial action on projects behind schedule and/or over budget. Provide subject matter expertise for ongoing support of third-party tools like Splunk. Provide expert-level technical mentoring to more junior members of the team. Resolve complex support issues in non-production and production environments. Have an understanding of cloud-native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them. Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data. Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they are monitoring. Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems. 
Qualifications: Expert understanding of: Systems administration and change management practices. Enterprise monitoring and reporting tools. Experience Scripting and/or coding against APIs. In-depth knowledge of commonly used management and monitoring technology. Internet/web-based technologies. ITIL best practices. Experience with technologies used to support microservices. Network technologies. AWS log collection such as CloudTrail, CloudWatch, VPC Flow Logs. Monitoring and reporting using SNMP. CI/CD tools such as Artifactory, Jenkins, and Git. Cloud-native applications, including Terraform experience. Encryption technologies (SSL/TLS, PKI infrastructure management). Security controls as applied to software technologies. Bachelor's degree in a related area. 10+ years of related experience. 10 years' experience working in a distributed multi-platform environment. 3 years' experience working with cloud-native applications. 3 years' experience managing technical projects. Cloud certification in AWS is a plus.
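As a hedged illustration of the "Scripting and/or coding against APIs" qualification above, the sketch below evaluates a JSON metrics payload against per-metric thresholds. The payload shape is hypothetical and not that of any specific monitoring product (Splunk, CloudWatch, etc.); real APIs have their own response schemas:

```python
import json

def evaluate_alerts(payload, thresholds):
    """Return the (metric, value) pairs that breach their thresholds.

    payload: a JSON string with a hypothetical {"datapoints": [...]}
    shape, each datapoint carrying "metric" and "value" fields.
    thresholds: dict mapping metric name to its upper limit.
    """
    breached = []
    for point in json.loads(payload)["datapoints"]:
        limit = thresholds.get(point["metric"])
        if limit is not None and point["value"] > limit:
            breached.append((point["metric"], point["value"]))
    return breached
```

A production monitoring script would fetch the payload over an authenticated HTTP call and route breaches to an alerting channel; only the threshold-evaluation core is shown here.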
23/04/2024
Full time