Request Technology - Craig Johnson
Chicago, Illinois
* Position is bonus eligible *

Prestigious Financial Institution is currently seeking an Enterprise Monitoring Technical Lead Engineer with strong Splunk experience. The candidate will lead the investigation, planning, and implementation of the enterprise monitoring system, identify areas for improvement, recommend allocation of resources, and work with solution architects to craft appropriate remediations or enhancements for these systems.

Responsibilities:
- Translate middle and senior management strategic directives into workable technical directives
- Monitor project status and take remedial action on projects behind schedule and/or over budget
- Provide subject matter expertise for ongoing support of third-party tools such as Splunk
- Provide expert-level technical mentoring to more junior members of the team
- Resolve complex support issues in non-production and production environments
- Understand Cloud Native applications running on Kubernetes within AWS and how their exposed APIs may be used to monitor them
- Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data
- Create procedural and troubleshooting documentation for enterprise monitoring systems and the applications they monitor
- Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems

Qualifications:
- Expert understanding of systems administration and change management practices
- Expert understanding of enterprise monitoring and reporting tools
- Experience scripting and/or coding against APIs
- In-depth knowledge of commonly used management and monitoring technologies
- Internet/web-based technologies
- ITIL best practices
- Experience with technologies used to support microservices
- Network technologies
- AWS log collection, such as CloudTrail, CloudWatch, and VPC Flow Logs
- Monitoring and reporting using SNMP
- CI/CD tools such as Artifactory, Jenkins, and Git
- Cloud-native applications, including Terraform experience
- Encryption technologies (SSL/TLS, PKI infrastructure management)
- Security controls as applied to software technologies
- Bachelor's degree in a related area
- 10+ years of related experience
- 10 years of experience working in a distributed multi-platform environment
- 3 years of experience working with cloud-native applications
- 3 years of experience managing technical projects
- Cloud certification in AWS is a plus
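The posting asks for experience monitoring Kubernetes-hosted applications through the APIs they expose. A common pattern is scraping a Prometheus-style /metrics endpoint; the sketch below shows a simplified parser for that exposition text (the sample metrics and label values are illustrative, and a production reader would use a full Prometheus client library rather than this minimal version):

```python
def parse_metrics(text):
    """Parse Prometheus-style exposition text into {metric_key: value}.

    Labels are kept as part of the metric key; comment lines (# HELP,
    # TYPE) are skipped. This is a simplified reader, not a full parser:
    it assumes the sample value is the last space-separated token.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue  # skip malformed samples rather than fail the scrape
    return metrics


# Illustrative scrape output, as might be returned by GET /metrics
sample = """\
# HELP http_requests_total Total HTTP requests.
# TYPE http_requests_total counter
http_requests_total{code="200"} 1024
http_requests_total{code="500"} 3
process_resident_memory_bytes 52428800
"""
parsed = parse_metrics(sample)
```

In a real pipeline the text would come from an HTTP GET against the pod's metrics port (or via the Kubernetes API server's proxy), with the parsed values forwarded to Splunk.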
07/05/2024
Full time
NO SPONSORSHIP

Principal, Software Engineering - Enterprise Monitoring (Splunk)
SALARY: $200k-$215k base, with up to 30% bonus
LOCATION: Chicago, IL (3 days onsite, 2 days remote)

Looking for a technical team lead for the enterprise Splunk monitoring system. You will be the SME in Splunk monitoring and Cloud Native applications running on Kubernetes within AWS.

Responsibilities:
- Translate middle and senior management strategic directives into workable technical directives
- Monitor project status and take remedial action on projects behind schedule and/or over budget
- Provide subject matter expertise for ongoing support of third-party tools such as Splunk
- Provide expert-level technical mentoring to more junior members of the team
- Resolve complex support issues in non-production and production environments
- Understand Cloud Native applications running on Kubernetes within AWS and how their exposed APIs may be used to monitor them
- Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data
- Create procedural and troubleshooting documentation for enterprise monitoring systems and the applications they monitor
- Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems

Qualifications:
- Systems administration and change management practices
- Enterprise monitoring and reporting tools
- Experience scripting and/or coding against APIs
- In-depth knowledge of commonly used management and monitoring technologies
- Internet/web-based technologies
- ITIL best practices
- Experience with technologies used to support microservices
- Network technologies
- AWS log collection, such as CloudTrail, CloudWatch, and VPC Flow Logs
- Monitoring and reporting using SNMP
- CI/CD tools such as Artifactory, Jenkins, and Git
- Cloud-native applications, including Terraform experience
- Encryption technologies (SSL/TLS, PKI infrastructure management)
- Security controls as applied to software technologies
- Bachelor's degree
- 10+ years of related experience
- Minimum 10 years of experience working in a distributed multi-platform environment
- Minimum 3 years of experience working with cloud-native applications
- Minimum 3 years of experience managing technical projects
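The AWS log collection requirement above (CloudTrail, CloudWatch, VPC Flow Logs) typically involves filtering raw CloudTrail records before forwarding alerts. As a hedged sketch, the function below flags console logins made without MFA; the field names follow the documented CloudTrail record schema, but the sample records and the specific detection rule are illustrative, not production logic:

```python
def risky_console_logins(events):
    """Return (eventTime, sourceIPAddress) for ConsoleLogin records
    where MFA was not used.

    `events` is the "Records" list from a CloudTrail log file. Missing
    additionalEventData is treated conservatively as "no MFA".
    """
    hits = []
    for e in events:
        if e.get("eventName") != "ConsoleLogin":
            continue
        mfa = e.get("additionalEventData", {}).get("MFAUsed", "No")
        if mfa != "Yes":
            hits.append((e.get("eventTime"), e.get("sourceIPAddress")))
    return hits


# Illustrative CloudTrail records (trimmed to the fields used above)
records = [
    {"eventName": "ConsoleLogin", "eventTime": "2024-05-07T09:00:00Z",
     "sourceIPAddress": "203.0.113.5",
     "additionalEventData": {"MFAUsed": "No"}},
    {"eventName": "ConsoleLogin", "eventTime": "2024-05-07T09:05:00Z",
     "sourceIPAddress": "198.51.100.7",
     "additionalEventData": {"MFAUsed": "Yes"}},
    {"eventName": "DescribeInstances"},
]
alerts = risky_console_logins(records)
```

In practice the records would be read from the S3 bucket CloudTrail delivers to, and the resulting alerts indexed into Splunk.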
07/05/2024
Full time
NO SPONSORSHIP

Principal, Software Engineering - Enterprise Cloud Monitoring (Splunk)
SALARY: $200k-$215k base, with up to 30% bonus
LOCATION: Dallas, TX (3 days onsite, 2 days remote)

The role covers both on-premises monitoring and cloud monitoring. The products they are looking for outside of Splunk are Datadog, Dynatrace, and New Relic. Heavy cloud: AWS, EC2, automation, application performance monitoring, enterprise monitoring, plus any BMC Patrol, Tivoli, and regulatory experience.

Responsibilities:
- Translate middle and senior management strategic directives into workable technical directives
- Monitor project status and take remedial action on projects behind schedule and/or over budget
- Provide subject matter expertise for ongoing support of third-party tools such as Splunk
- Provide expert-level technical mentoring to more junior members of the team
- Resolve complex support issues in non-production and production environments
- Understand Cloud Native applications running on Kubernetes within AWS and how their exposed APIs may be used to monitor them
- Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data
- Create procedural and troubleshooting documentation for enterprise monitoring systems and the applications they are monitoring
- Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems

Qualifications:
- Systems administration and change management practices
- Enterprise monitoring and reporting tools
- Experience scripting and/or coding against APIs
- In-depth knowledge of commonly used management and monitoring technologies
- Internet/web-based technologies
- ITIL best practices
- Experience with technologies used to support microservices
- Network technologies
- AWS log collection, such as CloudTrail, CloudWatch, and VPC Flow Logs
- Monitoring and reporting using SNMP
- CI/CD tools such as Artifactory, Jenkins, and Git
- Cloud-native applications, including Terraform experience
- Encryption technologies (SSL/TLS, PKI infrastructure management)
- Security controls as applied to software technologies
- Bachelor's degree
- 10+ years of related experience
- Minimum 10 years of experience working in a distributed multi-platform environment
- Minimum 3 years of experience working with cloud-native applications
- Minimum 3 years of experience managing technical projects
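A core task in enterprise monitoring roles like this one, whether the backend is Splunk, Datadog, or Dynatrace, is evaluating collected metrics against warning and critical thresholds. The sketch below shows the basic pattern; the metric names and threshold values are made up for illustration:

```python
def evaluate_thresholds(samples, rules):
    """Compare metric samples to (warn, critical) thresholds.

    samples: {metric_name: current_value}
    rules:   {metric_name: (warn_threshold, critical_threshold)}
    Returns a list of (metric_name, severity); metrics without a rule
    are ignored, and critical takes precedence over warning.
    """
    findings = []
    for name, value in samples.items():
        rule = rules.get(name)
        if rule is None:
            continue
        warn, crit = rule
        if value >= crit:
            findings.append((name, "critical"))
        elif value >= warn:
            findings.append((name, "warning"))
    return findings


# Hypothetical thresholds: warn at 80%/85%, go critical at 95%
rules = {"cpu_pct": (80, 95), "disk_pct": (85, 95)}
alerts = evaluate_thresholds({"cpu_pct": 97, "disk_pct": 70, "mem_pct": 60}, rules)
```

Real monitoring platforms layer time windows, hysteresis, and deduplication on top of this, but the threshold comparison at the center is the same.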
07/05/2024
Full time
Contract - UC4 Automation Engineer
Rate: Open
Location: Chicago, IL
Hybrid: 3 days onsite, 2 days remote

Qualifications:
- Python scripting
- SDET automation testing skills/QA automation engineering
- Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor, preferably with cloud foundational services such as AWS VPCs
- Solid utility building with Python, Perl, and PowerShell
- Test automation using CI/CD concepts

Languages & technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP web services (JAX-WS), RESTful web services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics

Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and real-time and historical monitoring tools on-prem and in the cloud

Web servers/app servers/containers experience
Database technologies: DB2, PostgreSQL

Responsibilities:
- Performance testing with open-source tools such as JMeter and Gatling
- Perl scripting, PowerShell scripting, solid Python scripting, and Java
- Setting up parallel testing environments used to compare existing system business processes and data against a new cloud-based system/platform; the goal is to ensure the new system produces correct results and performs as expected before it can become the official system of record
- Taking raw data, masking it, and creating algorithms and solutions that increase the data load feeding into our new Clearing System without duplicates or other data issues that would cause it to be rejected
- Assisting in the setup and maintenance of cloud-based performance and functional test environments in the cloud (AWS), and defining the steps to automate the process for continuous testing and iteration of cycles
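For the parallel-run comparison described above, masking must be deterministic: the same raw value has to produce the same masked value in both the legacy and the new system, or record-level comparisons break. A minimal sketch of that idea, with an illustrative field list and salt (a real pipeline would manage the salt as a secret and use format-preserving masking where field formats matter):

```python
import hashlib

def mask_record(record, pii_fields, salt="parallel-run-demo"):
    """Deterministically mask the named PII fields of a record.

    Uses a salted SHA-256 digest truncated to 12 hex chars, so identical
    inputs always mask to identical outputs (safe for cross-system
    comparison) while the original value is not recoverable directly.
    Non-PII fields (e.g. amounts) pass through untouched.
    """
    masked = dict(record)
    for field in pii_fields:
        if field in masked and masked[field] is not None:
            digest = hashlib.sha256((salt + str(masked[field])).encode()).hexdigest()
            masked[field] = digest[:12]
    return masked


# Masking the same record twice must yield the same result
a = mask_record({"account": "ACC-1001", "name": "Jane Doe", "amount": 250.0},
                ["account", "name"])
b = mask_record({"account": "ACC-1001", "name": "Jane Doe", "amount": 250.0},
                ["account", "name"])
```

Because the digest is deterministic, joins and duplicate checks on the masked key still behave the same way on both sides of the parallel run.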
07/05/2024
Project-based
A global financial services giant with office locations on the east side of Surrey and in the City of London requires an experienced NetBackup engineer to join an expanding storage and backup engineering team.

Essential skills:
- Good experience in NetBackup: AIR, OST, APIs, OpsCenter, upgrading, performance tuning, troubleshooting
- Good experience in Data Domain: MTree replication, DD Boost, VTL, DDMC, RMAN, upgrading, troubleshooting
- Strong experience in scripting languages (Python, Linux shell, Perl)

Desirable skills:
- Exposure to automation tools (Rundeck, Ansible, Jenkins) would be considered beneficial

This role will act as the Senior Backup SME in the team and as such will bring a broad set of technical skills within storage and general infrastructure to the table. A key part of the role will be to provide Storage & Backup services around service delivery and automation. The successful candidate will be an engineer experienced in administering the hardware and software solutions that provide Enterprise Storage (SAN/NAS/Object) and Backup services for applications across the company. You will be expected to display strong technical skills while exhibiting a high level of ownership within a demanding working environment. You will be part of a global team providing day-to-day engineering support and enhancement, with a particular emphasis on the coding skill set.

Day-to-day responsibilities will include:
- Providing systems administration on Storage & Backup platforms, including HA solution design; hardware/software implementation and maintenance; capacity planning; performance tuning; patching; monitoring; and upgrades
- Performing routine Storage & Backup systems operation automation and risk & vulnerability remediation, and monitoring systems activities to ensure smooth daily operation of systems facilities
- Handling Storage & Backup-related BAU jobs, such as service requests, incidents, and changes across the estate (NetApp/Isilon/NBU backup/Data Domain/PowerMax/VMAX), to keep the platform running smoothly

This is an exciting senior technical SME role for an engineer who is a NetBackup specialist. The role is hybrid between offices in Surrey and/or the City, with a requirement for a minimum of 8 days a month in the office; how that is split up is flexible.
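The scripting skills listed above are typically applied to automating backup job reporting. As a sketch only: the pipe-delimited line format below (job_id|policy|status_code) is an assumption for illustration, not the real output of NetBackup's bpdbjobs, whose columns should be checked against the NetBackup command reference; the convention that a non-zero status code means failure does match NetBackup's:

```python
def failed_jobs(report_lines):
    """Extract failed jobs from a simplified backup report.

    Each line is assumed to be "job_id|policy|status_code" (hypothetical
    format for this sketch). Returns (job_id, policy, status) tuples for
    jobs whose status code is non-zero; malformed lines are skipped.
    """
    failures = []
    for line in report_lines:
        parts = line.strip().split("|")
        if len(parts) != 3:
            continue
        job_id, policy, status = parts
        if status.isdigit() and int(status) != 0:
            failures.append((job_id, policy, int(status)))
    return failures


# Illustrative report: one job failed with a non-zero status code
report = [
    "1201|daily-oracle|0",
    "1202|daily-filesystem|96",
    "1203|weekly-full|0",
]
bad = failed_jobs(report)
```

A real version would invoke the backup vendor's CLI, parse its actual columns, and feed the failures into ticketing or monitoring.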
07/05/2024
Full time
Senior DevOps Engineer - Cloud - Permanent - Poland

Robson Bale are looking for a Senior DevOps Engineer to come on board for a permanent opportunity in Poland. The role can be fully remote from Poland. Permanent, excellent salary.

Responsibilities (Technical Skills - Must have):

Leadership:
- Lead and manage DevOps/infrastructure projects, overseeing the entire development life cycle
- Collaborate with cross-functional teams to align project objectives and deliverables
- Ensure adherence to timelines, budgets, and quality standards
- Mentor and guide team members and interns, fostering a culture of continuous learning

Security and Compliance:
- Demonstrate a deep understanding of Standard Operating Procedures (SOPs) for security practices
- Perform threat modelling and implement encryption, network defense, and web security measures
- Champion security best practices in a production environment and address cloud security risks
- Integrate identity providers such as OAuth, OIDC, and SAML to enhance security

DevOps/Infrastructure and Cloud Expertise:
- Drive change, release, and incident management processes to maintain a stable environment
- Utilize extensive DevOps experience to optimize performance, conduct application upgrades, and apply patches
- Lead continuous integration and deployment efforts using tools like Jenkins and Ansible
- Demonstrate proficiency in coding and automation to streamline operations
- Good hands-on knowledge of AWS/Azure/GCP cloud service providers

Cloud Infrastructure Management:
- Exhibit strong expertise in AWS/Azure/GCP/OCI cloud services and maintain infrastructure as code (IaC) using Ansible, Terraform, or CloudFormation
- Oversee containerization technologies like Docker and Kubernetes to enhance scalability and efficiency
- Manage Linux-based systems and network configurations to ensure smooth operations

Security and Access Management:
- Demonstrate a solid grasp of identity and access management (IAM) principles
- Manage Security Groups (SGs), firewall services, and secrets effectively
- Optimize service costs based on resource utilization and scale

Monitoring and Reliability:
- Ensure ongoing and reliable monitoring of the infrastructure to promptly address issues
- Implement performance tuning and optimization strategies to maintain high availability

Technical Requirements:
- Proficient in Python/Java/Bash scripting for automation and tooling
- Expertise in AWS/Azure/GCP/OCI cloud services such as Azure Kubernetes Service, Elastic Kubernetes Service, and Google Kubernetes Engine
- Extensive experience with CI/CD pipelines, particularly using Jenkins
- Strong familiarity with Docker and Kubernetes for container orchestration
- In-depth understanding of networking principles

Good-to-Have Skillsets:
- Experience crafting intuitive and engaging user interfaces (UI) for web applications, mobile apps, or other AI-powered interfaces
- Experience with design thinking methodologies
- Understanding of data visualization and information architecture
- Ability to write clear documentation
- Experience with voice user interfaces (VUIs)
- Knowledge of animation and micro-interactions for enhancing user experience
- Experience with design systems and component libraries

Process Skills:
- General SDLC processes
- Understanding of Agile and Scrum software development methodologies
- Attention to detail and commitment to quality

Behavioral Skills:
- Work closely with designers, product managers, developers, and data scientists to deliver comprehensive solutions
- Communicate effectively and share knowledge with the team
- Be open to feedback and continuously learn and adapt to new technologies
- Ability to work independently and as part of a team
- Ability to work effectively under pressure and meet deadlines
- Passion for learning and staying updated on the latest technologies
- Good attitude and quick learner

Certifications (good to have; any one or more cloud service providers preferred):
- AWS associate certification (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer)
- Certified Kubernetes Administrator (CKA)
- Certified Docker Captain
- Azure certifications (e.g., Azure Fundamentals, Azure Administrator Associate, DevOps Engineer Expert, Azure Security Engineer Associate)
- GCP certifications (e.g., Cloud DevOps Engineer, Cloud Network Engineer, Google Workspace Administrator)
- Networking-related certification
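The "Monitoring and Reliability" duties above, promptly addressing issues and maintaining high availability, usually involve probing services with retries and exponential backoff rather than a single check. A minimal sketch of that pattern (the generic `check` callable stands in for a real HTTP or TCP probe):

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=0.1):
    """Retry a health check with exponential backoff.

    Calls `check()` up to `attempts` times, sleeping base_delay * 2**n
    between failures. Returns True as soon as a check passes, False if
    every attempt fails. Jitter, omitted here for brevity, is usually
    added in production to avoid thundering-herd retries.
    """
    for attempt in range(attempts):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))
    return False


# Simulated flaky service that only becomes healthy on the third probe
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_until_healthy(flaky, attempts=5, base_delay=0.01)
```

The same helper slots into deployment pipelines as a post-deploy gate: roll out, wait until the service reports healthy, and only then shift traffic.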
07/05/2024
Full time
Senior DevOps Engineer - Cloud - Permanent - Poland Robson Bale are looking for a Senior DevOps Engineer to come on board for a permanent opportunity in Poland. Role can be fully remote from Poland Permanent, Excellent Salary Responsibilities: Technical Skills - Must have Leadership: Lead and manage DevOps/Infrastructure projects, overseeing the entire development life cycle. Collaborate with cross-functional teams to align project objectives and deliverables. Ensure adherence to timelines, budgets, and quality standards. Mentor and guide team members and interns, fostering a culture of continuous learning. Security and Compliance: Demonstrate a deep understanding of Standard Operating Procedures (SOP) for security practices. Perform threat modelling and implement encryption, network defense, and web security measures. Champion security best practices in a production environment and address cloud security risks. Integrate identity providers such as OAuth, OIDC, and SAML to enhance security. DevOps/Infrastructure and Cloud Expertise: Drive change, release, and incident management processes to maintain a stable environment. Utilize extensive experience in DevOps to optimize performance, conduct application upgrades, and apply patches. Lead continuous integration and deployment efforts using tools like Jenkins and Ansible. Demonstrate proficiency in coding and automation to streamline operations. Good hands-on knowledge of AWS/AZURE/GCP cloud service providers. Cloud Infrastructure Management: Exhibit strong expertise in AWS/AZURE/GCP/OCI cloud services and maintain infrastructure as code (IAC) using Ansible, Terraform, or CloudFormation. Oversee containerization technologies like Docker and Kubernetes to enhance scalability and efficiency. Manage Linux-based systems and network configurations to ensure smooth operations. Security and Access Management: Demonstrate a solid grasp of identity and access management (IAM) principles. 
Manage Security Groups (SGs), Firewall services, and secrets effectively. Optimize service costs based on resource utilization and scale. Monitoring and Reliability: Ensure ongoing and reliable monitoring of the infrastructure to promptly address issues. Implement performance tuning and optimization strategies to maintain high availability. Technical Requirements: Proficient in Python/Java/bash Scripting for automation and tooling. Expertise in AWS/AZURE/GCP/OCI cloud services like Azure Kubernetes Service/Elastic Kubernetes Service/Google Kubernetes Engine. Extensive experience with CI/CD pipelines, particularly using Jenkins . Strong familiarity with Docker and Kubernetes for container orchestration. In-depth understanding of networking principles. Good to Have Skillsets: Experience in crafting intuitive and engaging user interfaces (UI) for web applications, mobile apps, or other AI-powered interfaces. Experience with design thinking methodologies. Understanding of data visualization and information architecture. Ability to write clear documentation. Experience with voice user interfaces (VUIs). Knowledge of animation and micro interactions for enhancing user experience. Experience with design systems and component libraries. Process Skills: General SDLC processes Understanding of utilizing Agile and Scrum software development methodologies Attention to detail and commitment to quality. Behavioral Skills: Work closely with designers, product managers, Developers, and data scientists to deliver comprehensive solutions. Communicate effectively and share knowledge with the team. Be open to feedback and continuously learn and adapt to new technologies. Ability to work independently and as part of a team. Ability to work effectively under pressure and meet deadlines. Passion for learning and staying updated on the latest technologies. Good Attitude and Quick learner. 
Certifications (Good to have; preferred, any one or more Cloud Service Provider):
AWS Associate certification (e.g. AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
Certified Kubernetes Administrator (CKA).
Certified Docker Captain.
Azure certifications (e.g. Azure Fundamentals, Azure Administrator Associate, DevOps Engineer Expert, Azure Security Engineer Associate).
GCP certifications (e.g. Cloud DevOps Engineer, Cloud Network Engineer, Google Workspace Administrator).
Networking-related certification.
Role can be fully remote from Poland. Permanent, excellent salary.
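The "Proficient in Python, Java, or Bash scripting for automation and tooling" requirement above is the kind of day-to-day glue work the role describes. As a minimal, illustrative sketch (not part of the advert; service names and log format are hypothetical), a small Python tool that summarises error counts per service from structured log lines might look like this:

```python
from collections import Counter

def summarize_errors(log_lines):
    """Count ERROR entries per service from lines shaped 'service LEVEL message'."""
    counts = Counter()
    for line in log_lines:
        parts = line.split(maxsplit=2)
        if len(parts) >= 2 and parts[1] == "ERROR":
            counts[parts[0]] += 1
    return dict(counts)

logs = [
    "auth ERROR token expired",
    "auth INFO login ok",
    "billing ERROR upstream timeout",
    "auth ERROR bad signature",
]
print(summarize_errors(logs))  # {'auth': 2, 'billing': 1}
```

In practice such a script would read from a log aggregator or file rather than a hard-coded list; the point is the plumbing pattern, not the data source.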
F5 WAF Engineer
Whitehall Resources are looking for an F5 WAF Engineer. This is an initial 6-month contract, working onsite 2 days per week in Sheffield. *Inside IR35 - You will be required to use an FCSA Accredited Umbrella Company*
Job Description: As an Automation Engineer, you will play a pivotal role in enhancing our IT infrastructure by designing, creating, and maintaining bespoke Continuous Integration/Continuous Deployment (CI/CD) pipelines tailored to specific project needs. This role will have an initial focus on leveraging F5 technologies alongside a broad spectrum of automation and DevOps practices to deliver our automation use cases; however, once the F5 automation work has completed, work will progress to other WAF platforms and use cases. You will be responsible for the integration of CI/CD pipelines with solutions developed by other teams, scripting, and the creation of Infrastructure as Code (IaC) manifests using tools like Terraform and Ansible. Your expertise in Jenkins, JIRA, GitHub, Python, and other relevant technologies will be essential. You should have a solid background in building CI/CD pipelines and a comprehensive understanding of DevOps practices. The ideal candidate should not only have technical proficiency in data structures, automation technologies, API interactions, and cloud services, but also exhibit a strong drive to research, investigate, and collaborate effectively within the organization.
Key Responsibilities:
Developing and Delivering Automation for the F5 WAF Platform: In the first instance, developing and delivering automation solutions specifically for our F5 Web Application Firewall (WAF) platform, aligned with our specific use cases. This involves scripting, configuring, and deploying automation workflows that enhance the security, manageability, and operational efficiency of the F5 WAF environment.
CI/CD Pipeline Development: Create, enhance, and implement new, customized CI/CD pipelines tailored for specific project use cases, ensuring efficient, automated workflows.
Pipeline Maintenance: Regularly update and maintain existing CI/CD pipelines to ensure they are efficient, secure, and up to date with the latest technology standards.
Integration of Solutions: Work collaboratively with other teams to integrate their solutions and tools into the CI/CD pipelines effectively, enhancing overall workflow and productivity.
IaC Manifest Creation: Develop and maintain Infrastructure as Code (IaC) manifests, predominantly using Terraform, to manage and provision IT infrastructure in a consistent and repeatable manner.
Tool Proficiency: Utilize and demonstrate expertise in tools such as Jenkins, JIRA, GitHub, and Python, effectively integrating them into the CI/CD processes.
Script Writing: Write and maintain scripts to automate various aspects of the infrastructure and deployment processes, improving efficiency and reducing the potential for human error.
Collaboration and Communication: Collaborate with cross-functional teams, including software development, operations, and quality assurance, to ensure seamless integration and implementation of DevOps practices.
Proactive Research and Collaboration: Eager to research and utilize company resources like Confluence, find relevant contacts, and reach out to other teams for unknowns. Prepared to independently investigate and resolve challenges.
Required F5 Experience - one or more of the following:
F5 ASM/AWAF Knowledge & Experience: Understanding and practical experience with F5's Application Security Manager (ASM) and Advanced WAF (AWAF), including configuration, management, and troubleshooting of application security policies and web application firewalls.
F5 with API Gateway: Experience integrating F5 solutions with API Gateway technologies, demonstrating the ability to secure and manage APIs effectively.
Experience in using F5 with Kong API Gateway; managing and optimizing API traffic through F5 systems.
F5 GTM and Proxy Technologies: Knowledge and experience with F5's Global Traffic Manager (GTM) as well as experience with proxy technologies, including forward and reverse proxies.
Basic Certificate Management: Knowledge of SSL/TLS certificate management processes, including issuance, renewal, and deployment, within F5 environments.
F5 AS3: Experience with AS3 (Application Services 3 Extension) for declarative automation and orchestration of F5 BIG-IP services. Proficiency in automating the deployment and management of F5 configurations using AS3.
Key Experience - Ideal Candidate Profile:
Technical Expertise in CI/CD Tools: Proficiency in Continuous Integration and Continuous Deployment tools such as Jenkins, CircleCI, Travis CI, GitLab CI, and Bamboo. Ability to configure, manage, and optimize these tools for various project requirements.
Proficiency in Scripting Languages: Strong skills in scripting languages such as Python, Bash, and PowerShell. Ability to write and maintain scripts to automate routine tasks and deployments.
Infrastructure as Code (IaC): Extensive experience in creating and managing infrastructure using code. Proficiency in IaC tools like Terraform, Ansible, Chef, or Puppet.
Data Structuring and Management: Advanced skills in managing data using formats like JSON, YAML, XML, and others. Capable of parsing, creating, and maintaining complex data structures for configuration and automation purposes.
API Integration and Management: Expertise in querying, integrating, and managing APIs. Capable of constructing and executing API calls for data retrieval, updates, and inter-service communication.
Version Control Systems: In-depth knowledge of version control systems like Git, including branching strategies, repository management, and integrating with CI/CD pipelines.
Containerization and Orchestration: Experience with containerization tools such as Docker and orchestration platforms like Kubernetes or Docker Swarm. Understanding of containerized environments and their integration into CI/CD pipelines.
Cloud Platforms: Familiarity with major cloud platforms like AWS, Azure, or GCP; understanding of cloud-specific services and how to integrate them into CI/CD processes.
Monitoring and Logging: Knowledge of monitoring and logging tools such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), or Splunk. Ability to set up and maintain monitoring and logging for applications and infrastructure.
Security Practices in DevOps (DevSecOps): Understanding of security practices in a DevOps environment. Familiarity with security scanning tools, implementing secure coding practices, and ensuring compliance with industry standards.
Agile and Scrum Methodologies: Experience with Agile and Scrum methodologies. Ability to work in fast-paced, iterative development environments and adapt to changing requirements.
Networking and Security Fundamentals: Knowledge of networking concepts (e.g. TCP/IP, DNS, HTTP/S) and basic security concepts (e.g. firewalls, VPNs, IDS/IPS).
Problem-Solving and Analytical Skills: Strong problem-solving skills and the ability to analyze complex systems and workflows to propose effective automation solutions.
Collaboration and Communication: Excellent collaboration and communication skills. Ability to work effectively in a team and communicate complex technical concepts to both technical and non-technical stakeholders.
Project Management Skills: Basic project management skills with the ability to manage timelines, dependencies, and deliverables in a cross-functional environment.
Research and Investigative Skills: Motivated to self-educate and explore company resources and external knowledge bases.
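The "F5 AS3" item above refers to declarative automation: instead of clicking through BIG-IP configuration, you assemble a JSON declaration and POST it to the AS3 endpoint (/mgmt/shared/appsvcs/declare). As a minimal sketch (tenant, application, and address values here are purely illustrative, not from the advert), building such a declaration in Python might look like this:

```python
import json

def build_as3_declaration(tenant, app, virtual_ip, pool_members):
    """Assemble a minimal AS3 declaration: one tenant, one HTTP service, one pool.

    The resulting JSON would normally be POSTed to a BIG-IP at
    /mgmt/shared/appsvcs/declare; all names and addresses are examples.
    """
    return {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        tenant: {
            "class": "Tenant",
            app: {
                "class": "Application",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": [virtual_ip],
                    "pool": "web_pool",
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [
                        {"servicePort": 80, "serverAddresses": pool_members}
                    ],
                },
            },
        },
    }

decl = build_as3_declaration(
    "Example_Tenant", "example_app", "203.0.113.10", ["10.0.0.11", "10.0.0.12"]
)
print(json.dumps(decl, indent=2))
```

Because the declaration is plain data, it is easy to template, diff, and version-control, which is what makes AS3 a natural fit for the CI/CD pipelines described above.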
All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who value a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
07/05/2024
Project-based
ASSOCIATE PRINCIPAL, APPIAN SOFTWARE ENGINEERING
SALARY: $140k - $145k - $152k plus 15% bonus
LOCATION: Chicago, IL (Hybrid: 3 days onsite, 2 days remote)
Looking for someone to handle the design, development, testing, and implementation of Appian software. You will need 5 years of Front End/user experience and JavaScript work, automating workflows inside Appian; AWS; Unix/Linux; Java; Python; Node.js; Angular 2.0 or React JS; and middleware technologies. Working knowledge of DevOps tooling: Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines. A degree and Appian Certified Developer certification are required.
Contribute to design, technical direction, and architecture, including collaborating with various teams to build fit-for-purpose solutions. Applies expert knowledge of Java, Python, JavaScript, NodeJS, Angular 2.0 or ReactJS, and middleware technologies in independently designing and developing key services with a focus on continuous integration and delivery. Participates in code reviews, proactively identifying and mitigating potential issues and defects as well as assisting with continuous improvement. Drives continuous improvement efforts by identifying and championing practical means of reducing time to market while maintaining high quality.
Qualifications:
5+ years of Front End, User Experience development (required)
5+ years of experience in JavaScript (required)
3+ years of experience automating workflows inside Appian and in conjunction with integration to other tools (required)
3+ years of experience in React application development (required)
3+ years of hands-on HTML5/CSS3 experience (required)
Experience with Java and/or Python (required)
Experience with popular JavaScript frameworks such as React, Node JS, Vue, Angular 2.0 (required)
Experience working with WebSockets, HTTP 1.1, and HTTP/2 (required)
Experience with RESTful APIs and JSON-RPC (required)
Ability to write clean, bug-free code that is easy to understand and easily maintainable (required)
Experience with BDD methodologies & automated acceptance testing (required)
Technical Skills:
5+ years of hands-on experience in Java, including a good understanding of Java fundamentals such as the Memory Model, Runtime Environment, Concurrency, and Multithreading (required)
Past/current experience of 3+ years working on a large-scale cloud-native project (platform: Unix/Linux; type of systems: event-driven/transaction processing/high-performance computing) as Technical Lead. This experience should include developing/architecting core libraries or frameworks used by the platform to support fundamental services like storage, alert notifications, security, etc. (required)
Appian Process Modeling, Smart Services, Rules and Tempo event services, database, and Web services (required)
Experience with cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, and IAM (required)
Experience with distributed message brokers using Kafka (required)
Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required)
Experience working with various types of databases: Relational, NoSQL, Object-based, Graph (required)
Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines (required)
Familiarity with monitoring-related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics (required)
Education and/or Experience: BS degree in Computer Science or a similar technical field; Appian Certified Developer
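The posting above requires experience with RESTful APIs and JSON-RPC. JSON-RPC 2.0 requests are plain JSON objects with a fixed shape, so they are simple to construct and inspect in code. As an illustrative sketch (not part of the advert; the method name and params are hypothetical):

```python
import json
from itertools import count

# Auto-incrementing request ids, as JSON-RPC responses are matched to
# requests by id.
_ids = count(1)

def jsonrpc_request(method, params):
    """Build a JSON-RPC 2.0 request object."""
    return {
        "jsonrpc": "2.0",   # protocol version, fixed by the spec
        "method": method,
        "params": params,
        "id": next(_ids),
    }

req = jsonrpc_request("workflow.start", {"processModel": "onboarding"})
print(json.dumps(req))
```

The serialized object would then be sent as the body of an HTTP POST to the service's JSON-RPC endpoint; the transport is independent of the payload format.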
06/05/2024
Full time
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation
Job Title: Cloud Technical Consultant/Architect
Location: Gloucestershire/Bristol
Salary: £65 - £95K Per Annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits
Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.
The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.
The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships - or for a dynamic & seasoned technologist who is looking for new & exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.
The Role: This is a brand new opportunity for a bright, driven, customer-focussed professional to join our client's 'Cloud Delivery' team, and work alongside our Enterprise Cloud specialists to drive forward the design, deployment & operations of Cloud Infrastructure, Automation and Containerisation projects for the end-client. The delivery team help valued clients deliver the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business.
They support them in exploiting maximum business benefit from Cloud solutions, leveraging best-in-class internal and Partner technologies to create relevant and engaging experiences.
Duties:
Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and providing high-level and low-level design documentation.
Provide Cloud engineering capability to leverage Public Cloud platforms using automated build processes deployed with Infrastructure as Code.
Provide technical challenge and assurance throughout development and delivery of work.
Develop re-usable common solutions and patterns to reduce development lead times, improve commonality, and lower Total Cost of Ownership.
Work independently and/or within a team using a DevOps way of working.
Required Technical Skills & Experience:
Experienced in Cloud-native technologies in AWS.
Experienced in deploying IaaS/PaaS in multi-cloud environments.
Experienced in Cloud and Infrastructure Engineering: building and testing new capabilities, and supporting the development of new solutions and common templates.
Experienced in acting as a bridge from the infrastructure through to user-facing systems.
Desirable Technical Skills & Experience:
Experienced with Kubernetes containers.
Experienced in the use of automation tools, e.g. Terraform, Ansible, Foreman, Puppet, and Python.
Experienced with different flavours of Linux platforms and services.
To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications, however this may not always be possible during periods of high volume. Thank you for your patience.
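The "automated build processes deployed using Infrastructure as Code" duty above typically means generating or maintaining declarative manifests rather than provisioning by hand. One relevant detail: Terraform accepts a JSON syntax (.tf.json files) alongside HCL, which makes manifests straightforward to generate from scripts. As a minimal sketch (the resource and bucket names are illustrative, not from the advert):

```python
import json

def render_tf_json(bucket_name):
    """Emit a Terraform JSON-syntax (.tf.json) manifest for a single S3 bucket.

    Terraform treats JSON as an alternative to HCL, so a pipeline can
    generate manifests like this and feed them straight to `terraform plan`.
    The bucket name is a placeholder.
    """
    manifest = {
        "resource": {
            "aws_s3_bucket": {
                "build_artifacts": {"bucket": bucket_name}
            }
        }
    }
    return json.dumps(manifest, indent=2)

print(render_tf_json("example-build-artifacts"))
```

In a real pipeline the generated file would be written out, reviewed as part of the change, and applied by the automated build process rather than printed.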
Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
06/05/2024
Full time
Data DevOps Engineer - DevOps, Big Data - Permanent - Gloucestershire

Location: Gloucestershire/Bristol (full-time onsite)
Salary: £65 - £95K per annum, negotiable DOE
Benefits: Flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: We are looking for a bright, driven, customer-focussed professional to join our client's Hybrid Cloud Delivery team and work alongside Enterprise Data Engineering Consultants to accelerate and drive data engineering opportunities. This is a fantastic opportunity for a dynamic individual with big ambitions who is an established technologist with both outstanding technical ability and a consultative mindset. It would suit an open-minded, personable self-starter who relishes the fluidity and collaborative nature of consultancy.

The Role: This role sits within our client's Advisory and Professional Services delivery team, which provides thought leadership, industry know-how, and technical excellence to consultative engagements, helping customers reap maximum business benefit from their technical investments and leveraging best-in-class Vendor and Partner technologies to create relevant and effective, business-valued technical solutions. The Data DevOps Engineer role is all about the detailed development and implementation of scalable, clustered Big Data solutions, with a specific focus on automated dynamic scaling and self-healing systems.

Duties:
- Participate in the full life cycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between.
- Provide technical thought leadership and advice on technologies and processes at the core of the data domain, as well as data-domain-adjacent technologies.
- Engage and collaborate with both internal and external teams, as a confident participant as well as a leader.
- Assist with solution improvement activities driven by either the project or the service.

Essential Requirements:
- Excellent knowledge of Linux operating system administration and implementation.
- Broad understanding of the containerisation domain and adjacent technologies/services, such as Docker, OpenShift, Kubernetes, etc.
- Infrastructure as Code and CI/CD paradigms and systems, such as Ansible, Terraform, Jenkins, Bamboo, Concourse, etc.
- Monitoring, utilising products such as Prometheus, Grafana, ELK, Filebeat, etc.
- Observability and SRE practices.
- Big Data solutions (ecosystems) and technologies, such as Apache Spark and the Hadoop ecosystem.
- Edge technologies, e.g. NGINX, HAProxy, etc.
- Excellent knowledge of YAML or similar languages.

Desirable Requirements:
- JupyterHub awareness.
- MinIO or similar S3 storage technology.
- Trino/Presto.
- RabbitMQ or another common queue technology, e.g. ActiveMQ.
- NiFi.
- Rego.
- Familiarity with code development and Shell Scripting in Python, Bash, etc.

To apply for this Data DevOps Engineer permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience.
06/05/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor, as this is a permanent full-time role*

A prestigious company is looking for a Linux Engineer. This engineer will focus on design, support, engineering, and automation for the Linux operating system, and will need hands-on experience with Terraform, Kubernetes, Jenkins, Ansible, AWS, Docker, CI/CD, DevOps, etc.

Responsibilities/Qualifications:
- Bachelor's degree, preferably in a technical discipline (Computer Science, Mathematics, etc.), or an equivalent combination of education and experience required.
- 8+ years' experience in IT systems installation, operations, administration, and maintenance of cloud systems/virtualized servers.
- Hands-on experience with Terraform, Kubernetes, Jenkins, Kafka, GitHub, and configuration management tools such as Ansible.
- Relevant experience with configuration and implementation of IaaS, Infrastructure as Code, AWS, Azure, etc.
- Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools at L3 level.
- In-depth system administration knowledge and skills for Red Hat Linux.
- Kubernetes experience: strong knowledge of Kubernetes deployment frameworks/platforms, including Helm, Docker, Rancher, OpenShift, and EKS.
- Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers.
- Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible, Terraform, etc.
- Strong knowledge of secure cloud infrastructure design and components, such as servers, operating systems, networks, IAM, and storage.
- Cloud certifications, specifically AWS Cloud certification, would be preferred.
- Expert knowledge of the core automation development toolchain, including Terraform, Ansible, Jenkins, Git, and Harness.
- Mastery of CI/CD best practices in a large organization (GitOps/DevOps, secure builds, secure code promotion, deployments (Harness/Argo), automated testing (app and infra), integration of policy frameworks, cost optimization, SLSA best practices).
- Experience with architecting, implementing, and maintaining highly available, mission-critical environments for 24/7 availability.
03/05/2024
Full time
Software Engineer - SC Cleared
Salary: £35,000 - £55,000, dependent on experience
Location: Woking
Work Pattern: Flexible/Hybrid (a combination of home and office working)

We are currently seeking a skilled and experienced Software Engineer to join the team at a leading defence engineering consultancy. You will have a strong background in software development, with experience in Python, Java, or Kotlin and supporting Back End and Middleware frameworks. As a Software Engineer, you will be responsible for designing, developing, and testing software systems for a range of applications. You will work closely with our multidisciplinary team of engineers to deliver innovative solutions to complex problems. Please note: security clearance will be required for this role.

Key Responsibilities:
- Design, develop, and test software systems using Python, Java, or Kotlin and supporting Back End and Middleware frameworks
- Collaborate with cross-functional teams to integrate software into systems
- Conduct research to identify new techniques and technologies
- Analyse system performance and recommend improvements
- Prepare technical reports and presentations for clients and stakeholders
- Stay up to date with the latest developments in software engineering and related fields

Essential Skills and Experience:
- Degree in Computer Science, Engineering, or a related field
- Proven experience in software development using Python, Java, or Kotlin
- Experience with SQL and NoSQL database systems
- Familiarity with Linux-based operating systems: Ubuntu, CentOS/RHEL
- Experience with mobile operating systems, especially Android
- Knowledge of message brokering, serialisation, and queuing systems
- Experience with microservices, containers, and hosts
- Familiarity with Infrastructure as Code: Vagrant, Ansible, and Terraform
- Experience with AWS and Azure Cloud
- Experience with Git and version control systems

Desirable Skills and Experience:
- Experience with other programming languages such as C++ or JavaScript
- Familiarity with machine learning and artificial intelligence techniques
- Experience with simulation tools such as Simulink or Xilinx System Generator
- Knowledge of signal processing techniques

Benefits: This blue-chip employer offers a supportive and collaborative work environment, where you will have the opportunity to work on exciting and challenging projects. Benefits include:
- Hybrid working pattern (a combination of home and office working)
- Competitive salary and benefits package
- Comprehensive training and development opportunities
- Access to cutting-edge technology and resources
- Opportunities to work with leading experts in the field

Apply now and I will call to discuss your situation and this role in more depth.
02/05/2024
Full time
Software Engineer - SC Cleared
Salary: £35,000 - £55,000, dependent on experience
Location: Greater Southampton area
Work Pattern: Flexible/Hybrid (a combination of home and office working)

We are currently seeking a skilled and experienced Software Engineer to join the team at a leading defence engineering consultancy. You will have a strong background in software development, with experience in Python, Java, or Kotlin and supporting Back End and Middleware frameworks. As a Software Engineer, you will be responsible for designing, developing, and testing software systems for a range of applications. You will work closely with our multidisciplinary team of engineers to deliver innovative solutions to complex problems. Please note: security clearance will be required for this role.

Key Responsibilities:
- Design, develop, and test software systems using Python, Java, or Kotlin and supporting Back End and Middleware frameworks
- Collaborate with cross-functional teams to integrate software into systems
- Conduct research to identify new techniques and technologies
- Analyse system performance and recommend improvements
- Prepare technical reports and presentations for clients and stakeholders
- Stay up to date with the latest developments in software engineering and related fields

Essential Skills and Experience:
- Degree in Computer Science, Engineering, or a related field
- Proven experience in software development using Python, Java, or Kotlin
- Experience with SQL and NoSQL database systems
- Familiarity with Linux-based operating systems: Ubuntu, CentOS/RHEL
- Experience with mobile operating systems, especially Android
- Knowledge of message brokering, serialisation, and queuing systems
- Experience with microservices, containers, and hosts
- Familiarity with Infrastructure as Code: Vagrant, Ansible, and Terraform
- Experience with AWS and Azure Cloud
- Experience with Git and version control systems

Desirable Skills and Experience:
- Experience with other programming languages such as C++ or JavaScript
- Familiarity with machine learning and artificial intelligence techniques
- Experience with simulation tools such as Simulink or Xilinx System Generator
- Knowledge of signal processing techniques

Benefits: This blue-chip employer offers a supportive and collaborative work environment, where you will have the opportunity to work on exciting and challenging projects. Benefits include:
- Hybrid working pattern (a combination of home and office working)
- Competitive salary and benefits package
- Comprehensive training and development opportunities
- Access to cutting-edge technology and resources
- Opportunities to work with leading experts in the field

Apply now and I will call to discuss your situation and this role in more depth.
02/05/2024
Full time
Performance Testing - CI/CD - Open Source Tools, UC4
C2C
LOCATION: CHICAGO - HYBRID, 3 DAYS ONSITE
Long-Term Contract

Looking for a candidate to do performance testing using open-source tools like JMeter and Gatling, with Perl and solid Python Scripting. The candidate should be familiar with creating modules that multiply transactional data across the multiple platforms that store data in this financial environment, and with Java and cloud automation, including reading existing Java and converting it to Python. The role is roughly 20% SDET/QA automation testing using CI/CD concepts, alongside performance testing with open-source tools like JMeter and Gatling plus Perl Scripting, PowerShell Scripting, solid Python Scripting, and Java.

EXPERIENCE REQUIRED:
- Python Scripting: familiarity with creating modules that multiply transactional data, and other data multiplier strategies that will be used in test cycles of the Real Time Clearing System
- SDET automation testing skills/QA automation engineering
- Experience with Performance Engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services such as AWS VPCs
- Solid utility building with Python, Perl, and PowerShell
- Test automation using CI/CD concepts

Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API Testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP Web Services (JAX-WS), RESTful Web Services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, Shell Scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics.

Software Tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and Real Time and historical monitoring tools on-prem and in the Cloud.

Also required: Web Servers/App Servers/Containers experience; Database Technologies: DB2, PostgreSQL; Operating Systems experience; Methodologies: Agile, Iterative & Waterfall.
01/05/2024
Project-based
* Position is bonus eligible *

Prestigious Financial Institution is currently seeking an Enterprise Monitoring Technical Lead Engineer with strong Splunk experience. The candidate will lead the investigation, planning, and implementation of the enterprise monitoring system, identify areas for improvement, recommend allocation of resources, and work with solution architects to craft appropriate remediations or enhancements for these systems.

Responsibilities:
Translate middle and senior management strategic directives into workable technical directives
Monitor project status and take remedial action on projects behind schedule and/or over budget
Provide subject matter expertise for ongoing support of third-party tools like Splunk
Provide expert-level technical mentoring to more junior members of the team
Resolve complex support issues in non-production and production environments
Understand cloud-native applications running on Kubernetes within AWS and how their exposed APIs may be used to monitor them
Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data
Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they monitor
Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems
Qualifications:
Expert understanding of systems administration and change management practices
Expert understanding of enterprise monitoring and reporting tools
Experience scripting and/or coding against APIs
In-depth knowledge of commonly used management and monitoring technologies
Internet/web-based technologies
ITIL best practices
Experience with technologies used to support microservices
Network technologies
AWS log collection, such as CloudTrail, CloudWatch, and VPC Flow Logs
Monitoring and reporting using SNMP
CI/CD tools such as Artifactory, Jenkins, and Git
Cloud-native applications, including Terraform experience
Encryption technologies (SSL/TLS, PKI infrastructure management)
Security controls as applied to software technologies
Bachelor's degree in a related area
10+ years of related experience
10 years' experience working in a distributed, multi-platform environment
3 years' experience working with cloud-native applications
3 years' experience managing technical projects
Cloud certification in AWS is a plus
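The "scripting against APIs" and Splunk responsibilities above can be illustrated with a short Python sketch that builds a request for Splunk's standard REST search-job endpoint (`/services/search/jobs`). The endpoint and the `earliest_time`/`output_mode` parameters are part of Splunk's documented REST API; the host, index, and sourcetype names below are hypothetical.

```python
def build_splunk_search_request(base_url, spl_query, earliest="-15m"):
    """Build (url, form_data) for Splunk's REST search-job endpoint.

    Constructs the request only; actually sending it (e.g. with
    `requests.post` plus an auth token) is left to the caller.
    """
    if not spl_query.lstrip().startswith("search"):
        spl_query = "search " + spl_query  # SPL must begin with a command
    url = base_url.rstrip("/") + "/services/search/jobs"
    data = {
        "search": spl_query,
        "earliest_time": earliest,   # relative time modifier, e.g. -15m
        "output_mode": "json",       # ask Splunk for JSON responses
    }
    return url, data

# Hypothetical query for errors from a Kubernetes-hosted app's logs:
url, data = build_splunk_search_request(
    "https://splunk.example.com:8089",
    "index=k8s sourcetype=kube:container:payments level=ERROR",
)
print(url)  # https://splunk.example.com:8089/services/search/jobs
```

A script like this is the kind of building block that a Jenkins or Ansible automation job could wrap to verify, after an install or upgrade, that the monitoring system is still ingesting and searchable.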
23/04/2024
Full time