Contract - Performance Testing/Automated Test Systems - Java to Python

The client is migrating from a legacy system to a new system, so the role centres on automated test systems: test cases, converting Java to Python, and Python scripting. UC4 is a plus. Heavy cloud experience is a must. Kafka is a strong plus, but not necessary. The role is all about CI/CD and automation.

LOCATION: CHICAGO - HYBRID, 3 DAYS ONSITE (C2C)

SELLING POINTS:
- Performance testing with open-source tools such as JMeter and Gatling
- Perl and solid Python scripting; familiarity with creating modules that multiply transaction data
- Multiple platforms, stored data, financial environment
- Java, cloud, automation; reviewing Java code and converting it to Python (approx. 20%)
- SDET/QA automation testing using CI/CD concepts

RESPONSIBILITIES:
- Performance testing with open-source tools like JMeter and Gatling; Perl scripting, PowerShell scripting, solid Python scripting, and Java.
- Set up parallel testing environments used to compare existing system business processes and data against a new cloud-based system/platform. The goal is to ensure the new system produces correct results and performs as expected before it can become the official system of record.
- Take raw data, mask it, and create algorithms and solutions that increase the data load feeding into the new Clearing System, with no duplicates or other data issues that would cause it to be rejected.
- Analyze business requirements and functional documents; create solid test strategies that define the test environment, phases of testing, and entrance and exit criteria; and help define the resources and tools needed to execute test cycles.
- Design, develop, and implement automated testing solutions to be utilized in a parallel testing project (Legacy versus OVAT).
- Assist in the setup and maintenance of cloud-based performance and functional test environments in AWS, and define the steps to automate the process for continuous testing and iterative cycles. This includes extensive knowledge of the platform and the ability to troubleshoot environmental issues in the new cloud platform in a timely manner.

REQUIRED:
- Python scripting
- SDET automation testing skills/QA automation engineering
- Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations with a public cloud vendor, preferably using foundational services such as AWS VPCs
- Solid utility building with Python, Perl, and PowerShell
- Test automation using CI/CD concepts

Languages & Technologies: Java, Python scripting

Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, and monitoring tools on-prem and in the cloud.
26/04/2024
Project-based
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent full-time role*

A prestigious company is looking for a Linux Engineer. This engineer will focus on design, support, engineering, and automation for the Linux operating system, and will need hands-on experience with Terraform, Kubernetes, Jenkins, Ansible, AWS, Docker, CI/CD, DevOps, etc.

Responsibilities/Qualifications:
- Bachelor's degree, preferably in a technical discipline (Computer Science, Mathematics, etc.), or an equivalent combination of education and experience
- 8+ years' experience in IT systems installation, operations, administration, and maintenance of cloud systems/virtualized servers
- Hands-on experience with Terraform, Kubernetes, Jenkins, Kafka, GitHub, and configuration management tools such as Ansible
- Relevant experience with configuration and implementation of IaaS, infrastructure as code, AWS, Azure, etc.
- Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools at L3 level
- In-depth system administration knowledge and skills for Red Hat Linux
- Kubernetes: strong knowledge of Kubernetes deployment frameworks/platforms, including Helm, Docker, Rancher, OpenShift, and EKS
- Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers
- Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform
- Strong knowledge of secure cloud infrastructure design and components, such as servers, operating systems, networks, IAM, and storage
- Cloud certifications, specifically AWS Cloud certification, preferred
- Expert knowledge of the core automation development toolchain, including Terraform, Ansible, Jenkins, Git, and Harness
- Mastery of CI/CD best practices in a large organization (GitOps/DevOps, secure builds, secure code promotion, deployments (Harness/Argo), automated testing (app and infra), integration of policy frameworks, cost optimization, SLSA best practices)
- Experience architecting, implementing, and maintaining highly available, mission-critical environments for 24/7 availability
26/04/2024
Full time
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

A prestigious financial company is currently seeking a Senior Linux DevOps Engineer. The candidate will be responsible for the design and support of core platform engineering automation. This role will drive the strategy for infrastructure automation and be charged with improving application adoption, reducing overall operational support, and increasing end-user usability of our platform services. The candidate will provide the team leadership required to support a large, complex L3 Linux-based computing environment and an increasing transition to Linux infrastructure in AWS, assist in driving an infrastructure-as-code mentality throughout the organization, and demonstrate a passion for automation concepts and tools.

Responsibilities:
- Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers
- Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform

Qualifications:
- Hands-on experience with Terraform, Kubernetes, Jenkins, Kafka, GitHub, and configuration management tools such as Ansible
- Relevant experience with configuration and implementation of IaaS, infrastructure as code, AWS, Azure, etc.
- Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools at L3 level
- In-depth system administration knowledge and skills for Red Hat Linux

Technical Skills:
- Kubernetes Experience - Strong knowledge of Kubernetes deployment frameworks/platforms, including Helm, Docker, Rancher, OpenShift, and EKS
- Linux Experience - Advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers; create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform
- Cloud Experience - Strong knowledge of secure cloud infrastructure design and components, such as servers, operating systems, networks, IAM, and storage; cloud certifications, specifically AWS Cloud certification, preferred
- Infra Automation - Expert knowledge of the core automation development toolchain, including Terraform, Ansible, Jenkins, Git, and Harness
- CI/CD Experience - Mastery of CI/CD best practices in a large organization (GitOps/DevOps, secure builds, secure code promotion, deployments (Harness/Argo), automated testing (app and infra), integration of policy frameworks, cost optimization, SLSA best practices)
- Resilient Design - Experience architecting, implementing, and maintaining highly available, mission-critical environments for 24/7 availability
26/04/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
Performance Testing - CI/CD - Open-Source Tools, UC4 (C2C)
LOCATION: CHICAGO - HYBRID, 3 DAYS ONSITE
Long-term contract

Looking for a candidate to do performance testing using open-source tools like JMeter and Gatling, with Perl and solid Python scripting. Must be familiar with creating modules that multiply transaction data across the multiple platforms that store data in a financial environment. Java, cloud, automation; reviewing Java code and converting it to Python (approx. 20%). SDET/QA automation testing using CI/CD concepts. Performance testing with open-source tools like JMeter and Gatling; Perl scripting, PowerShell scripting, solid Python scripting, and Java.

EXPERIENCE REQUIRED:
- Python scripting - familiarity with creating modules that multiply transactional data, and other data-multiplier strategies to be used in test cycles of the Real Time Clearing System
- SDET automation testing skills/QA automation engineering
- Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations with a public cloud vendor, preferably using foundational services such as AWS VPCs
- Solid utility building with Python, Perl, and PowerShell
- Test automation using CI/CD concepts

Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP web services (JAX-WS), RESTful web services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics.

Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and Real Time and historical monitoring tools on-prem and in the cloud.

Also required: web server/application server/container experience; database technologies: DB2, PostgreSQL; operating systems experience; methodologies: Agile, Iterative, and Waterfall.
24/04/2024
Project-based
Role: DevOps Engineer
Salary: Up to £50,000 per annum, dependent on experience
Location: Hybrid/Woking
SC clearance is required for this role.

We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will oversee code releases and deployments and support operational systems.

Skills and experience:
- Active SC clearance
- Experience with cloud technologies, e.g. AWS or Azure
- Programming language experience, e.g. Java, Python, Node.js, or SQL
- Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop

If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
24/04/2024
Full time
Role: DevOps Engineer Salary: Up to £50,000 per annum, dependent on experience Location: Hybrid/Romsey SC clearance is required for this role. We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will oversee code releases and deployments, and support operational systems. Skills and experience: Active SC clearance Experience with cloud technologies, e.g. AWS or Azure Programming language experience, e.g. Java, Python, Node.js or SQL Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
24/04/2024
Full time
Contract - UC4 Automation Engineer Rate: Open Location: Chicago, IL Hybrid: 3 days on-site, 2 days remote Qualifications Python scripting SDET automation testing skills/QA automation engineering Experience with Performance Engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs. Solid utility building with Python, Perl and PowerShell. Test automation using CI/CD concepts. Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API Testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP Web Service (JAX-WS), RESTful Web Service (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, Shell Scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics. Software tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and real-time and historical monitoring tools on-prem and in the Cloud. Web server/app server/container experience. Database technologies: DB2, PostgreSQL Responsibilities Performance testing with open-source tools like JMeter and Gatling. Perl scripting, PowerShell scripting, solid Python scripting and Java. Setting up parallel testing environments that will be used to compare existing system business processes and data to a new cloud-based system/platform. The goal is to ensure that the new system produces correct results and performs as expected before it can become the official system of record.
Take raw data, mask it, and create algorithms and solutions that multiply the data load feeding into our new Clearing System without introducing duplicates or other data issues that would cause records to be rejected. Assist in the setup and maintenance of cloud-based performance and functional test environments in the Cloud (AWS), and define the steps to automate the process for continuous testing and iteration of cycles.
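The masking-and-multiplication responsibility above can be sketched in Python as follows. This is an illustrative sketch only: the field names (`account`, `txn_id`), the masking scheme, and the multiplication factor are hypothetical, not part of the actual Clearing System feed.

```python
# Minimal sketch: mask sensitive fields in raw records, then multiply
# them to increase test load, keeping each synthetic record's key
# unique so the target system does not reject it as a duplicate.
# Field names and the masking scheme are hypothetical.
import hashlib


def mask_account(account_id: str) -> str:
    """Replace a sensitive identifier with a stable masked token."""
    return hashlib.sha256(account_id.encode()).hexdigest()[:12]


def multiply_records(records, factor=10):
    """Yield `factor` masked copies of each input record."""
    for rec in records:
        for i in range(factor):
            copy = dict(rec)
            copy["account"] = mask_account(rec["account"])
            # The suffix keeps every synthetic record unique, so the
            # amplified load is not rejected for duplicate keys.
            copy["txn_id"] = f'{rec["txn_id"]}-{i:04d}'
            yield copy
```

Because the mask is a deterministic hash, the same source account always maps to the same token, which preserves referential consistency across the multiplied data set.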
23/04/2024
Project-based
Red - The Global SAP Solutions Provider
Oslo, Oslo
Data Modeller/Oslo 3 days per week onsite/12 months +/Start ASAP Responsibility: * Work closely with different data domains, business areas and initiatives to define and develop certified data products and information flows. * Build relationships and provide direction on how new solutions can affect our existing services and deliveries. * Lead the "Data Engineering Community of Practice" within our Tech Family * Establish new communication channels for technical knowledge sharing of modern data engineering. Experience: * Experience with data modelling. * Experience with data warehouses, data lakes, data fabric, mesh etc. * Experience with both data and software engineering, and how to combine them to build data products. * Experience with DevOps methods. * Experience with Middleware, ETL/ELT, SQL, Apache Kafka, StreamSets, dbt, Apache Airflow, Snowflake or a similar tooling stack. * Experience with building cloud solutions (AWS/Azure, Serverless, cost engineering etc.) * Strong in automation; use of metadata-driven design, CI/CD, event-driven architecture. * Experience from big and complex organisations with high demands for security and availability across a broad portfolio of different technologies. Qualifications: * Master's (M.Sc.) or bachelor's degree in informatics, computer engineering, cybernetics or similar. * Minimum 3-5 years' relevant work experience * Strong communication skills, both verbal and written.
23/04/2024
Project-based
Senior Backend .NET Developer (.NET, AWS, Terraform, DynamoDB) Location: Manchester - Hybrid (2 days a week in the office) Salary: up to £65,000 plus a great benefits package We are working with a leading organisation that is currently looking for a number of talented Senior .NET Developers to join its brand new development team. Situated within the Enterprise Technology sector, their Product Engineering division is dedicated to crafting top-tier software products for clients. Embracing modern software development practices, they harness Cloud Technologies and DevOps principles to deliver robust, scalable and secure systems. The Role: As a Senior .NET Developer, you will play a pivotal role in developing and maintaining a revolutionary connected data platform system. Collaborating closely with cross-functional teams, the successful candidate will architect and implement solutions that exceed user expectations. If you are passionate about software development with a strong background in .NET technologies, they want you on their team! Key Responsibilities: Developing and maintaining a greenfield connected data platform system. Collaborating with cross-functional teams to deliver high-quality, scalable solutions. Ensuring delivery of very high-quality code and workmanship. Tech Stack: Cloud Platform: AWS Infrastructure-as-Code: Terraform Primary Compute: .NET 7/Linux/Docker/AWS EKS/AWS ECS Worker Compute: AWS Lambda (JavaScript/Python) Primary SQL: AWS Aurora (MySQL/Postgres) NoSQL: AWS DocumentDB/AWS DynamoDB Message Bus: Kafka/AWS MSK, SNS/SQS, AWS EventBridge Web Experience: React Source Control: GitHub Enterprise CI/CD: GitHub Actions Required Skills and Experience: Strong background in Software Engineering, with expertise in .NET, AWS, Terraform, SQL and NoSQL databases, and event-driven systems. Experience working within fast-paced teams, employing Continuous Delivery and modern software engineering practices. Proficiency in delivering clean code to high standards.
Experience in product-driven teams, thriving in Lean/Agile environments. Familiarity with Test-Driven Development (TDD). To be successful in these Senior .NET Developer roles, you will need: Excellent interpersonal and communication skills. Ability to articulate arguments effectively to technical and non-technical stakeholders. A natural inclination towards value delivery and risk minimisation. Hands-on experience working through end-to-end project life cycles within Agile environments. Capacity to manage multiple priorities simultaneously and contribute to broader group strategies. Ability to work independently or collaboratively with minimal supervision. If you're interested in joining this forward-thinking team and making an impact with cutting-edge technology, apply now!
23/04/2024
Full time
Data visualization specialists x5 are needed for a 12-month contract based in Wroclaw, Poland. The ideal Data visualization specialist will have Investment Banking/Finance experience and will be fully bilingual in Polish and English. This is a hybrid role based in Wroclaw, Poland, with at least 3 days per week working onsite, paying 850 PLN to 900 PLN. Skills Create and implement highly scalable and reliable data distribution solutions using VQL, Python, Spark and open-source technologies to deliver data to business components. Work with Denodo, ADLS, Databricks, Kafka, data modelling, data replication, clustering, SQL query patterns and indexing for handling large data sets. Demonstrate experience with Python and data access (NumPy, SciPy, pandas etc.), machine learning (TensorFlow etc.) and AI libraries (ChatGPT etc.) 4-5 years of hands-on experience in developing large-scale applications using data virtualization and/or data streaming technologies. Software engineer/developer focused on cloud-based data virtualization and data delivery technologies Denodo platform familiarity and SQL experience highly desirable Know how to apply standards, methods, techniques and templates as defined by our SDLC, including code control, code inspection and code deployment Design, plan and deliver solutions in a large-scale enterprise environment Work with solution architects and business analysts to define implementation design and coding of the assigned modules/responsibilities with the highest quality (bug-free). Determine technical approaches to be used and define the appropriate methodologies Must be capable of working in a collaborative, multi-site environment to support rapid development and delivery of results and capabilities (i.e. Agile SDLC) Effectively communicate technical analyses, recommendations, status and results to the project management team.
Produce secure and clean code that is stable, operational, consistent and well-performing Role - Data visualization specialists x5 Location - Wroclaw, Poland Rate - 850 PLN to 900 PLN Duration - 12 months
23/04/2024
Project-based
ASSOCIATE PRINCIPAL, APPIAN SOFTWARE ENGINEERING SALARY: $140k - $145k - $152k plus 15% bonus LOCATION: Chicago, IL Hybrid 3 days onsite, 2 days remote Looking for someone to handle the design, development, testing and implementation of Appian software. You will need 5 years of Front End user experience work, JavaScript, automating workflows inside Appian, AWS, Unix/Linux, Java, Python, Node.js, Angular 2.0 or ReactJS, and middleware technologies. Working knowledge of DevOps tools (Terraform, Ansible, Jenkins, Kubernetes, Helm) and CI/CD pipelines. Must have a degree; Appian Certified Developer is required. Contribute to design, technical direction and architecture, including collaborating with various teams to build fit-for-purpose solutions. Applies expert knowledge of Java, Python, JavaScript, NodeJS, Angular 2.0 or ReactJS and middleware technologies in independently designing and developing key services with a focus on continuous integration and delivery Participates in code reviews, proactively identifying and mitigating potential issues and defects, as well as assisting with continuous improvement Drives continuous improvement efforts by identifying and championing practical means of reducing time to market while maintaining high quality Qualifications: 5+ years of Front End, User Experience development (required) 5+ years of experience in JavaScript (required) 3+ years of experience automating workflows inside Appian and in conjunction with integration to other tools (required) 3+ years of experience in React application development (required) 3+ years of hands-on HTML5/CSS3 experience (required) Experience with Java and/or Python (required) Experience with popular JavaScript frameworks such as React, Node JS, Vue, Angular 2.0 (required) Experience working with WebSockets, HTTP/1.1 and HTTP/2 (required) Experience with RESTful APIs and JSON-RPC (required) Ability to write clean, bug-free code that is easy to understand and easily maintainable (required) Experience with BDD
methodologies and automated acceptance testing (required) Technical Skills: 5+ years of hands-on experience in Java, including a good understanding of Java fundamentals such as the Memory Model, Runtime Environment, Concurrency and Multithreading (required) Past/current experience of 3+ years working on a large-scale cloud-native project (platform: Unix/Linux; type of systems: event-driven/transaction processing/high-performance computing) as Technical Lead. This experience should include developing/architecting core libraries or frameworks used by the platform to support fundamental services like storage, alert notifications, security, etc. (required) Appian Process Modeling, Smart Services, Rules and Tempo event services, database, and Web services (required) Experience with cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS's VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. (required) Experience with distributed message brokers using Kafka (required) Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink etc. (required) Experience working with various types of databases: relational, NoSQL, object-based, graph (required) Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines etc. (required) Familiarity with monitoring-related tools and frameworks like Splunk, Elasticsearch, Prometheus, AppDynamics (required) Education and/or Experience: BS degree in Computer Science or a similar technical field Appian Certified Developer
22/04/2024
Full time
Role: DevOps Engineer Salary: Up to £50,000 per annum, dependent on experience Location: Hybrid/Woking SC clearance is required for this role. We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will oversee code releases and deployments, and support operational systems. Skills and experience: Active SC clearance Experience with cloud technologies, e.g. AWS or Azure Programming language experience, e.g. Java, Python, Node.js or SQL Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
22/04/2024
Full time
Role: DevOps Engineer Salary: Up to £50,000 per annum, dependent on experience Location: Hybrid/Romsey SC clearance is required for this role. We are looking for an experienced DevOps Engineer with around 2-3 years' experience in software development. You will oversee code releases and deployments, and support operational systems. Skills and experience: Active SC clearance Experience with cloud technologies, e.g. AWS or Azure Programming language experience, e.g. Java, Python, Node.js or SQL Data technologies experience, e.g. PostgreSQL, MongoDB, Kafka, Hadoop If you are interested in discussing this DevOps Engineer role further, please apply or send a copy of your updated CV to (see below). CBSbutler is acting as an employment agency for this role.
22/04/2024
Full time