My European-based customer is searching the market for European citizens only. The Senior Cloud/DevOps Architect/SRE plays a pivotal role in shaping the cloud strategy and implementation for our organization. This position is responsible for designing and implementing scalable, efficient, and secure cloud solutions.

Job Title: Cloud DevOps Architect
Location: 100% remote
Duration: 6 months
Outside IR35
European citizens only

Task Description:
- Design and implement highly scalable, resilient, and performant cloud solutions.
- Define progressive cloud transformation strategies for existing systems.
- Lead the development and optimization of CI/CD pipelines for seamless code deployment and management.
- Automate infrastructure provisioning and configuration to reduce manual intervention and improve efficiency.
- Identify and develop cross-cutting, reusable building blocks to be provided to domain applications.
- Work with security teams to ensure cloud solutions comply with industry best practices and regulatory standards.
- Provide guidance and mentorship to development teams on adopting cloud-native technologies and DevOps practices.
- Collaborate with stakeholders to understand business requirements and translate them into technical specifications.
- Conduct system troubleshooting and problem resolution across various application domains and platforms.
- Stay up to date with emerging trends in cloud computing, DevOps, and software development methodologies.

Professional Skills:
- Strong understanding of cloud computing technology and infrastructure, as well as cloud design patterns and strategies (e.g., AWS, Azure, Google Cloud Platform).
- Solid understanding of cloud-native architecture principles and transformation patterns.
- Deep knowledge of container and container orchestration services (e.g., Kubernetes, Docker).
- Deep knowledge of infrastructure as code and automation tools (e.g., Terraform, Ansible, Crossplane).
- Familiarity with GitOps principles and tooling (e.g., Argo CD, Flux).
- Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps).
- Familiarity with platform engineering principles and tooling (e.g., Backstage).
- Deep knowledge of microservices architecture and serverless computing.
- Familiarity with observability principles and tools (e.g., Prometheus, Grafana, ELK Stack).
- Solid understanding of network, security, and database architecture.
- Familiarity with integration platforms, including API management and messaging solutions.
- Ability to work collaboratively in a team environment and communicate effectively with stakeholders at all levels.
- Strong problem-solving skills and the ability to work under pressure to meet tight deadlines.

European citizens only.
18/04/2024
Project-based
Performance Testing - CI/CD - Open-Source Tools, UC4 (C2C)
Location: Chicago - hybrid, 3 days onsite
Long-term contract

Looking for a candidate to perform performance testing using open-source tools such as JMeter and Gatling, with Perl and solid Python scripting. The candidate should be familiar with creating modules that multiply transactional data across the multiple platforms that store data in a financial environment, and comfortable with Java and cloud automation, including reviewing Java code and converting it to Python. The role is roughly 20% SDET/QA automation testing using CI/CD concepts, alongside performance testing with open-source tools such as JMeter and Gatling, plus Perl scripting, PowerShell scripting, solid Python scripting, and Java.

Experience Required:
- Python scripting: familiarity with creating modules that multiply transactional data, and other data-multiplier strategies used in test cycles of the Real Time Clearing System.
- SDET automation testing skills/QA automation engineering.
- Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations with a public cloud vendor, preferably using cloud foundational services such as AWS VPCs.
- Solid utility building with Python, Perl, and PowerShell.
- Test automation using CI/CD concepts.

Languages & Technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP web services (JAX-WS), RESTful web services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics.

Software Tools and Utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and real-time and historical monitoring tools on-prem and in the cloud.

Also required: experience with web servers/application servers/containers; database technologies (DB2, PostgreSQL); operating systems experience; methodologies (Agile, Iterative, and Waterfall).
17/04/2024
Project-based
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.

Key details:
- Salary: £55,000-95,000 (mid-lead); experienced contractors considered at a rate of £400.00 per day (outside IR35)
- Benefits: 10-25% bonus + healthcare + 10% pension
- Location: Birmingham; can be remote-based, hybrid working, or office-based

Key experience desired/what you will learn:
- Experience developing, deploying, and maintaining machine learning models in production environments.
- Strong understanding of AWS cloud services, especially for building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes).
- Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring.
- Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy).
- Experience with distributed computing frameworks such as Apache Spark is a plus; Airflow would be a bonus.

Role overview: If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI, and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join.

Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
17/04/2024
Full time
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.

Key details:
- Salary: £55,000-95,000 (mid-lead); experienced contractors considered at a rate of £400.00 per day (outside IR35)
- Benefits: 10-25% bonus + healthcare + 10% pension
- Location: Newcastle or Birmingham; can be remote-based, hybrid working, or office-based

Key experience desired/what you will learn:
- Experience developing, deploying, and maintaining machine learning models in production environments.
- Strong understanding of AWS cloud services, especially for building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes).
- Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring.
- Proficiency in Python and relevant data manipulation and analysis libraries (e.g., pandas, NumPy).
- Experience with distributed computing frameworks such as Apache Spark is a plus; Airflow would be a bonus.

Role overview: If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI, and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join.

Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
17/04/2024
Full time
Your new company
I am working with an industry-leading company specialising in cutting-edge multi-link connectivity solutions. Their innovative approach combines various network connections to create a highly efficient and robust virtual pipeline. With their state-of-the-art technology, which is compatible with any customer premises equipment (CPE), and their cloud-agnostic, auto-scaling back end, they deliver optimal performance for mass-market applications.

Your new role
Your new role as a Network & System Engineer will involve a range of responsibilities. You will collaborate closely with senior management to define the long-term roadmap for the network infrastructure. Working alongside the Head of DevOps, you will actively participate in designing and architecting the enterprise network and systems. Your expertise will be crucial in implementing new network functions and systems and ensuring they integrate smoothly with existing infrastructure. In addition, you will play a key role in maintaining the reliability, stability, and performance of the enterprise environment. As a point of escalation, you will provide valuable advice and technical expertise to the test and support teams. Network maintenance, including scheduled system patching, will also be part of your responsibilities. Furthermore, you will collaborate with the Operations team to enhance and maintain the CI/CD pipelines, contributing to the continuous improvement of processes.

What you'll need to succeed
- Strong experience working with Linux systems (Debian, Ubuntu, Red Hat), including software-defined networking.
- Proficiency in configuration management tools to ensure consistent network configuration.
- Solid understanding of switching and routing, VLANs, and VPNs.
- Knowledge of virtualisation technologies, particularly VMware.
- Comfortable writing shell scripts for automation purposes.
- Familiarity with cloud environments such as AWS, Azure, GCP, and OpenStack.
- Experience with Microsoft Azure and/or Microsoft 365 platforms, preferably Azure AD.
- Understanding of system and network monitoring principles; exposure to Zabbix is a plus.
- Knowledge of CI/CD principles.

What you'll get in return
- Competitive salary package based on experience.
- Generous holiday allowance of 25 days, plus bank holidays.
- Private healthcare coverage.
- Life insurance for financial security.
- Convenient car parking facilities.
- Gym membership contribution for a healthy work-life balance.
- Workplace pension scheme for a secure future.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now. Hays EA is a trading division of Hays Specialist Recruitment Limited and acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy, and Disclaimers, which can be found on our website.
17/04/2024
Full time
Salary: £60k
Job Type: Contract (6-month initial, with the option to extend)
Job Location: Newcastle
Workplace Type: Hybrid (3 days in office)

Seeking a highly skilled and innovative full-stack Software Engineer with a minimum of 4 years of hands-on experience in software development. The ideal candidate will have a proven track record of working autonomously and must be proficient in a wide range of programming languages. As a Software Engineer, you will be instrumental in the development and deployment of state-of-the-art software solutions. You will work with cutting-edge technologies, contribute to the full software development life cycle, and collaborate with cross-functional teams.

Key responsibilities:
- Develop serverless applications using AWS Lambda, API Gateway, and other AWS services.
- Leverage AWS infrastructure to build scalable and reliable software solutions; experience with other cloud computing platforms, particularly Azure, is desirable.
- Work full-stack across Java and JavaScript (TypeScript).
- Implement efficient and maintainable code for both front-end and back-end components.
- Implement and maintain CI/CD pipelines.
- Apply Test-Driven Development (TDD) and Behaviour-Driven Development (BDD).
- Conduct browser testing with Cypress and BrowserStack.
- Perform accessibility testing using axe.
- Utilise Grafana for load/stress/break testing.
- Proficiency in SQL and NoSQL databases, including but not limited to Postgres, MySQL, and MongoDB, is preferred.
- Familiarity and expertise with APIs, RESTful services, and microservice architectures are desired.
- Design and implement web front ends using ReactJS, with a focus on Next.js for enhanced performance.
- Manage code and versioning using GitHub.

Skills and experience:
- Hands-on experience with the technologies and tools mentioned above.
- Proven track record of successful software development projects, with a minimum of 4 years of hands-on experience.
- Proficiency in full-stack Java, JavaScript, and TypeScript.
- Experience with AWS, Azure, serverless technologies, ReactJS, and Next.js.
- Strong understanding of Infrastructure as Code (IaC) principles.
- Familiarity with TDD and BDD.
- Excellent communication skills, both written and verbal.
- Ability to collaborate effectively in a multi-functional Agile delivery team.
- Accountable for maintaining the operational stability of developed products, with the capability to drive continuous enhancements to their robustness and resilience.

About FDM
Our people are our passion, and that's why we make your training and career growth our priority. We are a global professional services provider focusing on IT and one of the UK's leading graduate employers, recruiting the brightest talent to become the innovators of tomorrow. With centres across Europe, North America, and Asia-Pacific, and nearly 5,000 consultants currently placed on client sites around the world, FDM has shown exponential growth throughout the years, firmly establishing itself as an award-winning FTSE 250 employer.

Diversity and Inclusion
FDM Group is an Equal Opportunity Employer, and all qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, national origin, age, disability, veteran status, or any other status protected by federal, provincial, or local laws.
17/04/2024
Project-based
Request Technology - Craig Johnson
Chicago, Illinois
*Position is bonus eligible*

A prestigious financial institution is currently seeking an Enterprise Monitoring Technical Lead Engineer with strong Splunk experience. The candidate will lead the investigation, planning, and implementation of the enterprise monitoring system, as well as identify areas for improvement, recommend allocation of resources, and work with solution architects to craft appropriate remediations or enhancements for these systems.

Responsibilities:
- Translate middle and senior management strategic directives into workable technical directives.
- Monitor project status and take remedial action on projects behind schedule and/or over budget.
- Provide subject-matter expertise for ongoing support of third-party tools such as Splunk.
- Provide expert-level technical mentoring to more junior members of the team.
- Resolve complex support issues in non-production and production environments.
- Understand cloud-native applications running on Kubernetes within AWS and how their exposed APIs may be used to monitor them.
- Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data.
- Create procedural and troubleshooting documentation for enterprise monitoring systems and the applications they monitor.
- Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems.

Qualifications:
- Expert understanding of systems administration and change management practices, and of enterprise monitoring and reporting tools.
- Experience scripting and/or coding against APIs.
- In-depth knowledge of commonly used management and monitoring technologies, internet/web-based technologies, and ITIL best practices.
- Experience with technologies used to support microservices and with network technologies.
- AWS log collection, such as CloudTrail, CloudWatch, and VPC Flow Logs.
- Monitoring and reporting using SNMP.
- CI/CD tools such as Artifactory, Jenkins, and Git.
- Cloud-native applications, including Terraform experience.
- Encryption technologies (SSL/TLS, PKI infrastructure management).
- Security controls as applied to software technologies.
- Bachelor's degree in a related area.
- 10+ years of related experience, including 10 years working in a distributed multi-platform environment.
- 3 years of experience working with cloud-native applications.
- 3 years of experience managing technical projects.
- Cloud certification in AWS is a plus.
16/04/2024
Full time
NO SPONSORSHIP
Principal, Software Engineering Enterprise Cloud Monitoring - Splunk
SALARY: $200k - $215k base w/up to 30% bonus
LOCATION: Dallas, TX 3 days onsite, 2 days remote

The role is all about on-premises monitoring and cloud monitoring. The products they are looking for outside of Splunk are Datadog, Dynatrace, and New Relic. Heavy cloud, AWS, EC2, automation, application performance monitoring, enterprise monitoring, EMC Patrol, Tivoli, and regulatory experience.

Responsibilities
- Translate middle and senior management strategic directives into workable technical directives
- Monitor project status and take remedial action on projects behind schedule and/or over budget
- Provide subject matter expertise for ongoing support of third-party tools like Splunk
- Provide expert-level technical mentoring to more junior members of the team
- Resolve complex support issues in non-production and production environments
- Understand Cloud Native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them
- Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data
- Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they monitor
- Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems

Qualifications
- Systems administration and change management practices
- Enterprise monitoring and reporting tools
- Experience scripting and/or coding against APIs
- In-depth knowledge of commonly used management and monitoring technology
- Internet/web-based technologies
- ITIL best practices
- Experience with technologies used to support microservices
- Network technologies
- AWS log collection such as CloudTrail, CloudWatch, and VPC Flow Logs
- Monitoring and reporting using SNMP
- CI/CD tools such as Artifactory, Jenkins, and Git
- Cloud native applications, including Terraform experience
- Encryption technologies (SSL/TLS, PKI infrastructure management)
- Security controls as applied to software technologies
- Bachelor's degree
- 10+ years of related experience, with a minimum of 10 years working in a distributed multi-platform environment, 3 years working with cloud native applications, and 3 years managing technical projects
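The AWS log-collection bullet (CloudTrail, CloudWatch, VPC Flow Logs) often comes down to parsing log records in automation scripts. A minimal sketch parsing one default-format (version 2) VPC Flow Log record — the sample record itself is made up, but the field order follows AWS's documented default format:

```python
# Field order of the default v2 VPC Flow Log format, per AWS docs.
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]


def parse_flow_log(line: str) -> dict:
    """Split a default-format (v2) VPC Flow Log record into named fields."""
    return dict(zip(FLOW_LOG_FIELDS, line.split()))


record = parse_flow_log(
    "2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 443 49152 6 10 840 "
    "1650000000 1650000060 ACCEPT OK"
)
print(record["action"])  # → ACCEPT
```

From here a monitoring script would typically aggregate by `srcaddr`/`action` to spot rejected traffic patterns before forwarding results to the alerting layer.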
16/04/2024
Full time
NO SPONSORSHIP
Principal, Software Engineering Enterprise Monitoring - Splunk
SALARY: $200k - $215k base w/up to 30% bonus
LOCATION: Chicago, IL 3 days onsite, 2 days remote

Looking for a technical team lead over the enterprise Splunk monitoring system. You will be the SME in Splunk monitoring and Cloud Native applications running on Kubernetes within AWS.

Responsibilities
- Translate middle and senior management strategic directives into workable technical directives
- Monitor project status and take remedial action on projects behind schedule and/or over budget
- Provide subject matter expertise for ongoing support of third-party tools like Splunk
- Provide expert-level technical mentoring to more junior members of the team
- Resolve complex support issues in non-production and production environments
- Understand Cloud Native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them
- Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data
- Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they monitor
- Write complex automation scripts using common automation tools, such as Jenkins, Ansible, and Terraform, for the installation, configuration, and/or upgrade of monitoring systems

Qualifications
- Systems administration and change management practices
- Enterprise monitoring and reporting tools
- Experience scripting and/or coding against APIs
- In-depth knowledge of commonly used management and monitoring technology
- Internet/web-based technologies
- ITIL best practices
- Experience with technologies used to support microservices
- Network technologies
- AWS log collection such as CloudTrail, CloudWatch, and VPC Flow Logs
- Monitoring and reporting using SNMP
- CI/CD tools such as Artifactory, Jenkins, and Git
- Cloud native applications, including Terraform experience
- Encryption technologies (SSL/TLS, PKI infrastructure management)
- Security controls as applied to software technologies
- Bachelor's degree
- 10+ years of related experience, with a minimum of 10 years working in a distributed multi-platform environment, 3 years working with cloud native applications, and 3 years managing technical projects
16/04/2024
Full time
Contract - UC4 Automation Engineer
Rate: Open
Location: Chicago, IL Hybrid: 3 days on-site, 2 days remote

Qualifications
- Python scripting; SDET automation testing skills/QA automation engineering
- Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations with a public cloud vendor, preferably using cloud foundational services such as AWS VPCs
- Solid utility building with Python, Perl, and PowerShell
- Test automation using CI/CD concepts
- Languages & technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, Open API/Swagger, SOAP Web Service (JAX-WS), RESTful Web Service (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics
- Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and real-time and historical monitoring tools on-prem and in the cloud
- Web servers/app servers/containers experience
- Database technologies: DB2, PostgreSQL

Responsibilities
- Performance testing with open-source tools like JMeter and Gatling
- Perl scripting, PowerShell scripting, solid Python scripting, and Java
- Set up parallel testing environments used to compare existing system business processes and data against a new cloud-based system/platform; the goal is to ensure the new system produces correct results and performs as expected before it can become the official system of record
- Take raw data, mask it, and create algorithms and solutions that increase the data load feeding into the new Clearing System without duplicates or other data issues that would cause records to be rejected
- Assist in the setup and maintenance of cloud-based performance and functional test environments in the Cloud (AWS) and define the steps to automate the process for continuous testing and iterations of cycles
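The masking responsibility above is usually solved with deterministic tokenization: the same raw value always maps to the same masked token, so duplicate detection and join keys still work on the masked data set. A minimal sketch (the salt, prefix, and token length are arbitrary choices for illustration, not from the posting):

```python
import hashlib


def mask(value: str, salt: str = "fixed-salt") -> str:
    """Deterministically mask a sensitive value.

    Hashing with a fixed salt means equal inputs produce equal tokens,
    which preserves duplicates and referential integrity in test data
    while hiding the original value.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return "MASKED_" + digest[:12]


rows = ["ACC-1001", "ACC-1002", "ACC-1001"]
masked = [mask(r) for r in rows]
# Duplicates in the raw data are still duplicates after masking:
assert masked[0] == masked[2] and masked[0] != masked[1]
```

For production-grade masking one would normally keep the salt in a secrets store rather than in code, since a known salt allows dictionary attacks on low-entropy values.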
16/04/2024
Project-based
Senior Workday Software Engineer/Architect
Market-leading funding management services company with offices in Ireland, the UK, and the MENA region has an urgent requirement for a Senior Workday Software Engineer/Architect. The Workday Software Engineer/Architect will drive engineering projects by developing software solutions and leading a team in their implementation, supervising base- and associate-level engineers as needed. This role is offered on a primarily remote basis, but very occasionally the Workday Software Engineer/Architect may be required to visit the company's offices in Dublin or London. While the role is initially on a contract basis, there is a strong probability of it being made permanent for candidates interested in this form of employment. Either way, the role offers a great opportunity for ambitious software engineering/architecture professionals to further their careers in a dynamic and rewarding environment. Prospective candidates must have previous experience with Workday FIN/HCM.

Responsibilities of role
- Develop and maintain robust, efficient, and extensible solutions using a varied set of languages, frameworks, and tools, including Xpresso, React, TypeScript, GraphQL, NodeJS, Cypress, and Jest
- Deliver reliable software through continuous integration, automated testing, and in-depth code & design reviews
- Collaborate with cross-functional teams to ideate, prototype, plan, and deliver exciting capabilities
- Advocate & champion best practices in both software engineering & agile development
- Build proofs of concept to help shape the future success of our product
- Mentor, coach, and learn from peer teammates through workshops, pair programming, code/design reviews, and documentation

Essential Candidate Requirements
- Third-level degree in Computer Science, IT, or a related area
- 7+ years development experience
- Direct experience leading development teams
- Must have direct experience with Workday FINS and HCM
- CI/CD using Jenkins, SonarQube, Git, Azure DevOps
- Automated release management and integration
- Excellent knowledge of build frameworks (eg maven, gradle, msbuild, junit)
- Java 8 or higher; .NET Framework
- Database packaging and deployment
- Artefact management: Maven and NuGet
- PowerShell and Windows command-line experience
- Automated monitoring with at least one of ELK Stack, New Relic, Elastic.io
- High-level understanding of different SQL formats
- Strong understanding of automated testing and end-to-end test case management
- Hands-on implementation with identity and access management, including SAML2, OAuth, SSL, LDAP, DS, Kerberos
- Strong experience with cloud infrastructure (Azure/Google/AWS or similar)

IMPORTANT! All applicants must have immediate availability to work in the EU as our client cannot provide any kind of Visa or Work Permit sponsorship at present.

To Apply: For more information on this role, please contact Níall or send a current CV along with a brief cover letter through this site.
16/04/2024
Project-based
As a Senior Cloud Native OPS Engineer, you have over 5 years of technical system expertise to perform technical cloud engineering services:
- You configure Azure services and work with Terraform scripting (infrastructure as code), AWS networking/gateways, AWS Landing Zone setup, Lambda, and container services;
- You evaluate and translate requirements into design;
- You evaluate design benefits and trade-offs;
- You validate design compliance and support deployment of the design to ensure the requirements are met;
- You use development tools to efficiently solve technical or business challenges, incl. technology evolution, capacity management, and performance optimization;
- You innovate to present new ideas which improve an existing system/process/service;
- You maintain knowledge of existing technology documents via technical writing;
- You perform (complex) incident resolution and root cause analyses;
- On-call duty for the systems you are responsible for may be required.

Next to proven experience in system software and cloud infrastructure, you have the following core competences: adaptive, analytical thinking, collaborating, flexible, IT infrastructure, result-driven, software development. Knowledge of: public cloud (AWS), CI/CD tooling, AWS Lambda, Python, Terraform, AWS Athena.

As part of our team, you are responsible for the architectural decisions, engineering, integration, and maintenance of the cloud platform. Currently we actively manage AWS & Azure cloud environments and keep an eye on other cloud platforms. The focus is on technology/infrastructure services, not the usage and development methodologies that use the cloud platform; the latter is handled by DevOps teams with whom you will be working closely. You assist in feasibility studies to take on new technological services or improvements and help design the services with a focus on security, maintainability, flexibility, and efficiency. You work together with architects and analysts to come to a proper final design, with product owners and scrum masters to govern the exercise and its allocated resources, and with software engineering to ensure effective positioning and service offerings.
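The posting pairs AWS Lambda with Python. For orientation, the shape of a Lambda function in Python is a plain handler taking an event dict and a context object; the function name and event fields below are generic examples, not anything from this role:

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler: build a JSON response from the event.

    Lambda invokes this with the (already deserialized) event payload;
    `context` carries runtime metadata and is unused in this sketch.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }


# Local invocation for testing; in AWS, Lambda supplies event/context:
print(handler({"name": "ops"}, None))
```

In the infrastructure-as-code workflow the posting describes, the function, its IAM role, and its triggers would typically be declared in Terraform rather than created by hand.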
16/04/2024
Project-based
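The Cloud Native OPS Engineer role above pairs Terraform-provisioned infrastructure with Python Lambda functions. A minimal sketch of such a handler, assuming an API Gateway-style event shape (the field names here are hypothetical, not from the posting):

```python
import json

def lambda_handler(event, context):
    """Hypothetical AWS Lambda handler: validate an incoming request body
    and return an API Gateway-style JSON response."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name")
    if not name:
        # Reject requests missing the required field
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing 'name'"})}
    return {"statusCode": 200,
            "body": json.dumps({"message": f"hello, {name}"})}
```

Keeping the handler a pure function of its event makes it unit-testable locally, with no AWS dependencies.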
ASSOCIATE PRINCIPAL, APPIAN SOFTWARE ENGINEERING SALARY: $140k - $145k - $152k plus 15% bonus LOCATION: Chicago, IL Hybrid 3 days onsite, 2 days remote
Looking for someone to design, develop, test, and implement Appian software. You will need 5 years of Front End/user experience development; JavaScript; experience automating workflows inside Appian; AWS; Unix/Linux; Java; Python; Node.js; Angular 2.0 or React.js; and middleware technologies, plus a working knowledge of DevOps tooling (Terraform, Ansible, Jenkins, Kubernetes, Helm) and CI/CD pipelines. A degree and Appian Certified Developer certification are required.
Contribute to design, technical direction, and architecture, collaborating with various teams to build fit-for-purpose solutions. Apply expert knowledge of Java, Python, JavaScript, NodeJS, Angular 2.0 or ReactJS, and middleware technologies to independently design and develop key services with a focus on continuous integration and delivery. Participate in code reviews, proactively identifying and mitigating potential issues and defects, and assist with continuous improvement. Drive continuous improvement efforts by identifying and championing practical means of reducing time to market while maintaining high quality.
Qualifications:
- 5+ years of Front End/User Experience development (required)
- 5+ years of JavaScript experience (required)
- 3+ years of experience automating workflows inside Appian and integrating it with other tools (required)
- 3+ years of experience in React application development (required)
- 3+ years of hands-on HTML5/CSS3 experience (required)
- Experience with Java and/or Python (required)
- Experience with popular JavaScript frameworks such as React, Node.js, Vue, Angular 2.0 (required)
- Experience working with WebSockets, HTTP 1.1, and HTTP/2 (required)
- Experience with RESTful APIs and JSON-RPC (required)
- Ability to write clean, bug-free code that is easy to understand and easily maintainable (required)
- Experience with BDD methodologies & automated acceptance testing (required)
Technical Skills:
- 5+ years hands-on experience in Java, including a good understanding of Java fundamentals such as the Memory Model, Runtime Environment, Concurrency, and Multithreading (required)
- 3+ years past/current experience as Technical Lead on a large-scale cloud-native project (platform: Unix/Linux; type of systems: event-driven/transaction processing/high-performance computing), including developing/architecting core libraries or frameworks used by the platform to support fundamental services such as storage, alert notifications, security, etc. (required)
- Appian Process Modeling, Smart Services, Rules and Tempo event services, database, and web services (required)
- Experience with cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services such as AWS VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, and IAM (required)
- Experience with distributed message brokers using Kafka (required)
- Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, Apache Flink, etc. (required)
- Experience working with various types of databases: relational, NoSQL, object-based, graph (required)
- Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines (required)
- Familiarity with monitoring tools and frameworks such as Splunk, ElasticSearch, Prometheus, AppDynamics (required)
Education and/or Experience: BS degree in Computer Science or a similar technical field; Appian Certified Developer
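Among the requirements above is experience with RESTful APIs and JSON-RPC. A small illustrative sketch of building and parsing JSON-RPC 2.0 payloads in Python (helper names and the example method are hypothetical, not from the posting):

```python
import json
import itertools

# Monotonically increasing request ids, as JSON-RPC 2.0 expects
_ids = itertools.count(1)

def make_jsonrpc_request(method, params):
    """Serialize a JSON-RPC 2.0 request payload (illustrative helper)."""
    return json.dumps({"jsonrpc": "2.0", "id": next(_ids),
                       "method": method, "params": params})

def parse_jsonrpc_response(raw):
    """Return the result field, or raise if the server sent an error object."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "rpc error"))
    return msg["result"]
```

Because requests and responses are plain JSON strings, the same helpers work over HTTP, WebSockets, or any other transport the posting mentions.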
15/04/2024
Full time
Fruition IT Resources Limited
Basingstoke, Hampshire
DevSecOps Lead Basingstoke, Hybrid (2 days per week) Up to £105,000 + 20% Bonus
Would you be interested in leading the DevSecOps function of one of the top roadside assistance organisations in the UK? Fruition IT are supporting an organisation that has been the saviour of Britain's roads for over 100 years, and, not surprisingly, technology is helping them keep on top. After receiving significant investment, this business is continuing to grow across multiple product channels, including finance, insurance, leisure, and lifestyle services. The organisation also operates a cutting-edge innovation arm that is in the process of building some game-changing products that we'll start to see roll out over the coming months.
What does the role entail? You will be responsible for the delivery of all DevSecOps initiatives, closely collaborating with skilled Software Engineers, Architects, and Product teams to design and build foundational toolsets and capabilities that allow their teams to quickly bring new features to market.
Tech skills needed: AWS (ECS, EKS, Fargate, Lambda, S3, IAM, VPCs, CloudFront); containerisation (Kubernetes/EKS/AKS); databases (SQL/NoSQL); CI/CD.
Package: up to £105,000 base salary, 20% annual bonus, private health care, and much more.
Apply now, or contact (see below) for more information. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
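The tech skills above include AWS IAM. A hedged sketch of the kind of least-privilege policy document a DevSecOps function might generate programmatically (the bucket name and action set are placeholders, not from the posting):

```python
import json

def s3_read_policy(bucket):
    """Build a least-privilege IAM policy document granting read-only
    access to a single S3 bucket. Illustrative only; the bucket name
    and the chosen actions are assumptions."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            # Bucket ARN covers ListBucket; the /* ARN covers GetObject
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy_json = json.dumps(s3_read_policy("example-bucket"), indent=2)
```

Generating policies from code rather than hand-editing JSON keeps them reviewable and testable in CI.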
15/04/2024
Full time
Python Software Engineer - Remote BioTech
Xcede are working with a Cambridge-based BioTech whose cloud-based platform allows users to upload and analyse their data using advanced ML techniques. Your impact on the platform will have a direct effect on patients' lives through faster, more accurate, and more reliable data, which is already being used by their partners to change lives for the better! The role is remote, with the team meeting up once per month for whiteboarding sessions and socials. We are looking for experienced developers to help direct the development of their platform, REST APIs, and web-based UI. You will be working on the Python Back End and, depending on your skills, the Next.js Front End, which is deployed using Terraform to their EKS environment on AWS.
What will you need? A STEM degree from a well-respected university. Full right to work in the UK without restriction, time limit, or sponsorship. Proficiency in Python. Experience building, documenting, and testing REST APIs. Experience with AWS cloud services. The ability to produce maintainable, testable, documented, production-grade code. Experience with CI/CD processes (GitLab preferred). Strong written and verbal communication skills. Experience working in a team using agile methodologies.
Good to have: comfortable with Linux-based operating systems; experience using Django; exposure to JavaScript/TypeScript (a bonus but not essential); familiarity with SQL and good schema design; experience building and optimising Docker containers and Docker Compose.
Please note: sponsorship is not provided for this role. If you would like to hear more, please drop me a message or apply below.
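The posting asks for documented, tested REST APIs in Python. A minimal sketch of the idea using only the standard WSGI interface (the route and payload are hypothetical; the real platform presumably uses a framework such as Django):

```python
import json

def app(environ, start_response):
    """Tiny WSGI app sketching a documented, testable REST endpoint.
    Returns JSON for GET /health; 404 for anything else."""
    if environ.get("PATH_INFO") == "/health":
        body = json.dumps({"status": "ok"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [json.dumps({"error": "not found"}).encode()]
```

Because a WSGI app is a plain callable, it can be unit-tested by invoking it with a fake environ, with no server running: a direct route to the "maintainable, testable, documented" code the posting asks for.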
15/04/2024
Full time
Contract - Performance Testing/Automated Test Systems - Java to Python LOCATION: CHICAGO - HYBRID 3 DAYS ONSITE (C2C)
They are moving from an old system to a new one, so the role is all about automated test systems: test cases, converting Java to Python, and Python scripting. UC4 is a plus. Heavy cloud experience is a must. Kafka is a strong plus, but not necessary. The focus is CI/CD and automation.
SELLING POINTS: performance testing with open-source tools such as JMeter and Gatling; Perl; solid Python scripting; familiarity with creating modules that multiply transaction data across multiple platforms and store data in a financial environment; Java; cloud; automation; the ability to read Java and convert it to Python. Roughly 20% of the role is SDET/QA automation testing using CI/CD concepts.
Performance testing with open-source tools like JMeter and Gatling. Perl scripting, PowerShell scripting, solid Python scripting, and Java. Set up parallel testing environments to compare existing system business processes and data against a new cloud-based system/platform; the goal is to ensure the new system produces correct results and performs as expected before it can become the official system of record. Take raw data, mask it, and create algorithms and solutions that increase the data load feeding into the new Clearing System without duplicates or other data issues that would cause records to be rejected. Analyze business requirements and functional documents and create solid test strategies that define the test environment, phases of testing, and entrance and exit criteria, and help define the resources and tools needed to execute test cycles. Design, develop, and implement automated testing solutions to be used in a parallel testing project (Legacy versus OVAT). Assist in the setup and maintenance of cloud-based performance and functional test environments in AWS, and define the steps to automate the process for continuous testing and iterations of cycles. This includes extensive knowledge of the platform and the ability to troubleshoot environmental issues in the new cloud platform in a timely manner.
REQUIRED: Python scripting; SDET automation testing skills/QA automation engineering; experience with performance engineering concepts and methodologies as well as cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services such as AWS VPCs; solid utility building with Python, Perl, and PowerShell; test automation using CI/CD concepts.
Languages & Technologies: Java, Python scripting. Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, and monitoring tools on-prem and in the cloud.
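The posting's core task is to mask raw data and multiply it to increase test load without introducing duplicates. A hedged sketch in Python (field names, the salt, and the hashing approach are assumptions for illustration, not the client's actual scheme):

```python
import copy
import hashlib

def mask_account(account_id, salt="test-salt"):
    """Deterministically mask an account ID so masked records still join
    consistently across platforms. The salt is a placeholder, not a real
    secret; a production scheme would manage it securely."""
    digest = hashlib.sha256((salt + account_id).encode()).hexdigest()
    return "ACCT-" + digest[:12]

def multiply_transactions(records, factor):
    """Clone masked records `factor` times, giving each clone a unique
    transaction ID so the multiplied load contains no duplicates."""
    out = []
    for i in range(factor):
        for rec in records:
            clone = copy.deepcopy(rec)
            clone["account"] = mask_account(rec["account"])
            clone["txn_id"] = f'{rec["txn_id"]}-{i}'  # keep IDs unique
            out.append(clone)
    return out
```

Deterministic masking matters here: the same source account always maps to the same masked value, so referential integrity survives in the parallel-test data set.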
13/04/2024
Project-based
Software Engineering (AWS DevOps Architect) SALARY: $115K - $135K plus 15% bonus LOCATION: DALLAS Hybrid 2 days onsite
This role supports AWS infrastructure as code (IaC): Terraform, CI/CD, Artifactory, Jenkins, Git, and scripting in Python, Bash, Java, etc., in a distributed multi-platform environment with Docker, Kafka, and containers. Provide subject matter expertise for ongoing support of applications deployed to nonproduction AWS environments and supporting 3rd-party applications.
Primary Duties and Responsibilities: Provide technical guidance to other team members for the design, implementation, and support of infrastructure and cloud architecture and automation technologies. Act as the organization's subject matter expert for cloud, automation, and end-to-end architecture for cloud infrastructure solutions. Maintain overall industry knowledge of the latest trends and technology and demonstrate forward thinking about how technology can support the organizational direction. Design, configure, implement, and support a fully automated workflow for provisioning and maintaining a complex, highly available cloud environment using infrastructure as code. Enable DevOps development activities and complex development tasks involving tools such as Docker, Kafka, and container management systems. Participate in cloud computing environment buildouts, software installation, maintenance, and support. Provide technical guidance to junior team members.
Qualifications: Hands-on experience with agile, DevOps, and CI/CD. Expert understanding of: AWS services such as EC2, S3, RDS, Lambda, IAM, etc.; network technologies; security practices, compliance standards, and monitoring/logging tools; CI/CD tools such as Artifactory, Jenkins, and Git; cloud-native applications, including Terraform experience; technologies used to support microservices.
Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in infrastructure-as-code tools such as Terraform or CloudFormation. Strong scripting and programming skills (Python, Bash, Java, etc.). Experience with MRC environments. Understanding of software development methodologies and Agile practices. Excellent analytical and problem-solving skills, with the ability to troubleshoot and a proactive approach to system optimization. Excellent verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams. Bachelor's degree in a related area. 7-10 years of related experience. Minimum 7 years' experience working in a distributed multi-platform environment. Minimum 3 years supporting enterprise monitoring technologies.
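The posting centers on fully automated provisioning with infrastructure as code. Terraform also accepts JSON syntax (*.tf.json files), so IaC fragments can be generated and validated programmatically. A hedged sketch (the resource name, AMI, and instance type are placeholders, not from the posting):

```python
import json

def ec2_instance_tf(name, ami, instance_type="t3.micro"):
    """Emit a Terraform JSON (*.tf.json) fragment describing one EC2
    instance. All values here are illustrative placeholders."""
    return {
        "resource": {
            "aws_instance": {
                name: {"ami": ami, "instance_type": instance_type}
            }
        }
    }

# Serialized form that Terraform could load as a .tf.json file
fragment = json.dumps(ec2_instance_tf("web", "ami-12345678"), indent=2)
```

Generating configuration from code is one way a pipeline can assert invariants (naming conventions, approved instance types) before anything reaches `terraform plan`.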
02/04/2024
Full time