LA International Computer Consultants Ltd
Glasgow, Lanarkshire
PostgreSQL DBA - Inside IR35 - Hybrid in Glasgow (ideally 3 days per week) - 12 months+ contract

o PostgreSQL/Greenplum: 6+ years of senior database administration experience with enterprise Greenplum databases; well versed in Greenplum installation, upgrades, and performance tuning.
o Has worked cross-functionally with numerous DBA and development teams located globally, communicating effectively in writing and verbally across consulting/collaboration, engineering, and 24/7 support.
o Excellent experience monitoring databases and identifying expensive, long-running queries, using query plans and analysis to tune queries and improve overall performance.
o Experience monitoring database logs for Greenplum parameter errors and fine-tuning database configuration settings such as statement_mem and workfile limits to improve overall database performance.
o Well versed in shell scripting and scheduling jobs: daily stats collection, vacuum/analyze on catalog tables, identifying skewed tables, cron jobs, and overall automation of effort.
o Has worked with multiple DBAs across regions and contributed to developing and coaching junior members on assignment of work and ownership of deliverables.
o Experience with Greenplum Command Center monitoring: reviewing gpperfmon data to analyse the database workload and the daily number of queries, reviewing current DB settings, and planning capacity expansion.
o Experience in database performance monitoring and reviewing segment health to ensure overall DB performance stays healthy and follows best practices.
o Deep experience managing DR environments using backup and restore across different data centres/regions.

* The client evaluates on the following:
o Installation
o Replication
o Performance
o DB objects
o PSG config params
o Architecture
o Statistics
o Ansible
o Shell
o Communication
o Attitude
o Basic SQL development

* For PostgreSQL, these are additional requirements:
o Basic hands-on working skill; needs to understand and know regular day-to-day DBA job-role questions.
o Able to explain the different scenarios a DBA comes across on a day-to-day basis.
o SQL writing.
o Prior scripting or automation experience.

LA International is an HMG-approved ICT recruitment and project solutions consultancy, operating globally from the largest single site in the UK as an IT consultancy or as an employment business & agency depending upon the precise nature of the work. For security-cleared jobs or non-clearance vacancies, LA International welcomes applications from all sections of the community and from people with diverse experience and backgrounds.

Award-winning LA International, winner of the Recruiter Awards for Excellence, Best IT Recruitment Company, Best Public Sector Recruitment Company, and overall Gold Award winner, has now secured the most prestigious business award that any business can receive, The Queen's Award for Enterprise: International Trade, for the second consecutive period.
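The skew-identification duty in the listing above can be sketched briefly. This is a minimal, hypothetical helper, not part of the role description: given per-segment row counts for a table (as a DBA might pull from a distribution query or Greenplum's gp_toolkit views), it computes a simple skew ratio of the largest segment to the average and flags tables over a chosen threshold. The function names and the 1.5 threshold are assumptions.

```python
# Hypothetical helper: flag Greenplum tables whose rows are unevenly
# distributed across segments. Input: {table_name: [rows per segment]}.
def skew_ratio(segment_counts):
    """Largest segment's row count divided by the mean across segments."""
    avg = sum(segment_counts) / len(segment_counts)
    if avg == 0:
        return 0.0
    return max(segment_counts) / avg

def find_skewed_tables(tables, threshold=1.5):
    """Return table names whose skew ratio exceeds the threshold."""
    return sorted(
        name for name, counts in tables.items()
        if skew_ratio(counts) > threshold
    )

if __name__ == "__main__":
    tables = {
        "orders":   [1000, 1010, 990, 1005],  # evenly distributed
        "sessions": [5000, 120, 130, 110],    # badly skewed
    }
    print(find_skewed_tables(tables))  # ['sessions']
```

In practice the per-segment counts would come from a `SELECT gp_segment_id, count(*) ... GROUP BY 1` query, and the flagged tables would be candidates for a new distribution key.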
18/09/2024
Project-based
Application Support Engineer

Our client is an industry-leading organisation with a global client base seeking an Application Support Engineer. The focus is on products and services within the environmental and sustainability sector. We are seeking a skilled and motivated Application Support Engineer to join our dynamic engineering team to provide second- and third-line technical support to clients. The software is run as SaaS offerings hosted on cloud environments. This is a customer-focused role requiring a strong technical background, customer-facing communication skills, and a proactive approach to problem-solving: ensuring customer satisfaction by resolving complex technical issues and working closely with the development team to investigate and resolve technical issues that affect product reliability and performance.

Principal accountabilities for the Application Support Engineer:
Provide 2nd/3rd-line technical support for the SaaS solution, troubleshooting software issues.
Investigate and resolve technical issues through analysis of product logs and configuration files, providing timely resolution and guidance.
Prioritise requests and tickets and follow them through to completion.
Develop and maintain in-depth knowledge of the company's cloud service products.
Create clear, easy-to-follow knowledgebase articles for internal use and for customers.
Provide appropriate feedback to the business regarding service and issues.
Deliver customer service, driving customer satisfaction and delivering to SLAs.

Education/Experience:
Degree in Engineering or equivalent.
Customer-facing skill set, managing internal and external stakeholders.
Experience of using SaaS solutions in cloud environments, e.g. AWS/Azure or equivalent.
Software development skills, e.g. C/C++, Java, Python.
Help desk ticketing systems, e.g. Jira.
Technical capability with analytical and problem-solving skills.

Desirable skills and experience:
Experience of IT infrastructure monitoring tools, e.g. Nagios, AWS CloudWatch.
Knowledge/experience using Linux or equivalent command-line interfaces.
REST API configuration.
Database experience, e.g. SQL.
Networking and server infrastructure knowledge.
Technical background in telemetry or an equivalent field, e.g. telco, IT networks.
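The log-analysis side of the support role above can be illustrated with a small sketch: count ERROR entries per component so the noisiest area is triaged first. The log format, component names, and function name here are entirely hypothetical, chosen only to show the kind of triage script such a role might write.

```python
import re
from collections import Counter

# Hypothetical log line format for illustration only:
# "2024-09-18 10:01:02 ERROR [ingest] timeout talking to broker"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>\w+)\s+\[(?P<component>[^\]]+)\]")

def error_counts(lines):
    """Count ERROR entries per component to prioritise investigation."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return counts

if __name__ == "__main__":
    sample = [
        "2024-09-18 10:01:02 ERROR [ingest] timeout talking to broker",
        "2024-09-18 10:01:03 INFO  [api] request served",
        "2024-09-18 10:01:09 ERROR [ingest] timeout talking to broker",
        "2024-09-18 10:01:11 ERROR [api] 500 on /v1/report",
    ]
    print(error_counts(sample).most_common())  # [('ingest', 2), ('api', 1)]
```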
18/09/2024
Full time
Security Engineering - PAM - Secrets

SALARY: $150K - $160K plus 15% bonus
LOCATION: DALLAS

Keywords: secrets and privileged access management; hardware security modules (HSMs); integrating secrets management; PKI; sessions management; Python, Ansible, Terraform, and YAML; MS PKI; HashiCorp Vault; CyberArk (AIM, PSMP, PVWA, CPM, Vault); leveraging APIs; cryptographic operations.

As a member of the Secrets and Privileged Access Management team, you are responsible for applying your skills and knowledge to Privileged Access and Secrets Management solutions, hardware security modules (HSMs), and encryption practices. You must take a security-first approach when deploying or integrating Secrets Management, PKI, Sessions Management, or authentication integrations under the team's purview, using agile methodology.

Primary duties and responsibilities (to perform this job successfully, an individual must be able to perform each primary duty satisfactorily):
Design, document, deploy, and support PAM solutions covering vaulting, session management, and hardcoded-credential removal, and support integrations with the PAM solution for secure secrets management in app-to-app communication.
Design, document, develop, and support PAM integrations for automated password rotation and for establishing secure sessions through a jump-host solution.
Design, document, implement, and maintain our Certificate Authority PKI infrastructure. Ensure certificates are correctly issued, renewed, and revoked as necessary. Implement and manage certificate templates and revocation configurations.
Implement, configure, and maintain HSMs to support PKI operations. Work with vendors to ensure systems are patched and up to date.
Address and troubleshoot issues related to PAM, PKI, and HSM solutions.
Implement and manage encryption tools and software. Ensure team solutions are monitored following best practice.
Use scripting and automation skills to convert manual maintenance and audit functions into orchestrated automation.
Track and execute work following agile best practices, with the self-motivation to take a task from ideation to implementation.
Operate in a highly regulated, complex operational environment and collaborate with the internal SMEs required to maintain and mature the PAM program.
Document, review, and update runbooks supporting Secrets and Privileged Access Management solutions.
Develop and maintain encryption standards, practices, and solutions.
Develop and maintain documentation related to PAM policies, procedures, and configurations.

Qualifications:
Experience with enterprise PAM tools and technologies such as the various CyberArk and HashiCorp Vault components and the underlying infrastructure supporting those technologies.
Experience with various integration techniques for Secrets Management and Privileged Management to target systems such as databases, directories, and applications.
Experience with Microsoft Certificate Authority PKI infrastructure.
Experience with hardware security modules (HSMs).
Experience with Python, Ansible, Terraform, and YAML packages.
In-depth knowledge of PAM and Secrets Management best practices.
In-depth knowledge of encryption algorithms, protocols, and best practices.
Working knowledge of system monitoring techniques and tooling.
Working knowledge of the cloud ecosystem and CI/CD deployments with Terraform, Ansible, and Jenkins pipelines.
5+ years of experience with PAM tools and technologies.
3+ years of experience in PKI infrastructure, including Microsoft Certificate Authority.
Bachelor's degree in Computer Science, Information Technology, or a related field.

Technical skills:
Hands-on deployment, management, and troubleshooting experience with HSMs, MS PKI, HashiCorp Vault, and all CyberArk components (AIM, PSM/P, PVWA, CPM, Vault).
Hands-on experience leveraging APIs.
Knowledge of cryptographic operations, secure key storage, and key life cycle management with HSM and encryption tools.
Knowledge of end-to-end encryption and of data-at-rest and data-in-transit protection methodologies.
Ability to interpret logs and events related to PKI, HSMs, encryption, and PAM activities.

Education and/or experience:
5+ years of experience with security engineering activities and testing.
5+ years of experience with privileged access management platforms.
3+ years of experience with HSM, PKI, and Microsoft Certificate Authority.
2+ years of experience with DevOps/DevSecOps (e.g. GitOps, version control, RESTful APIs).
2+ years of experience with cloud architecture and deployments.

Certificates or licenses:
CyberArk Defender, Sentry, or Guardian
HashiCorp Certified: Terraform Associate
HashiCorp Certified: Vault Associate
Certified Information Systems Security Professional (CISSP)
AWS Certified Security - Specialty
CompTIA Security+
Microsoft Certified: Security Engineer Associate
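The "leveraging APIs" duty in the listing above can be sketched against HashiCorp Vault's HTTP API: the KV version 2 engine serves secret reads at `/v1/<mount>/data/<path>` with the client token in the `X-Vault-Token` header. The helper below only constructs the request (it never contacts a server), and the mount name, secret path, and token value are hypothetical.

```python
# Minimal sketch: build a read request for HashiCorp Vault's KV v2 engine.
# The mount point and secret path below are illustrative only.
def kv2_read_request(vault_addr, mount, path, token):
    """Return (url, headers) for a KV v2 secret read; pass to any HTTP client."""
    url = f"{vault_addr.rstrip('/')}/v1/{mount}/data/{path.lstrip('/')}"
    headers = {"X-Vault-Token": token}
    return url, headers

if __name__ == "__main__":
    url, headers = kv2_read_request(
        "https://vault.example.com:8200", "secret", "app/db-credentials", "s.hypothetical"
    )
    print(url)  # https://vault.example.com:8200/v1/secret/data/app/db-credentials
```

A real integration would issue a GET to that URL (e.g. with the `requests` library or the `hvac` client) and read the secret payload from the response's `data.data` field.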
17/09/2024
Full time
Security Engineering - PAM - Secrets

SALARY: $150K - $160K plus 15% bonus
LOCATION: CHICAGO

Keywords: secrets and privileged access management; hardware security modules (HSMs); integrating secrets management; PKI; sessions management; Python, Ansible, Terraform, and YAML; MS PKI; HashiCorp Vault; CyberArk (AIM, PSMP, PVWA, CPM, Vault); leveraging APIs; cryptographic operations.

As a member of the Secrets and Privileged Access Management team, you are responsible for applying your skills and knowledge to Privileged Access and Secrets Management solutions, hardware security modules (HSMs), and encryption practices. You must take a security-first approach when deploying or integrating Secrets Management, PKI, Sessions Management, or authentication integrations under the team's purview, using agile methodology.

Primary duties and responsibilities (to perform this job successfully, an individual must be able to perform each primary duty satisfactorily):
Design, document, deploy, and support PAM solutions covering vaulting, session management, and hardcoded-credential removal, and support integrations with the PAM solution for secure secrets management in app-to-app communication.
Design, document, develop, and support PAM integrations for automated password rotation and for establishing secure sessions through a jump-host solution.
Design, document, implement, and maintain our Certificate Authority PKI infrastructure. Ensure certificates are correctly issued, renewed, and revoked as necessary. Implement and manage certificate templates and revocation configurations.
Implement, configure, and maintain HSMs to support PKI operations. Work with vendors to ensure systems are patched and up to date.
Address and troubleshoot issues related to PAM, PKI, and HSM solutions.
Implement and manage encryption tools and software. Ensure team solutions are monitored following best practice.
Use scripting and automation skills to convert manual maintenance and audit functions into orchestrated automation.
Track and execute work following agile best practices, with the self-motivation to take a task from ideation to implementation.
Operate in a highly regulated, complex operational environment and collaborate with the internal SMEs required to maintain and mature the PAM program.
Document, review, and update runbooks supporting Secrets and Privileged Access Management solutions.
Develop and maintain encryption standards, practices, and solutions.
Develop and maintain documentation related to PAM policies, procedures, and configurations.

Qualifications:
Experience with enterprise PAM tools and technologies such as the various CyberArk and HashiCorp Vault components and the underlying infrastructure supporting those technologies.
Experience with various integration techniques for Secrets Management and Privileged Management to target systems such as databases, directories, and applications.
Experience with Microsoft Certificate Authority PKI infrastructure.
Experience with hardware security modules (HSMs).
Experience with Python, Ansible, Terraform, and YAML packages.
In-depth knowledge of PAM and Secrets Management best practices.
In-depth knowledge of encryption algorithms, protocols, and best practices.
Working knowledge of system monitoring techniques and tooling.
Working knowledge of the cloud ecosystem and CI/CD deployments with Terraform, Ansible, and Jenkins pipelines.
5+ years of experience with PAM tools and technologies.
3+ years of experience in PKI infrastructure, including Microsoft Certificate Authority.
Bachelor's degree in Computer Science, Information Technology, or a related field.

Technical skills:
Hands-on deployment, management, and troubleshooting experience with HSMs, MS PKI, HashiCorp Vault, and all CyberArk components (AIM, PSM/P, PVWA, CPM, Vault).
Hands-on experience leveraging APIs.
Knowledge of cryptographic operations, secure key storage, and key life cycle management with HSM and encryption tools.
Knowledge of end-to-end encryption and of data-at-rest and data-in-transit protection methodologies.
Ability to interpret logs and events related to PKI, HSMs, encryption, and PAM activities.

Education and/or experience:
5+ years of experience with security engineering activities and testing.
5+ years of experience with privileged access management platforms.
3+ years of experience with HSM, PKI, and Microsoft Certificate Authority.
2+ years of experience with DevOps/DevSecOps (e.g. GitOps, version control, RESTful APIs).
2+ years of experience with cloud architecture and deployments.

Certificates or licenses:
CyberArk Defender, Sentry, or Guardian
HashiCorp Certified: Terraform Associate
HashiCorp Certified: Vault Associate
Certified Information Systems Security Professional (CISSP)
AWS Certified Security - Specialty
CompTIA Security+
Microsoft Certified: Security Engineer Associate
17/09/2024
Full time
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible* A prestigious financial company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. The candidate will be part of a small Innovation team of engineers that collaborates with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive, requirements-driven solution design. The candidate will develop technology engineering requirements and work on proof-of-concept and laboratory testing efforts using modern approaches to process and automation. The candidate will build, deploy, document, and manage lab environments within on-prem/cloud data centers to be used for proofs of concept and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap.
Responsibilities:
Engineer and maintain lab environments in the public cloud and the data centers using Infrastructure as Code techniques.
Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate proofs of concept for infrastructure, applications, and services that impact the Technology Roadmap.
Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT.
Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems.
Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security.
Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection.
Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery.
Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments that will be used to validate assumptions within high-level solution designs.

Qualifications:
Ability to think strategically and map architectural decisions/recommendations to business needs.
Advanced problem-solving skills and a logical approach to solving problems.
[Required] Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc.
[Preferred] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
[Preferred] Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications
16/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*
A prestigious financial institution is seeking a Principal Financial IT Infrastructure Architect. The candidate will join a small Innovation team of architects that collaborates with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation, and continuous improvement of solutions running on the core Real-Time data streaming and compute infrastructure platforms such as Kafka, Flink, and Kubernetes in a hybrid environment.
Responsibilities:
Collaborate with cross-functional teams to design, create, and review software application architectures specifically tailored for streaming use cases
Ensure fault tolerance, scalability, and low-latency processing in streaming applications
Collaborate with DevOps teams to define deployment strategies and manage scalability
Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks
Drive implementation of best practices for efficient data serialization, compression, and network communication
Create and maintain architecture documentation, including system diagrams, data flows, and component interactions
Maintain vendor relationships and participate in escalation sessions and postmortems
Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems
Stay informed about industry trends related to Kafka, Flink, and Kubernetes
Qualifications:
[Required] Effective communication skills to collaborate and evangelize best practices with technical stakeholders
[Required] Advanced problem-solving skills and a logical approach to solving problems
[Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink
[Required] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
Technical Skills:
Expert-level knowledge of Kafka
Expert-level knowledge of Flink
In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure
Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
Education and/or Experience:
[Preferred] Bachelor's or Master's degree in an engineering discipline
[Required] 10+ years of experience architecting mission-critical cloud and on-prem Real-Time data streaming and event-driven architectures
[Required] 10+ years of experience with Java
[Required] 5+ years of specific Kafka and Flink experience
[Preferred] 5+ years of Kubernetes experience
Certificates or Licenses:
[Preferred] Confluent Certified Developer for Apache Kafka
[Preferred] AWS certifications (e.g. Solutions Architect Associate)
[Preferred] Certified Kubernetes Application Developer
13/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*Hybrid, 3 days onsite, 2 days remote*
A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role focuses on engineering and maintaining lab environments in the public cloud and data centers using IaC techniques. Candidates will need experience with DevOps tools such as Terraform, Ansible, Jenkins, Kubernetes, and AWS, as well as experience developing tools and automating tasks using languages such as Python, PowerShell, and Bash.
Responsibilities:
Engineer and maintain lab environments in the public cloud and data centers using Infrastructure as Code techniques
Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for company infrastructure, applications, and services that impact the Technology Roadmap
Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT
Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems
Find opportunities to improve existing infrastructure architecture for performance, support, scalability, reliability, and security
Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments used to validate assumptions within high-level Solution Designs
Qualifications:
Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience
7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment
3+ years of experience contributing to the architecture of cloud and on-prem solutions
Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, or Ruby
Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, or Amazon Kinesis
In-depth knowledge of on-premises, cloud, and hybrid networking concepts
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
13/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent full-time role*
A prestigious financial firm is looking for a Principal Software Engineer. This engineer will build software solutions to test systems of financial products and will need extensive experience with Java, Python, Terraform, CI/CD, DevOps, and containerization. The ideal candidate will have experience working in a highly regulated financial environment.
Responsibilities:
Develop and maintain software and environments used to implement and test systems for pricing, margin risk, and stress testing of financial products and derivatives
Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources
Develop CI/CD pipelines
Configure, execute, and monitor execution pipelines for model testing, backtesting, and monitoring
Contribute to the development of QRM's databases and ETLs
Integrate model prototypes, the model library, and model testing tools using best industry practices and innovations
Create unit and integration tests; build and enhance test automation tools
Participate in code reviews and demo accomplishments
Write technical documentation and user manuals
Provide production support and perform troubleshooting
Qualifications:
Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
10+ years of experience as a software developer with exposure to cloud or high-performance computing
Strong programming skills: able to read and write code in a programming language (e.g. Java, C++, Python) in a collaborative software development setting; the role requires advanced coding, database, and environment manipulation skills
Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products
DevOps experience, with a good command of CI/CD processes and tools (e.g. Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness)
Experience with containerized deployment in cloud environments
Experience with cloud technology (AWS preferred), infrastructure-as-code (e.g. Terraform), and managing and orchestrating containerized workloads (e.g. Kubernetes)
Experience with logging, profiling, monitoring, and telemetry (e.g. Splunk, OpenTelemetry)
Good command of database technology and query languages (SQL) as well as non-relational databases and other Big Data technologies, including efficient storage and serialization protocols (e.g. Parquet, Avro, Protocol Buffers)
Experience with automated quality assurance frameworks (e.g. JUnit, TestNG, PyTest)
Experience with productivity tools such as Jira, Confluence, and MS Office
Experience with scripting languages such as Python is a plus
12/09/2024
Full time
My client is a leading global Digital Consultancy seeking a Storage Engineer - Senior IT Specialist to join their team at a client facility in Cumbria, UK.

Note: Must hold active SC Clearance and be eligible for DV Clearance.

The role will involve working across multiple IT disciplines (such as Servers, storage, databases, security tools, and audio-visual equipment), but the primary requirement is for a storage and database engineer; as such, experience with Dell EMC products is essential.

Role Responsibilities:
Incident management through to resolution.
Carrying out regular maintenance tasks to ensure the systems under support continue to meet availability and performance requirements.
Carrying out storage monitoring.
Upgrading hardware, firmware, and software under Change control.
Racking and cabling hardware.
Patching network connections.

What are the working conditions?
Located at the customer site in Cumbria.
Shift work, following a pattern of 2 x 12.5-hour day shifts, 2 x 12.5-hour night shifts, and 6 rest days.
Member of a shift team containing one Network specialist, one Infrastructure specialist, and one Storage, Database, and Security Tooling specialist. There are five shift teams in total.

Role Requirements:
An experienced Storage Engineer with excellent knowledge of Dell EMC products (preferably 4 years minimum experience).
Hands-on experience in other areas of IT, such as Servers, networks, security tools, and audio-visual equipment.
ITIL 4 qualified, with experience of implementing system modifications via Change control.
Ability and motivation to learn new IT skills.
Good stakeholder management.
Excellent written and verbal communication skills.
A strong analytical troubleshooting approach.

Rewards and benefits:
Shift allowance; 25 days annual paid leave; wellbeing programs and work-life balance, including integration and passion-sharing events; private medical and dental care; pension contributions up to 10%; flex benefits program; courses and certification opportunities; conferences and expert communities; charity and eco initiatives.
12/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

Prestigious Financial Institution is currently seeking a Senior Java Software Engineer. The candidate will support and work collaboratively with business analysts, team leads, and the development team, contributing to scalable and resilient hybrid and Cloud-based data solutions supporting critical financial market clearing and risk activities, and will collaborate with other developers, architects, and product owners to support the enterprise transformation into a data-driven organization. The Application Developer will be a team player and work well with business, technical, and non-technical professionals in a project environment.

Responsibilities:
Support the development of Real Time and batch applications for business requirements in the agreed architecture framework and Agile environment.
Thoroughly analyze requirements, then develop, test, and document software to ensure proper implementation.
Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented.
Perform application and project risk analysis and recommend quality improvements.
Assist Production Support by providing advice on system functionality and fixes as required.
Communicate clearly and concisely all time delays or defects in the software immediately to appropriate team members and management.
Resolve security vulnerabilities.

Qualifications:
The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
[Required] 3+ years of experience building high-speed, Real Time and batch solutions
[Required] 3+ years of experience in Java
[Preferred] Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
[Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
[Preferred] Experience with cloud technologies and migrations; AWS foundational services such as VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, and IAM preferred
[Preferred] Experience developing and delivering technical solutions using public cloud service providers like Amazon and Google
[Required] Experience writing unit and integration tests with testing frameworks like JUnit and Citrus
[Required] Experience working with various types of databases (Relational, NoSQL)
[Required] Experience working with Git
[Preferred] Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
[Preferred] Familiarity with monitoring tools and frameworks like Splunk, ElasticSearch, Prometheus, and AppDynamics
[Required] Hands-on experience with Java version 8 onwards, Spring, Spring Boot, and REST APIs

Technical Skills:
[Required] Java-based software development experience, including a deep understanding of Java fundamentals such as data structures, concurrency, and multithreading
[Required] Experience in object-oriented design and software design patterns

Education and/or Experience:
[Required] BS degree in Computer Science or a similar technical field
11/09/2024
Full time
NO SPONSORSHIP

Software Engineering - Python, Java, Terraform, DevOps, Containerization

Candidates do not necessarily need to have worked within a QRM portal, but they must understand the industry and come from a highly regulated background, preferably financial. We are looking for a hard-core developer to work within quantitative risk management, developing applications and solutions for the QRM team. The team does not build models; it automates them. Candidates will typically hold a master's degree in mathematics, statistics, physics, or computer science, and may even have a PhD. They need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++. The team develops and maintains risk model software in production for managing the clearing fund and stress testing.

Keywords: AWS, CI/CD pipelines, Java, C#, Python, Agile/Scrum, financial products a plus (markets, financial derivatives, equities, interest rates, commodity products), Java preferred, Infrastructure as Code, Kubernetes, Terraform, Splunk, OpenTelemetry, SQL, Big Data, scripting in Python.

Responsibilities:
Develop and maintain software and environments used to implement and test systems for pricing, margin risk, and stress testing of financial products and derivatives.
Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
Develop CI/CD pipelines.
Configure, execute, and monitor execution pipelines for model testing, backtesting, and monitoring.
Contribute to the development of QRM's databases and ETLs.
Integrate model prototypes, the model library, and model testing tools using industry best practices and innovations.
Create unit and integration tests; build and enhance test automation tools.
Participate in code reviews and demo accomplishments.
Write technical documentation and user manuals.
Provide production support and perform troubleshooting.

Qualifications:
Strong programming skills: able to read and write code in a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills.
Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments.
Proficiency in technical and/or scientific documentation (eg white papers, user guides).
Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
Experience with Agile/Scrum or another rapid development framework.
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
Experience with containerized deployment in cloud environments.
Experience with cloud technology (AWS preferred), Infrastructure as Code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).
Experience with logging, profiling, monitoring, and telemetry (eg Splunk, OpenTelemetry).
Good command of database technology and query languages (SQL), non-relational databases, and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers).
Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest).
Experience with high-performance and distributed computing.
Experience with productivity tools such as Jira, Confluence, and MS Office.
Experience with scripting languages such as Python is a plus.
Experience with numerical libraries and/or scientific computing is a plus.

Education and/or Experience:
Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
7+ years of experience as a software developer with exposure to cloud or high-performance computing.
11/09/2024
Full time
Senior Engineer, Cloud/Infrastructure Security
Salary: Open + bonus
Location: Chicago, IL
Hybrid: 3 days onsite, 2 days remote
*We are unable to provide sponsorship for this role*

Qualifications:
Bachelor's degree in computer science or a related field
7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment
3+ years of experience contributing to the architecture of Cloud and On-Prem solutions
Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc.
In-depth knowledge of on-premises, cloud, and hybrid networking concepts
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes

Preferred:
Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Familiarity with security standards such as the NIST CSF
Related certifications

Responsibilities:
Engineer and maintain Lab environments in Public Cloud and Data Centers using Infrastructure as Code techniques
Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proofs of Concept for company infrastructure, applications, and services that impact the Technology Roadmap
Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT
Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems
Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security
Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
27/08/2024
Full time