*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

Prestigious financial company is seeking a Senior Linux DevOps Engineer. The candidate will be responsible for the design and support of core platform engineering automation. This role will drive the strategy for infrastructure automation and be charged with improving application adoption, reducing overall operational support, and increasing the end-user usability of our platform services. The candidate will provide the team leadership required to support a large, complex L3 Linux-based computing environment and an increasing transition to Linux infrastructure in AWS, assist in driving an infrastructure-as-code mentality throughout the organization, and demonstrate a passion for automation concepts and tools.

Responsibilities:
- Provide advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment, including both virtualized and physical servers.
- Create and patch AMIs, perform pull requests, and write automation code using tools such as Ansible and Terraform.

Qualifications:
- Hands-on experience with Terraform, Kubernetes, Jenkins, Kafka, GitHub, and configuration management tools such as Ansible.
- Relevant experience with configuration and implementation of IaaS and infrastructure as code on AWS, Azure, etc.
- Extensive knowledge of Linux operating systems, Linux shells and standard utilities, and common Linux security tools at the L3 level.
- In-depth system administration knowledge and skills for Red Hat Linux.

Technical Skills:
- Kubernetes Experience - Strong knowledge of Kubernetes deployment frameworks/platforms, including Helm, Docker, Rancher, OpenShift, and EKS.
- Linux Experience - Advanced system administration, operational support, and problem resolution for a large, complex Linux computing environment (virtualized and physical servers); AMI creation and patching; automation with tools such as Ansible and Terraform.
- Cloud Experience - Strong knowledge of secure cloud infrastructure design and components such as servers, operating systems, networks, IAM, and storage. Cloud certifications, specifically AWS certification, preferred.
- Infra Automation - Expert knowledge of the core automation development toolchain, including Terraform, Ansible, Jenkins, Git, and Harness.
- CI/CD Experience - Mastery of CI/CD best practices in a large organization: GitOps/DevOps, secure builds, secure code promotion, deployments (Harness/Argo), automated testing (app and infra), integration of policy frameworks, cost optimization, and SLSA best practices.
- Resilient Design - Experience architecting, implementing, and maintaining highly available, mission-critical environments for 24/7 availability.
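As a hedged illustration of the AMI-patching automation this posting mentions, here is a minimal Python sketch that flags images older than a patch window. The function name, the 30-day window, and the boto3-style `ImageId`/`CreationDate` record shape are assumptions for the example, not requirements of the role:

```python
from datetime import datetime, timedelta, timezone

def amis_due_for_patching(images, max_age_days=30, now=None):
    """Return the IDs of images whose CreationDate exceeds the patch window.

    `images` is a list of dicts shaped like boto3 describe_images() records
    (illustrative assumption); timestamps must carry a UTC offset.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    due = []
    for image in images:
        created = datetime.fromisoformat(image["CreationDate"])
        if created < cutoff:
            due.append(image["ImageId"])
    return due
```

In a real pipeline the returned IDs would feed a rebuild job (e.g., a Packer or Ansible play); that wiring is omitted here.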
26/04/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
Our client is looking for a skilled and enthusiastic Network Engineer to join their team based around Glasgow. The ideal candidate will have a strong grasp of the requirements below. If you feel that you are capable, I would love to hear from you and discuss the position in full.

Duties and Responsibilities
- Design, implement, configure, and manage the organisation's network infrastructure, including LANs, WANs, VPNs, routers, switches, firewalls, and wireless access points.
- Identify and address issues to ensure high availability, reliability, and optimal performance.
- Deploy and maintain the systems infrastructure, including servers, storage solutions, operating systems, virtualisation platforms, and cloud services.
- Manage network and systems capacity planning to accommodate growth and changing computing requirements.
- Collaborate with IT teams worldwide to develop integrated network and systems solutions aligned with business objectives and technology standards.
- Perform regular security assessments and audits to identify vulnerabilities and implement necessary patches, updates, and security protocols.
- Design, implement, and maintain disaster recovery and business continuity plans.
- Provide technical support to end users and other IT teams, addressing network and systems-related incidents and challenges.
- Document network and systems configurations, procedures, and troubleshooting guides to facilitate knowledge sharing and training.
- Stay informed about emerging technologies, industry trends, and best practices in networking and systems engineering.
- Automate network and systems tasks using scripting languages and configuration management tools.
- Work with vendors and service providers on procurement, maintenance, and support of network and systems equipment and software.
- Install hardware for systems and users, as required.
- Package and deploy applications and software updates.
- Identify, propose, contribute to, and manage IT projects for continuous improvement.

Qualifications, Knowledge & Skills
- Bachelor's degree in Computer Science, Information Technology, or a related field, or a minimum of five years of relevant work experience.
- Proven experience as a Network Engineer/Administrator, Systems Engineer/Administrator, or similar role, demonstrating proficiency in both networking and systems administration.
- Strong understanding of network protocols, routing, switching, and network security practices.
- Familiarity with various operating systems, including Windows and VMware ESXi, and experience in system administration.
- Proficiency in configuring and managing virtualisation platforms such as VMware.
- Scripting skills (e.g., PowerShell) for network and systems automation and optimisation.
- Knowledge of hardware components, server architecture, and storage systems (SANs).
- Familiarity with security tools, encryption, certificates, PKI, authentication, and patch management for both networks and systems.
- Excellent communication skills to collaborate effectively with technical and non-technical teams.
- Strong problem-solving abilities for diagnosing and resolving complex network and systems issues.
- Ability to manage multiple tasks, projects, and priorities while adhering to deadlines.

Main benefits:
- Salary
- Life assurance at 4x annual salary
- Critical illness cover at 2x annual salary
- Westfield Health cover - CashPlan and Hospital Plan
- Personal private pension (currently Scottish Widows), salary exchange, 5% company contribution
- 34-day holiday (includes public holidays)
- Contribution to fitness club or classes

Please send a copy of your CV for more information and to discuss your suitability. Hays Specialist Recruitment Limited acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the T&Cs, Privacy Policy and Disclaimers, which can be found on our website.
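To illustrate the kind of scripting this role calls for (e.g., PowerShell for network automation), here is an equivalent sketch using Python's standard-library `ipaddress` module to summarise a subnet; the example CIDR is invented, not the client's actual addressing plan:

```python
import ipaddress

def summarize_subnet(cidr):
    """Return the network address, broadcast address, and usable host count
    for an IPv4 subnet given in CIDR notation (illustrative example only)."""
    net = ipaddress.ip_network(cidr, strict=True)
    return {
        "network": str(net.network_address),
        "broadcast": str(net.broadcast_address),
        # Subtract network and broadcast addresses from the total.
        "usable_hosts": max(net.num_addresses - 2, 0),
    }
```

For example, `summarize_subnet("10.20.30.0/24")` reports 254 usable hosts; the same calculation in PowerShell would typically lean on a module or manual bit arithmetic.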
25/04/2024
Full time
ASSOCIATE PRINCIPAL, APPIAN SOFTWARE ENGINEERING
SALARY: $140k - $145k - $152k plus 15% bonus
LOCATION: Chicago, IL (hybrid: 3 days onsite, 2 days remote)

Looking for someone to design, develop, test, and implement Appian software. You will need 5+ years of Front End/user experience development; JavaScript; experience automating workflows inside Appian; AWS; Unix/Linux; Java; Python; Node.js; Angular 2.0 or React; and middleware technologies, plus a working knowledge of DevOps tooling (Terraform, Ansible, Jenkins, Kubernetes, Helm) and CI/CD pipelines. A degree and Appian Certified Developer certification are required.

Responsibilities:
- Contribute to design, technical direction, and architecture, including collaborating with various teams to build fit-for-purpose solutions.
- Apply expert knowledge of Java, Python, JavaScript, Node.js, Angular 2.0 or React, and middleware technologies in independently designing and developing key services, with a focus on continuous integration and delivery.
- Participate in code reviews, proactively identifying and mitigating potential issues and defects, as well as assisting with continuous improvement.
- Drive continuous improvement efforts by identifying and championing practical means of reducing time to market while maintaining high quality.

Qualifications:
- 5+ years of Front End/user experience development (required)
- 5+ years of experience with JavaScript (required)
- 3+ years of experience automating workflows inside Appian, including integration with other tools (required)
- 3+ years of experience in React application development (required)
- 3+ years of hands-on HTML5/CSS3 experience (required)
- Experience with Java and/or Python (required)
- Experience with popular JavaScript frameworks such as React, Node.js, Vue, Angular 2.0 (required)
- Experience working with WebSockets, HTTP/1.1, and HTTP/2 (required)
- Experience with RESTful APIs and JSON-RPC (required)
- Ability to write clean, bug-free code that is easy to understand and easily maintainable (required)
- Experience with BDD methodologies and automated acceptance testing (required)

Technical Skills:
- 5+ years of hands-on experience in Java, including a good understanding of Java fundamentals such as the memory model, Runtime environment, concurrency, and multithreading (required)
- 3+ years of past or current experience as Technical Lead on a large-scale cloud-native project (platform: Unix/Linux; systems: event-driven/transaction processing/high-performance computing), including developing/architecting core libraries or frameworks used by the platform to support fundamental services such as storage, alert notifications, and security (required)
- Appian Process Modeling, Smart Services, Rules and Tempo event services, database, and web services (required)
- Experience with cloud technologies and migrations on a public cloud vendor, preferably using foundational services such as AWS VPCs, security groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI, and IAM (required)
- Experience with distributed message brokers using Kafka (required)
- Experience with high-speed distributed computing frameworks such as AWS EMR, Hadoop, HDFS, S3, MapReduce, Apache Spark, Apache Hive, Kafka Streams, and Apache Flink (required)
- Experience working with various types of databases: relational, NoSQL, object-based, graph (required)
- Working knowledge of DevOps tools, e.g., Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines (required)
- Familiarity with monitoring tools and frameworks such as Splunk, Elasticsearch, Prometheus, and AppDynamics (required)

Education and/or Experience:
- BS degree in Computer Science or a similar technical field
- Appian Certified Developer
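Since this listing asks for experience with RESTful APIs and JSON-RPC, the following minimal Python sketch shows what a spec-compliant JSON-RPC 2.0 request body looks like; the method and parameter names are hypothetical, not drawn from the posting:

```python
import itertools
import json

# Monotonically increasing request ids, as JSON-RPC 2.0 expects each
# request to carry a unique id for matching responses.
_request_ids = itertools.count(1)

def jsonrpc_request(method, params=None):
    """Serialize a JSON-RPC 2.0 request body (illustrative sketch)."""
    body = {"jsonrpc": "2.0", "method": method, "id": next(_request_ids)}
    if params is not None:
        body["params"] = params  # params are optional per the spec
    return json.dumps(body)
```

The serialized string would then be POSTed to the server's RPC endpoint over HTTP; transport and response handling are omitted here.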
22/04/2024
Full time
Digital Research Infrastructure Engineer - Linux Specialist
PML operations grade 4
£30,000 - £45,000 DOE
Full Time, Open-Ended Appointment

The Role
We have an exciting opportunity at PML for an individual with skills in Linux system administration to join PML's Digital Innovation and Marine Autonomy (DIMA) group. The role provides a business-critical link between scientists, PML Applications (commercial work), and our IT Group to support the Linux computing infrastructure as it continues to evolve, underpinning PML science in multiple areas and across all levels. This ranges from data generation (storage technologies and data management), through processing and analysis (high-performance computing and technologies such as JupyterHub), to making visual outputs for end users (web technologies and virtualisation) to increase the reach and impact of PML science.

About You
You will enjoy working with others to help deliver a modern and reliable digital infrastructure to underpin the world-leading research carried out at PML. You will understand the importance of stability in existing infrastructure but will also be keen to learn and try new technologies. You will have experience of administering Linux systems, ideally using Ubuntu, and will be able to make use of scripts and common tools such as Ansible to manage this. You will understand the importance of taking a proactive approach to identifying and resolving problems and will be able to make use of monitoring software (e.g., Nagios, Grafana) to accomplish this. You will understand best practices in cybersecurity and be able to apply them.

Skills Required
- Linux systems administration and monitoring
- Linux scripting (e.g., Bash and Python)
- Experience in management of data at the terabyte to petabyte scale, and storage technologies such as NFS and S3
- Cybersecurity (understand and apply best practices)
- Container technologies (Docker and Kubernetes)
- High Performance Computing (Slurm)
- Virtualisation (VMware)

Key Deliverables
- Maintain our storage infrastructure to ensure data is distributed across servers based on existing capacity and projected changes in data volumes. This includes regular data moves and liaising with stakeholders to ensure data is backed up and archiving projects are completed as needed.
- Monitor high-performance computing infrastructure to identify and resolve problems, either independently or by working with IT (depending on the nature of the problem).
- Act as a point of contact between scientists and IT to answer questions, help identify solutions, and provide training.
- Work with the data architect to maintain and develop web infrastructure used to provide existing and planned data search and visualisation services.
- Manage the NEODAAS GPU cluster (MAGEO), including liaising with IT, vendors, and system users.

About PML
As a marine-focused charity, we develop and apply innovative science with a view to ensuring ocean sustainability. With over 40 years of experience, we offer evidence-based solutions to societal challenges. Our impact spans from research publications to informing policies and training future scientists. The science undertaken at PML contributes to the UN Sustainable Development Goals by promoting healthy, productive, and resilient oceans and seas. To support its science, PML operates in-house Linux infrastructure used for processing satellite data, running models, and making outputs accessible through web visualisation tools. This infrastructure includes a large amount of storage (6 PB), a High-Performance Computing cluster with over 1,500 cores, a 40-GPU cluster (the MAssive GPU cluster for Earth Observation; MAGEO), and a virtual machine cluster. The role will be part of the Digital Innovation and Marine Autonomy (DIMA) group within PML.

DIMA is a pioneering digital science group dedicated to advancing PML's world-class, cutting-edge environmental research through the use of state-of-the-art digital and autonomous technologies. The team comprises research software engineers, research infrastructure engineers, marine technologists, and scientists who work on a variety of projects using autonomous vessels, satellite data, drones, artificial intelligence, high-performance computing, and data visualisation tools to help deliver PML's goals. The team have an enthusiasm for solving problems through collaboration and shared learning.
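The storage deliverable described in this posting (distributing data across servers based on existing capacity) can be sketched as a simple capacity-aware placement routine. This is an illustrative sketch only; the server names, sizes, and greedy strategy are invented for the example and are not PML's actual tooling:

```python
def place_datasets(free_tb, datasets):
    """Greedily assign each (name, size_tb) dataset to the server with the
    most remaining free capacity, largest datasets first.

    `free_tb` maps server name -> free capacity in TB. Raises ValueError
    if no server can hold a dataset. Illustrative sketch only.
    """
    free = dict(free_tb)  # copy so the caller's view is untouched
    placement = {}
    for name, size in sorted(datasets, key=lambda d: -d[1]):
        server = max(free, key=free.get)  # least-loaded server
        if free[server] < size:
            raise ValueError(f"no server can hold {name} ({size} TB)")
        free[server] -= size
        placement[name] = server
    return placement
```

A production version would also account for projected growth, replication, and I/O balance rather than raw free space alone, as the posting implies.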
12/04/2024
Full time