AI Engineer - ML, Python, AWS, RAG, LLM, ETL - Perm - Hybrid - London - 70-80k Plus Benefits

My client - a global media company - is seeking to recruit an experienced AI Engineer to join their team. You will be responsible for providing technical expertise and leadership across all aspects of the AI engineering process, from data acquisition and tagging to embedding, protection and storage. You will work closely with data scientists, delivery leads, product managers, subject matter experts, and technology teams to develop, automate and scale up advanced AI solutions that address key customer problems, while also helping to develop methodologies and reusable solutions.

Duties include:
- Prepare and preprocess data to ensure it is ready for use by machine learning models and AI solutions.
- Work with a variety of data types, including PDFs, Word documents, Excel files, HTML, audio, video, and text, as well as various databases.
- Build LLM applications, including RAG applications, fine-tuning of LLMs and embedding models, and agent/reasoning applications.
- Deploy LLM applications to cloud infrastructure (AWS experience is a plus).
- Manage production-grade LLM applications, covering RAG evaluation and improvement (RAGAS, advanced retrieval techniques), monitoring and visibility tooling, and scaling LLM applications (Amazon SageMaker).
- Stay current with emerging technologies: keep up to date with the latest developments in data engineering, machine learning, and AI to continually enhance data capabilities.

We are looking for candidates with experience in the following:
- Strong understanding and hands-on experience of various ML techniques and algorithms, including RAG, fine-tuning models, and working with large language models.
- Extensive experience with LangChain and the Python programming language, essential for AI development and data manipulation tasks.
- Proficiency in handling and processing large datasets, ensuring data quality and accessibility, including expertise in vectorised datasets.
- Experience with data integration tools: proficiency in ETL/ELT tools and practices.
- Competency in deploying and managing AI solutions on AWS cloud infrastructure.
- Experience with containerisation technologies such as Docker.
- Understanding and application of MLOps practices for efficient model life cycle management.
- Knowledge of data security and privacy practices.
- Effective problem-solving skills to address challenges encountered in AI development projects.
- Strong collaboration and communication skills for effective teamwork and conveying technical concepts clearly to stakeholders.
- Knowledge and experience of MarTech, applied in AI development projects, is preferred.
- Willingness to continuously learn and adapt to new technologies and methodologies in AI and data engineering.

The company offers excellent benefits, training and career progression.
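For illustration, the core retrieval step of the RAG applications the duties describe can be sketched in a few lines of self-contained Python. This is a toy sketch only: a bag-of-words counter stands in for a real embedding model (which in practice would be called via LangChain or a SageMaker endpoint), and all document names and strings below are invented.

```python
# Toy RAG retrieval sketch: embed documents, rank by cosine similarity,
# and assemble a grounded prompt. Not production code.
import math
from collections import Counter

def embed(text):
    # Stand-in "embedding": token counts. A real system would call an
    # embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    # Ground the LLM by restricting it to the retrieved context.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are stored as PDFs in the finance archive.",
    "Audio transcripts are generated nightly.",
    "HTML pages are crawled weekly for tagging.",
]
print(build_prompt("Where are invoices stored?", docs))
```

In a production setting (the "manage production-grade LLM applications" duty), the same pipeline would be evaluated with tooling such as RAGAS and scaled behind managed infrastructure; the structure of embed, retrieve, and prompt assembly stays the same.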
26/04/2024
Full time
A global medical device company is looking for an R&D Process Development Engineer to join their Research and Development team on a contract basis. Familiarity with processes associated with biomaterial coating is required.

The role will involve laboratory work: designing experiments, analysing data, and optimising processes to meet performance and regulatory requirements. You'll be part of a cross-functional collaboration, working closely with a diverse team of scientists, design engineers, process engineers and other functions to ensure project alignment with goals and milestones. You will also need to complete documentation, maintaining comprehensive records and reports. The role will also involve project management: taking ownership of assigned tasks within projects and managing task-specific timelines and budgets effectively. You will work closely with vendors to ensure that the processes are compliant with regulatory requirements and that costs and timelines are maintained.

Essential skills:
- Biomaterial coating experience, e.g. chemical vapour deposition, physical vapour deposition, dip coating
- Experience working with external vendors
- High-level knowledge of the processes

Desirable skills:
- Combination devices experience
- Project management experience

The start date is ASAP. The initial contract length is 12 months (with options to extend). The role is based in Limerick and can be done mostly remotely; you will only need to come onsite every other week. The rate is €55 per hour, depending on experience; if you have any expenses, please let me know and I can factor that into the rate for you. Please visit our website to find out more about our Key Information Documents. Please note that the documents provided contain generic information. If we are successful in finding you an assignment, you will receive a Key Information Document which will be specific to the vendor set-up you have chosen and your placement.
Real Staffing, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy| Registered office | London, EC4N 7BE, United Kingdom | Partnership Number | OC387148 England and Wales
26/04/2024
Project-based
This is your opportunity to join one of the most recognisable names in international financial services. With a presence in over 60 countries, they are one of Europe's biggest employers and have achieved Top Employer Europe certification. This means you'll be joining a responsible, positive, and thriving business that puts well-being and personal development at the top of its agenda.

Expectations:
- 50% on-site and 50% homeworking
- At least 5 years of relevant experience
- Master's degree in Computer Science or equivalent work experience
- Sound knowledge of English as well as at least one local language (Dutch or French); both are a strong plus

Duties include: As we migrate from on-premises servers to Domino Data Lab, we are looking for someone with experience in navigating constrained environments and identifying solutions with the available building blocks. The challenge of the platform is combining data availability, security and traceability in order to build the data science platform of tomorrow. As a Product Owner you must take into account the requirements of your stakeholders and your clients (both data scientists and AI consumers), as well as current and upcoming regulations, in order to deliver a working AI platform for both model design and industrialisation. Domino Data Lab allows our data scientists to work mostly in a self-service manner, but we must nevertheless propose an industrialisation path for their outputs, with integration into the banking ecosystem and its constraints (integrity, availability, throughput, ...). Another key challenge is to make data available to data scientists in a secure and compliant way. Access should be simple, forthcoming and compliant while enabling both data exploration and use-case industrialisation scenarios. As a Product Owner, you build the roadmap for the coming year, making sure that the Development Team understands both the product and its target vision. With the Development Team's help, you make sure that work is prioritised in consistent Sprints.

Design phase and project methodology:
- Build the product's big picture, including its roadmap, and communicate this picture to the development team
- Clarify the need and draft the product's key functional specifications according to the agile methodology
- Plan iterations and define priorities to ensure proper content delivery
- Define priorities and monitor the Product Backlog, continuously prioritising the key business needs
- Make sure that the different stakeholders become and stay aligned on the product to be delivered

Project follow-up and support:
- Initiate a priority management approach; ensure product consistency and quality
- Actively participate in Scrum ceremonies (agile retrospectives, demos, test labs, ...)
- Act as the decision maker, with the authority to arbitrate on delivered functionality
- Conduct live Sprint Backlog adjustments and follow-up

Language requirements: (Mandatory) Sound knowledge of English as well as at least one local language (Dutch or French); both are a strong plus.

Required experience/knowledge:
- Technical experience: (Mandatory) 5 years working with AI models and their deployment
- Business experience: (Mandatory) Expertise in agile methodologies; (Mandatory) At least 2 years of recent banking/financial services experience
- Budget management
- Project management

Contact Epiphany Hatch via e-mail at (see below) or call
25/04/2024
Project-based
On-site 2-3 days per week (otherwise remote/off-site) in Måløv, Denmark. Duration: expected 12 months.

Expected experience: Strong Vue skills and experience with Python are a must.

Preferred experience:
- Hands-on experience with Titian Mosaic or a similar research sample inventory system is considered an advantage, but not a must
- Knowledge of Oracle DB, SQL or PL/SQL (fundamental understanding)
- Able to communicate in English

Software development: we are ideally looking for someone with a more workflow/front-end focus. Application configuration: Titian Mosaic experience, or experience administering and configuring a COTS system.

Technologies to be used: Python, Vue, CI/CD, Git, REST APIs, AWS and ADO Cloud, Oracle database, SQL.

Responsibilities and skills:
- Contribute to an agile product team that is onboarding Pharma R&D wet labs to the Titian Mosaic sample inventory system
- Development, bug fixing and second-line support on our in-house-built application for sample management handling (built in Vue)
- Analyse complex lab workflows and translate the sample management needs of scientists into requirements for the IT solution
- Implement changes in the configuration (user roles, data model) as well as metadata (dropdowns) in the Mosaic system
- Be able to work in a highly changeable organisation
- Communicate and collaborate with R&D scientists (typically lab associates/scientists), Research IT experts, software engineers, and Product Owners

You will work in our agile Inventory Management System product team together with eight other colleagues. They are implementing our new browser-based inventory system, Mosaic, from Titian Software, as well as an in-house-developed web application for sample management. You will be in regular contact with laboratory teams in Denmark, the UK and the US to ensure business consistency and continuity.
Finally, you will collaborate with business analysts and developers in the team to share knowledge and develop new solutions.
22/04/2024
Project-based
Senior Data Scientist/Data Scientist/Machine Learning Engineer/Exeter/Torquay/Weymouth/Python/C#/Unity/Machine Learning - £45,000 - £55,000

My client is looking for a Data Scientist with experience in artificial intelligence and machine learning to join a growing development team as a Data and Machine Learning Developer. Your expertise will directly contribute to the advancement of their immersive technology solutions (working in Unity/C#) and their applications across various sectors. You MUST have front-end development and machine learning experience to be suitable for this position.

Requirements:
- Ph.D. in Computer Science, Data Science, Statistics, or a related field (ideally with an application to human factors, psychology or neuroscience)
- Prior exposure to and knowledge of physiological sensors, biosensors and data acquisition systems would be an advantage
- Existing SC clearance would be an advantage
- You must have been based in the UK for the last 5 years

Responsibilities:
- Experience with programming languages such as Python is needed; familiarity with C# is a plus
- Create, deploy and refine algorithms that process, analyse, and learn from large psychophysiological data sets
- Collaborate with team members and communicate with the relevant stakeholders on technical matters
- Integrate machine learning models into production systems and deploy them for real-time or batch processing

Modis International Ltd acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers in the UK. Modis Europe Ltd provides a variety of international solutions that connect clients to the best talent in the world.
For all positions based in Switzerland, Modis Europe Ltd works with its licensed Swiss partner Accurity GmbH to ensure that candidate applications are handled in accordance with Swiss law. Both Modis International Ltd and Modis Europe Ltd are Equal Opportunities Employers. By applying for this role your details will be submitted to Modis International Ltd and/or Modis Europe Ltd. Our Candidate Privacy Information Statement which explains how we will use your information is available on the Modis website.
22/04/2024
Full time
Digital Research Infrastructure Engineer - Linux Specialist
PML operations grade 4, £30,000 - £45,000 DOE, Full Time, Open-Ended Appointment

The Role
We have an exciting opportunity at PML for an individual with skills in Linux system administration to join PML's Digital Innovation and Marine Autonomy (DIMA) group. The role provides a business-critical link between scientists, PML Applications (commercial work) and our IT Group to support the Linux computing infrastructure as it continues to evolve, underpinning PML science in multiple areas and across all levels. This ranges from data generation (storage technologies and data management), through processing and analysis (high-performance computing and technologies such as JupyterHub), to making visual outputs for end users (web technologies and virtualisation) to increase the reach and impact of PML science.

About You
You will enjoy working with others to help deliver a modern and reliable digital infrastructure to underpin the world-leading research carried out at PML. You will understand the importance of stability in existing infrastructure but will also be keen to learn and try new technologies. You will have experience of administering Linux systems, ideally using Ubuntu, and will be able to make use of scripts and common tools such as Ansible to manage this. You will understand the importance of taking a proactive approach to identifying and resolving problems, and will be able to make use of monitoring software (e.g. Nagios, Grafana) to accomplish this. You will understand best practices in cybersecurity and be able to apply them.

Skills Required
- Linux systems administration and monitoring
- Linux scripting (e.g. Bash and Python)
- Experience in the management of data at the terabyte-to-petabyte scale, and storage technologies such as NFS and S3
- Cybersecurity (understand and apply best practices)
- Container technologies (Docker and Kubernetes)
- High-performance computing (Slurm)
- Virtualisation (VMware)

Key Deliverables
- Maintain our storage infrastructure to ensure data is distributed across servers based on existing capacity and projected changes in data volumes. This includes regular data moves and liaising with stakeholders to ensure data is backed up and archiving projects are completed as needed.
- Monitor high-performance computing infrastructure to identify and resolve problems, either independently or by working with IT (depending on the nature of the problem).
- Act as a point of contact between scientists and IT to answer questions, help identify solutions and provide training.
- Work with the data architect to maintain and develop the web infrastructure used to provide existing and planned data search and visualisation services.
- Manage the NEODAAS GPU cluster (MAGEO), including liaising with IT, vendors and system users.

About PML
As a marine-focused charity we develop and apply innovative science with a view to ensuring ocean sustainability. With over 40 years of experience, we offer evidence-based solutions to societal challenges. Our impact spans from research publications to informing policies and training future scientists. The science undertaken at PML contributes to the UN Sustainable Development Goals by promoting healthy, productive and resilient oceans and seas. To support its science, PML operates in-house Linux infrastructure used for processing satellite data, running models and making outputs accessible through web visualisation tools. This infrastructure includes a large amount of storage (6 PB), a high-performance computing cluster with over 1,500 cores, a 40-GPU cluster (the MAssive GPU cluster for Earth Observation; MAGEO) and a virtual machine cluster. The role will be part of the Digital Innovation and Marine Autonomy (DIMA) group within PML. DIMA is a pioneering digital science group dedicated to advancing PML's world-class, cutting-edge environmental research through the use of state-of-the-art digital and autonomous technologies. The team comprises research software engineers, research infrastructure engineers, marine technologists and scientists who work on a variety of projects using autonomous vessels, satellite data, drones, artificial intelligence, high-performance computing and data visualisation tools to help deliver PML's goals. The team have an enthusiasm for solving problems through collaboration and shared learning.
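As a rough illustration of the storage-balancing deliverable described above, a placement script might compare each server's free capacity against a dataset's projected size and pick the destination with the most headroom. This is a hypothetical sketch: the server names, capacities and growth factor are invented, not PML's actual infrastructure.

```python
# Illustrative sketch of capacity-aware data placement across storage servers.
# All names and figures are invented for the example.

def pick_server(servers, dataset_tb, growth_factor=1.2):
    """Return the name of the server with the most headroom for a dataset of
    dataset_tb terabytes (allowing for projected growth), or None if no
    server can hold it."""
    needed = dataset_tb * growth_factor
    candidates = [
        (s["free_tb"] - needed, s["name"])  # (headroom, name)
        for s in servers
        if s["free_tb"] >= needed
    ]
    if not candidates:
        return None  # trigger an archive/cleanup conversation with stakeholders
    return max(candidates)[1]  # largest headroom wins

servers = [
    {"name": "store-01", "free_tb": 120.0},
    {"name": "store-02", "free_tb": 800.0},
    {"name": "store-03", "free_tb": 45.0},
]
print(pick_server(servers, dataset_tb=100.0))  # store-02 has the most headroom
```

In practice this kind of decision would also weigh I/O load and backup schedules, but the headroom comparison captures the core of distributing data "based on existing capacity and projected changes in data volumes".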
12/04/2024
Full time