We are seeking a highly skilled Senior Python Programmer with expertise in machine learning (ML) data wrangling, interfacing, and automation. The ideal candidate will be proficient in building robust data pipelines and automating complex tasks to support ML initiatives. They will have a keen understanding of observability principles and hands-on experience with AWS and Linux, and preferably Kubernetes and Argo.

Responsibilities:
- Develop and maintain robust data pipelines for ML data wrangling, interfacing, and automation.
- Implement automation solutions to streamline data processing and model deployment workflows.
- Ensure observability and monitoring of systems, providing insights into performance and reliability.
- Utilise AWS services such as S3, Lambda, and networking components for data storage, processing, and permissions management.
- Collaborate with DevOps teams to deploy and manage applications in Linux environments.
- Support Kubernetes and Argo workflows for scalable and efficient ML model training and deployment.
- Manage AWS permissions and network configurations to ensure data security and compliance.
- Maintain version control of the codebase using Git and enforce best practices for code documentation and production readiness.
- Collaborate with data scientists to develop small UI tools for querying data from databases and AWS S3.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proficiency in Python with a focus on ML data wrangling and automation.
- Strong experience with AWS services, including S3, Lambda, networking, and permissions management.
- Hands-on experience with Linux environments and shell scripting.
- Pharmaceutical industry experience is a plus.

Working remotely, with occasional on-site meetings in Belgium. 6 months + extension.
29/04/2024
Project-based
A hybrid Python Developer with a minimum of 5 years' experience is needed for a 12-month contract paying 1000 PLN to 1500 PLN per day depending on experience, based in Krakow with 3 days per week on site. The ideal Python developer will be based in Poland and able to write clean, maintainable Python code for various projects, including data analytics tools, financial models, and automation scripts.

Responsibilities:
- Collaborate with cross-functional teams such as quants, product managers, architects, and data scientists to implement feature enhancements.
- Conduct code reviews to ensure the quality and functionality of code produced by peers.
- Use version control systems, such as GitLab, to keep track of code changes.
- Write robust automated tests using industry-standard tools such as PyTest, Robot Framework, or Selenium.
- Document code and procedures, assist in setting project milestones, and maintain organisational transparency.

Tech stack: Python, Django, Flask, Pandas, AWS, Azure, Google Cloud, GitHub, GitLab, PyTest, Robot Framework, Selenium, CI/CD pipelines, Agile, CPLEX.

Qualifications:
- Bachelor's degree in computer science, software engineering, or a related field, or equivalent experience.
- Minimum of 5 years of professional experience in Python programming.
- Familiarity with Python frameworks and libraries such as Django, Flask, or Pandas.
- Proficiency with cloud services and technologies such as AWS, Azure, or Google Cloud.

Preferred qualifications:
- Industry-specific experience in finance or banking.
- Exposure to DevOps practices and CI/CD pipelines.
- Experience working in an Agile framework and familiarity with Agile ceremonies.
- Familiarity with linear programming tools such as CPLEX.

Role: Python Developer
Location: Krakow, Poland
Duration: 12 months
Rate: 1000 PLN to 1500 PLN per day
29/04/2024
Project-based
Data Scientist - Cross Asset - Quant Trading Firm

A leading quant trading firm is looking to hire a Data Scientist within its Cross Asset Data Quant team. You'd collaborate directly with quant researchers and data engineering teams, applying machine learning and statistical methods to evaluate new datasets. Your remit will also span the full data science workflow, from data exploration and acquisition through to techniques such as NLP.

They are looking for someone with:
- Very strong Python experience
- A strong mathematical background
- A background in NLP
- Knowledge of data manipulation and exploration in a quantitative setting
- Experience working in a front-office environment

If this sounds like a great fit, please apply through this advert.
29/04/2024
Full time
A global medical device company is looking for an R&D Process Development Engineer to join its Research and Development team on a contract basis. Familiarity with processes associated with biomaterial coating is required.

The role will involve laboratory work: designing experiments, analysing data, and optimising processes to meet performance and regulatory requirements. You'll be part of a cross-functional collaboration, working closely with a diverse team of scientists, design engineers, process engineers, and other functions to ensure project alignment with goals and milestones. You will also need to complete and maintain comprehensive records, documentation, and reports. The role will also involve project management: taking ownership of assigned tasks within projects and managing task-specific timelines and budgets effectively. You will work closely with vendors to ensure that processes are compliant with regulatory requirements and that costs and timelines are maintained.

Essential skills:
- Biomaterial coating experience, e.g. chemical vapour deposition, physical vapour deposition, dip coating
- Experience working with external vendors
- High-level knowledge of the processes

Desirable skills:
- Combination devices experience
- Project management experience

The start date is ASAP. The initial contract length is 12 months (with options to extend). The role is based in Limerick and can be done mostly remotely; you will only need to come on site every other week. The rate is €55 per hour, depending on experience. If you have any expenses, please let me know and I can factor that into the rate for you.

Please visit our website to find out more about our Key Information Documents. Please note that the documents provided contain generic information. If we are successful in finding you an assignment, you will receive a Key Information Document specific to the vendor set-up you have chosen and your placement.
Real Staffing, a trading division of SThree Partnership LLP is acting as an Employment Business in relation to this vacancy| Registered office | London, EC4N 7BE, United Kingdom | Partnership Number | OC387148 England and Wales
26/04/2024
Project-based
Digital Research Infrastructure Engineer - Linux Specialist
PML operations grade 4
£30,000 - £45,000 DOE
Full Time, Open-Ended Appointment

The Role
We have an exciting opportunity at PML for an individual with skills in Linux system administration to join PML's Digital Innovation and Marine Autonomy (DIMA) group. The role provides a business-critical link between scientists, PML Applications (commercial work), and our IT Group to support the Linux computing infrastructure as it continues to evolve, underpinning PML science in multiple areas and across all levels. This ranges from data generation (storage technologies and data management), through processing and analysis (high-performance computing and technologies such as JupyterHub), to making visual outputs for end users (web technologies and virtualisation) to increase the reach and impact of PML science.

About You
You will enjoy working with others to help deliver a modern and reliable digital infrastructure to underpin the world-leading research carried out at PML. You will understand the importance of stability in existing infrastructure but will also be keen to learn and try new technologies. You will have experience of administering Linux systems, ideally using Ubuntu, and will be able to make use of scripts and common tools such as Ansible to manage this. You will understand the importance of taking a proactive approach to identifying and resolving problems and will be able to make use of monitoring software (e.g. Nagios, Grafana) to accomplish this. You will understand best practices in cybersecurity and be able to apply them.

Skills Required
- Linux systems administration and monitoring
- Linux scripting (e.g. Bash and Python)
- Experience managing data at the terabyte-to-petabyte scale, and storage technologies such as NFS and S3
- Cybersecurity (understand and apply best practices)
- Container technologies (Docker and Kubernetes)
- High-performance computing (Slurm)
- Virtualisation (VMware)

Key Deliverables
- Maintain our storage infrastructure to ensure data is distributed across servers based on existing capacity and projected changes in data volumes. This includes regular data moves and liaising with stakeholders to ensure data is backed up and archiving projects are completed as needed.
- Monitor high-performance computing infrastructure to identify and resolve problems, either on your own or by working with IT (depending on the nature of the problem).
- Act as a point of contact between scientists and IT to answer questions, help identify solutions, and provide training.
- Work with the data architect to maintain and develop web infrastructure used to provide existing and planned data search and visualisation services.
- Manage the NEODAAS GPU cluster (MAGEO), including liaising with IT, vendors, and system users.

About PML
As a marine-focused charity, we develop and apply innovative science with a view to ensuring ocean sustainability. With over 40 years of experience, we offer evidence-based solutions to societal challenges. Our impact spans from research publications to informing policies and training future scientists. The science undertaken at PML contributes to the UN Sustainable Development Goals by promoting healthy, productive, and resilient oceans and seas. To support its science, PML operates in-house Linux infrastructure used for processing satellite data, running models, and making outputs accessible through web visualisation tools. This infrastructure includes a large amount of storage (6 PB), a high-performance computing cluster with over 1,500 cores, a 40-GPU cluster (the MAssive GPU cluster for Earth Observation; MAGEO), and a virtual machine cluster. The role will be part of the Digital Innovation and Marine Autonomy (DIMA) group within PML.

DIMA is a pioneering digital science group dedicated to advancing PML's world-class, cutting-edge environmental research through the use of state-of-the-art digital and autonomous technologies. The team comprises research software engineers, research infrastructure engineers, marine technologists, and scientists who work on a variety of projects using autonomous vessels, satellite data, drones, artificial intelligence, high-performance computing, and data visualisation tools to help deliver PML's goals. The team has an enthusiasm for solving problems through collaboration and shared learning.
12/04/2024
Full time