Senior Java Developer (Pricing and Risk) - Server Side/Spring/SDLC/DevOps/Maven/AWS - Permanent

The role is for an experienced server-side developer in the team responsible for pricing, risk and analytics solutions for credit derivative products. The risk platform facilitates margin calculations for both overnight and intraday collateral calls, the pricing of Credit Index Options and the external what-if simulation of margins by Members and Clients. The platforms are built upon a Java-based architecture with an underlying C++ analytics library and leverage a range of supporting technologies.

Required Skills
* Server-side Java developer from a strong technical background, with Spring Boot experience.
* Demonstrable enterprise software engineering, with an understanding of working in a secure compute and regulated environment.
* Aptitude for understanding requirements for changes to the pricing, risk and market data stack, and the ability to implement and test them successfully.
* Good awareness of the design, development and SDLC considerations required for developing Financial Services market infrastructure applications.
* Passion for following DevOps and CI/CD processes to deliver high-quality, well-tested software using frameworks such as Jenkins/GitLab, JUnit, Mockito and Cucumber.
* Knowledge of JMS and experience with ActiveMQ/IBM MQ.
* Knowledge of modern source code management using Git.
* Strong familiarity with Java development toolchains, including Maven and IntelliJ.

Preferred Skills
* Some exposure to C++ on Linux.
* Familiarity with credit derivative products.
* Familiarity with AWS Cloud services such as EC2, S3, Lambda and EKS.
* Deployment automation using tools such as Ansible; monitoring using enterprise tools, e.g. Datadog.
* Experience of on/off-premises cloud solutions, including defining infrastructure as code using Terraform.
Permanent role - hybrid working - Central London based. Candidates must be eligible to work in the UK.

By applying to this job you are sending us your CV, which may contain personal information. Please refer to our Privacy Notice to understand how we process this information. In short, in order to supply you with work-finding services, we will hold and process your personal data, and only with your express permission will we share this personal data with a client (or a third party working on behalf of the client) by email or by upload to the client's or third party's vendor management system. By giving us permission to send your CV to a client, you grant permission to share the personal data necessary to consider your application, interview you (phone/video/face to face) and, if successful, hire you. Scope AT acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the Terms and Conditions, Data Protection Policy, Privacy Notice and Disclaimers, which can be found on our website.
03/04/2025
Full time
Senior Data Scientist (Biostats Engineering) - Remote (RL7733)

Job Title - Senior Data Scientist (Biostats Engineering)
Location - Remote
Ref - RL7733
Salary - Competitive

The Client
We are partnering with a design-led data, software, and cloud company specializing in AI and advanced analytics. They design, build, and operate data- and AI-driven solutions, products, and experiences on Azure, enabling their business customers to tackle challenges and seize opportunities with greater efficiency and certainty.

The Candidate
We are looking for an experienced Data Scientist with extensive expertise in:
* Best practices for R package development
* Model development and deployment on Databricks
* Collaboration using version control systems
Additional knowledge of data architecture and cloud infrastructure is highly desirable.

The Role
You will work with one of our global biopharma clients, developing high-quality R packages and providing consultancy on biostatistics model development. The focus is on delivering high-impact solutions that exceed customer expectations.

Key Responsibilities
* Develop high-quality R packages
* Provide consultancy on biostatistics model development and deployment best practices
* Review and optimize code, integrating existing modelling code into packages
* Design and implement end-to-end modelling and deployment processes on Databricks
* Support and collaborate with adjacent teams (e.g. product and IT teams) to integrate modelling solutions
* Continuously innovate with the team and customer, utilizing modern tools to enhance model development and deployment

Skills & Experience
A successful candidate will demonstrate:
* A background or work experience in biostatistics or a related field
* Strong proficiency in R programming and R package development
* Experience in statistical model deployment and end-to-end MLOps (preferred)
* Extensive experience with cloud infrastructure, preferably Databricks and Azure
* Experience with Shiny development (preferred)
* Ability to work with customer stakeholders to understand business processes and workflows, designing solutions to optimize and automate them
* DevOps experience and familiarity with software release processes
* Familiarity with Agile delivery methods

To apply for this Senior Data Scientist permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
02/04/2025
Full time
Senior Data Engineer - Brussels

MUST BE BASED IN BELGIUM. 50% on-site in Brussels, 50% remote working. Fluency in English and either Dutch or French, with a good understanding of the other language.

Are you a seasoned Data Engineer looking to make a significant impact in the financial sector? This role offers a unique chance to contribute to a dynamic team within the Affluent & Private Banking department, located in the heart of Brussels. As a Senior Data Engineer, you will play a pivotal role in the Data & End User Tool team. Your mission will be to integrate data based on business requirements, ensuring that the data is solid and seamlessly integrated into the platform.

Key Responsibilities:
* Collaborate with team members to integrate data, translating business needs into robust and cohesive data platforms.
* Develop prototypes to facilitate business validation.
* Craft and develop logical and physical data models and data stores, industrialising the feeding and deliveries from integrated data stores (data warehouse, data marts, data sets).
* Optimise data pipelines across the enterprise for maximum utilisation.

Skills & Experience Required:
* Minimum 5 years of experience in Data Analytics.
* Proficiency in data modelling, ETL development, job scheduling, SQL Server DB management, and SQL (both transactional and scripting).
* SAS Enterprise Guide or SAS Base is crucial.
* Familiarity with Microsoft Office.
* Experience in a banking environment is beneficial.
* Fluency in either Dutch or French, with a good understanding of the other language.
* Proficiency in English, both written and spoken.
* Team player with a proactive attitude.
* Strong communication and influencing abilities.

This role is ideal for those looking to leverage their data engineering expertise in a high-impact, collaborative setting. Apply now or email (see below)
01/04/2025
Project-based
Graph Database Data Engineer - Dublin

Atrium UK are seeking a highly skilled and experienced Senior Data Engineer with expertise in graph databases to join a dynamic team. The ideal candidate will have a strong background in data engineering, graph query languages, and data modelling, with a keen interest in leveraging cutting-edge technologies like vector databases and LLMs to drive functional objectives.

Responsibilities:
* Develop and implement scalable data pipelines using Azure Databricks and Azure Data Lake.
* Create and maintain ETL workflows to ensure data quality, integrity, and availability.
* Manage and analyse data at scale within the Azure ecosystem.
* Develop and optimize graph database solutions using query languages such as Cypher, SPARQL, or GQL. Neo4j experience is preferred.
* Build and maintain ontologies and knowledge graphs, ensuring efficient and scalable data modelling.
* Work with Large Language Models (LLMs) to achieve functional and business objectives.
* Ensure data quality, integrity, and security while delivering robust and scalable solutions.
* Data modelling: strong skills in creating ontologies and knowledge graphs, and in presenting data for Graph RAG-based solutions.
* Vector databases: understanding of similarity search techniques and RAG implementations.
* Coach junior data engineers on technologies like Azure Databricks and Azure Cosmos DB.
* Implement CI/CD pipelines using Azure DevOps to streamline deployment processes.
* Conduct code reviews, provide mentorship, and facilitate knowledge-sharing sessions within the team.
* Integrate data from various sources into a centralized data lake, enabling unified access and analysis.
* Perform data cleansing, transformation, and migration tasks as required.
* Monitor and troubleshoot data pipelines and resolve any issues promptly.

Essential:
* Highly experienced, hands-on data engineer with graph database experience.
* Experienced in Azure Databricks, Azure Data Lake, and Azure Cosmos DB, with a proven track record of building data engineering solutions.
* Proficient in programming languages such as Python, SQL, and Scala.
* Experience with big data technologies including Apache Spark, Hadoop, and Hive.
* Knowledge of tools and technologies such as Azure Data Factory, Azure Synapse Analytics, and Apache Kafka.
* Solid understanding of CI/CD pipelines and hands-on experience with DevOps tools such as Git for continuous integration and deployment.
* Excellent problem-solving skills and the ability to troubleshoot complex data issues.
* Strong coaching abilities; excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
* Excellent verbal and written communication skills.

Qualifications:
* Microsoft Certified: Azure Data Engineer Associate.
* Databricks Certified Associate Developer for Apache Spark.
* Experience with data governance and data security practices.
* Knowledge of Docker, Podman and Kubernetes for containerization and orchestration.

Click Apply now/Contact Lianne to be considered for the Graph Database Data Engineer - Dublin role
28/03/2025
Project-based