We are currently seeking an experienced Data Engineer to join a leading pharmaceutical client of ours at their Barcelona site. This exciting opportunity involves contributing to the business's brand-new GenAI capability, where you will be instrumental in supporting the R&D department by extracting and matching bio-sample identifiers with other internal datasets. Our client is looking for a "can do" individual who is able to work independently to solve complex challenges without direct supervision. The role requires someone who can modernise the data science infrastructure across a global business and also help deploy AI and Machine Learning models at scale.

The role requires English language skills. Candidates must be based in Spain, ideally in Barcelona or the surrounding areas, as hybrid working involves some travel, although there is some flexibility for remote work for the right candidate.

Key Responsibilities:
- Apply excellent abstraction and analytical capabilities to perform data transformations and enable the connection of datasets, thinking outside the box as needed
- Extract data from existing internal sources
- Extract identifiers from existing platforms
- Build data models to link regulated and sensitive data
- Collaborate with cross-functional teams, including data scientists, engineers, and IT, to ensure seamless integration of systems
- Develop and maintain code following engineering best practices
- Automate deployment processes to ensure rapid and reliable delivery of project environments
- Document processes, workflows, and technical guidelines to support knowledge transfer

Desirable Requirements:
- Strong background in Data Engineering
- Experience with Spark/PySpark would be very beneficial
- AWS experience
- CI/CD and deployment experience would be very beneficial
- Experience collaborating with data scientists would be a plus but is not required

The role offers the flexibility of a hybrid/remote work arrangement and is a six-month contract with the possibility of extension.
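To give a flavour of the identifier-matching work described above, here is a minimal PySpark sketch that normalises bio-sample identifiers and joins them against another internal dataset. The bucket paths and column names are hypothetical placeholders, not details from the role.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("biosample-linking").getOrCreate()

# Hypothetical sources; real paths and schemas would come from the client.
samples = spark.read.parquet("s3://example-bucket/biosamples/")
assays = spark.read.parquet("s3://example-bucket/assay-results/")

# Normalise identifiers so formatting differences do not break the join.
samples = samples.withColumn("sample_id", F.upper(F.trim(F.col("sample_id"))))
assays = assays.withColumn("sample_id", F.upper(F.trim(F.col("sample_id"))))

# Link the two datasets on the shared identifier and persist the result.
linked = samples.join(assays, on="sample_id", how="inner")
linked.write.mode("overwrite").parquet("s3://example-bucket/linked-biosamples/")
```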
13/03/2025
Project-based
Knowledge Engineer, Fully Remote, £80,000 - £100,000 per annum

My client, a leading AI solutions company, is seeking a Mid-Senior Python Backend Engineer with a passion for knowledge graphs and semantic web technologies. In this role, you will own the full back-end development of an RDF-intensive platform: designing and optimising systems around triple stores (AWS Neptune), real-time data processing and validation with SHACL, and advanced query capabilities. You will integrate AI-driven SPARQL generation models (LLMs/NLP) to enable intelligent querying of the knowledge graph. Working in a cross-functional squad of 3-8 team members using a Lean Kanban approach, you'll collaborate closely with product, data scientists, and DevOps to deliver high-quality features in a fast-paced, agile environment.

Key Responsibilities:
- Design and develop knowledge graph back ends: build robust back-end services to manage RDF data in triple stores (AWS Neptune) and vector embeddings in Milvus. Ensure real-time processing of graph data, including on-the-fly validation with SHACL to maintain data integrity (see the validation sketch below).
- SPARQL query implementation and AI integration: create efficient SPARQL queries and endpoints for data retrieval. Integrate NLP/AI models (e.g. Hugging Face transformers, OpenAI APIs, LlamaIndex AgentFlow) to translate natural language into SPARQL queries, enabling AI-driven query generation and semantic search.
- API and microservices development: develop and maintain RESTful APIs and GraphQL endpoints (using FastAPI or Flask) to expose knowledge graph data and services; a minimal endpoint sketch follows this listing. Follow microservices best practices to keep components modular, scalable, and easy to maintain.
- Database and state management: manage data storage solutions, including PostgreSQL (for application/session state) and caching layers as needed. Use SQLAlchemy or a similar ORM for efficient database interactions and maintain data consistency between the relational and graph data stores.
- Performance optimisation and scalability: optimise SPARQL queries, data indexing (including vector indices in Milvus), and service architecture for low-latency, real-time responses. Ensure the system scales to handle growing knowledge graph data and high query volumes.
- DevOps and deployment: collaborate with DevOps to containerise and deploy services using Docker and Kubernetes. Implement CI/CD pipelines for automated testing and deployment. Monitor services on cloud platforms (AWS/Azure) for reliability, and participate in performance tuning and troubleshooting as needed.
- Team collaboration: work closely within a small, cross-functional squad (engineers, QA, product, data scientists) to plan and deliver features. Participate in Lean Kanban rituals (e.g. stand-ups, continuous flow planning) to ensure steady progress. Mentor junior developers when necessary and uphold best practices in code quality, testing, and documentation.

Required Skills and Experience:
- Programming languages: strong proficiency in Python (back-end development focus). Solid experience writing and optimising SPARQL queries for RDF data.
- Knowledge graph and semantic web: hands-on experience with RDF and triple stores, ideally AWS Neptune or similar graph databases. Familiarity with RDF schemas/ontologies and concepts like triples, graphs, and URIs.
- SHACL and data validation: experience using SHACL (Shapes Constraint Language) or similar tools for real-time data validation in knowledge graphs. Ability to define and enforce data schemas/constraints to ensure data quality.
- Vector stores: practical knowledge of vector databases such as Milvus (or alternatives like FAISS or Pinecone) for storing and querying embeddings. Understanding of how to integrate vector similarity search with knowledge graph data for enhanced query results.
- Frameworks and libraries: proficiency with libraries like RDFLib for handling RDF data in Python and pySHACL for running SHACL validations. Experience with SQLAlchemy (or other ORMs) for PostgreSQL. Familiarity with LlamaIndex (AgentFlow) or similar frameworks for connecting language models to data sources.
- API development: proven experience building back-end RESTful APIs (FastAPI, Flask, or similar) and/or GraphQL APIs. Knowledge of designing API contracts, versioning, and authentication/authorisation mechanisms.
- Microservices and architecture: understanding of microservices architecture and patterns. Ability to design decoupled services and work with message queues or event streams if needed for real-time processing.
- AI/ML integration: experience integrating NLP/LLM models (Hugging Face transformers, OpenAI, etc.) into applications; specifically, comfort with leveraging AI to generate or optimise queries (e.g. natural language to SPARQL translation) and working with frameworks like LlamaIndex to bridge AI and the knowledge graph.
- Databases: strong SQL skills and experience with PostgreSQL (for transactional data or session state). Ability to write efficient queries and design relational schemas that complement the knowledge graph. Basic understanding of how relational data can link to graph data.
- Cloud and DevOps: experience deploying applications on AWS or Azure. Proficiency with Docker for containerisation and Kubernetes for orchestration. Experience setting up CI/CD pipelines (GitHub Actions, Jenkins, or similar) to automate testing and deployment. Familiarity with cloud services (AWS Neptune, S3, networking, monitoring tools, etc.) is a plus.
- Agile collaboration: comfortable working in an Agile/Lean Kanban software development process. Strong collaboration and communication skills to function effectively in a remote or hybrid work environment. Ability to take ownership of tasks and drive them to completion with minimal supervision, while also engaging with the team for feedback and knowledge sharing.
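To make the SHACL requirement concrete, here is a minimal sketch of on-the-fly validation using the RDFLib and pySHACL libraries named above. The file names and inference settings are illustrative assumptions; a production service would validate incoming graph fragments like this before writing them to Neptune.

```python
from rdflib import Graph
from pyshacl import validate

# Hypothetical inputs: incoming RDF data and the SHACL shapes it must satisfy.
data_graph = Graph().parse("incoming-data.ttl", format="turtle")
shapes_graph = Graph().parse("shapes.ttl", format="turtle")

# pySHACL returns a conformance flag, a report graph, and a readable report.
conforms, report_graph, report_text = validate(
    data_graph,
    shacl_graph=shapes_graph,
    inference="rdfs",       # apply RDFS inference before validating
    abort_on_first=False,   # collect every violation, not just the first
)

if not conforms:
    # In a real pipeline this would reject the write and surface the report.
    raise ValueError(f"SHACL validation failed:\n{report_text}")
```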
07/03/2025
Full time
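As a sketch of the API side of the Knowledge Engineer role above, the following hypothetical FastAPI endpoint runs a fixed SPARQL query over an in-memory RDFLib graph; against Neptune, the same query would instead be sent to the cluster's SPARQL endpoint. The graph file and route are invented for illustration.

```python
from fastapi import FastAPI
from rdflib import Graph

app = FastAPI()

# Hypothetical local graph; a Neptune deployment would query the cluster's
# SPARQL endpoint rather than an in-memory RDFLib graph.
graph = Graph().parse("knowledge-graph.ttl", format="turtle")

@app.get("/labels")
def list_labels(limit: int = 10) -> list[dict]:
    """Return up to `limit` (subject, label) pairs from the graph."""
    query = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?s ?label WHERE { ?s rdfs:label ?label }
        LIMIT %d
    """ % limit
    return [
        {"subject": str(row.s), "label": str(row.label)}
        for row in graph.query(query)
    ]
```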
Are you an experienced Senior Data Engineer looking for an interesting new job in Zurich? Do you have strong experience of distributed computing, particularly Spark? Are you keen to be a leading force in helping this company achieve its goals through the use of data insights?

You will be joining a Data team of very smart Data Scientists and Data Engineers who operate as a "start-up" within a larger organisation and are responsible for designing, building, and maintaining robust, scalable, and cost-effective data infrastructure that supports the delivery of real-time data to an underlying algorithm. Technically, the team develops its data pipelines using Python and Spark. If your experience is in Scala or even Java, that is fine, but naturally PySpark is preferred. From a cloud perspective, they use Azure; however, experience with AWS or GCP is fine as long as you have good general CI/CD knowledge.

Although the organisation has a number of data scientists, it lacks strong Senior Data Engineers. As such, you will play an important role in developing the data infrastructure, work with some very bright minds, and see your work contribute to solving a real-world challenge.

For more information on this Senior Data Engineer position, or any other Data Engineer positions that I have available, please send your CV; alternatively, you can call me.
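For a sense of the real-time pipeline work this team describes, here is a minimal, hypothetical PySpark Structured Streaming sketch that reads events from Kafka and lands them as Parquet. The broker address, topic, and paths are placeholders rather than details from the role, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("realtime-ingest").getOrCreate()

# Placeholder broker and topic; the team's actual sources are not specified.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers bytes; cast the payload to a string for downstream parsing.
payloads = events.select(F.col("value").cast("string").alias("payload"))

# Continuously append micro-batches to Parquet, checkpointing for recovery.
query = (
    payloads.writeStream.format("parquet")
    .option("path", "/data/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```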
07/03/2025
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

Prestigious enterprise firm is currently seeking a Senior DevOps Engineer with strong Azure and AI experience. The candidate will build and maintain the infrastructure necessary to run and scale AI applications used for legal and staff-related tasks, such as contract review, document analysis, and legal research, by designing and implementing automated pipelines for deploying AI models, managing cloud resources, ensuring system reliability and security, and collaborating with data scientists and legal teams to optimize AI workflows within the firm's infrastructure. The candidate will both actively contribute to and guide the design, deployment, and management of scalable, automated cloud environments tailored for enterprise applications, with a primary focus on Azure. This hands-on technical leader will use extensive expertise in Azure infrastructure to set up and drive a high-performance DevOps team. Proficiency with IaC frameworks is essential to establish and maintain best practices that streamline cloud operations, maximize efficiency, and ensure security compliance.

Responsibilities:
- Design, implement, and optimize CI/CD pipelines in Azure DevOps to drive efficient, reliable releases.
- Build automated testing and monitoring into the deployment pipeline to ensure robust, resilient system performance.
- Identify and prioritize opportunities to streamline deployment and increase reliability through automation.
- Regularly assess and improve deployment processes, tooling, and infrastructure management strategies.
- Establish and monitor key performance indicators to track team output, system stability, and improvement impact.
- Proactively explore emerging Azure services, IaC tools, and DevOps methodologies to evolve cloud practices.
- Work closely with Development, Infrastructure, and Security teams to align cloud solutions with broader organizational needs.
- Offer hands-on technical support and mentorship to other departments as needed, reinforcing DevOps best practices.
- Lead incident management and post-mortem processes, driving root-cause analysis and solutions to prevent recurrence.
- Lead by example in the deployment and configuration of Azure resources, demonstrating best practices in IaC with ARM, Bicep, and/or Terraform.
- Develop reusable, modular IaC templates to enable consistent, reliable deployments across environments.
- Maintain and share advanced knowledge of Azure resources, including Azure AD, networking, security, and identity management configurations.
- Actively lead, mentor, and develop a DevOps team, balancing hands-on responsibilities with team guidance to advance DevOps excellence.
- Establish and communicate cloud automation strategies in alignment with organizational objectives.
- Foster a collaborative, high-performing team environment that emphasizes knowledge sharing, continuous learning, and skill enhancement.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience.
- A minimum of 8 years of hands-on experience in DevOps, cloud infrastructure, or systems engineering.
- Strong proficiency with Azure services and IaC, specifically ARM, Bicep, and/or Terraform.
- Extensive experience with CI/CD pipelines, ideally in Azure DevOps.
- Competence in scripting languages such as PowerShell, Python, or Bash (see the sketch after this list).
- Operational experience supporting business needs on a 24x7 basis, including the use of ITSM tools such as ServiceNow.
- Ability to create detailed technical diagrams and supporting documentation.
- Demonstrated ability to communicate technological information to business leaders.
- Proven experience with vendor management and negotiation.
- Ability to work off-hours for scheduled or emergency maintenance and/or upgrades.
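As an example of the Azure scripting competence listed above, here is a minimal Python sketch using the azure-identity and azure-mgmt-resource packages to inventory resource groups and flag a missing tag. The subscription ID and the "owner" tag policy are hypothetical; a real DevOps workflow would typically pair audit scripts like this with ARM/Bicep/Terraform deployments.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID; DefaultAzureCredential resolves credentials
# from the environment (managed identity, Azure CLI login, etc.).
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Simple compliance audit: list every resource group and flag any missing
# an "owner" tag (a hypothetical tagging policy for this example).
for group in client.resource_groups.list():
    tags = group.tags or {}
    if "owner" not in tags:
        print(f"{group.name} ({group.location}): missing 'owner' tag")
```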
06/03/2025
Full time