Norton Blake
Knowledge Engineer, Fully Remote, £80,000 - £100,000 per annum

My client, a leading AI solutions company, is seeking a mid-senior Python Back End Engineer with a passion for knowledge graphs and semantic web technologies. In this role, you will own the full Back End development of an RDF-intensive platform: designing and optimising systems around triple stores (AWS Neptune), real-time data processing and validation with SHACL, and advanced query capabilities. You will integrate AI-driven SPARQL generation models (LLMs/NLP) to enable intelligent querying of the knowledge graph. Working in a cross-functional squad of 3-8 team members using a Lean Kanban approach, you'll collaborate closely with product, data scientists, and DevOps to deliver high-quality features in a fast-paced, agile environment.

Key Responsibilities:

- Design and Develop Knowledge Graph Back Ends: Build robust Back End services to manage RDF data in triple stores (AWS Neptune) and vector embeddings in Milvus. Ensure real-time processing of graph data, including on-the-fly validation with SHACL to maintain data integrity.
- SPARQL Query Implementation & AI Integration: Create efficient SPARQL queries and endpoints for data retrieval. Integrate NLP/AI models (e.g. Hugging Face transformers, OpenAI APIs, LlamaIndex AgentFlow) to translate natural language into SPARQL queries, enabling AI-driven query generation and semantic search.
- API & Microservices Development: Develop and maintain RESTful APIs and GraphQL endpoints (using FastAPI or Flask) to expose knowledge graph data and services. Follow microservices architecture best practices to ensure components are modular, scalable, and easy to maintain.
- Database & State Management: Manage data storage solutions including PostgreSQL (for application/session state) and caching layers as needed. Use SQLAlchemy or a similar ORM for efficient database interactions, and maintain data consistency between the relational and graph data stores.
- Performance Optimisation & Scalability: Optimise SPARQL queries, data indexing (including vector indices in Milvus), and service architecture for low-latency, real-time responses. Ensure the system scales to handle growing knowledge graph data and high query volumes.
- DevOps and Deployment: Collaborate with DevOps to containerise and deploy services using Docker and Kubernetes. Implement CI/CD pipelines for automated testing and deployment. Monitor services on cloud platforms (AWS/Azure) for reliability, and participate in performance tuning and troubleshooting as needed.
- Team Collaboration: Work closely within a small, cross-functional squad (engineers, QA, product, data scientists) to plan and deliver features. Participate in Lean Kanban rituals (e.g. stand-ups, continuous flow planning) to ensure steady progress. Mentor junior developers when necessary and uphold best practices in code quality, testing, and documentation.

Required Skills and Experience:

- Programming Languages: Strong proficiency in Python (Back End development focus). Solid experience writing and optimising SPARQL queries for RDF data.
- Knowledge Graph & Semantic Web: Hands-on experience with RDF and triple stores, ideally AWS Neptune or similar graph databases. Familiarity with RDF schemas/ontologies and concepts like triples, graphs, and URIs.
- SHACL & Data Validation: Experience using SHACL (Shapes Constraint Language) or similar tools for real-time data validation in knowledge graphs. Ability to define and enforce data schemas/constraints to ensure data quality.
- Vector Stores: Practical knowledge of vector databases such as Milvus (or alternatives like FAISS, Pinecone) for storing and querying embeddings. Understanding of how to integrate vector similarity search with knowledge graph data for enhanced query results.
- Frameworks & Libraries: Proficiency with libraries like RDFLib for handling RDF data in Python and PySHACL for running SHACL validations.
- Experience with SQLAlchemy (or other ORMs) for PostgreSQL, and familiarity with LlamaIndex (AgentFlow) or similar frameworks for connecting language models to data sources.
- API Development: Proven experience building Back End RESTful APIs (FastAPI, Flask, or similar) and/or GraphQL APIs. Knowledge of designing API contracts, versioning, and authentication/authorisation mechanisms.
- Microservices & Architecture: Understanding of microservices architecture and patterns. Ability to design decoupled services and work with message queues or event streams where needed for real-time processing.
- AI/ML Integration: Experience integrating NLP/LLM models (Hugging Face transformers, OpenAI, etc.) into applications. Specifically, comfort with leveraging AI to generate or optimise queries (e.g. natural language to SPARQL translation) and with frameworks like LlamaIndex that bridge AI and the knowledge graph.
- Databases: Strong SQL skills and experience with PostgreSQL (for transactional data or session state). Ability to write efficient queries and design relational schemas that complement the knowledge graph. Basic understanding of how relational data can link to graph data.
- Cloud & DevOps: Experience deploying applications on AWS or Azure. Proficiency with Docker for containerisation and Kubernetes for orchestration. Experience setting up CI/CD pipelines (GitHub Actions, Jenkins, or similar) to automate testing and deployment. Familiarity with cloud services (AWS Neptune, S3, networking, monitoring tools, etc.) is a plus.
- Agile Collaboration: Comfortable working in an Agile/Lean Kanban software development process. Strong collaboration and communication skills to function effectively in a remote or hybrid work environment. Ability to take ownership of tasks and drive them to completion with minimal supervision, while engaging with the team for feedback and knowledge sharing.
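To illustrate the triple/pattern-matching model behind the RDF work described above, here is a minimal, stdlib-only sketch of a triple store with a SHACL-style constraint check. It is illustrative only: the prefixes and shape (ex:Person must have ex:name) are invented for the example, and a real system would use RDFLib against AWS Neptune with PySHACL for validation.

```python
# Minimal in-memory triple store sketch. Names like "ex:Person" are
# illustrative; production code would use RDFLib + PySHACL + Neptune.

class TripleStore:
    def __init__(self):
        self.triples = set()  # each entry is a (subject, predicate, object) tuple

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard,
        much like a variable in a SPARQL basic graph pattern."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

def validate_shape(store, target_class, required_predicate):
    """SHACL-style check: every instance of target_class must have at
    least one value for required_predicate. Returns offending subjects."""
    instances = {s for s, _, _ in store.match(p="rdf:type", o=target_class)}
    return sorted(s for s in instances
                  if not store.match(s=s, p=required_predicate))

store = TripleStore()
store.add("ex:alice", "rdf:type", "ex:Person")
store.add("ex:alice", "ex:name", "Alice")
store.add("ex:bob", "rdf:type", "ex:Person")  # missing ex:name -> violation

violations = validate_shape(store, "ex:Person", "ex:name")
```

The wildcard `match` mirrors how a SPARQL triple pattern binds variables, and `validate_shape` captures the essence of a `sh:minCount 1` property shape.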
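The natural-language-to-SPARQL translation layer mentioned in the role can be sketched as a simple translation function. This stdlib-only example substitutes regex templates for the LLM step (patterns, prefixes, and query shapes are all invented for illustration; a real pipeline would call a model via LlamaIndex or the OpenAI API and fall back gracefully):

```python
import re
from typing import Optional

# Illustrative NL-to-SPARQL translation via pattern templates. A real
# system would delegate to an LLM; this only shows the layer's shape.
PATTERNS = [
    (re.compile(r"who works at (\w+)", re.I),
     'SELECT ?person WHERE {{ ?person ex:worksAt ex:{0} . }}'),
    (re.compile(r"what is the name of (\w+)", re.I),
     'SELECT ?name WHERE {{ ex:{0} ex:name ?name . }}'),
]

def nl_to_sparql(question: str) -> Optional[str]:
    """Translate a natural-language question into a SPARQL query string.
    Returns None when no pattern matches (an LLM would handle that case)."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(m.group(1))
    return None

query = nl_to_sparql("Who works at Acme?")
```

Whatever generates the query, the returned string would then be executed against the triple store's SPARQL endpoint, which is why validating and parameterising generated queries matters in this role.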
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible*

Prestigious Enterprise Firm is currently seeking a Senior DevOps Engineer with strong Azure and AI experience. The candidate will build and maintain the infrastructure necessary to run and scale AI applications used for legal and staff-related tasks (such as contract review, document analysis, and legal research) by designing and implementing automated pipelines for deploying AI models, managing cloud resources, ensuring system reliability and security, and collaborating with data scientists and legal teams to optimize AI workflows within the firm's infrastructure. The candidate will both actively contribute to and guide the design, deployment, and management of scalable, automated cloud environments tailored for enterprise applications, with a primary focus on Azure. This hands-on technical leader will use extensive expertise in Azure infrastructure to build and drive a high-performance DevOps team. Proficiency with IaC frameworks is essential to establish and maintain best practices that streamline cloud operations, maximize efficiency, and ensure security compliance.

Responsibilities:

- Design, implement, and optimize CI/CD pipelines in Azure DevOps to drive efficient, reliable releases.
- Build automated testing and monitoring into the deployment pipeline to ensure robust, resilient system performance.
- Identify and prioritize opportunities to streamline deployment and increase reliability through automation.
- Regularly assess and improve deployment processes, tooling, and infrastructure management strategies.
- Establish and monitor key performance indicators to track team output, system stability, and improvement impact.
- Proactively explore emerging Azure services, IaC tools, and DevOps methodologies to evolve cloud practices.
- Work closely with Development, Infrastructure, and Security teams to align cloud solutions with broader organizational needs.
- Offer hands-on technical support and mentorship to other departments as needed, reinforcing DevOps best practices.
- Lead incident management and post-mortem processes, driving root-cause analysis and solutions to prevent recurrence.
- Lead by example in the deployment and configuration of Azure resources, demonstrating best practices in IaC with ARM, Bicep, and/or Terraform.
- Develop reusable, modular IaC templates to enable consistent, reliable deployments across environments.
- Maintain and share advanced knowledge of Azure resources, including Azure AD, networking, security, and identity management configurations.
- Actively lead, mentor, and develop a DevOps team, balancing hands-on responsibilities with team guidance to advance DevOps excellence.
- Establish and communicate cloud automation strategies in alignment with organizational objectives.
- Foster a collaborative, high-performing team environment that emphasizes knowledge sharing, continuous learning, and skill enhancement.

Qualifications:

- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience.
- A minimum of 8 years of hands-on experience in DevOps, cloud infrastructure, or systems engineering.
- Strong proficiency with Azure services and IaC, specifically ARM, Bicep, and/or Terraform.
- Extensive experience with CI/CD pipelines, ideally in Azure DevOps.
- Competence in scripting languages such as PowerShell, Python, or Bash.
- Operational experience supporting business needs on a 24x7 basis, including the use of ITSM tools such as ServiceNow.
- Ability to create detailed technical diagrams and supporting documentation.
- Demonstrated ability to communicate technological information to business leaders.
- Proven experience with vendor management and negotiation.
- Ability to work off-hours for scheduled or emergency maintenance and/or upgrades.
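The KPI responsibility above (tracking system stability and deployment output) might, for instance, involve computing standard deployment metrics. This is a stdlib-only sketch with invented sample data; in practice the records would come from the Azure DevOps REST API, and the metric definitions follow common DORA-style conventions rather than anything specified in this role.

```python
from datetime import date

# Invented sample deployment records; real data would be pulled from
# Azure DevOps pipeline run history.
deployments = [
    {"date": date(2024, 5, 1), "failed": False},
    {"date": date(2024, 5, 3), "failed": True},
    {"date": date(2024, 5, 8), "failed": False},
    {"date": date(2024, 5, 10), "failed": False},
]

def change_failure_rate(deploys):
    """Fraction of deployments that resulted in a production failure."""
    return sum(d["failed"] for d in deploys) / len(deploys)

def deployments_per_week(deploys):
    """Average deployments per week over the observed window."""
    dates = [d["date"] for d in deploys]
    span_days = (max(dates) - min(dates)).days or 1  # avoid zero division
    return len(deploys) / (span_days / 7)

cfr = change_failure_rate(deployments)
freq = deployments_per_week(deployments)
```

Metrics like these make "improvement impact" measurable: a drop in change failure rate after introducing automated testing in the pipeline is direct evidence the automation worked.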