Artificial Intelligence (AI) Engineer Salary: Open + Bonus Location: Chicago, IL Hybrid: 3-5 days on-site/week *We are unable to provide sponsorship for this role* *This role is not open to C2C, contract, or contract to hire* Qualifications Bachelor's degree required; master's degree preferred Experience with AI engineering tools and technologies, including Python, Langchain, LlamaIndex, and Semantic Kernel Strong understanding of large language models 5+ years of experience in AI engineering at the enterprise level Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure) Responsibilities Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment
08/01/2025
Full time
Artificial Intelligence (AI) Engineer Salary: Open + Bonus Location: Chicago, IL Hybrid: 3-5 days on-site/week *We are unable to provide sponsorship for this role* *This role is not open to C2C, contract, or contract to hire* Qualifications Bachelor's degree required; master's degree preferred Experience with AI engineering tools and technologies, including Python, Langchain, LlamaIndex, and Semantic Kernel Strong understanding of large language models 5+ years of experience in AI engineering at the enterprise level Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure) Responsibilities Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment
UKIC DV Cleared Automation Engineer - 12 months+ - £700-800pd Inside IR35 - Full time on site (Gloucestershire) Key Responsibilities As an Automation Engineer, you will: . Integration & Deployment: Seamlessly integrate development outputs into release modules and drive them through various stages into operational service. . Release-to-Service Management: Navigate security and accreditation procedures to support the release-to-service process. . Technical Expertise: Offer deep technical insights into designs, solutions, tools, techniques, and standards. . Solution Development: Create pragmatic solutions for complex real-world challenges. . Incident Management: Troubleshoot, investigate, and resolve infrastructure-related incidents. Required Qualifications and Skills To excel in this role, you should have experience in a number of these technologies: Technical Expertise . Automation Tools: Ansible, Puppet, and Foreman. . Programming & Scripting - Java, Python, Bash, PowerShell, Terraform. . Operating Systems: Red Hat Linux and Windows Server. . Virtualisation: VMware, OpenShift and Hyper-V . Containerisation: Docker or Kubernetes. . Cloud Platforms: AWS Cloud or Azure. . Networking Fundamentals: DNS, DHCP, TCP/IP, routing, switching, and HTTP protocols. UKIC Cleared Automation Engineer - 12 months+ - £700-800pd Inside IR35 - Full time on site (Gloucestershire) Damia Group Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept our Data Protection Policy which can be found on our website. Please note that no terminology in this advert is intended to discriminate on the grounds of a person's gender, marital status, race, religion, colour, age, disability or sexual orientation. Every candidate will be assessed only in accordance with their merits, qualifications and ability to perform the duties of the job. Damia Group is acting as an Employment Business in relation to this vacancy and in accordance to Conduct Regulations 2003.
08/01/2025
Project-based
UKIC DV Cleared Automation Engineer - 12 months+ - £700-800pd Inside IR35 - Full time on site (Gloucestershire) Key Responsibilities As an Automation Engineer, you will: . Integration & Deployment: Seamlessly integrate development outputs into release modules and drive them through various stages into operational service. . Release-to-Service Management: Navigate security and accreditation procedures to support the release-to-service process. . Technical Expertise: Offer deep technical insights into designs, solutions, tools, techniques, and standards. . Solution Development: Create pragmatic solutions for complex real-world challenges. . Incident Management: Troubleshoot, investigate, and resolve infrastructure-related incidents. Required Qualifications and Skills To excel in this role, you should have experience in a number of these technologies: Technical Expertise . Automation Tools: Ansible, Puppet, and Foreman. . Programming & Scripting - Java, Python, Bash, PowerShell, Terraform. . Operating Systems: Red Hat Linux and Windows Server. . Virtualisation: VMware, OpenShift and Hyper-V . Containerisation: Docker or Kubernetes. . Cloud Platforms: AWS Cloud or Azure. . Networking Fundamentals: DNS, DHCP, TCP/IP, routing, switching, and HTTP protocols. UKIC Cleared Automation Engineer - 12 months+ - £700-800pd Inside IR35 - Full time on site (Gloucestershire) Damia Group Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this job you accept our Data Protection Policy which can be found on our website. Please note that no terminology in this advert is intended to discriminate on the grounds of a person's gender, marital status, race, religion, colour, age, disability or sexual orientation. Every candidate will be assessed only in accordance with their merits, qualifications and ability to perform the duties of the job. Damia Group is acting as an Employment Business in relation to this vacancy and in accordance to Conduct Regulations 2003.
DBA (GCP, SQL,Python, Java) 12 Months Hybrid - Every Tuesday on-site in Basildon £321.38 per day (Inside IR35) Overview Join the Purpose Build Data Products team and be part of an innovative journey to transform how data is managed and utilized across our organization. We are dedicated to pioneering an adaptive and collaborative data ecosystem that optimizes every aspect of the data life cycle. Our team focuses on comprehensive data ingestion, ensuring regulatory compliance, and democratizing access to enhanced insights. By fostering a culture of continuous improvement and innovation, we empower every team with actionable and enriched insights. Our goal is to drive transformative outcomes and set a new standard of data-powered success. The successful candidate will be responsible for building scalable data products in a cloud-native environment. You will lead both inbound and outbound data integrations, support global data and analytics initiatives, and develop always-on solutions. Your work will be pivotal in ensuring our data infrastructure is robust, efficient, and adaptable to evolving business requirements. Responsibilities: - Collaborate with GDIA product lines and business partners to understand data requirements and opportunities. Skills Required: Develop custom cloud solutions and pipelines with GCP native tools, Data Prep, Data Proc, Data Fusion, Data Flow, DataForm, DBT, and Big Query - Proficiency in SQL, Python, and PySpark. - Expertise in GCP Cloud and open-source tools like Terraform. - Experience with CI/CD practices and tools such as Tekton. - Knowledge of workflow management platforms like Apache Airflow and Astronomer. - Proficiency in using GitHub for version control and collaboration. - Ability to design and maintain efficient data pipelines. - Familiarity with data security, governance, and compliance best practices. - Strong problem-solving, communication, and collaboration skills. - Ability to work autonomously and in a collaborative environment. - Ability to design pipelines and architectures for data processing. - Experience with data security, governance, and compliance best practices in the cloud. - An understanding of current architecture standards and digital platform services strategy. - Excellent problem-solving skills, with the ability to design and optimize complex data pipelines. - Meticulous approach to data accuracy and quality - Strong communication and collaboration skills, capable of working effectively with both technical and non-technical stakeholders as part of a large global and diverse team. Skills Preferred: Experience of Java, MDM - Front End experience, eg, angular or react. - Experience with data visualization tools (eg, Tableau, Power BI). - Software Quality and Performance (eg, Sonarqube, Checkmarx, FOSSA, and Dynatrace) Experience Required: Strong programming and Scripting experience with SQL, Python, and PySpark. - Ability to work effectively across organizations, product teams and business partners. - Knowledge Agile Methodology, experience in writing user stories - Demonstrated ability to lead data engineering projects, design sessions and deliverables to successful completion. - Experience with GCP Cloud experience with solutions designed and implemented at production scale. - Knowledge of Data Warehouse concepts, experience with Data Warehouse/ETL processes - Strong process discipline and thorough understating of IT processes (ISP, Data Security). - Critical thinking skills to propose data solutions, test, and make them a reality. - Deep understanding of data service ecosystems including data warehousing, lakes, metadata, meshes, fabrics and AI/ML use cases. - User experience advocacy through empathetic stakeholder relationship. - Effective Communication both internally (with team members) and externally (with stakeholders) - Must be able to take customer requirements, conceptualize solutions, and build scalable/extensible systems that can be easily expanded or enhanced in the future. Experience Preferred: Excellent communication, collaboration and influence skills; ability to energize a team. - Knowledge of data, software and architecture operations, data engineering and data management standards, governance and quality - Hands on experience in Python using libraries like NumPy, Pandas, etc. - Extensive knowledge and understanding of GCP offerings, bundled services, especially those associated with data operations Cloud Console, BigQuery, DataFlow, DataFusion, PubSub/Kafka, Looker Studio, VertexAI - Experience with recoding, re-developing and optimizing data operations, data science and analytical workflows and products. - Data Governance concepts including GDPR (General Data Protection Regulation) and how these can impact technical architecture. Disclaimer: This vacancy is being advertised by either Advanced Resource Managers Limited, Advanced Resource Managers IT Limited or Advanced Resource Managers Engineering Limited ("ARM"). ARM is a specialist talent acquisition and management consultancy. We provide technical contingency recruitment and a portfolio of more complex resource solutions. Our specialist recruitment divisions cover the entire technical arena, including some of the most economically and strategically important industries in the UK and the world today. We will never send your CV without your permission. Where the role is marked as Outside IR35 in the advertisement this is subject to receipt of a final Status Determination Statement from the end Client and may be subject to change.
08/01/2025
Project-based
DBA (GCP, SQL,Python, Java) 12 Months Hybrid - Every Tuesday on-site in Basildon £321.38 per day (Inside IR35) Overview Join the Purpose Build Data Products team and be part of an innovative journey to transform how data is managed and utilized across our organization. We are dedicated to pioneering an adaptive and collaborative data ecosystem that optimizes every aspect of the data life cycle. Our team focuses on comprehensive data ingestion, ensuring regulatory compliance, and democratizing access to enhanced insights. By fostering a culture of continuous improvement and innovation, we empower every team with actionable and enriched insights. Our goal is to drive transformative outcomes and set a new standard of data-powered success. The successful candidate will be responsible for building scalable data products in a cloud-native environment. You will lead both inbound and outbound data integrations, support global data and analytics initiatives, and develop always-on solutions. Your work will be pivotal in ensuring our data infrastructure is robust, efficient, and adaptable to evolving business requirements. Responsibilities: - Collaborate with GDIA product lines and business partners to understand data requirements and opportunities. Skills Required: Develop custom cloud solutions and pipelines with GCP native tools, Data Prep, Data Proc, Data Fusion, Data Flow, DataForm, DBT, and Big Query - Proficiency in SQL, Python, and PySpark. - Expertise in GCP Cloud and open-source tools like Terraform. - Experience with CI/CD practices and tools such as Tekton. - Knowledge of workflow management platforms like Apache Airflow and Astronomer. - Proficiency in using GitHub for version control and collaboration. - Ability to design and maintain efficient data pipelines. - Familiarity with data security, governance, and compliance best practices. - Strong problem-solving, communication, and collaboration skills. - Ability to work autonomously and in a collaborative environment. - Ability to design pipelines and architectures for data processing. - Experience with data security, governance, and compliance best practices in the cloud. - An understanding of current architecture standards and digital platform services strategy. - Excellent problem-solving skills, with the ability to design and optimize complex data pipelines. - Meticulous approach to data accuracy and quality - Strong communication and collaboration skills, capable of working effectively with both technical and non-technical stakeholders as part of a large global and diverse team. Skills Preferred: Experience of Java, MDM - Front End experience, eg, angular or react. - Experience with data visualization tools (eg, Tableau, Power BI). - Software Quality and Performance (eg, Sonarqube, Checkmarx, FOSSA, and Dynatrace) Experience Required: Strong programming and Scripting experience with SQL, Python, and PySpark. - Ability to work effectively across organizations, product teams and business partners. - Knowledge Agile Methodology, experience in writing user stories - Demonstrated ability to lead data engineering projects, design sessions and deliverables to successful completion. - Experience with GCP Cloud experience with solutions designed and implemented at production scale. - Knowledge of Data Warehouse concepts, experience with Data Warehouse/ETL processes - Strong process discipline and thorough understating of IT processes (ISP, Data Security). - Critical thinking skills to propose data solutions, test, and make them a reality. - Deep understanding of data service ecosystems including data warehousing, lakes, metadata, meshes, fabrics and AI/ML use cases. - User experience advocacy through empathetic stakeholder relationship. - Effective Communication both internally (with team members) and externally (with stakeholders) - Must be able to take customer requirements, conceptualize solutions, and build scalable/extensible systems that can be easily expanded or enhanced in the future. Experience Preferred: Excellent communication, collaboration and influence skills; ability to energize a team. - Knowledge of data, software and architecture operations, data engineering and data management standards, governance and quality - Hands on experience in Python using libraries like NumPy, Pandas, etc. - Extensive knowledge and understanding of GCP offerings, bundled services, especially those associated with data operations Cloud Console, BigQuery, DataFlow, DataFusion, PubSub/Kafka, Looker Studio, VertexAI - Experience with recoding, re-developing and optimizing data operations, data science and analytical workflows and products. - Data Governance concepts including GDPR (General Data Protection Regulation) and how these can impact technical architecture. Disclaimer: This vacancy is being advertised by either Advanced Resource Managers Limited, Advanced Resource Managers IT Limited or Advanced Resource Managers Engineering Limited ("ARM"). ARM is a specialist talent acquisition and management consultancy. We provide technical contingency recruitment and a portfolio of more complex resource solutions. Our specialist recruitment divisions cover the entire technical arena, including some of the most economically and strategically important industries in the UK and the world today. We will never send your CV without your permission. Where the role is marked as Outside IR35 in the advertisement this is subject to receipt of a final Status Determination Statement from the end Client and may be subject to change.
Python Engineer | SC Cleared SR2 is looking to speak to Python Engineers for an upcoming project with a consultancy and their public sector client. The project is working on the deployment of ML to the cloud and building scalable solutions. You will be working closely with Data Scientists and other software engineers. Experience; Strong Python development experience whilst working on AI/Machine Learning Projects Proven collaboration with Data Science teams Good technical understanding of machine learning and building scalable solutions Associated tech stack, such as Tensorflow, Computer Vision, PyTorch, etc. Must have active SC Clearance. This contract will require travel into London Office weekly - 1 or 2 days per week. We can offer competitive day rates outside IR35 and imminent start dates. Please apply for CV review.
08/01/2025
Project-based
Python Engineer | SC Cleared SR2 is looking to speak to Python Engineers for an upcoming project with a consultancy and their public sector client. The project is working on the deployment of ML to the cloud and building scalable solutions. You will be working closely with Data Scientists and other software engineers. Experience; Strong Python development experience whilst working on AI/Machine Learning Projects Proven collaboration with Data Science teams Good technical understanding of machine learning and building scalable solutions Associated tech stack, such as Tensorflow, Computer Vision, PyTorch, etc. Must have active SC Clearance. This contract will require travel into London Office weekly - 1 or 2 days per week. We can offer competitive day rates outside IR35 and imminent start dates. Please apply for CV review.
Qualient Technology Solutions UK Limited
Reading, Berkshire
Location: Reading/Croydon, UK (Hybrid Role) Employment Type: Permanent Security Clearance: Active SC Clearance Required About Qualient Technology Solutions At Qualient Technology Solutions, we specialize in delivering cutting-edge technology solutions tailored to our clients' needs. We are currently seeking a highly skilled Senior Infrastructure Engineer to join our team. This role is pivotal in supporting and enhancing infrastructure and environment management for a high-profile, secure solution. This hybrid role offers an exciting opportunity to work with innovative technologies and complex systems in a dynamic environment. Key Responsibilities Manage desktop operations, including user account creation, patch management, and fault resolution. Perform server maintenance, backups, capacity tuning, and license management. Develop and implement infrastructure automation using the Puppet Framework. Configure and manage networking (DNS, DHCP, VLAN, SSL) and Firewall settings. Ensure security protocols with tools like Nessus and AlienVault. Collaborate with cross-functional teams and provide technical support. Skills & Experience Required Extensive Experience: At least 15 years in infrastructure management and environment provisioning. Technical Expertise: Proficient in Puppet Framework, Docker, VMware, SAN, and hardware asset management. Programming Skills: Strong knowledge of Ubuntu OS (19.0+), Shell Scripting, Python, and YAML. Tools Knowledge: Hands-on experience with JIRA, Confluence, Git, Jenkins, and Nexus. Networking Proficiency: Expertise in managing DNS, DHCP, VLAN, HTTP, and SSL protocols. Security Focus: Familiarity with tools like Nessus, AlienVault, and antivirus software such as ClamAV and SOPHOS. Cloud Experience: Knowledge of AWS infrastructure and cloud technologies (desirable). Agile Practices: Solid understanding of Agile methodologies and workflows. Technologies Involved Mandatory: Puppet Framework, Ubuntu OS, Shell Scripting, Python, VMware Desirable: AWS Infrastructure, OpenLDAP, ClamAV, SOPHOS What We Offer A competitive salary and benefits package. The opportunity to work in a secure, high-impact environment. Access to cutting-edge tools and technologies. Professional development and growth opportunities. Note: Due to the nature of this role, only candidates with active SC Clearance will be considered.
08/01/2025
Location: Reading/Croydon, UK (Hybrid Role) Employment Type: Permanent Security Clearance: Active SC Clearance Required About Qualient Technology Solutions At Qualient Technology Solutions, we specialize in delivering cutting-edge technology solutions tailored to our clients' needs. We are currently seeking a highly skilled Senior Infrastructure Engineer to join our team. This role is pivotal in supporting and enhancing infrastructure and environment management for a high-profile, secure solution. This hybrid role offers an exciting opportunity to work with innovative technologies and complex systems in a dynamic environment. Key Responsibilities Manage desktop operations, including user account creation, patch management, and fault resolution. Perform server maintenance, backups, capacity tuning, and license management. Develop and implement infrastructure automation using the Puppet Framework. Configure and manage networking (DNS, DHCP, VLAN, SSL) and Firewall settings. Ensure security protocols with tools like Nessus and AlienVault. Collaborate with cross-functional teams and provide technical support. Skills & Experience Required Extensive Experience: At least 15 years in infrastructure management and environment provisioning. Technical Expertise: Proficient in Puppet Framework, Docker, VMware, SAN, and hardware asset management. Programming Skills: Strong knowledge of Ubuntu OS (19.0+), Shell Scripting, Python, and YAML. Tools Knowledge: Hands-on experience with JIRA, Confluence, Git, Jenkins, and Nexus. Networking Proficiency: Expertise in managing DNS, DHCP, VLAN, HTTP, and SSL protocols. Security Focus: Familiarity with tools like Nessus, AlienVault, and antivirus software such as ClamAV and SOPHOS. Cloud Experience: Knowledge of AWS infrastructure and cloud technologies (desirable). Agile Practices: Solid understanding of Agile methodologies and workflows. Technologies Involved Mandatory: Puppet Framework, Ubuntu OS, Shell Scripting, Python, VMware Desirable: AWS Infrastructure, OpenLDAP, ClamAV, SOPHOS What We Offer A competitive salary and benefits package. The opportunity to work in a secure, high-impact environment. Access to cutting-edge tools and technologies. Professional development and growth opportunities. Note: Due to the nature of this role, only candidates with active SC Clearance will be considered.
Manipulation Lead Are you passionate about pushing the boundaries of robotics and AI? A cutting-edge robotics start-up is pushing the limits of AI and automation, and we need your expertise to help power the future of robotics and I'm searching for a Manipulation Lead to join their growing team. What You'll Do: Define and steer research initiatives in manipulation aligned with real-world impact, leading from the front Mentor and inspire the next generation of AI/ML innovators. Collaborate across teams to transform research into cutting-edge solutions. Stay ahead of trends in AI/ML, manipulation, perception and robotics to shape our strategy. Drive IP generation through and represent us on global stages, building partnerships that matter. Contribute to grant and funding proposals to secure external sponsorships. What We're Looking For: Ph.D. (or equivalent experience) in AI, ML, robotics, or related fields. Expertise in manipulation, reinforcement learning, motion planning, and perception. A track record of conducting research and publishing in top-tier AI/ML conferences and journals. Proficiency in Python, C++, or MATLAB with experience in AI/ML libraries. Creativity, curiosity, and a drive to solve real-world robotics challenges. Knowledge of Open-Knowledge Models for Robotics is desirable. Familiarity with Vision-Language Action (VLA) networks and Language Models (LLMs) to enhance robot perception and interaction capabilities. Offices based in London - hybrid working. Relocation opportunities available. Competitive salary + equity + benefits. Join this team as they redefine the future of work, building robots that not only fill critical labour gaps but also unlock human potential in ways we never thought possible. If you're a top-tier engineer excited by the prospect of driving AI innovation in a dynamic, fast-paced environment, we want to hear from you. Lawrence Harvey is acting as an Employment Business in regards to this position.
08/01/2025
Full time
Manipulation Lead Are you passionate about pushing the boundaries of robotics and AI? A cutting-edge robotics start-up is pushing the limits of AI and automation, and we need your expertise to help power the future of robotics and I'm searching for a Manipulation Lead to join their growing team. What You'll Do: Define and steer research initiatives in manipulation aligned with real-world impact, leading from the front Mentor and inspire the next generation of AI/ML innovators. Collaborate across teams to transform research into cutting-edge solutions. Stay ahead of trends in AI/ML, manipulation, perception and robotics to shape our strategy. Drive IP generation through and represent us on global stages, building partnerships that matter. Contribute to grant and funding proposals to secure external sponsorships. What We're Looking For: Ph.D. (or equivalent experience) in AI, ML, robotics, or related fields. Expertise in manipulation, reinforcement learning, motion planning, and perception. A track record of conducting research and publishing in top-tier AI/ML conferences and journals. Proficiency in Python, C++, or MATLAB with experience in AI/ML libraries. Creativity, curiosity, and a drive to solve real-world robotics challenges. Knowledge of Open-Knowledge Models for Robotics is desirable. Familiarity with Vision-Language Action (VLA) networks and Language Models (LLMs) to enhance robot perception and interaction capabilities. Offices based in London - hybrid working. Relocation opportunities available. Competitive salary + equity + benefits. Join this team as they redefine the future of work, building robots that not only fill critical labour gaps but also unlock human potential in ways we never thought possible. If you're a top-tier engineer excited by the prospect of driving AI innovation in a dynamic, fast-paced environment, we want to hear from you. Lawrence Harvey is acting as an Employment Business in regards to this position.
*We are unable to sponsor as this is a permanent Full time role* *Hybrid, 3 days onsite, 2 days remote* A prestigious company is looking for an AI Engineer. This engineer will focus on AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Responsibilities: The AI Engineer, a member of the AI Engineering team, is responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment Qualifications: Bachelor's Degree in Computer Science, Engineering, or related field A minimum of 5 years of experience in AI engineering or a related field Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, Langchain, LlamaIndex, and Semantic Kernel Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure) Solid understanding of large language models. Enterprise level experience, big accounting firms, academia
07/01/2025
Full time
*We are unable to sponsor as this is a permanent Full time role* *Hybrid, 3 days onsite, 2 days remote* A prestigious company is looking for an AI Engineer. This engineer will focus on AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Responsibilities: The AI Engineer, a member of the AI Engineering team, is responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment Qualifications: Bachelor's Degree in Computer Science, Engineering, or related field A minimum of 5 years of experience in AI engineering or a related field Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, Langchain, LlamaIndex, and Semantic Kernel Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure) Solid understanding of large language models. Enterprise level experience, big accounting firms, academia
Company Overview I'm assisting a genuine and World-Wide renowed flagship organisition in their recruitment of a Junior iPaaS (Integration Platform as a Service) Integration Analyst to join their internal technology systems team. This is an exditing and outsatnding opportunity for an individual looking to grow their career in integration technology and play a key role in delivering seamless cloud integrations. Position Overview As a Junior/Mid-Level iPaaS Integration Analyst, you will work closely with senior team members to assist in the design, implementation, and management of integration solutions using iPaaS platforms. You will support internal stakeholders in streamlining their workflows, ensuring smooth data exchange between systems, and providing top-notch customer service. This is an ideal role for someone with a passion for technology, problem-solving, and learning new tools in a fast-paced environment. Key Responsibilities: Assist in the design, configuration, and deployment of integration solutions using iPaaS platforms (eg, MuleSoft, Dell Boomi, Informatica Cloud). Collaborate with senior analysts and technical teams to gather requirements and ensure seamless integration between internal and third-party systems. Troubleshoot integration issues and assist in identifying root causes and implementing solutions. Maintain and monitor the performance of existing integrations to ensure they meet business requirements. Work closely with cross-functional teams to ensure that integrations align with business goals. Participate in quality assurance processes by testing integrations and validating data flows. Provide technical support and training to end-users as needed. Stay up-to-date with the latest iPaaS technologies and industry best practices. Qualifications: Ideally a degree in Computer Science, Information Technology, Engineering or equivalent experience. Basic understanding of integration platforms (iPaaS) and cloud technologies. Familiarity with data formats like XML, JSON, CSV, and APIs. Good undersatnding of SQL. Strong analytical and problem-solving skills. Ability to work in a team-oriented, collaborative environment. Excellent communication skills. Eagerness to learn and grow in the field of integration technology. Previous experience in an integration or technical support role is a plus but not required. Preferred Skills (but not required): Hands-on experience with iPaaS tools such as MuleSoft, Dell Boomi, or Informatica Cloud. Experience with web services (REST, SOAP) and API management. Basic knowledge of programming languages (Java, JavaScript, Python). Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Why Join? Growth Opportunities : the business is committed to your professional development, with opportunities for training, certification, and career progression. Collaborative Culture : Join a supportive and collaborative team that values innovation and knowledge sharing. Competitive Compensation : WThey offer a competitive salary and benefits package, including bonus, health insurance, paid time off, and more. Exciting Projects : Work on cutting-edge integration projects that help businesses streamline their operations. Hybrid Working You must be based in the UK and be able to work frm the company's Manchester office twoce a week. InterQuest Group is acting as an employment agency for this vacancy. InterQuest Group is an equal opportunities employer and we welcome applications from all suitably qualified persons regardless of age, disability, gender, religion/belief, race, marriage, civil partnership, pregnancy, maternity, sex or sexual orientation. Please make us aware if you require any reasonable adjustments throughout the recruitment process.
07/01/2025
Full time
Company Overview I'm assisting a genuine and World-Wide renowed flagship organisition in their recruitment of a Junior iPaaS (Integration Platform as a Service) Integration Analyst to join their internal technology systems team. This is an exditing and outsatnding opportunity for an individual looking to grow their career in integration technology and play a key role in delivering seamless cloud integrations. Position Overview As a Junior/Mid-Level iPaaS Integration Analyst, you will work closely with senior team members to assist in the design, implementation, and management of integration solutions using iPaaS platforms. You will support internal stakeholders in streamlining their workflows, ensuring smooth data exchange between systems, and providing top-notch customer service. This is an ideal role for someone with a passion for technology, problem-solving, and learning new tools in a fast-paced environment. Key Responsibilities: Assist in the design, configuration, and deployment of integration solutions using iPaaS platforms (eg, MuleSoft, Dell Boomi, Informatica Cloud). Collaborate with senior analysts and technical teams to gather requirements and ensure seamless integration between internal and third-party systems. Troubleshoot integration issues and assist in identifying root causes and implementing solutions. Maintain and monitor the performance of existing integrations to ensure they meet business requirements. Work closely with cross-functional teams to ensure that integrations align with business goals. Participate in quality assurance processes by testing integrations and validating data flows. Provide technical support and training to end-users as needed. Stay up-to-date with the latest iPaaS technologies and industry best practices. Qualifications: Ideally a degree in Computer Science, Information Technology, Engineering or equivalent experience. Basic understanding of integration platforms (iPaaS) and cloud technologies. Familiarity with data formats like XML, JSON, CSV, and APIs. Good undersatnding of SQL. Strong analytical and problem-solving skills. Ability to work in a team-oriented, collaborative environment. Excellent communication skills. Eagerness to learn and grow in the field of integration technology. Previous experience in an integration or technical support role is a plus but not required. Preferred Skills (but not required): Hands-on experience with iPaaS tools such as MuleSoft, Dell Boomi, or Informatica Cloud. Experience with web services (REST, SOAP) and API management. Basic knowledge of programming languages (Java, JavaScript, Python). Familiarity with cloud platforms such as AWS, Azure, or Google Cloud. Why Join? Growth Opportunities : the business is committed to your professional development, with opportunities for training, certification, and career progression. Collaborative Culture : Join a supportive and collaborative team that values innovation and knowledge sharing. Competitive Compensation : WThey offer a competitive salary and benefits package, including bonus, health insurance, paid time off, and more. Exciting Projects : Work on cutting-edge integration projects that help businesses streamline their operations. Hybrid Working You must be based in the UK and be able to work frm the company's Manchester office twoce a week. InterQuest Group is acting as an employment agency for this vacancy. InterQuest Group is an equal opportunities employer and we welcome applications from all suitably qualified persons regardless of age, disability, gender, religion/belief, race, marriage, civil partnership, pregnancy, maternity, sex or sexual orientation. Please make us aware if you require any reasonable adjustments throughout the recruitment process.
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, application and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, server-less, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture to improve performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high level Solution Designs Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs Advanced problem-solving skills and logical approach to solving problems [Required] Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, PERL, Ruby, etc [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. [Preferred] Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications
06/01/2025
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, application and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, server-less, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture to improve performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high level Solution Designs Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs Advanced problem-solving skills and logical approach to solving problems [Required] Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, PERL, Ruby, etc [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. [Preferred] Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, application and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, server-less, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture to improve performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high level Solution Designs Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs Advanced problem-solving skills and logical approach to solving problems [Required] Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, PERL, Ruby, etc [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. [Preferred] Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications
06/01/2025
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, application and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, server-less, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture to improve performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high level Solution Designs Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs Advanced problem-solving skills and logical approach to solving problems [Required] Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, PERL, Ruby, etc [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. [Preferred] Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications
NO SPONSORSHIP AI Engineer/Developer On site 3 days a week downtown Chicago Salary - 180 - 200K + $1200 - 10K Bonus We are looking for someone with a software development background who just happens to have specialized in AI Workflow automation. This makes their skills far more portable and is super important because the field is changing rapidly. Existing automation tech will be become obsolete very soon. We need people that are more focused on the programming side of the house, versus general office productivity. AI Engineering natural language processing and machine learning design Python Langchain LlamalIndex and Semantic Kernel large language models PyPDF Azure document intelligence prompt engineering In total, there are 20 people associated with this AI team. The Artificial Intelligence team is brand new at this organization. this is a new wave in the Legal space, and we are responsible for creating an AI road map in the Legal industry. AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Ideal Candidate: -Bachelor's Degree in Computer Science, Engineering, or related field -A minimum of 5 years of experience in AI engineering or a related field -MUST HAVE: Proven experience with AI engineering tools and technologies, including Python, Langchain, LlamaIndex, and Semantic Kernel -Understanding of large language models. -You will likely come across a lot of general software developers who want to transition into AI Engineering - This is an acceptable candidate, as long as they have experience with the AI tools listed above. -You will also likely come across Data Scientists that are trying to reinvent themselves as Data/AI Engineers. This candidate is acceptable as well, as long as they have experience with the above listed tools and can build a case on AI agents. The AI Engineer, a member of the AI Engineering team, is responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Duties and Responsibilities: Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment Required: Bachelor's Degree in Computer Science, Engineering, or related field A minimum of 5 years of experience in AI engineering or a related field Preferred : Master's Degree in Computer Science, Engineering, or related field Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, Langchain, LlamaIndex, and Semantic Kernel Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure)
06/01/2025
Full time
NO SPONSORSHIP AI Engineer/Developer On site 3 days a week downtown Chicago Salary - 180 - 200K + $1200 - 10K Bonus We are looking for someone with a software development background who just happens to have specialized in AI Workflow automation. This makes their skills far more portable and is super important because the field is changing rapidly. Existing automation tech will be become obsolete very soon. We need people that are more focused on the programming side of the house, versus general office productivity. AI Engineering natural language processing and machine learning design Python Langchain LlamalIndex and Semantic Kernel large language models PyPDF Azure document intelligence prompt engineering In total, there are 20 people associated with this AI team. The Artificial Intelligence team is brand new at this organization. this is a new wave in the Legal space, and we are responsible for creating an AI road map in the Legal industry. AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Ideal Candidate: -Bachelor's Degree in Computer Science, Engineering, or related field -A minimum of 5 years of experience in AI engineering or a related field -MUST HAVE: Proven experience with AI engineering tools and technologies, including Python, Langchain, LlamaIndex, and Semantic Kernel -Understanding of large language models. -You will likely come across a lot of general software developers who want to transition into AI Engineering - This is an acceptable candidate, as long as they have experience with the AI tools listed above. -You will also likely come across Data Scientists that are trying to reinvent themselves as Data/AI Engineers. This candidate is acceptable as well, as long as they have experience with the above listed tools and can build a case on AI agents. The AI Engineer, a member of the AI Engineering team, is responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Duties and Responsibilities: Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment Required: Bachelor's Degree in Computer Science, Engineering, or related field A minimum of 5 years of experience in AI engineering or a related field Preferred : Master's Degree in Computer Science, Engineering, or related field Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, Langchain, LlamaIndex, and Semantic Kernel Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure)
Spectrum IT Recruitment (South) Ltd
Fareham, Hampshire
Python Developer Python, Django, TypeScript, JavaScript Remote working Salary £60k - £65k plus benefits An established company who are Embedded within one of the UK's most innovative and established tech hubs - fully funded, well equipped and well prepared to launch their latest tech incarnation - a pet tag tracking service with a unique proposition set to disrupt the sector. This is not the first start-up this company has launched; their wealthy owners have been doing this for 10+ years and can demonstrate £mm income from several of their previous creations. As such they have an amazing office in Fareham (near Whiteley) with a no expense spared look and feel. You will be an opportunity to work hybrid/remote and given a huge investment in tech & tools, free hot food cooked in the company owned restaurant, nights out, spa breaks and much much more! Who are you? Full Stack Developer/Python Engineer/Python Developer You will need expertise in open source and object-oriented programming. This product is built on Python, Django and TypeScript and you will get to leverage the best available technology to build the solution, pretty much from scratch. You will join a small team of Python Engineers and work closely with senior management and development teams to ensure the successful development of each project. Your Skills & Experience Python Django framework TypeScript JavaScript, jQuery HTML, CSS API models Salary and Scope Salary £60k - £65k plus excellent benefits and training We are looking for people on an upward career trajectory who want to be part of a journey and help to mould and nurture a product and eventually a team. Please send your CV and details through to Tom Rayner on email (see below) Spectrum IT Recruitment (South) Limited is acting as an Employment Agency in relation to this vacancy.
06/01/2025
Full time
Python Developer Python, Django, TypeScript, JavaScript Remote working Salary £60k - £65k plus benefits An established company who are Embedded within one of the UK's most innovative and established tech hubs - fully funded, well equipped and well prepared to launch their latest tech incarnation - a pet tag tracking service with a unique proposition set to disrupt the sector. This is not the first start-up this company has launched; their wealthy owners have been doing this for 10+ years and can demonstrate £mm income from several of their previous creations. As such they have an amazing office in Fareham (near Whiteley) with a no expense spared look and feel. You will be an opportunity to work hybrid/remote and given a huge investment in tech & tools, free hot food cooked in the company owned restaurant, nights out, spa breaks and much much more! Who are you? Full Stack Developer/Python Engineer/Python Developer You will need expertise in open source and object-oriented programming. This product is built on Python, Django and TypeScript and you will get to leverage the best available technology to build the solution, pretty much from scratch. You will join a small team of Python Engineers and work closely with senior management and development teams to ensure the successful development of each project. Your Skills & Experience Python Django framework TypeScript JavaScript, jQuery HTML, CSS API models Salary and Scope Salary £60k - £65k plus excellent benefits and training We are looking for people on an upward career trajectory who want to be part of a journey and help to mould and nurture a product and eventually a team. Please send your CV and details through to Tom Rayner on email (see below) Spectrum IT Recruitment (South) Limited is acting as an Employment Agency in relation to this vacancy.
Python Developer Python, Django, TypeScript, JavaScript Remote working Salary £60k - £65k plus benefits An established company who are Embedded within one of the UK's most innovative and established tech hubs - fully funded, well equipped and well prepared to launch their latest tech incarnation - a pet tag tracking service with a unique proposition set to disrupt the sector. This is not the first start-up this company has launched; their wealthy owners have been doing this for 10+ years and can demonstrate £mm income from several of their previous creations. As such they have an amazing office in the south with a no expense spared look and feel. You will be an opportunity to work hybrid/remote and given a huge investment in tech & tools, free hot food cooked in the company owned restaurant, nights out, spa breaks and much much more! Who are you? Full Stack Developer/Python Engineer/Python Developer You will need expertise in open source and object-oriented programming. This product is built on Python, Django and TypeScript and you will get to leverage the best available technology to build the solution, pretty much from scratch. You will join a small team of Python Engineers and work closely with senior management and development teams to ensure the successful development of each project. Your Skills & Experience Python Django framework TypeScript JavaScript, jQuery HTML, CSS API models Salary and Scope Salary £60k - £65k plus excellent benefits and training We are looking for people on an upward career trajectory who want to be part of a journey and help to mould and nurture a product and eventually a team. Please send your CV and details through to Tom Rayner on email (see below) Spectrum IT Recruitment (South) Limited is acting as an Employment Agency in relation to this vacancy.
06/01/2025
Full time
Python Developer Python, Django, TypeScript, JavaScript Remote working Salary £60k - £65k plus benefits An established company who are Embedded within one of the UK's most innovative and established tech hubs - fully funded, well equipped and well prepared to launch their latest tech incarnation - a pet tag tracking service with a unique proposition set to disrupt the sector. This is not the first start-up this company has launched; their wealthy owners have been doing this for 10+ years and can demonstrate £mm income from several of their previous creations. As such they have an amazing office in the south with a no expense spared look and feel. You will be an opportunity to work hybrid/remote and given a huge investment in tech & tools, free hot food cooked in the company owned restaurant, nights out, spa breaks and much much more! Who are you? Full Stack Developer/Python Engineer/Python Developer You will need expertise in open source and object-oriented programming. This product is built on Python, Django and TypeScript and you will get to leverage the best available technology to build the solution, pretty much from scratch. You will join a small team of Python Engineers and work closely with senior management and development teams to ensure the successful development of each project. Your Skills & Experience Python Django framework TypeScript JavaScript, jQuery HTML, CSS API models Salary and Scope Salary £60k - £65k plus excellent benefits and training We are looking for people on an upward career trajectory who want to be part of a journey and help to mould and nurture a product and eventually a team. Please send your CV and details through to Tom Rayner on email (see below) Spectrum IT Recruitment (South) Limited is acting as an Employment Agency in relation to this vacancy.
Security Engineering Lead - Artificial Intelligence Salary - £90-95k + Bonus + Benefits Location - Manchester (2 days per week on site) Newly created role with a major global Banking institution, who are looking to hire a Security Engineering Lead to protect them from the ever-evolving AI Landscape whilst they embark on a major digital transformation. You will be responsible for ensuring their internal ML (machine learning) AI (artificial Intelligence), LLM (Large Language Models) and GAN (Generative Adversarial Networks) models remain secure but also ensuring they are secured externally against each of these concepts. Seriously exciting role with a really broad remit covering threat modelling, attack simulation, cyber detection/response and cloud security engineering; focused specifically on GenAI related cyber security risks and threats. Genuinely an incredible opportunity for an experienced security engineering lead to build out and establish the AI Security Engineering function for one of the largest and most reputable banks worldwide, in what is comfortably the most interesting and innovative threat actor environment emerging in cyber security. Key Responsibilities: Security Engineering: Development and implementation of protocols, algorithms, and software applications to protect sensitive data and systems. Threat Modelling and Risk Analysis: Identifying potential attack vectors in GenAI systems Incident Response: Detecting adversarial attacks or misuse of generated content with GenAI Cloud Security: Securing AI workloads on cloud platforms (AWS, Azure) involving encryption, secure key management, and robust access controls. Key Requirements: Strong understanding of machine learning algorithms, data processing, and AI frameworks (TensorFlow, PyTorch, Scikit-learn, etc). Hands-on experience with deploying, maintaining and fine-tuning security monitoring and detection tooling. Experience with threat modelling, penetration testing, and vulnerability assessments in AI environments. Background in a DevSecOps/Software Sec Engineering environment with knowledge of ability to script/code in Python. Experience securing AI workloads on major public cloud platforms (AWS or Azure) Lawrence Harvey is acting as an Employment Business in regards to this position.
03/01/2025
Full time
Security Engineering Lead - Artificial Intelligence Salary - £90-95k + Bonus + Benefits Location - Manchester (2 days per week on site) Newly created role with a major global Banking institution, who are looking to hire a Security Engineering Lead to protect them from the ever-evolving AI Landscape whilst they embark on a major digital transformation. You will be responsible for ensuring their internal ML (machine learning) AI (artificial Intelligence), LLM (Large Language Models) and GAN (Generative Adversarial Networks) models remain secure but also ensuring they are secured externally against each of these concepts. Seriously exciting role with a really broad remit covering threat modelling, attack simulation, cyber detection/response and cloud security engineering; focused specifically on GenAI related cyber security risks and threats. Genuinely an incredible opportunity for an experienced security engineering lead to build out and establish the AI Security Engineering function for one of the largest and most reputable banks worldwide, in what is comfortably the most interesting and innovative threat actor environment emerging in cyber security. Key Responsibilities: Security Engineering: Development and implementation of protocols, algorithms, and software applications to protect sensitive data and systems. Threat Modelling and Risk Analysis: Identifying potential attack vectors in GenAI systems Incident Response: Detecting adversarial attacks or misuse of generated content with GenAI Cloud Security: Securing AI workloads on cloud platforms (AWS, Azure) involving encryption, secure key management, and robust access controls. Key Requirements: Strong understanding of machine learning algorithms, data processing, and AI frameworks (TensorFlow, PyTorch, Scikit-learn, etc). Hands-on experience with deploying, maintaining and fine-tuning security monitoring and detection tooling. Experience with threat modelling, penetration testing, and vulnerability assessments in AI environments. Background in a DevSecOps/Software Sec Engineering environment with knowledge of ability to script/code in Python. Experience securing AI workloads on major public cloud platforms (AWS or Azure) Lawrence Harvey is acting as an Employment Business in regards to this position.
*We are unable to sponsor for this permanent Full time role* *Position is Bonus eligible* Prestigious Financial Company is currently seeking an AWS DevOps Software Engineer with Kafka experience. Candidate will provide subject matter expertise for ongoing support of applications deployed to non-production AWS environments and supporting 3rd party applications. Identify root causes and automate solutions in support of development. Candidate will have a deep understanding of DevOps practices, leadership skills, and expertise in various tools and technologies. You will be working in a fast-paced, dynamic environment, using cutting-edge tools and cloud technologies. Manage day to day activities when called upon. Responsibilities: Desing Develop release and support, Cloud Native applications running on Containers Kubernetes and Docker within AWS. DevOps Strategy: Develop and implement DevOps strategies and best practices to enhance development, testing, and deployment processes. Possess in-depth knowledge and hands-on experience with DevOps tools and technologies, including but not limited to GitHub, Jenkins, Terraform, Ansible, Kafka, AWS, Apigee. Support the lower environments for incident and problem management. Resolve complex support issues in non-production environments. Create procedural and troubleshooting documentation related to cloud native applications. Write complex automation scripts using common automation tools, such as yaml, Json, Bash, Groovy, Ansible, Terraform and python, Perform other duties as assigned Qualifications: Excellent problem-solving skills. Ability to work independently. Ability to work with management to prioritize tasks. Demonstrate strong confidence in abilities and knowledge. Ability to work well in crisis situations. Ability to work under minimal supervision. Flexibility to be on call from 5 PM to 7 AM for 3 months per year. Good written and oral communication skills. Technical Skills: Expertise on Kubernetes and Docker, including best practices Expertise in cloud containerization; design, develop and troubleshoot Strong programming or Scripting skills in yaml, Helm Charts, Json, Bash, Groovy, Ansible, Terraform, python or Java. Advance level on Networking technologies CI/CD tools such as Artifactory, Jenkins, and GIT, SonarQube Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in IaC and CaC; Ansible, Terraform Experience with Kafka infrastructure and processes Understanding of software development methodologies and Agile practices Excellent analytical and problem-solving skills, with the ability to troubleshoot and identify the root cause of issues Good verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams. Familiarity with monitoring and logging tools such Elk stack, Splunk. Familiarity with Technologies used to support microservices. Minimum 7 years experience working in a distributed multi-platform environment. Minimum 3 years experience working with Kubernetes. Minimum 3 years experience working on Scripting or Programming Bachelor's degree in a related area Cloud Certification a plus
02/01/2025
Full time
*We are unable to sponsor for this permanent Full time role* *Position is Bonus eligible* Prestigious Financial Company is currently seeking an AWS DevOps Software Engineer with Kafka experience. Candidate will provide subject matter expertise for ongoing support of applications deployed to non-production AWS environments and supporting 3rd party applications. Identify root causes and automate solutions in support of development. Candidate will have a deep understanding of DevOps practices, leadership skills, and expertise in various tools and technologies. You will be working in a fast-paced, dynamic environment, using cutting-edge tools and cloud technologies. Manage day to day activities when called upon. Responsibilities: Desing Develop release and support, Cloud Native applications running on Containers Kubernetes and Docker within AWS. DevOps Strategy: Develop and implement DevOps strategies and best practices to enhance development, testing, and deployment processes. Possess in-depth knowledge and hands-on experience with DevOps tools and technologies, including but not limited to GitHub, Jenkins, Terraform, Ansible, Kafka, AWS, Apigee. Support the lower environments for incident and problem management. Resolve complex support issues in non-production environments. Create procedural and troubleshooting documentation related to cloud native applications. Write complex automation scripts using common automation tools, such as yaml, Json, Bash, Groovy, Ansible, Terraform and python, Perform other duties as assigned Qualifications: Excellent problem-solving skills. Ability to work independently. Ability to work with management to prioritize tasks. Demonstrate strong confidence in abilities and knowledge. Ability to work well in crisis situations. Ability to work under minimal supervision. Flexibility to be on call from 5 PM to 7 AM for 3 months per year. Good written and oral communication skills. Technical Skills: Expertise on Kubernetes and Docker, including best practices Expertise in cloud containerization; design, develop and troubleshoot Strong programming or Scripting skills in yaml, Helm Charts, Json, Bash, Groovy, Ansible, Terraform, python or Java. Advance level on Networking technologies CI/CD tools such as Artifactory, Jenkins, and GIT, SonarQube Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in IaC and CaC; Ansible, Terraform Experience with Kafka infrastructure and processes Understanding of software development methodologies and Agile practices Excellent analytical and problem-solving skills, with the ability to troubleshoot and identify the root cause of issues Good verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams. Familiarity with monitoring and logging tools such Elk stack, Splunk. Familiarity with Technologies used to support microservices. Minimum 7 years experience working in a distributed multi-platform environment. Minimum 3 years experience working with Kubernetes. Minimum 3 years experience working on Scripting or Programming Bachelor's degree in a related area Cloud Certification a plus
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is Bonus eligible* Prestigious Financial Company is currently seeking an AWS DevOps Software Engineer with Kafka experience. Candidate will provide subject matter expertise for ongoing support of applications deployed to non-production AWS environments and supporting 3rd party applications. Identify root causes and automate solutions in support of development. Candidate will have a deep understanding of DevOps practices, leadership skills, and expertise in various tools and technologies. You will be working in a fast-paced, dynamic environment, using cutting-edge tools and cloud technologies. Manage day to day activities when called upon. Responsibilities: Desing Develop release and support, Cloud Native applications running on Containers Kubernetes and Docker within AWS. DevOps Strategy: Develop and implement DevOps strategies and best practices to enhance development, testing, and deployment processes. Possess in-depth knowledge and hands-on experience with DevOps tools and technologies, including but not limited to GitHub, Jenkins, Terraform, Ansible, Kafka, AWS, Apigee. Support the lower environments for incident and problem management. Resolve complex support issues in non-production environments. Create procedural and troubleshooting documentation related to cloud native applications. Write complex automation scripts using common automation tools, such as yaml, Json, Bash, Groovy, Ansible, Terraform and python, Perform other duties as assigned Qualifications: Excellent problem-solving skills. Ability to work independently. Ability to work with management to prioritize tasks. Demonstrate strong confidence in abilities and knowledge. Ability to work well in crisis situations. Ability to work under minimal supervision. Flexibility to be on call from 5 PM to 7 AM for 3 months per year. Good written and oral communication skills. Technical Skills: Expertise on Kubernetes and Docker, including best practices Expertise in cloud containerization; design, develop and troubleshoot Strong programming or Scripting skills in yaml, Helm Charts, Json, Bash, Groovy, Ansible, Terraform, python or Java. Advance level on Networking technologies CI/CD tools such as Artifactory, Jenkins, and GIT, SonarQube Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in IaC and CaC; Ansible, Terraform Experience with Kafka infrastructure and processes Understanding of software development methodologies and Agile practices Excellent analytical and problem-solving skills, with the ability to troubleshoot and identify the root cause of issues Good verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams. Familiarity with monitoring and logging tools such Elk stack, Splunk. Familiarity with Technologies used to support microservices. Minimum 7 years experience working in a distributed multi-platform environment. Minimum 3 years experience working with Kubernetes. Minimum 3 years experience working on Scripting or Programming Bachelor's degree in a related area Cloud Certification a plus
02/01/2025
Full time
*We are unable to sponsor for this permanent Full time role* *Position is Bonus eligible* Prestigious Financial Company is currently seeking an AWS DevOps Software Engineer with Kafka experience. Candidate will provide subject matter expertise for ongoing support of applications deployed to non-production AWS environments and supporting 3rd party applications. Identify root causes and automate solutions in support of development. Candidate will have a deep understanding of DevOps practices, leadership skills, and expertise in various tools and technologies. You will be working in a fast-paced, dynamic environment, using cutting-edge tools and cloud technologies. Manage day to day activities when called upon. Responsibilities: Desing Develop release and support, Cloud Native applications running on Containers Kubernetes and Docker within AWS. DevOps Strategy: Develop and implement DevOps strategies and best practices to enhance development, testing, and deployment processes. Possess in-depth knowledge and hands-on experience with DevOps tools and technologies, including but not limited to GitHub, Jenkins, Terraform, Ansible, Kafka, AWS, Apigee. Support the lower environments for incident and problem management. Resolve complex support issues in non-production environments. Create procedural and troubleshooting documentation related to cloud native applications. Write complex automation scripts using common automation tools, such as yaml, Json, Bash, Groovy, Ansible, Terraform and python, Perform other duties as assigned Qualifications: Excellent problem-solving skills. Ability to work independently. Ability to work with management to prioritize tasks. Demonstrate strong confidence in abilities and knowledge. Ability to work well in crisis situations. Ability to work under minimal supervision. Flexibility to be on call from 5 PM to 7 AM for 3 months per year. Good written and oral communication skills. Technical Skills: Expertise on Kubernetes and Docker, including best practices Expertise in cloud containerization; design, develop and troubleshoot Strong programming or Scripting skills in yaml, Helm Charts, Json, Bash, Groovy, Ansible, Terraform, python or Java. Advance level on Networking technologies CI/CD tools such as Artifactory, Jenkins, and GIT, SonarQube Experience with cloud-based systems such as AWS, Azure, or Google Cloud, including expertise in IaC and CaC; Ansible, Terraform Experience with Kafka infrastructure and processes Understanding of software development methodologies and Agile practices Excellent analytical and problem-solving skills, with the ability to troubleshoot and identify the root cause of issues Good verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams. Familiarity with monitoring and logging tools such Elk stack, Splunk. Familiarity with Technologies used to support microservices. Minimum 7 years experience working in a distributed multi-platform environment. Minimum 3 years experience working with Kubernetes. Minimum 3 years experience working on Scripting or Programming Bachelor's degree in a related area Cloud Certification a plus
Machine Learning Engineering Manager £80,000 - £90,000 + exceptional benefits Leeds/Hybrid Our client, a very well reputable tech first business, is looking to hire an experienced Machine Learning Manager to help build out their platform capability, manage the Machine Learning development life cycle, and the team. In this position, you will play a crucial role to expand the platform capability and processes to assist with the Data Science teams. This role will help to evolve the platform and ensure it's robust and scalable. You'll be a true advocate for ML, working with technical and non-technical stakeholders. Experience Required: Extensive experience in training, deploying and maintaining Machine Learning models. Data Warehousing and ETL tools. Python and surrounding ML tech; PySpark, Snowflake, Scikit Learn, TensorFlow, PyTorch etc. Infrastructure as Code - Terraform, Ansible. Stakeholder Management - Tech and Non-Technical. The Offer: Base Salary: £80,000 - £90,000 Generous Bonus - Discressionary Enhanced Pension, Health Insurance, Life Assurance, plus Additional Flexi Benefits Hybrid & Remote working This is a fantastic opportunity to really take ownership of the ML capability and platform, and truly shape it to be something game changing. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
02/01/2025
Full time
Machine Learning Engineering Manager £80,000 - £90,000 + exceptional benefits Leeds/Hybrid Our client, a very well reputable tech first business, is looking to hire an experienced Machine Learning Manager to help build out their platform capability, manage the Machine Learning development life cycle, and the team. In this position, you will play a crucial role to expand the platform capability and processes to assist with the Data Science teams. This role will help to evolve the platform and ensure it's robust and scalable. You'll be a true advocate for ML, working with technical and non-technical stakeholders. Experience Required: Extensive experience in training, deploying and maintaining Machine Learning models. Data Warehousing and ETL tools. Python and surrounding ML tech; PySpark, Snowflake, Scikit Learn, TensorFlow, PyTorch etc. Infrastructure as Code - Terraform, Ansible. Stakeholder Management - Tech and Non-Technical. The Offer: Base Salary: £80,000 - £90,000 Generous Bonus - Discressionary Enhanced Pension, Health Insurance, Life Assurance, plus Additional Flexi Benefits Hybrid & Remote working This is a fantastic opportunity to really take ownership of the ML capability and platform, and truly shape it to be something game changing. We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
Request Technology - Craig Johnson
Chicago, Illinois
*This role is 3 days onsite each week in Chicago, hybrid remote* *We are unable to provide sponsorship for this permanent Full time role in Chicago* *Position is bonus eligible* Prestigious Firm is currently seeking an Lead Generative AI Engineer. Candidate will be responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Responsibilities: Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment Qualifications: Bachelor's Degree in Computer Science, Engineering, or related field 5+ years of experience in AI engineering or a related field Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, Langchain, LlamaIndex, and Semantic Kernel Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure) Strong organizational, communication, and problem-solving skills, with the ability to work harmoniously and effectively with others Ability to preserve confidentiality and exercise discretion, with strong attention to detail and good judgment
02/01/2025
Full time
*This role is 3 days onsite each week in Chicago, hybrid remote* *We are unable to provide sponsorship for this permanent Full time role in Chicago* *Position is bonus eligible* Prestigious Firm is currently seeking an Lead Generative AI Engineer. Candidate will be responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data. Responsibilities: Prototype and test AI solutions using Python and Streamlit with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence) Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents Use Gen AI to distill metadata and insights from documents Fine tune LLMs to optimize for domain and cost Collaborate with stakeholders to implement and automate AI powered solutions for common business workflows Enhance documentation procedures, codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment Qualifications: Bachelor's Degree in Computer Science, Engineering, or related field 5+ years of experience in AI engineering or a related field Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, Langchain, LlamaIndex, and Semantic Kernel Experience with natural language processing, text extraction, and information retrieval techniques Strong understanding of machine learning and deep learning concepts including transformer based GPT models Experience with distributed computing and cloud environments (eg, Microsoft Azure) Strong organizational, communication, and problem-solving skills, with the ability to work harmoniously and effectively with others Ability to preserve confidentiality and exercise discretion, with strong attention to detail and good judgment
Okta Specialist Remote £450-£500 per day outside IR35 I am working with a forward-thinking organisation seeking an experienced Okta Platform Engineer or Okta Specialist to lead their Identity and Access Management (IAM) initiatives. This is an exciting opportunity to work on cutting-edge projects, improving the security and efficiency of their systems. Responsibilities Administer and optimise the Okta platform, including users, groups, and access policies. Develop integrations with internal and external applications using SSO, SAML, OIDC, and SCIM. Configure and manage multi-factor authentication (MFA) and adaptive access policies. Automate user life cycle processes using Okta Workflows or scripts. Troubleshoot and resolve Okta-related issues promptly. Collaborate with IT and security teams to align IAM strategies with business goals. Maintain clear documentation of configurations and processes. Key Skills and Experience Strong hands-on experience with Okta platform configuration and management. Deep understanding of IAM principles and protocols (eg, SAML, OIDC, SCIM). Scripting experience (eg, Python, PowerShell) and familiarity with automation tools. Knowledge of cybersecurity best practices and zero-trust security models. Strong problem-solving skills and the ability to communicate effectively with technical and non-technical stakeholders. Talent International UK Limited and it's subsidary Rethink Digital Gurus Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this opportunity, you accept the T&C's, Privacy Policy and Disclaimers which can be found on our website
23/12/2024
Project-based
Okta Specialist Remote £450-£500 per day outside IR35 I am working with a forward-thinking organisation seeking an experienced Okta Platform Engineer or Okta Specialist to lead their Identity and Access Management (IAM) initiatives. This is an exciting opportunity to work on cutting-edge projects, improving the security and efficiency of their systems. Responsibilities Administer and optimise the Okta platform, including users, groups, and access policies. Develop integrations with internal and external applications using SSO, SAML, OIDC, and SCIM. Configure and manage multi-factor authentication (MFA) and adaptive access policies. Automate user life cycle processes using Okta Workflows or scripts. Troubleshoot and resolve Okta-related issues promptly. Collaborate with IT and security teams to align IAM strategies with business goals. Maintain clear documentation of configurations and processes. Key Skills and Experience Strong hands-on experience with Okta platform configuration and management. Deep understanding of IAM principles and protocols (eg, SAML, OIDC, SCIM). Scripting experience (eg, Python, PowerShell) and familiarity with automation tools. Knowledge of cybersecurity best practices and zero-trust security models. Strong problem-solving skills and the ability to communicate effectively with technical and non-technical stakeholders. Talent International UK Limited and it's subsidary Rethink Digital Gurus Limited acts as an employment agency for permanent recruitment and employment business for the supply of temporary workers. By applying for this opportunity, you accept the T&C's, Privacy Policy and Disclaimers which can be found on our website
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, application and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, server-less, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture to improve performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high level Solution Designs Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs Advanced problem-solving skills and logical approach to solving problems [Required] Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, PERL, Ruby, etc [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. [Preferred] Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications
17/12/2024
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, application and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, server-less, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Find opportunities to improve existing infrastructure architecture to improve performance, support, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high level Solution Designs Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs Advanced problem-solving skills and logical approach to solving problems [Required] Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, PERL, Ruby, etc [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. [Preferred] Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Technical Skills: In depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes [Preferred] Familiarity with security standards such as the NIST CSF Education and/or Experience: [Preferred] Bachelor's or master's degree in computer science related degree or equivalent experience [Required] 7+ years of experience as a System or Cloud Engineer with hands on implementation, security, and standards experience within a hybrid technology environment [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or something similar [Desired] Technical Security Certifications such as AWS Certified Security, Microsoft Azure Security Engineer or something similar [Desired] CCNA, Network+ or other relevant Networking certifications