Senior Azure Data Engineer

The need for experienced data personnel is growing at a rapid rate, and companies across the industry are looking to invest heavily in bringing in world-class talent. We are one of those companies, with an already established data team working across multiple million-pound projects. The role involves designing data architectures and developing pipelines using Azure technologies across projects in renewable energy, construction, oil and gas, and more.

Responsibilities:
Team & Project Management: Lead data engineering teams to deliver client data solutions and ensure best practices in data engineering.
Data Architecture & Strategy: Design data landscapes aligned with data strategies and translate user requirements into effective solutions.
Cloud Data Solutions: Build and manage data pipelines using Azure cloud technologies (e.g., Databricks, Azure Data Factory, Data Lake, Synapse).
Technical Implementation & Quality Assurance: Oversee technical deployments, resolve data quality issues within SLAs, and develop DataOps best practices.
Mentorship & Leadership: Inspire and develop team members, fostering an inclusive, high-performance culture.
Process & Documentation: Maintain Turner & Townsend's documentation and processes for data engineering.

Required Skills & Experience:
Cloud & Data Engineering: Expertise in Azure data tools (Databricks, Data Factory, Synapse, etc.) and DataOps tools such as Great Expectations.
Programming & Development: Proficiency in SQL, Python, Spark, DAX, and C#.
DevOps & CI/CD: Experience with Azure DevOps, Git, Terraform/Bicep, and version control.
Communication: Strong ability to explain technical concepts to mixed audiences.

Desirable Skills:
Advanced Analytics & ML: Experience with MLOps, real-time data, and machine learning engineering.
Data Modelling & Infrastructure: Knowledge of Kimball modelling, Kubernetes, Docker, and Azure solution design.
Collaborative Work: Experience working with multidisciplinary teams, including software engineers, DevOps, and data scientists.

If you believe your expertise is a match for this role and you are interested in taking on the challenge, feel free to apply or contact me directly: (see below)
07/02/2025
Full time
Sanderson Recruitment Plc
Cardiff, South Glamorgan
Pricing Optimisation Lead

The Company
I am working in partnership with a leading financial services company who are looking for a Pricing Optimisation Lead to join them. Their offices are based in Cardiff, but the role offers remote working.

The Role
The main purpose of the role is to deliver advanced optimisation models and a framework of continuous improvement and innovation. They are looking for a technical leader who can understand customer behaviours in order to optimise pricing and commercial models. The role involves managing pricing processes and further developing their dynamic pricing model.

Key skills:
Proficient in SQL and Python
Experience as a Data Scientist
Experience in understanding customer behaviours and pricing models
Strong problem-solving skills with keen attention to detail

If you are interested in this vacancy, please apply to the role or email me directly at (see below)
07/02/2025
Full time
Job Title: Senior Data Scientist
Location: Leeds (Hybrid - 2 days in the office per week)
Salary: Up to £68,000

Our client, a highly reputable tech-first business, is looking to hire an experienced Senior Data Scientist as they continue to scale their data capabilities. You will work closely with a technical engineering team to deliver innovative, data-driven solutions that optimise operational performance, improve efficiency, and reduce costs. This role offers the opportunity to work on meaningful projects in a regulated industry, contributing to impactful analytics initiatives in collaboration with highly skilled professionals.

Senior Data Scientist Responsibilities:
Partner with engineering and technology teams to identify opportunities for applying data science techniques to deliver measurable business value.
Develop and deploy predictive models, time-series forecasting, and natural language processing solutions to address real-world operational challenges.
Design and implement scalable data pipelines and frameworks to ensure data quality, accuracy, and integrity.
Present findings and actionable recommendations through data storytelling to technical and non-technical stakeholders.
Keep up to date with advancements in data science and analytics technologies, incorporating best practices into project delivery.
Ensure compliance with regulatory and internal policy standards in all data-driven initiatives.

Senior Data Scientist Requirements:
Proven experience delivering end-to-end data science projects, from proof of concept through to production, with a focus on operational outcomes.
Advanced proficiency in Python and machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch.
Strong SQL skills and experience with data platforms (e.g., Snowflake, cloud-based systems) and visualisation tools.
Familiarity with data governance and best practices for assessing and improving data quality.
Highly numerate, with a solid statistical background and a commitment to delivering high-quality results.
Prior exposure to regulated industries or technical engineering environments is desirable but not essential.

What's in it for me?
Hybrid working - collaborate in the office 2 days per week, with flexibility to work remotely.
Competitive salary package with a discretionary bonus.
Professional development opportunities, including certifications, industry events, and training courses.
Access to additional employee benefits that promote well-being and work-life balance.

We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation, or age.
06/02/2025
Full time
Machine Learning Platform Engineer

Whitehall Resources are currently looking for a Machine Learning Platform Engineer, onsite in Sweden, for an initial 8-month contract.

Job Description:
Our client is seeking a Machine Learning Platform Engineer to join their platform development team. The ideal candidate has a strong understanding of designing and building ML platforms in a multi-application and multi-model setup, with a focus on automation, traceability, monitoring, scalability, and reusability at both the model and data level. Preferably, the candidate has an interest in understanding the concepts of energy systems and the potential business value provided by the platform.

Key Responsibilities:
Support the design and build of an ML platform that delivers both multiple applications and multiple models per application, with a focus on automation, traceability, monitoring, scalability, and reusability.
Support data scientists in reducing the time from idea and exploration to testing by building a collaborative platform.
Manage and serve data in a validated and usable format from IoT sources with irregularly sampled data.
Take initiative and work together with colleagues, Product Owners, and Architects to find the best solutions for the platform.

Qualifications:
Proven experience in designing and building ML platforms.
Fluency in Python and experience with PySpark.
Experience with Azure services such as Azure ML, Azure Blob Storage, Event Hubs, Azure Data Factory, Azure DevOps, Azure Data Explorer, Azure Databricks, and Kusto Query Language.
Understanding of applied MLOps and the complexity of managing data from IoT sources.
Excellent collaboration skills to support data scientists and reduce the time from idea and exploration to PoC/MVP.
Knowledge and experience of concepts such as lambda architecture, medallion architecture, feature stores, event-sampled data, MLOps, CRISP-DM, edge compute, deploy-code vs deploy-model, CI/CD, and common data platform patterns.

All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who value a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
05/02/2025
Project-based
Senior Platform Engineer - DV Cleared
Up to £100,000 + Stock Options
London, Hybrid (Must have active DV clearance)

Would you be interested in joining a new Platform Engineering team at a self-funded scale-up that builds and delivers data analytics products for government bodies? This data scale-up operates mainly within the public sector, helping government bodies that have vast swathes of data to make better use of it. They're looking to build a new team of Platform Engineers for a long-running secure government project, working alongside data engineers and data scientists to manage, support, and build the infrastructure.

Tech stack, and therefore skills needed:
Kubernetes or OpenShift
ELK Stack, Elasticsearch or OpenSearch
Linux
CI/CD

Nice to haves, but definitely not essential:
Cloudera
Hadoop
Spark
Kafka
HBase
Hue
Atlas

Logistics:
Up to £100,000 base salary
30 days holiday + bank holidays
Private health care
Annual trips to see the northern lights!
Location: Central London

Please note, candidates MUST already have DV clearance; SC clearance alone doesn't meet the requirement. Apply now, or contact (see below) for more information.

We are an equal opportunities employer and welcome applications from all suitably qualified persons regardless of their race, sex, disability, religion/belief, sexual orientation or age.
05/02/2025
Full time
Investment Banking - Senior C++ Developer - Glasgow - C++, Python, SDLC, Linux

The candidate will be working with a UK-based team of passionate programming-language subject matter experts, as well as developers, data scientists, and technical leads across the entire firm. They will be responsible for helping to maintain an extensive library of C++ plug-ins for kdb+ users, as well as engineering internal tools and libraries where necessary. We are looking for a candidate who is keen to work with new languages. This is primarily a C++ role, but openness to learn and work with kdb+/q and Python will be required. Prior knowledge of kdb+ is not necessary.

Skills required:
Core C++ development
Python development
Familiarity with the enterprise Software Development Lifecycle (SDLC)
Familiarity with Linux
Good communication/organisation skills

Skills desired:
Prior experience with OCI containerisation tools/platforms (such as Docker, Kubernetes)
Prior kdb+/q experience (or willingness to learn on the job)
High-level understanding of Windows development

By applying to this job you are sending us your CV, which may contain personal information. Please refer to our Privacy Notice to understand how we process this information. In short, in order to supply you with work-finding services, we will hold and process your personal data, and only with your express permission will we share this personal data with a client (or a third party working on behalf of the client) by email or by upload to the client's/third party's vendor management system. By giving us permission to send your CV to a client, you grant permission to share the personal data that would be necessary to consider your application, interview you (phone/video/face to face) and, if successful, hire you. Scope AT acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the Terms and Conditions, Data Protection Policy, Privacy Notice and Disclaimers, which can be found on our website.
05/02/2025
Project-based
Investment Banking Python/JavaScript AI Engineer - AI/ML Models/Risk/NLP - Glasgow (Contract)

Our IB client is looking for a skilled and experienced developer to join their architecture delivery team. This role focuses on building an AI Architect platform that empowers architects and developers to make informed, data-driven decisions, automates repetitive architecture tasks, and streamlines documentation workflows.

Key Responsibilities:
Design, develop, and implement a scalable, AI-driven architecture platform.
Work closely with architects and data scientists to embed AI/ML models into the system for enhanced decision-making, such as recommendation engines.
Drive the adoption of AI Architect and best practices across the development teams, ensuring consistency and alignment with enterprise standards.
Participate in and lead architecture communities of practice to foster knowledge-sharing and innovation within the organization.
Stay updated on the latest architecture and technology trends relevant to financial services, such as cloud computing, data security, AI, and distributed systems.

Skills/Qualifications:
5+ years of experience in at least one of the following: JavaScript, Java, TypeScript, or Python.
End-to-end systems development: proven ability to architect and build complex systems with a long-term vision.
Expertise in financial services applications, including knowledge of transaction processing, risk management, and data security.
Excellent communication skills, with the ability to present complex architectural ideas to diverse stakeholders.
Strong problem-solving and critical thinking skills, with a track record of innovative solution design in complex environments.
Understanding of experimental design, statistical analysis, and data-driven decision making.
Proficiency in collaborating with data scientists to translate advanced models into scalable production code.
Familiarity with AI-driven frameworks such as knowledge graphs, natural language processing (NLP), or recommendation systems is a big plus.

Inside IR35 - Hybrid - Glasgow based - 12-month initial contract

By applying to this job you are sending us your CV, which may contain personal information. Please refer to our Privacy Notice to understand how we process this information. In short, in order to supply you with work-finding services, we will hold and process your personal data, and only with your express permission will we share this personal data with a client (or a third party working on behalf of the client) by email or by upload to the client's/third party's vendor management system. By giving us permission to send your CV to a client, you grant permission to share the personal data that would be necessary to consider your application, interview you (phone/video/face to face) and, if successful, hire you. Scope AT acts as an employment agency for permanent recruitment and an employment business for the supply of temporary workers. By applying for this job you accept the Terms and Conditions, Data Protection Policy, Privacy Notice and Disclaimers, which can be found on our website.
05/02/2025
Project-based
Are you an experienced Senior Data Engineer looking for an interesting new job in Zurich? Do you have strong experience of distributed computing, particularly Spark? Are you keen to be a leading force in helping this company achieve their goals through the utilisation of data insights?

You will be joining a data team of very smart Data Scientists and Data Engineers who form a "start-up" within a larger organisation and are responsible for designing, building, and maintaining robust, scalable, and cost-effective data infrastructure that supports the delivery of real-time data to an underlying algorithm. Technically, the team develop their data pipelines using Python and Spark. If your experience is in Scala or even Java, that is fine, but PySpark is naturally preferred. From a cloud perspective they use Azure; however, experience with AWS or GCP is fine as long as you have good general CI/CD knowledge.

Although the organisation has a number of data scientists, they are lacking strong Senior Data Engineers. As such, you will play an important role in developing the data infrastructure, work with some very bright minds, and see your work contribute to solving a real-world challenge.

For more information on this Senior Data Engineer position, or any other Data Engineer positions I have available, please send your CV, or alternatively you can call me.
05/02/2025
Full time
We are currently looking, on behalf of one of our important clients, for a Data Architect. The role is a permanent position based in Zurich Canton & comes with a good home-office allowance.

Your Role:
Drive the architecture design, implementation & improvement of data platforms & guide & support organization-wide data-driven solutions.
Design, implement & re-architect complex data systems & pipelines in close collaboration with key business divisions & data users.
Design & evolve a standard data architecture & infrastructure.
Work across multiple teams to align objectives.
Assist in the implementation of an organization-wide Data Strategy.
Further enhance the advanced data analytics capabilities required to enable successful business outcomes.
Solve technical problems of different levels of complexity.
Work with solution engineers, data scientists & product owners to help deliver their products.
Lead activities, projects & advances across multiple user groups & partners.

Your Skills & Experience:
At least 7 years of relevant professional experience in data engineering & at least 3 years in data architecture.
Experienced in defining & conceptualizing multi-purpose data systems.
Very proficient in data modelling & ELT principles.
Hands-on experience in delivering big data projects.
Up-to-date knowledge of data platforms such as Kafka & Snowflake.
Skilled in producing designs of complex IT systems, including requirements discovery & analysis.
Accustomed to working with real-time/streaming technologies.
Experienced in dashboard presentations & patterns.
Skilled in building API layers & integration systems.

Your Profile:
Completed university degree in Computer Science or a similar area.
Dedicated, ambitious, innovative, analytical, organized & solution-, customer- & results-oriented.
A team player with strong communication & conflict-resolution skills.
Available for possible on-call duties when required.
Fluent in English (spoken & written); any additional language skills in German, French or Spanish are considered advantageous.
04/02/2025
Full time
Data Specialist/Curator for Large Language Model-derived Toxicological (Meta)data (m/f/d) - Data Validation / Quality Assurance / Toxicology / Documentation / English

Project: For our customer, a big pharmaceutical company in Basel, we are looking for a highly qualified Data Specialist/Curator for Large Language Model-derived toxicological (meta)data (m/f/d).

Background:
We believe it's urgent to deliver medical solutions right now - even as we develop innovations for the future. We are passionate about transforming patients' lives, and we are fearless in both decision and action. And we believe that good business means a better world. We commit ourselves to scientific rigor, unassailable ethics, and access to medical innovations for all. We do this today to build a better tomorrow.

Pharmaceutical Sciences (PS) is a global function within Roche Pharma Research and Early Development (pRED). As a team member in the Prediction Modelling (PM) Chapter of PS, you will work in close collaboration with toxicologists as well as other scientists in pRED, with access to state-of-the-art bioinformatics and biostatistics tools and methods, gaining toxicological insights from experts in the field.

Large Language Models (LLMs) have evolved beyond simple text-completion tools into sophisticated systems capable of data extraction, summarization, enrichment, and knowledge capture. These advancements enable the retrieval of information previously locked within documents, reports, presentations, and meeting records, making it possible to repurpose this data and reverse-translate knowledge from the past to inform future insights. The position focuses on transforming historical toxicology documents into structured, high-quality datasets through AI-powered extraction, while supporting the application of LLMs for toxicological data understanding and enrichment from historical documents. The extracted data will be used to enhance existing data repositories and serve as a foundation for creating new ones tailored to the specific needs of toxicologists. While LLMs hold immense potential, they are also prone to generating inaccurate or fabricated information, commonly referred to as "hallucinations." To address this, rigorous data curation is essential. Curating and validating subsets of the extracted data will not only improve the quality and reliability of the outputs but also contribute to refining and enhancing the performance of the models themselves.

The perfect candidate:
We are looking for a highly skilled and detail-oriented Toxicological Data Curator with a background in biology, toxicology, drug development, veterinary medicine, or a related field. In this role, you will ensure the accuracy, consistency, and scientific validity of toxicological data captured by large language models (LLMs). You will cross-check data outputs against original sources, evaluate the reliability of AI-derived information, and contribute to the development of high-quality datasets to support drug safety and development efforts.

Tasks & Responsibilities:
Data Validation: Review toxicological data output generated by LLMs and validate it against original resources (e.g., research articles, regulatory documents, toxicology databases). Identify and document discrepancies, errors, or ambiguities in AI-derived data. Build up a pipeline/dataset for toxicological evaluation tasks.
Quality Assurance: Ensure consistency, completeness, and scientific accuracy of curated toxicological datasets. Adhere to established quality-control protocols and contribute to improving workflows as needed.
Toxicology Expertise: Apply toxicological knowledge to evaluate data related to preclinical and clinical toxicities, mechanisms of toxicity, safety biomarkers, and risk assessments. Assess the relevance and applicability of curated data to specific drug development scenarios.
Collaboration: Work closely with computational toxicologists, data scientists, and cross-functional teams to align on curation standards and requirements. Provide feedback to enhance LLM performance based on identified gaps or inaccuracies in the extracted data.
Documentation and Reporting: Maintain detailed records of validation processes and outcomes. Prepare periodic reports summarizing curation progress, data quality metrics, and key findings.

Must Haves:
Master's degree (preferred) in Biology, Pharmacology, Toxicology, Drug Development, Biotechnology, or a related field.
Experience with data curation, annotation, or systematic review methodologies is a plus.
Basic understanding of machine learning, LLMs, or natural language processing (NLP) tools is desirable but not required.
Fluent English (minimum C1 level).
Strong attention to detail and commitment to data accuracy.
Excellent critical thinking and problem-solving skills, particularly in evaluating scientific information.
Ability to synthesize complex toxicological data and present clear, actionable conclusions.
Effective written and verbal communication skills for reporting and collaboration.

Reference No.: 923799TP
Role: Data Specialist/Curator for Large Language Model-derived toxicological (meta)data (m/f/d)
Industry: Pharma
Workplace: Basel
Workload: 80-100%
Start: 01.03.2025
Duration: 6
Deadline: 10.02.2025

If you are interested in this position, please send us your complete dossier.

About us: ITech Consult is an ISO 9001:2015-certified Swiss company with offices in Germany and Ireland. ITech Consult specialises in the placement of highly qualified candidates for recruitment in the fields of IT, Life Science & Engineering. We offer staff leasing & payroll services. For our candidates this is free of charge; for payroll we likewise do not charge you any additional fees.
04/02/2025
Project-based
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role*
*Position is bonus eligible*

Prestigious Financial Company is currently seeking a Principal Data Analytics ETL Analyst. The candidate will be responsible for expanding analytics capabilities by making data accessible and usable to analysts throughout the organization. In this role you will lead the design and build of the internal analytics data warehouse and the maintenance of the supporting extract, load, and transform processes. You will demonstrate and disseminate expertise in data sets and support teams across the organization in successfully harnessing this data. You will collaborate with business users and technical teams across the organization to facilitate data-driven decision making by enabling exploration and analysis of historical and near-real-time data using cloud-based tools and technologies. Lastly, you will be responsible for gathering requirements and designing solutions that address the problem at hand while also anticipating yet-to-be-asked analytical questions, and for developing and maintaining the analytics platform to meet the company's security and IT standards.

Responsibilities:
Partner with Data Architecture and other relevant teams to design and implement new cloud data warehouse infrastructure for internal-facing analytics.
Work with various business and functional teams to understand their data and technical requirements and ensure delivered solutions address needs.
Manage the validation of data models to ensure information is available in the analytics warehouse for downstream uses, such as ad hoc analysis and dashboard development.
Maintain performance requirements of the analytics warehouse by tuning warehouse optimizations and storage processes.
Direct and enable the team to collaborate with the Data Governance team and DBAs to design access controls around the analytics warehouse to meet business and Data Governance needs.
Approve documentation and testing to ensure data is accurate and easily understandable.
Promote self-service capabilities and data literacy for business users leveraging the platform through development of training presentations and resources.
Discover and share best practices for data and analytics engineering with members of the team.
Invest in continued learning on data and analytics engineering best practices and evaluate them for fit in improving the maintainability and reliability of analytics infrastructure.

Qualifications:
[Required] Ability to collaborate with multiple partners (e.g., business functional areas, Data Platform, Platform Engineering, Security Services, Data Governance, Information Governance) to craft solutions that align business goals with internal security and development standards
[Required] Ability to communicate technical concepts to audiences with varying levels of technical background and synthesize non-technical requests into technical output
[Required] Comfortable supporting business analysts on high-priority projects
[Required] High attention to detail and ability to think structurally about a solution
[Required] Knowledge of and experience working with analytics/reporting technology and underlying databases
[Required] Strong presentation and communication skills, including the ability to clearly explain deliverables/results to non-technical audiences
[Required] Experience working within an agile environment

Technical Skills:
[Required] Experience implementing and maintaining cloud-based data warehouses and curating a semantic layer that meets the needs of business stakeholders
[Required] Knowledge of and experience working with various analytics/reporting technologies
[Required] Ability to complete work iteratively in an agile environment
[Required] Proficiency in SQL
[Preferred] Experience with Python and/or R
[Preferred] Experience with visualization/reporting tools, such as Tableau
[Preferred] Experience with ETL tools, such as Alteryx

Education and/or Experience:
[Required] Bachelor's degree in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
[Preferred] Master's degree
[Required] 10+ years of experience as a data engineer, analytics engineer, Business Intelligence analyst, or data scientist

Certificates or Licenses:
[Preferred] Cloud platform certification
[Preferred] BI tool certification
03/02/2025
Full time
NO SPONSORSHIP Associate Principal, Data Analytics Engineering SALARY: $110k flex plus 10% bonus LOCATION: Chicago, IL Hybrid 3 days in office and 2 days remote
You will be expanding analytics capabilities to design and build internal analytics within a data warehouse using on-premises and cloud-based tools. You will create dashboards or visualizations using Tableau, Power BI, SQL queries, Alteryx, Jira, and ServiceNow. Git is a big plus, as is experience with AWS or a cloud data warehouse and Airflow. A Bachelor's degree is required and a Master's is preferred. This role supports Operational Risk; 5 years of experience building dashboards is expected, and any audit or risk knowledge is a plus. This role will drive a team responsible for expanding analytics capabilities by making internal corporate data accessible and usable to analysts throughout the organization.
Primary Duties and Responsibilities:
Work closely with data analysts and business stakeholders to understand their data needs and provide support in data access, data preparation, and ad hoc queries
Automate data processes to reduce manual interventions, improve data processing efficiency, and optimize data workflows for performance and scalability
Integrate data from multiple sources and ensure data consistency and quality
Build data models to ensure information is available in our analytics warehouse for downstream uses, such as analysis, and create dashboards or visualizations using Tableau and Power BI to present insights
Maintain performance requirements of our analytics warehouse by tuning optimizations and processes
Create documentation and testing to ensure data is accurate and easily understandable
Promote self-service capabilities and data literacy for business users leveraging the platform through development of training presentations and resources
Discover and share best practices for data and analytics engineering with members of the team
Invest in your continued learning on data and analytics engineering best practices and evaluate them for fit in improving maintainability and reliability of analytics infrastructure
Qualifications:
Ability to collaborate with multiple partners (eg, Corporate Risk, Compliance, Audit, Production Operations, DBAs, Data Architecture, Security) to craft solutions that align business goals with internal security and development standards
Ability to communicate technical concepts to audiences with varying levels of technical background and synthesize non-technical requests into technical output
Comfortable supporting business analysts on high-priority projects
High attention to detail and ability to think structurally about a solution
Experience working within an agile environment
Technical Skills & Background:
Ability to write and optimize complex analytical (SELECT) SQL queries
Experience with data visualization/prep tools such as Tableau and Alteryx
[Preferred] Experience with SaaS tools and their backends, such as Jira and ServiceNow
[Preferred] Applied knowledge of Python for writing custom pipeline code (virtual environments, functional programming, and unit testing)
[Preferred] Experience with a source code repository system (preferably Git)
[Preferred] Familiarity with at least one cloud data platform, such as AWS or GCP
[Preferred] Experience creating and/or maintaining a cloud data warehouse or database
[Preferred] Exposure to data orchestration tools, such as Airflow
[Preferred] Understanding of applied statistics and hands-on experience applying these concepts
Bachelor's degree in a quantitative discipline (eg, Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
5+ years of experience as a business analyst, data analyst, data engineer, research analyst, analytics engineer, Business Intelligence analyst, or data scientist
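To give a concrete flavor of the analytical SQL skills listed above, here is a small, hypothetical Python sketch that runs a window-function query against an in-memory SQLite database (window functions require SQLite 3.25 or newer); the trades table and its columns are invented for illustration:

```python
import sqlite3

# Hypothetical example of an analytical SELECT: rank each account's
# activity by size within its region using a window function. The
# "trades" table and all column names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (region TEXT, account TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("US", "A1", 500.0), ("US", "A2", 1200.0), ("EU", "B1", 900.0), ("EU", "B2", 300.0)],
)

query = """
    SELECT region,
           account,
           notional,
           RANK() OVER (PARTITION BY region ORDER BY notional DESC) AS size_rank
    FROM trades
"""
for row in conn.execute(query):
    print(row)  # eg, ('EU', 'B1', 900.0, 1)
conn.close()
```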
03/02/2025
Full time
NO SPONSORSHIP AI Engineer/Developer On site 3 days a week downtown Chicago Salary: $180K-200K + $1,200-10K bonus
We are looking for someone with a software development background who happens to have specialized in AI workflow automation. This makes their skills far more portable, which matters because the field is changing rapidly and existing automation tech will become obsolete very soon. We need people who are focused on the programming side of the house rather than general office productivity.
Key skills: AI engineering, natural language processing, machine learning design, Python, LangChain, LlamaIndex, Semantic Kernel, large language models, PyPDF, Azure Document Intelligence, prompt engineering.
In total, there are 20 people associated with this AI team. The Artificial Intelligence team is brand new at this organization. This is a new wave in the Legal space, and we are responsible for creating an AI road map for the Legal industry. The role applies AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data.
Ideal Candidate:
-Bachelor's Degree in Computer Science, Engineering, or related field
-A minimum of 5 years of experience in AI engineering or a related field
-MUST HAVE: Proven experience with AI engineering tools and technologies, including Python, LangChain, LlamaIndex, and Semantic Kernel
-Understanding of large language models
-You will likely come across many general software developers who want to transition into AI engineering. This is an acceptable candidate, as long as they have experience with the AI tools listed above.
-You will also likely come across data scientists trying to reinvent themselves as Data/AI Engineers. This candidate is acceptable as well, as long as they have experience with the tools listed above and can make a case for their work with AI agents.
The AI Engineer, a member of the AI Engineering team, is responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data.
Duties and Responsibilities:
Prototype and test AI solutions using Python and Streamlit, with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence)
Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design
Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases
Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents
Use Gen AI to distill metadata and insights from documents
Fine-tune LLMs to optimize for domain and cost
Collaborate with stakeholders to implement and automate AI-powered solutions for common business workflows
Enhance documentation procedures, the codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment
Required:
Bachelor's Degree in Computer Science, Engineering, or related field
A minimum of 5 years of experience in AI engineering or a related field
Preferred:
Master's Degree in Computer Science, Engineering, or related field
Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, LangChain, LlamaIndex, and Semantic Kernel
Experience with natural language processing, text extraction, and information retrieval techniques
Strong understanding of machine learning and deep learning concepts, including transformer-based GPT models
Experience with distributed computing and cloud environments (eg, Microsoft Azure)
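As a framework-agnostic sketch of the Retrieval Augmented Generation pattern the duties above describe: the embed() function below is a toy stand-in for a real embedding model, and an actual implementation on this posting's stack would use LangChain, LlamaIndex, or Semantic Kernel with a proper vector database rather than a Python list:

```python
import math

# Minimal sketch of Retrieval Augmented Generation over an in-memory
# vector store. embed() is a hypothetical, toy stand-in for a real
# embedding model; the document chunks below are invented examples.

def embed(text: str) -> list[float]:
    # Toy "embedding": a character-frequency vector, illustration only.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# Index: store (chunk, embedding) pairs.
chunks = ["Clause 4.2 limits liability.", "Term is 24 months.", "Governing law is Illinois."]
store = [(c, embed(c)) for c in chunks]

# Retrieve: rank stored chunks by similarity to the question.
question = "What is the liability limit?"
q_vec = embed(question)
top = max(store, key=lambda pair: cosine(q_vec, pair[1]))

# Generate: the retrieved chunk is inserted into the LLM prompt.
prompt = f"Answer using this context:\n{top[0]}\n\nQuestion: {question}"
print(prompt)
```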
03/02/2025
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role*
A prestigious company is looking for a Principal, Data Analytics Engineering. This principal will lead the design and build of internal data analytics and the data warehouse, and will maintain the ETL processes. They will also manage the validation of data models to make sure information is available in the analytics warehouse. Required: financial industry experience, SQL, Python, Alteryx, Tableau.
Responsibilities:
Partner with Data Architecture and other relevant teams to design and implement new cloud data warehouse infrastructure for internal-facing analytics
Manage the validation of data models to ensure information is available in our analytics warehouse for downstream uses, such as ad hoc analysis and dashboard development
Maintain performance requirements of our analytics warehouse by tuning warehouse optimizations and storage processes
Direct and enable the team to collaborate with the Data Governance team and DBAs to design access controls around our analytics warehouse that meet business and Data Governance needs
Approve documentation and testing to ensure data is accurate and easily understandable
Promote self-service capabilities and data literacy for business users leveraging the platform through development of training presentations and resources
Discover and share best practices for data and analytics engineering with members of the team
Invest in continued learning on data and analytics engineering best practices and evaluate them for fit in improving maintainability and reliability of analytics infrastructure
Qualifications:
Bachelor's degree in a quantitative discipline (eg, Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
10+ years of experience as a data engineer, analytics engineer, Business Intelligence analyst, or data scientist
Experience implementing and maintaining cloud-based data warehouses and curating a semantic layer that meets the needs of business stakeholders
Proficiency in SQL
Experience with Python and/or R
Experience with visualization/reporting tools, such as Tableau
Experience with ETL tools, such as Alteryx
Ability to collaborate with multiple partners (eg, Business Functional areas, Data Platform, Platform Engineering, Security Services, Data Governance, Information Governance, etc.) to craft solutions that align business goals with internal security and development standards
Knowledge of and experience working with analytics/reporting technology and underlying databases
Experience working within an agile environment
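To make the data-model validation duty above concrete, here is a minimal sketch assuming a pandas DataFrame loaded from a warehouse extract; the table, column names, and rules are invented for illustration:

```python
import pandas as pd

# Sketch of simple data-model validation checks run before a table is
# promoted to the analytics warehouse. The dataframe, column names, and
# rules below are hypothetical.
df = pd.DataFrame({
    "trade_id": [1, 2, 3],
    "trade_date": ["2025-01-02", "2025-01-03", "2025-01-03"],
    "notional": [1000.0, 250.5, 980.0],
})

errors = []

# Primary-key check: trade_id must be unique and non-null.
if df["trade_id"].isna().any() or df["trade_id"].duplicated().any():
    errors.append("trade_id must be unique and non-null")

# Range check: notional amounts must be positive.
if (df["notional"] <= 0).any():
    errors.append("notional must be positive")

# Format check: trade_date must parse as a date.
if pd.to_datetime(df["trade_date"], errors="coerce").isna().any():
    errors.append("trade_date must be a valid date")

if errors:
    raise ValueError("validation failed: " + "; ".join(errors))
print("all checks passed")
```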
03/02/2025
Full time
We are currently looking, on behalf of one of our important clients, for a Senior Data Scientist. The role is a permanent position based in Zurich Canton and comes with a good home office allowance.
Your role:
Hold responsibility for shaping the intelligence behind an associated product
Develop product solutions in the areas of detection, localization & quantification
Write algorithms & work with data engineers to run them in a production environment
Work with noisy & sparse data & write production-ready code
Your Skills & Experience:
At least 3 years of relevant professional experience, including strong experience in Data Science, Algorithm Development & Physical Modeling
In-depth knowledge of Statistics, Physics & Data Analysis
Skills & expertise in Python, SQL & Apache Spark
Ideally experienced in MLOps
Any experience in developing IoT solutions is considered advantageous
Your Profile:
Completed University Degree in Computer Science, Engineering, Physics or similar
Motivated to take responsibility & drive innovative ideas
Dynamic & adaptable
Fluent in English (spoken & written); any German language skills are considered very advantageous
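As a small, hypothetical illustration of working with the noisy data this role mentions, the sketch below smooths a synthetic sensor signal with a moving average and flags where it crosses a detection threshold; the signal, window size, and threshold are all invented:

```python
import numpy as np

# Sketch: detect an event in a noisy sensor stream by smoothing with a
# moving average and thresholding. The signal, window, and threshold are
# hypothetical; a real detector would be tuned to the physics at hand.
rng = np.random.default_rng(seed=0)

# Synthetic signal: noisy flat baseline, plus a step event at sample 200.
signal = rng.normal(0.0, 1.0, size=400)
signal[200:] += 3.0

window = 25
smoothed = np.convolve(signal, np.ones(window) / window, mode="same")

threshold = 1.5
detections = np.flatnonzero(smoothed > threshold)

if detections.size:
    print(f"event detected around sample {detections[0]}")  # roughly sample 200
else:
    print("no event detected")
```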
03/02/2025
Full time