Are you an experienced Senior Data Engineer looking for an interesting new job in Zurich? Do you have strong experience in distributed computing, particularly Spark? Are you keen to be a leading force in helping this company achieve its goals through the utilisation of data insights?

You will be joining a data team of very smart Data Scientists and Data Engineers who form a "start-up" within a larger organisation and are responsible for designing, building and maintaining robust, scalable and cost-effective data infrastructure that supports the delivery of real-time data to an underlying algorithm.

Technically, the team develops its data pipelines using Python and Spark. Experience in Scala or even Java is fine, but PySpark is naturally preferred. From a cloud perspective, they use Azure; however, experience with AWS or GCP is acceptable as long as you have good general CI/CD knowledge.

Although the organisation has a number of data scientists, it lacks strong Senior Data Engineers. As such, you will play an important role in developing the data infrastructure, work with some very bright minds and see your work contribute to solving a real-world challenge.

For more information on this Senior Data Engineer position, or any other Data Engineer positions I have available, please send your CV, or alternatively you can call me.
05/02/2025
Full time
Data Science Lead - Data Strategy - Data Literacy - Machine Learning - Artificial Intelligence

My high-end client is looking for a Data Science Lead to manage a team of high-functioning Data Scientists in support of their growth strategy. You will ensure that your team contributes to both the short- and long-term decision making of the business as you help embed the utilisation of data as a product. This could range from propensity modelling, LTV predictions, incremental testing, segmentation modelling and econometrics to guiding each team member to set ambitious strategies and work towards them in a data-driven way.

Duties/Responsibilities:
- Data Science strategy, vision and roadmap - Support the Head of Data & Insight by defining and communicating the data science strategy, vision and roadmap.
- Business requirements and data development through Data Science - Collaborate with business stakeholders and the Data Council to gather and document detailed requirements for data science use cases, including defining use cases, acceptance criteria and functional specifications.
- Managing a team of Data Scientists - Day-to-day management of a team of data science specialists, helping the team determine the best route for tackling use cases.
- Improving data literacy across the business - Engage with both technical and non-technical audiences to explain the business value of data science projects.
- Project delivery - Build innovative and effective approaches to solving analytics problems and communicate clear, relevant results and methodologies to internal stakeholders.

Knowledge, Skills and Experience:
- Extensive experience of managing multidisciplinary data teams (including data science, software development and research)
- In-the-field practitioner experience of various analytical methods, including Machine Learning & Artificial Intelligence
- Experience of Business Analytics
- Experience of stakeholder management, including senior leaders, Executives and Board members
- Marketing experience, with a track record of achieving growth using data-driven reasoning
04/02/2025
Full time
Unlock the Future of AI! Are you a trailblazer in Generative AI and Machine Learning? Do you have a solid software engineering background and the ability to collaborate with and technically guide engineers and data scientists? If yes, this is your opportunity to revolutionize AI-driven solutions and deliver cutting-edge innovation.

About the Role
As a Solution Architect - Generative AI, you'll transform prototypes into enterprise-grade systems. You will collaborate with Data Scientists, Machine Learning Engineers and Product Managers to design, build and scale AI-powered solutions. Your role will bridge technology and business, ensuring the seamless integration of Generative AI models into production while promoting best practices and fostering innovation.

What You'll Do
- Design scalable, robust and high-performance architectures for Generative AI solutions.
- Work closely with engineers, offering technical expertise and hands-on support in areas such as model deployment, CI/CD pipelines and performance optimization.
- Integrate Generative AI models into enterprise ecosystems, ensuring data alignment and operational excellence.
- Stay ahead of emerging trends such as synthetic data and transfer learning, and advocate their application across the organization.
- Develop reusable standards and frameworks to streamline AI innovation and deployment.
- Collaborate with stakeholders to define strategic roadmaps aligned with business objectives.
- Foster effective communication across teams to align goals and execute AI projects seamlessly.

What You Bring
- Expertise in Generative AI: proficiency with LLMs (eg, GPT, DALL-E), including their strengths, limitations and productionization.
- Strong software background: hands-on experience with Python development, cloud platforms (eg, Azure AI services) and modern software practices (CI/CD, unit testing).
- Data expertise: familiarity with diverse databases (eg, PostgreSQL, MongoDB, Neo4j), data pipelines (eg, Apache Spark, Airflow) and advanced architectures such as data lakes.
- Exceptional communication and leadership skills to guide and inspire technical and non-technical teams alike.
- Proven track record of deploying AI/ML solutions at scale within large organizations.
- Experience with architectural principles (microservices, event-driven design) and Agile/Scrum methodologies.

Why Join Us?
This is your chance to lead the way in AI innovation, collaborate with top talent and create impactful solutions that shape the future. Work in a collaborative and innovative environment where your expertise and creativity are highly valued.

Application Process
Apply today to be a key player in driving AI innovation and technical excellence!
04/02/2025
Project-based
We are currently looking, on behalf of one of our important clients, for a Data Architect. The role is a permanent position based in Zurich Canton & comes with a good home office allowance.

Your role:
- Drive the architecture design, implementation & improvement of data platforms & guide & support organization-wide data-driven solutions.
- Design, implement & re-architect complex data systems & pipelines in close collaboration with key business divisions & data users.
- Design & evolve a standard data architecture & infrastructure.
- Work across multiple teams to align objectives.
- Assist in the implementation of an organization-wide Data Strategy.
- Further enhance the advanced data analytics capabilities required to enable successful business outcomes.
- Solve technical problems of different levels of complexity.
- Work with solution engineers, data scientists & product owners to help deliver their products.
- Lead activities, projects & advances across multiple user groups & partners.

Your Skills & Experience:
- At least 7 years of relevant professional experience in Data Engineering & at least 3 years in Data Architecture.
- Experienced in defining & conceptualizing multi-purpose data systems.
- Very proficient in data modelling & ELT principles.
- Hands-on experience in delivering big data projects.
- Up-to-date knowledge of data platforms such as Kafka & Snowflake.
- Skilled in producing designs of complex IT systems, including requirements discovery & analysis.
- Accustomed to working with real-time/streaming technologies.
- Experienced in dashboard presentations & patterns.
- Skilled in building API layers & integration systems.

Your Profile:
- Completed university degree in Computer Science or similar.
- Dedicated, ambitious, innovative, analytical, organized & solution-, customer- & results-oriented.
- A team player with strong communication & conflict-resolution skills.
- Available for possible on-call duties when required.
- Fluent in English (spoken & written). Any additional language skills in German, French or Spanish are considered advantageous.
04/02/2025
Full time
Data specialist/curator for Large Language Model-derived toxicological (meta)data (m/f/d) - Data Validation / Quality Assurance / Toxicology / Documentation / English

Project: For our customer, a big pharmaceutical company in Basel, we are looking for a highly qualified Data specialist/curator for Large Language Model-derived toxicological (meta)data (m/f/d).

Background: We believe it's urgent to deliver medical solutions right now - even as we develop innovations for the future. We are passionate about transforming patients' lives and we are fearless in both decision and action. And we believe that good business means a better world. We commit ourselves to scientific rigor, unassailable ethics, and access to medical innovations for all. We do this today to build a better tomorrow.

Pharmaceutical Sciences (PS) is a global function within Roche Pharma Research and Early Development (pRED). As a team member in the Prediction Modelling (PM) Chapter of PS, you will work in close collaboration with toxicologists as well as other scientists in pRED, having access to state-of-the-art bioinformatics and biostatistics tools and methods and gaining toxicological insights from experts in the field.

Large Language Models (LLMs) have evolved beyond simple text completion tools into sophisticated systems capable of data extraction, summarization, enrichment and knowledge capture. These advancements enable the retrieval of information previously locked within documents, reports, presentations and meeting records, making it possible to repurpose this data and reverse-translate knowledge from the past to inform future insights. The position focuses on transforming historical toxicology documents into structured, high-quality datasets through AI-powered extraction, while supporting the application of LLMs for toxicological data understanding and enrichment from historical documents.

The extracted data will be used to enhance existing data repositories and serve as a foundation for creating new ones tailored to the specific needs of toxicologists. While LLMs hold immense potential, they are also prone to generating inaccurate or fabricated information, commonly referred to as "hallucinations". To address this, rigorous data curation is essential. Curating and validating subsets of the extracted data will not only improve the quality and reliability of the outputs but also contribute to refining and enhancing the performance of the models themselves.

The perfect candidate: We are looking for a highly skilled and detail-oriented Toxicological Data Curator with a background in biology, toxicology, drug development, veterinary medicine or a related field. In this role, you will ensure the accuracy, consistency and scientific validity of toxicological data captured by large language models (LLMs). You will cross-check data outputs against original sources, evaluate the reliability of AI-derived information and contribute to the development of high-quality datasets to support drug safety and development efforts.

Tasks & Responsibilities:
- Data Validation: Review toxicological data output generated by LLMs and validate it against original sources (eg, research articles, regulatory documents, toxicology databases). Identify and document discrepancies, errors or ambiguities in AI-derived data. Build up a pipeline/dataset for toxicological evaluation tasks.
- Quality Assurance: Ensure consistency, completeness and scientific accuracy of curated toxicological datasets. Adhere to established quality control protocols and contribute to improving workflows as needed.
- Toxicology Expertise: Apply toxicological knowledge to evaluate data related to preclinical and clinical toxicities, mechanisms of toxicity, safety biomarkers and risk assessments. Assess the relevance and applicability of curated data to specific drug development scenarios.
- Collaboration: Work closely with computational toxicologists, data scientists and cross-functional teams to align on curation standards and requirements. Provide feedback to enhance LLM performance based on identified gaps or inaccuracies in the extracted data.
- Documentation and Reporting: Maintain detailed records of validation processes and outcomes. Prepare periodic reports summarizing curation progress, data quality metrics and key findings.

Must Haves:
- Preferably a Master's in Biology, Pharmacology, Toxicology, Drug Development, Biotechnology or a related field.
- Experience with data curation, annotation or systematic review methodologies is a plus.
- Basic understanding of machine learning, LLMs or natural language processing (NLP) tools is desirable but not required.
- Fluent English (at least C1 level).
- Strong attention to detail and commitment to data accuracy.
- Excellent critical thinking and problem-solving skills, particularly in evaluating scientific information.
- Ability to synthesize complex toxicological data and present clear, actionable conclusions.
- Effective written and verbal communication skills for reporting and collaboration.

Reference Nr.: 923799TP
Role: Data specialist/curator for Large Language Model-derived toxicological (meta)data (m/f/d)
Industry: Pharma
Workplace: Basel
Pensum: 80-100%
Start: 01.03.2025
Duration: 6
Deadline: 10.02.2025

If you are interested in this position, please send us your complete dossier.

About us: ITech Consult is an ISO 9001:2015 certified Swiss company with offices in Germany and Ireland. ITech Consult specialises in the placement of highly qualified candidates for recruitment in the fields of IT, Life Science & Engineering. We offer staff leasing & payroll services. For our candidates this is free of charge; for payroll we also do not charge any additional fees.
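The core validation task the listing describes — cross-checking LLM-extracted values against the original source and flagging discrepancies or possible hallucinations — can be sketched as a simple comparison against a trusted reference. Everything below (field names, compound identifiers, values) is hypothetical and purely illustrative:

```python
# Hypothetical LLM-extracted records vs. curated source-of-truth values.
extracted = [
    {"compound": "cmpd-A", "noael_mg_kg": 50},
    {"compound": "cmpd-B", "noael_mg_kg": 10},
    {"compound": "cmpd-C", "noael_mg_kg": 75},  # absent from the source
]
source = {"cmpd-A": 50, "cmpd-B": 25}

def validate(extracted, source):
    """Flag each record as verified, a mismatch, or unsupported
    (a possible hallucination) relative to the original source."""
    report = []
    for rec in extracted:
        name = rec["compound"]
        if name not in source:
            status = "unsupported"   # nothing in the source to check against
        elif source[name] == rec["noael_mg_kg"]:
            status = "verified"
        else:
            status = "mismatch"      # discrepancy to document for follow-up
        report.append((name, status))
    return report

print(validate(extracted, source))
# → [('cmpd-A', 'verified'), ('cmpd-B', 'mismatch'), ('cmpd-C', 'unsupported')]
```

In practice each "mismatch" or "unsupported" flag would be documented against the original article or regulatory document, as the Data Validation bullet requires.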
04/02/2025
Project-based
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role*
*Position is bonus eligible*

Prestigious Financial Company is currently seeking a Principal Data Analytics ETL Analyst. The candidate will be responsible for expanding analytics capabilities by making data accessible and usable to analysts throughout the organization. In this role you will lead the design and build of our internal analytics data warehouse and the maintenance of the supporting extract, load, and transform processes. You will demonstrate and disseminate expertise in data sets and support teams across the organization in successfully harnessing this data. You will collaborate with business users and technical teams across the organization to facilitate data-driven decision making by enabling exploration and analysis of historical and near real-time data using cloud-based tools and technologies. Lastly, you will be responsible for gathering requirements and designing solutions that address the problem at hand while also anticipating yet-to-be-asked analytical questions, and for developing and maintaining our analytics platform to meet the company's security and IT standards.

Responsibilities:
- Partner with Data Architecture and other relevant teams to design and implement new cloud data warehouse infrastructure for internal-facing analytics
- Work with various business and functional teams to understand their data and technical requirements and ensure delivered solutions address needs
- Manage the validation of data models to ensure information is available in our analytics warehouse for downstream uses, such as ad hoc analysis and dashboard development
- Maintain performance requirements of our analytics warehouse by tuning warehouse optimizations and storage processes
- Direct and enable the team to collaborate with the Data Governance team and DBAs to design access controls around our analytics warehouse to meet business and Data Governance needs
- Approve documentation and testing to ensure data is accurate and easily understandable
- Promote self-service capabilities and data literacy for business users leveraging the platform through development of training presentations and resources
- Discover and share best practices for data and analytics engineering with members of the team
- Invest in continued learning on data and analytics engineering best practices and evaluate them for fit in improving maintainability and reliability of analytics infrastructure

Qualifications:
[Required] Ability to collaborate with multiple partners (eg, Business Functional areas, Data Platform, Platform Engineering, Security Services, Data Governance, Information Governance, etc.) to craft solutions that align business goals with internal security and development standards
[Required] Ability to communicate technical concepts to audiences with varying levels of technical background and synthesize non-technical requests into technical output
[Required] Comfortable supporting business analysts on high-priority projects
[Required] High attention to detail and ability to think structurally about a solution
[Required] Knowledge of and experience working with analytics/reporting technology and underlying databases
[Required] Strong presentation and communication skills, including the ability to clearly explain deliverables/results to non-technical audiences
[Required] Experience working within an agile environment

Technical Skills - demonstrated proficiency in:
[Required] Implementing and maintaining cloud-based data warehouses and curating a semantic layer that meets the needs of business stakeholders
[Required] Knowledge of and experience working with various analytics/reporting technologies
[Required] Ability to complete work iteratively in an agile environment
[Required] Proficiency in SQL
[Preferred] Experience with Python and/or R
[Preferred] Experience with visualization/reporting tools, such as Tableau
[Preferred] Experience with ETL tools, such as Alteryx

Education and/or Experience:
[Required] Bachelor's degree in a quantitative discipline (eg, Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
[Preferred] Master's degree
[Required] 10+ years of experience as a data engineer, analytics engineer, Business Intelligence analyst, or data scientist

Certificates or Licenses:
[Preferred] Cloud platform certification
[Preferred] BI tool certification
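The extract-load-transform pattern at the heart of this role — land raw data in staging, then build curated tables analysts can query — can be illustrated with a tiny self-contained sketch, using Python's built-in SQLite in place of a cloud warehouse (table and column names are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the analytics warehouse.
conn = sqlite3.connect(":memory:")

# Load: land raw records in a staging table.
conn.execute("CREATE TABLE staging_orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO staging_orders VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 80.0)],
)

# Transform: build an aggregated table for downstream ad hoc analysis
# and dashboards, inside the warehouse (the "T" after "E" and "L").
conn.execute("""
    CREATE TABLE orders_by_region AS
    SELECT region, SUM(amount) AS total, COUNT(*) AS n
    FROM staging_orders
    GROUP BY region
""")

rows = conn.execute(
    "SELECT * FROM orders_by_region ORDER BY region"
).fetchall()
print(rows)  # → [('east', 150.0, 2), ('west', 80.0, 1)]
```

A production warehouse would replace SQLite with a cloud platform and wrap the transform in scheduled, tested jobs, but the staging-then-aggregate shape is the same.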
03/02/2025
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Principal Data Analytics ETL Analyst. The candidate will be responsible for expanding analytics capabilities by making data accessible and usable to analysts throughout the organization. In this role you will lead the design and build of our internal analytics data warehouse and the maintenance of the supporting extract, load, and transform processes. You will demonstrate and disseminate expertise in data sets and support teams across the organization in successfully harnessing this data. You will collaborate with business users and technical teams across the organization to facilitate data-driven decision making by enabling exploration and analysis of historical and near-real-time data using cloud-based tools and technologies. Lastly, you will be responsible for gathering requirements and designing solutions that address the problem at hand while also anticipating yet-to-be-asked analytical questions, and for developing and maintaining our analytics platform to meet the company's security and IT standards.
Responsibilities:
- Partner with Data Architecture and other relevant teams to design and implement new cloud data warehouse infrastructure for internal-facing analytics
- Work with various business and functional teams to understand their data and technical requirements and ensure delivered solutions address their needs
- Manage the validation of data models to ensure information is available in our analytics warehouse for downstream uses, such as ad hoc analysis and dashboard development
- Maintain performance requirements of our analytics warehouse by tuning warehouse optimizations and storage processes
- Direct and enable the team to collaborate with the Data Governance team and DBAs to design access controls around our analytics warehouse that meet business and Data Governance needs
- Approve documentation and testing to ensure data is accurate and easily understandable
- Promote self-service capabilities and data literacy for business users leveraging the platform through the development of training presentations and resources
- Discover and share best practices for data and analytics engineering with members of the team
- Invest in continued learning on data and analytics engineering best practices and evaluate them for fit in improving the maintainability and reliability of analytics infrastructure

Qualifications:
- [Required] Ability to collaborate with multiple partners (e.g., Business Functional areas, Data Platform, Platform Engineering, Security Services, Data Governance, Information Governance) to craft solutions that align business goals with internal security and development standards
- [Required] Ability to communicate technical concepts to audiences with varying levels of technical background and synthesize non-technical requests into technical output
- [Required] Comfortable supporting business analysts on high-priority projects
- [Required] High attention to detail and ability to think structurally about a solution
- [Required] Knowledge of and experience working with analytics/reporting technology and underlying databases
- [Required] Strong presentation and communication skills, including the ability to clearly explain deliverables/results to non-technical audiences
- [Required] Experience working within an agile environment

Technical Skills:
- [Required] Experience implementing and maintaining cloud-based data warehouses and curating a semantic layer that meets the needs of business stakeholders
- [Required] Knowledge of and experience working with various analytics/reporting technologies
- [Required] Ability to complete work iteratively in an agile environment
- [Required] Proficiency in SQL
- [Preferred] Experience with Python and/or R
- [Preferred] Experience with visualization/reporting tools, such as Tableau
- [Preferred] Experience with ETL tools, such as Alteryx

Education and/or Experience:
- [Required] Bachelor's degree in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
- [Preferred] Master's degree
- [Required] 10+ years of experience as a data engineer, analytics engineer, Business Intelligence analyst, or data scientist

Certificates or Licenses:
- [Preferred] Cloud platform certification
- [Preferred] BI tool certification
03/02/2025
Full time
NO SPONSORSHIP Associate Principal, Data Analytics Engineering SALARY: $110k flex plus 10% bonus LOCATION: Chicago, IL Hybrid 3 days in office and 2 days remote You will be expanding analytics capabilities to design and build internal analytics within a data warehouse using on-premises and cloud-based tools. You will create dashboards or visualizations using Tableau, Power BI, SQL queries, Alteryx, Jira, and ServiceNow. Git is a big plus, as is experience with AWS or a cloud data warehouse and Airflow. A BS degree is required and a Master's is preferred. This role supports operational risk; 5 years of experience building dashboards is required, and any audit/risk knowledge is a plus. This role will drive a team responsible for expanding analytics capabilities by making internal corporate data accessible and usable to analysts throughout the organization.

Primary Duties and Responsibilities:
- Work closely with data analysts and business stakeholders to understand their data needs and provide support in data access, data preparation, and ad hoc queries
- Automate data processes to reduce manual interventions, improve data processing efficiency, and optimize data workflows for performance and scalability
- Integrate data from multiple sources and ensure data consistency and quality
- Build data models to ensure information is available in our analytics warehouse for downstream uses, such as analysis, and create dashboards or visualizations using Tableau and Power BI to present insights
- Maintain performance requirements of our analytics warehouse by tuning optimizations and processes
- Create documentation and testing to ensure data is accurate and easily understandable
- Promote self-service capabilities and data literacy for business users leveraging the platform through the development of training presentations and resources
- Discover and share best practices for data and analytics engineering with members of the team
- Invest in your continued learning on data and analytics engineering best practices and evaluate them for fit in improving the maintainability and reliability of analytics infrastructure

Qualifications:
- Ability to collaborate with multiple partners (e.g., Corporate Risk, Compliance, Audit, Production Operations, DBAs, Data Architecture, Security) to craft solutions that align business goals with internal security and development standards
- Ability to communicate technical concepts to audiences with varying levels of technical background and synthesize non-technical requests into technical output
- Comfortable supporting business analysts on high-priority projects
- High attention to detail and ability to think structurally about a solution
- Experience working within an agile environment

Technical Skills & Background:
- Ability to write and optimize complex analytical (SELECT) SQL queries
- Experience with data viz/prep tools Tableau and Alteryx
- [Preferred] Experience with SaaS tools and their backends, such as Jira and ServiceNow
- [Preferred] Applied knowledge of Python for writing custom pipeline code (virtual environments, functional programming, and unit testing)
- [Preferred] Experience with a source code repository system (preferably Git)
- [Preferred] Familiarity with at least one cloud data platform, such as AWS or GCP
- [Preferred] Experience creating and/or maintaining a cloud data warehouse or database
- [Preferred] Exposure to data orchestration tools, such as Airflow
- [Preferred] Understanding of applied statistics and hands-on experience applying these concepts
- Bachelor's degree in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
- 5+ years of experience as a business analyst, data analyst, data engineer, research analyst, analytics engineer, Business Intelligence analyst, or data scientist
03/02/2025
Full time
NO SPONSORSHIP AI Engineer/Developer On site 3 days a week downtown Chicago Salary - 180 - 200K + $1200 - 10K Bonus We are looking for someone with a software development background who just happens to have specialized in AI workflow automation. This makes their skills far more portable, which is super important because the field is changing rapidly and existing automation tech will become obsolete very soon. We need people who are more focused on the programming side of the house, versus general office productivity. Key skills: AI engineering, natural language processing, machine learning, Python, LangChain, LlamaIndex, Semantic Kernel, large language models, PyPDF, Azure Document Intelligence, prompt engineering. In total, there are 20 people associated with this AI team. The Artificial Intelligence team is brand new at this organization. This is a new wave in the Legal space, and we are responsible for creating an AI road map in the Legal industry. The role leverages AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data.

Ideal Candidate:
- Bachelor's Degree in Computer Science, Engineering, or related field
- A minimum of 5 years of experience in AI engineering or a related field
- MUST HAVE: Proven experience with AI engineering tools and technologies, including Python, LangChain, LlamaIndex, and Semantic Kernel
- Understanding of large language models.
- You will likely come across a lot of general software developers who want to transition into AI Engineering - this is an acceptable candidate, as long as they have experience with the AI tools listed above.
- You will also likely come across Data Scientists who are trying to reinvent themselves as Data/AI Engineers. This candidate is acceptable as well, as long as they have experience with the above listed tools and can build a case on AI agents.
The AI Engineer, a member of the AI Engineering team, is responsible for developing and implementing cutting-edge legal AI solutions that drive efficiency, improve decision making, and provide valuable insights across various administrative business groups and legal practices. This role will leverage expertise in AI engineering, natural language processing, and machine learning to design, develop, and deploy innovative solutions that capitalize on both structured and unstructured data.

Duties and Responsibilities:
- Prototype and test AI solutions using Python and Streamlit, with a focus on natural language processing and text extraction from documents (PyPDF, Azure Document Intelligence)
- Develop plugins and assistants using LangChain, LlamaIndex, or Semantic Kernel, with expertise in prompt engineering and semantic function design
- Design and implement Retrieval Augmented Generation (RAG) stores using a combination of classic information retrieval and semantic embeddings stored in vector and graph databases
- Develop and deploy agents using AutoGen, CrewAI, LangChain Agents, and LlamaIndex Agents
- Use Gen AI to distill metadata and insights from documents
- Fine-tune LLMs to optimize for domain and cost
- Collaborate with stakeholders to implement and automate AI-powered solutions for common business workflows
- Enhance documentation procedures, the codebase, and adherence to best practices to promote and facilitate knowledge sharing and ensure the upkeep of an organized and reproducible working environment

Required:
- Bachelor's Degree in Computer Science, Engineering, or related field
- A minimum of 5 years of experience in AI engineering or a related field

Preferred:
- Master's Degree in Computer Science, Engineering, or related field
- Proven experience with AI engineering tools and technologies, including Python, Streamlit, Jupyter Notebooks, LangChain, LlamaIndex, and Semantic Kernel
- Experience with natural language processing, text extraction, and information retrieval techniques
- Strong understanding of machine learning and deep learning concepts, including transformer-based GPT models
- Experience with distributed computing and cloud environments (e.g., Microsoft Azure)
03/02/2025
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal, Data Analytics Engineering. This principal will lead the design and build of internal data analytics and the data warehouse, and maintain ETL processes. They will also manage the validation of data models to make sure the information is available in the analytics warehouse. Required: financial industry experience, SQL, Python, Alteryx, Tableau.

Responsibilities:
- Partner with Data Architecture and other relevant teams to design and implement new cloud data warehouse infrastructure for internal-facing analytics
- Manage the validation of data models to ensure information is available in our analytics warehouse for downstream uses, such as ad hoc analysis and dashboard development
- Maintain performance requirements of our analytics warehouse by tuning warehouse optimizations and storage processes
- Direct and enable the team to collaborate with the Data Governance team and DBAs to design access controls around our analytics warehouse that meet business and Data Governance needs
- Approve documentation and testing to ensure data is accurate and easily understandable
- Promote self-service capabilities and data literacy for business users leveraging the platform through the development of training presentations and resources
- Discover and share best practices for data and analytics engineering with members of the team
- Invest in continued learning on data and analytics engineering best practices and evaluate them for fit in improving the maintainability and reliability of analytics infrastructure

Qualifications:
- Bachelor's degree in a quantitative discipline (e.g., Statistics, Computer Science, Mathematics, Physics, Electrical Engineering, Industrial Engineering) or equivalent professional experience
- 10+ years of experience as a data engineer, analytics engineer, Business Intelligence analyst, or data scientist
- Experience implementing and maintaining cloud-based data warehouses and curating a semantic layer that meets the needs of business stakeholders
- Proficiency in SQL
- Experience with Python and/or R
- Experience with visualization/reporting tools, such as Tableau
- Experience with ETL tools, such as Alteryx
- Ability to collaborate with multiple partners (e.g., Business Functional areas, Data Platform, Platform Engineering, Security Services, Data Governance, Information Governance) to craft solutions that align business goals with internal security and development standards
- Knowledge of and experience working with analytics/reporting technology and underlying databases
- Experience working within an agile environment
03/02/2025
Full time
We are currently looking, on behalf of one of our important clients, for a Senior Data Scientist. The role is a permanent position based in Zurich Canton & comes with a good home office allowance. Your role: Hold responsibility for shaping the intelligence behind an associated product. Develop the product solutions in the areas of detection, localization & quantification. Write algorithms & work with data engineers to run them in a productive environment. Work with noisy & sparse data & write production-ready code. Your Skills & Experience: At least 3 years of relevant professional experience, including strong experience in Data Science, Algorithm Development & Physical Modeling. An in-depth knowledge of Statistics, Physics & Data Analysis. Skills & expertise in Python, SQL & Apache Spark. Ideally experienced in MLOps. Any experience in developing IoT Solutions is considered advantageous. Your Profile: Completed University Degree in the area of Computer Science, Engineering, Physics or similar. Motivated to take responsibility & drive innovative ideas. Dynamic & adaptable. Fluent in English (spoken & written). Any German language skills are considered very advantageous.
31/01/2025
Project-based
We are seeking an experienced Senior Informatica IDMC Data Engineer to join a dynamic team. This is a key role within the data team that will see you driving forward exciting new data initiatives. You will help the organisation implement what is needed to deliver strategic data platforms, including, but not limited to, end-to-end support for the acquisition, organisation, integration, transformation, and secure storage of mission-critical data.

Key Responsibilities:

Informatica IDMC Management:
- Design, develop, and maintain data integration solutions using Informatica Intelligent Data Management Cloud (IDMC).
- Optimize IDMC pipelines for performance and reliability.
- Ensure data integrity and quality across various data sources and targets.

Unix Systems:
- Develop and maintain scripts and processes on Unix/Linux systems to support data integration workflows.

AWS Cloud Services (an added advantage):
- Implement and manage data solutions on AWS, including S3 and EC2.
- Utilize AWS RDS databases for storing and managing structured data.

Batch Orchestration:
- Design, implement, and manage batch job workflows using tools like AutoSys and Apache Airflow.
- Schedule and monitor batch jobs, ensuring timely and successful execution.
- Troubleshoot and resolve issues related to batch processing.

Collaboration and Communication:
- Work closely with data analysts, data scientists, and other stakeholders to understand requirements and deliver solutions.
- Document data integration processes, workflows, and best practices.
- Provide support and training to team members as needed.

Qualifications:
- 10+ years of experience in data engineering or related roles.
- Proven experience with Informatica IDMC, with a minimum of 5 years using IDMC.
- Strong Unix/Linux skills, including scripting and system administration.
- Proficiency with AWS cloud services and RDS databases.
- Proficiency in batch orchestration tools such as AutoSys and Apache Airflow.
- Excellent problem-solving and analytical skills.
- Strong understanding of data integration and ETL processes.
- Ability to work in a fast-paced, dynamic environment.
- Strong communication and interpersonal skills.
- Proficiency with CI/CD pipelines and DevOps practices.
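The listing above names AutoSys and Apache Airflow for batch orchestration. The core idea those tools provide — running each job only after its upstream dependencies have completed — can be sketched with the Python standard library. This is an illustration of the concept only, not how either tool is actually configured, and the job names are hypothetical:

```python
# Illustrative only: a toy dependency-ordered batch runner showing the
# idea behind orchestrators such as AutoSys or Apache Airflow. Real
# orchestrators add scheduling, retries, SLAs, and alerting on top.
from graphlib import TopologicalSorter

def run_batch(jobs, actions):
    """Execute jobs in dependency order; return the completion order.

    `jobs` maps each job name to the list of jobs it depends on;
    `actions` maps each job name to a zero-argument callable.
    """
    completed = []
    for job in TopologicalSorter(jobs).static_order():
        actions[job]()          # raises (halting the batch) on failure
        completed.append(job)
    return completed

# A hypothetical extract -> transform -> load chain: each job lists
# its upstream dependencies, and the runner resolves the order.
jobs = {"extract": [], "transform": ["extract"], "load": ["transform"]}
actions = {name: (lambda n=name: print(f"running {n}")) for name in jobs}
order = run_batch(jobs, actions)
```

Declaring dependencies rather than hand-ordering jobs is the design choice that lets orchestrators parallelise independent branches and restart a failed batch from the point of failure.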
30/01/2025
Full time