Job Title: Data Engineer
Location: Manchester
Package: £40,000 - £55,000 + Benefits
Type: Permanent

Sanderson Recruitment is recruiting for a Data Engineer on behalf of our leading insurance client based in Manchester.

Company Overview:
Are you interested in joining a leading insurance company headquartered in the UK? Established over a decade ago, my client specialises in providing a range of insurance services tailored to meet the diverse needs of their customers. With a primary focus on the motor insurance market, they offer comprehensive car insurance directly through their brand, as well as underwriting services to other insurers. They also provide various supporting services related to insurance, including financing, distribution, and legal assistance. My client's commitment to technology and data-driven strategies ensures they deliver high-quality products and services to their customers while mitigating risks effectively.

Role & Responsibilities:
As a Data Engineer, you will take an active, hands-on role in building data solutions for projects and ongoing data products. Your responsibilities will include:
- Developing secure, efficient data pipelines of varying complexity, integrating data from diverse sources: on-premise and off-premise, internal and external.
- Ensuring data integrity and quality by cleansing, mapping, transforming, and optimising data for storage, in line with business and technical requirements.
- Incorporating data observability and quality measures into pipelines to enable self-testing and early detection of processing issues or discrepancies.
- Building solutions to transform and store data across different storage areas, including data lakes, databases, and reporting structures, spanning the data warehouse, Business Intelligence systems, and analytics applications.
- Designing physical data models tailored to business needs and storage optimisation, emphasising reusability and scalability.
- Conducting thorough unit testing of your own code, and peer testing, to maintain high quality and integrity.
- Documenting pipelines and code comprehensively to ensure transparency and aid understanding.
- Adhering to coding standards, architectural principles, and release management processes to ensure code safety, quality, and compliance.
- Guiding and supporting Associate Data Engineers through coaching and mentoring.
- Developing BI solutions of varying complexity, including data marts, semantic layers, and reporting and visualisation solutions using recognised BI tools such as Power BI.

Essential Requirements:
To thrive in this role, candidates must have:
- Demonstrated proficiency in PySpark and SQL development, with a strong interest in advancing your career in data engineering.
- Enthusiasm for applying Azure best practices to deliver data seamlessly from source to consumption on a daily basis.
- The ability to translate customer requirements into actionable designs and timely delivery.
- 2-5 years of experience in designing and implementing end-to-end data solutions.
- Proficiency in SQL Server and Azure technologies such as Data Factory and Synapse, along with expertise in associated ETL technologies.
- Experience working with large, event-based datasets in an enterprise setting.
- Familiarity with testing techniques and tools to ensure data quality and integrity.
- Strong interpersonal and communication skills, with the ability to build strong relationships.
- Active engagement in the data community and a keen interest in using data to drive business value.
- A comprehensive understanding of the complete data life cycle.
- Experience with Continuous Integration/Continuous Delivery (CI/CD) practices.
- A proven track record of thriving in agile environments and in self-managing teams.

This role offers an exciting opportunity to drive data innovation within a forward-thinking organisation. If you're ready to make a direct and meaningful contribution to my client's dynamic work environment, apply now.
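The data observability and quality responsibilities above can be pictured as a validation gate that runs inside a pipeline before data is loaded onward. The sketch below is a minimal, hypothetical illustration in plain Python (the client's actual stack centres on PySpark and Azure, and every field name here is invented for the example):

```python
# Minimal sketch of a pipeline data-quality gate: validate each record,
# split clean rows from rejects, and emit counts for observability.
# All record fields and thresholds are illustrative assumptions.

def validate_row(row):
    """Return a list of quality issues found in a single record."""
    issues = []
    if not row.get("policy_id"):
        issues.append("missing policy_id")
    if row.get("premium") is None or row["premium"] < 0:
        issues.append("invalid premium")
    return issues

def run_quality_gate(rows):
    """Split rows into clean and rejected, with counts for monitoring."""
    clean, rejected = [], []
    for row in rows:
        issues = validate_row(row)
        (rejected if issues else clean).append((row, issues))
    metrics = {"total": len(rows), "clean": len(clean), "rejected": len(rejected)}
    return [r for r, _ in clean], rejected, metrics

rows = [
    {"policy_id": "P001", "premium": 320.0},
    {"policy_id": "", "premium": 250.0},     # fails: missing id
    {"policy_id": "P003", "premium": -5.0},  # fails: negative premium
]
clean, rejected, metrics = run_quality_gate(rows)
print(metrics)  # {'total': 3, 'clean': 1, 'rejected': 2}
```

In a production pipeline the same idea would typically be expressed with PySpark filters or a Data Factory validation activity, with the counts pushed to a monitoring system so processing discrepancies are caught early rather than downstream.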
19/04/2024
Full time
Python Developer - 6 Month Contract - Inside IR35 - Glasgow (Hybrid)

Hamilton Barnes is representing a leading bank looking to hire multiple Lead PySpark/Python Engineers. As a PySpark/Python Lead Software Engineer/Developer, you will play a crucial role in developing and maintaining data processing applications using PySpark and Python within a leading financial institution. Occasional onsite work will be required as part of this role.

Responsibilities:
- Develop and maintain data processing applications using PySpark and Python.
- Collaborate with cross-functional teams to gather requirements and implement solutions.
- Optimize and tune PySpark jobs for performance and scalability.
- Ensure data quality, reliability, and integrity throughout the data processing pipelines.
- Troubleshoot and resolve issues related to data processing and analysis.

Skills/Experience:
- Hands-on experience with PySpark and Python is essential.
- Proven experience developing and maintaining data processing applications.
- Strong understanding of distributed computing concepts and big data technologies.
- Familiarity with data formats and storage systems such as Parquet, Avro, HDFS, and S3.
- Experience with data modelling, ETL processes, and data warehousing concepts.

Contract Details:
- Duration: 6 months
- Location: Glasgow/WFH
- Day Rate: Up to £475 per day (Inside IR35)
- Start Date: ASAP
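The core PySpark workload described here is the transform-then-aggregate pattern (e.g. `df.groupBy(...).agg(...)`). As a hedged, standard-library-only sketch of that pattern (the data and field names are invented; the real role would express this in PySpark over partitioned data):

```python
# Sketch of the group-and-aggregate pattern that PySpark parallelises
# across partitions, expressed here with the Python standard library only.
# All records and field names are illustrative assumptions.
from collections import defaultdict

def aggregate_by_key(records, key, value):
    """Group records by `key` and sum `value` per group."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key]] += rec[value]
    return dict(totals)

transactions = [
    {"account": "A", "amount": 100.0},
    {"account": "B", "amount": 40.0},
    {"account": "A", "amount": 60.0},
]
print(aggregate_by_key(transactions, "account", "amount"))
# {'A': 160.0, 'B': 40.0}
```

In PySpark the equivalent aggregation triggers a shuffle of data between executors, which is why the "optimize and tune" responsibility above usually comes down to choosing sensible partitioning and minimising shuffles.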
18/04/2024
Project-based
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.

Key details:
- Salary: £55,000-£95,000 (Mid-Lead); experienced contractors considered at £400.00 per day (Outside IR35)
- Benefits: 10-25% bonus + healthcare + 10% pension
- Location: Birmingham; can be remote-based, hybrid working, or office-based

Key experience desired/what you will learn:
- Experience developing, deploying, and maintaining machine learning models in production environments.
- Strong understanding of AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes).
- Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring.
- Proficiency in Python and relevant data manipulation and analysis libraries (e.g. pandas, NumPy).
- Experience with distributed computing frameworks such as Apache Spark, and with Airflow, would be a bonus.

Role overview:
If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI, and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join.

Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
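One concrete MLOps practice mentioned above is model monitoring within CI/CD: a candidate model is only promoted to production if it beats the current baseline. As a minimal, hypothetical sketch of such a promotion gate (the metric names and thresholds are invented for illustration, not this employer's actual process):

```python
# Hypothetical sketch of a CI/CD model-promotion gate: deploy a candidate
# model only if it improves on the baseline by a minimum margin without
# degrading calibration. Metric names and thresholds are assumptions.

def should_promote(candidate_metrics, baseline_metrics, min_gain=0.01):
    """Promote only if accuracy improves by at least `min_gain`
    and calibration error does not get worse."""
    gain = candidate_metrics["accuracy"] - baseline_metrics["accuracy"]
    calibration_ok = (candidate_metrics["calibration_error"]
                      <= baseline_metrics["calibration_error"])
    return gain >= min_gain and calibration_ok

baseline = {"accuracy": 0.91, "calibration_error": 0.05}
candidate = {"accuracy": 0.93, "calibration_error": 0.04}
print(should_promote(candidate, baseline))  # True
```

In an AWS setting a check like this would typically run as a pipeline step (for instance in a CI job or a step of a deployment workflow) before the model artifact is pushed to the serving environment.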
17/04/2024
Full time
Xpertise is seeking two talented Machine Learning Engineers to join our esteemed team in Birmingham. As part of our growing engineering division, you will play a pivotal role in designing, implementing, and optimizing machine learning models and data pipelines. With a strong emphasis on AWS technologies and MLOps practices, you'll have the opportunity to contribute to the development of scalable, production-grade solutions that drive business value.

Key details:
- Salary: £55,000-£95,000 (Mid-Lead); experienced contractors considered at £400.00 per day (Outside IR35)
- Benefits: 10-25% bonus + healthcare + 10% pension
- Location: Newcastle or Birmingham; can be remote-based, hybrid working, or office-based

Key experience desired/what you will learn:
- Experience developing, deploying, and maintaining machine learning models in production environments.
- Strong understanding of AWS cloud services, especially in building and managing data pipelines and machine learning workflows: S3, Redshift, Lambda, Glue, EMR, EKS (Kubernetes).
- Familiarity with MLOps/DevOps concepts and practices, including version control, CI/CD, and model monitoring.
- Proficiency in Python and relevant data manipulation and analysis libraries (e.g. pandas, NumPy).
- Experience with distributed computing frameworks such as Apache Spark, and with Airflow, would be a bonus.

Role overview:
If you're looking to work with a team of ambitious software engineers and talented senior leaders, all while working with the latest data, AI, and cloud technologies, then this one's for you. They have big plans to disrupt the industry with this machine learning work, so it's a great time to join.

Interested? Please apply with your CV and/or message Billy Hall for further details. Xpertise acts as an employment agency.
17/04/2024
Full time