Java Software Engineer (Developer Programmer Java Python Automation Big Data AWS GCP SQL Data Governance Finance Trading Contract Contractor Consultant London Financial Services Banking Remote Working AWS Trading Cloud Projects) required by our financial services client in Dublin, Ireland.

You MUST have the following:
- Strong experience as a Java Software Engineer/Developer/Programmer
- Good familiarity with CI/CD automation
- Experience in large-scale enterprise data environments
- AWS or GCP
- Strong database knowledge

The following is DESIRABLE, not essential:
- Finance
- Data governance
- Python

Role: You will join a central data governance team that is only 12 months old, has 4 people, and sits within a company of 1,000. You will be tasked with defining, designing and implementing the automation processes that allow the data governance team to function. Currently, the team comprises analysts who work on the more functional side of data governance; you will have full responsibility for the technical side of the team. The environment is AWS based and you will have the choice of working in Java, Python or both. On the automation side, other teams are working with ArgoCD and GitHub Actions, but you will have the freedom to choose the most appropriate tools. This role is 100% remote, but you will need to work roughly UK hours and be based in Ireland. This will likely begin as a 12-month contract and continue long-term.

Duration: 12-24 months
Rate: €450-550/day
30/04/2024
Project-based
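The role above centres on automating data-governance processes. As a hedged illustration only (the ad allows Java or Python; Python is shown for brevity), a minimal sketch of one such automation: flagging catalog entries that lack required governance metadata. The table names and required fields here are invented; a real implementation would read from the client's actual catalog (for example, AWS Glue).

```python
# Hypothetical governance check: report which required metadata fields are
# missing (absent or empty) for each table in a catalog snapshot.

REQUIRED_FIELDS = ("owner", "classification", "retention_days")

def missing_metadata(catalog: dict) -> dict:
    """Return, per table, the required governance fields that are absent or empty."""
    gaps = {}
    for table, meta in catalog.items():
        absent = [f for f in REQUIRED_FIELDS if not meta.get(f)]
        if absent:
            gaps[table] = absent
    return gaps

# Invented sample data for illustration.
catalog = {
    "trades": {"owner": "markets-it", "classification": "confidential", "retention_days": 2555},
    "clients": {"owner": "crm-team"},  # missing classification and retention
}
print(missing_metadata(catalog))
```

A job like this would typically run on a schedule (the ad mentions other teams using GitHub Actions) and feed its output to the governance analysts.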
Python Software Engineer (Developer Programmer Java Python Automation Big Data AWS GCP SQL Data Governance Finance Trading Contract Contractor Consultant London Financial Services Banking Remote Working AWS Trading Cloud Projects) required by our financial services client in Dublin, Ireland.

You MUST have the following:
- Strong experience as a Python Software Engineer/Developer/Programmer
- Good familiarity with CI/CD automation
- Experience in large-scale enterprise data environments
- AWS or GCP
- Strong database knowledge

The following is DESIRABLE, not essential:
- Finance
- Data governance
- Java

Role: You will join a central data governance team that is only 12 months old, has 4 people, and sits within a company of 1,000. You will be tasked with defining, designing and implementing the automation processes that allow the data governance team to function. Currently, the team comprises analysts who work on the more functional side of data governance; you will have full responsibility for the technical side of the team. The environment is AWS based and you will have the choice of working in Python, Java or both. On the automation side, other teams are working with ArgoCD and GitHub Actions, but you will have the freedom to choose the most appropriate tools. This role is 100% remote, but you will need to work roughly UK hours and be based in Ireland. This will likely begin as a 12-month contract and continue long-term.

Duration: 12-24 months
Rate: €450-550/day
30/04/2024
Project-based
Are you a fluent German speaker and a Data Engineer ready to take on a new challenge? Join our clients in Bern, Lucerne or Zurich as they build data solutions in the financial sector!

In your new role, you'll have the opportunity to create, represent, and realize innovative data-driven solutions, modelling complex data structures and pipelines. Your expertise will be crucial in optimizing current data management processes and migrating functional deployments to a modern tech stack. You'll also play a role in refining and optimizing implemented processes, leveraging new methods and technologies where necessary. Plus, you'll have the chance to shape the architectural direction of data management, introducing new technologies to drive innovation.

What should you bring to the table? The teams are looking for someone with many years of experience in data warehousing and/or data management who is proficient in Python and Spark. Banking and financial knowledge is a big plus, especially in operational or analytical areas such as financing, saving, investing, accounting, controlling, or risk reporting. As daily communication will be in German and English, candidates need to be fluent in both, written and spoken; candidates without fluent German cannot be considered.

Please note that this position is limited to Swiss citizens, Swiss work permit holders who do not require sponsorship, and residents of the EU/EFTA zone. Visa and permit sponsorship is not available; candidates who do not meet these requirements cannot be considered.

If you're ready to make an impact and be part of a dynamic team, we want to hear from you! Apply now by sending your CV to (see below) and let's talk!
30/04/2024
Full time
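The ad above mentions modelling data pipelines and migrating them to a modern stack. As a hedged sketch only, here is the basic shape of such a pipeline: small, individually testable transform steps composed into a run. The step names and sample records are invented for illustration; in the role itself this would run on Spark rather than plain Python lists.

```python
# Minimal pipeline sketch: each step is a pure function over a list of records,
# and run_pipeline folds the steps over the input in order.
from functools import reduce

def drop_incomplete(rows):
    """Remove records with no amount (hypothetical data-quality rule)."""
    return [r for r in rows if r.get("amount") is not None]

def to_cents(rows):
    """Derive an integer-cents field from the decimal amount."""
    return [{**r, "amount_cents": round(r["amount"] * 100)} for r in rows]

def run_pipeline(rows, steps):
    return reduce(lambda acc, step: step(acc), steps, rows)

rows = [{"id": 1, "amount": 12.5}, {"id": 2, "amount": None}]
result = run_pipeline(rows, [drop_incomplete, to_cents])
print(result)
```

Keeping each step a pure function is what makes the "refining and optimizing implemented processes" part of the job tractable: steps can be profiled, replaced, or reordered independently.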
Site Reliability Engineer - SRE

One of our biggest customers, based in the Financial Services sector, is looking for an experienced Site Reliability Engineer (SRE) to join a newly created team. This is an exciting brand-new opportunity to join a dynamic IT team. We are looking for an expert in this field with extensive experience and knowledge of managing APM tools such as Dynatrace, and demonstrable experience (at least 3 years) as a Site Reliability Engineer. The SRE will take ownership of the observability suite, leveraging deep DevOps skills and experience to proactively enhance the performance and stability of APIs and applications. This role will play a crucial part in ensuring reliability and scalability, including managing APM tools such as Dynatrace or New Relic.

Main responsibilities as Site Reliability Engineer:
- Take ownership of the observability suite, including monitoring, logging, and alerting tools, to ensure comprehensive visibility into system performance and health.
- Configure and manage APM tools such as Dynatrace or New Relic, utilizing their capabilities to monitor application performance and troubleshoot issues effectively.
- Utilize deep DevOps skills and experience to implement and maintain infrastructure as code (IaC) practices, automating deployment, scaling, and management processes.
- Proactively measure and identify performance bottlenecks and reliability issues in APIs and applications, and implement solutions to mitigate them.
- Collaborate with development teams to optimize application performance, improve resource utilization, and enhance scalability.
- Implement and maintain robust incident response and post-incident review processes to minimize downtime and prevent recurrence of issues.
- Drive continuous improvement initiatives to enhance the reliability, scalability, and efficiency of infrastructure and services, getting ahead of customer needs.
- Participate in the on-call rotation and provide support for incident resolution and troubleshooting as needed.

Skills and experience you need as Site Reliability Engineer:
- Demonstrable experience (at least 3 years) as a Site Reliability Engineer or in a similar role, with a focus on maintaining high availability, reliability, and scalability of production systems.
- Strong expertise in monitoring, logging, and alerting tools such as Prometheus, the ELK stack, Grafana and Azure Monitor, with the ability to take ownership of the observability suite.
- Experience managing APM tools such as Dynatrace or New Relic, utilizing their capabilities to monitor application performance effectively.
- Deep understanding of DevOps principles and practices, including infrastructure as code (IaC) using Terraform, automated deployment, and configuration management tools.
- Experience with Node.js, Java and JavaScript frameworks.
- Experience with cloud technologies, preferably Azure, and proficiency in managing cloud-based infrastructure.
- Proven ability to proactively identify and resolve performance bottlenecks and reliability issues in APIs and applications.
- Strong collaboration and communication skills, with the ability to work effectively with cross-functional teams.
- Experience with incident response and post-incident review processes, and a commitment to minimizing downtime and preventing recurrence of issues.
- A proactive mindset focused on continuous improvement, constantly seeking opportunities to enhance the reliability, scalability, and efficiency of infrastructure and services.
- A resilient work ethic and the ability to thrive in a fast-paced, dynamic environment, including participation in the on-call rotation for incident response and troubleshooting.
Due to the volume of applications received, it will not be possible to respond to all of them; only applicants considered suitable for interview will be contacted. Proactive Appointments Limited operates as an employment agency and employment business and is an equal opportunities organisation. We take our obligations to protect your personal data very seriously. Any information provided to us will be processed as detailed in our Privacy Notice, a copy of which can be found on our website.
29/04/2024
Full time
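The responsibilities above revolve around reliability targets. As a hedged, generic sketch (not the client's actual tooling), here is one core SRE calculation behind alerting and post-incident review: how many failures an SLO permits, and how much of that error budget has been burned. The SLO target and request counts are made-up numbers.

```python
# Error-budget sketch: an SLO of 0.999 allows at most 0.1% of requests to fail.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    allowed_failures = (1.0 - slo_target) * total_requests
    burned = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "allowed_failures": allowed_failures,
        "failed_requests": failed_requests,
        "budget_burned_fraction": burned,   # 1.0 means the budget is exhausted
        "slo_met": failed_requests <= allowed_failures,
    }

report = error_budget_report(slo_target=0.999, total_requests=1_000_000, failed_requests=250)
print(report)
```

In practice, tools like Prometheus or Dynatrace supply the request and failure counts; the budget-burn rate is then what drives paging thresholds.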
We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation.

General responsibilities: The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Expertise in Linux environments and web development using React.js is also essential for this role. The candidate should demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable.

Responsibilities:
- Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation.
- Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
- Manage databases used in production systems, ensuring data integrity, performance, and security.
- Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
- Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
- Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
- Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing.
- Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
- Ensure observability of systems through proper logging, monitoring, and alerting mechanisms.
- Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions.
- Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
- Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and shell scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and the ability to work effectively in a collaborative team environment.
- Pharmaceutical experience is good to have.

Working remotely, with occasional on-site meetings in Belgium. 6 months + extension.
29/04/2024
Project-based
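The ad above calls for "best practices for data archiving". As a hedged, simplified sketch of one such practice: deciding per object whether it should stay hot, be archived, or be deleted based on age. The thresholds are invented; a real policy would come from the regulatory requirements the ad mentions. (Shown in Python for brevity, although the role itself centres on Rust; on AWS this logic is usually expressed as S3 lifecycle rules rather than application code.)

```python
# Hypothetical retention policy: keep recent objects hot, archive after 90
# days, delete after ~7 years (2555 days). All numbers are illustrative.
from datetime import date

def archive_action(last_access: date, today: date,
                   archive_after_days: int = 90, delete_after_days: int = 2555) -> str:
    age = (today - last_access).days
    if age >= delete_after_days:
        return "delete"
    if age >= archive_after_days:
        return "archive"
    return "keep"

today = date(2024, 4, 29)
print(archive_action(date(2024, 4, 1), today))   # recently accessed object
print(archive_action(date(2023, 1, 1), today))   # old enough to archive
```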
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role: You will join a team of 6 data engineers responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation and the production environment, establishing SRE ground rules for the team and the department, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK; there is no expectation to be regularly in the office.

Salary: £125-150k + 15% guaranteed bonus + 10% pension
29/04/2024
Full time
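The role above stresses "improving overall resiliency of the suite in production". As a hedged, generic sketch of where that work often starts: retrying transient pipeline failures with exponential backoff. The flaky task and delay values below are invented for illustration (in a real stack this is usually delegated to the orchestrator, e.g. Airflow task retries).

```python
# Retry with exponential backoff: delays double on each failed attempt, and
# the last failure is re-raised once the attempt budget is exhausted.
import time

def with_retries(task, max_attempts: int = 4, base_delay: float = 0.01):
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 0.01s, 0.02s, 0.04s, ...

# Invented flaky task that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```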
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role: You will join a team of 6 data engineers responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation and the production environment, establishing SRE ground rules for the team and the department, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK; there is no expectation to be regularly in the office.

Salary: £75-100k + 15% guaranteed bonus + 10% pension
29/04/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.
You MUST have the following: Strong experience as an SRE/Site Reliability Engineer; Excellent AWS; Kubernetes clustering; Good Python, JavaScript, Java or Go; Terraform; SRE experience in an enterprise-scale environment.
The following is DESIRABLE, not essential: SRE for big data; Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite; Grafana, Prometheus.
Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Salary: £100-125k + 15% guaranteed bonus + 10% pension
29/04/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.
You MUST have the following: Strong experience as an SRE/Site Reliability Engineer; Excellent AWS; Kubernetes clustering; Good Python, JavaScript, Java or Go; Terraform; SRE experience in an enterprise-scale environment.
The following is DESIRABLE, not essential: SRE for big data; Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite; Grafana, Prometheus.
Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Salary: £100-125k + 15% guaranteed bonus + 10% pension
29/04/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.
You MUST have the following: Strong experience as an SRE/Site Reliability Engineer; Excellent AWS; Kubernetes clustering; Good Python, JavaScript, Java or Go; Terraform; SRE experience in an enterprise-scale environment.
The following is DESIRABLE, not essential: SRE for big data; Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite; Grafana, Prometheus.
Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Salary: £75-100k + 15% guaranteed bonus + 10% pension
29/04/2024
Full time
Local SAP Data Migration Lead (m/f/d) - Data clean-up/SAP S/4/MDG/Master Data/GxP/CSV/Project Management tools/Cross-team communication/English & German
Project: For our customer, a big pharmaceutical company in Basel, we are looking for a highly qualified Local SAP Data Migration Lead (m/f/d).
Background: ASPIRE is an enterprise-wide global business transformation program that will harmonize and simplify all core E2E processes across Roche and enable their execution from a standardized IT landscape with SAP S/4HANA at the core. This Global Template is being rolled out in waves, and the Local SAP Data Migration Lead will be a key member of the FHLR (Basel/KAU) Wave Deployment Team, driving the local data cleansing and data migration activities of the assigned function (PTS) within the main program.
The perfect candidate: In this role, the Local Data Migration Lead is a member of the PTS Deployment Team and will enable successful business transformation by ensuring that all local activities and commitments to data migration are carried out effectively as per the ASPIRE Global Data Migration strategy and methodology. The local Data Migration Lead (Business) works closely with the functional streams, local SMEs and Business Data Content Owners to ensure that local activities are aligned with ASPIRE governance and executed within the expected timelines and quality. The position will report to the Local Deployment Lead and be part of the Wave Data Council.
Tasks & Responsibilities:
* Leads end-to-end data migration activities, ensuring the accuracy, completeness, and integrity of data transferred from Legacy systems to SAP platforms
* Proactively identifies and mitigates risks associated with data migration, maintaining data quality and governance standards
* Supports high-level and detailed design workshops for data migration, gathering business data requirements and value mapping
* Follows the ASPIRE Data Migration Strategy and provides guidance to migration teams in terms of methodology training, e.g. cleansing and other business-related activities
* Accountable for overall business data migration activities planning, risk and issue management, communications and senior management escalation
* Ensures tracking of business deliverables and is accountable for business readiness and business data quality
* Tracks and monitors continuity of Data Cleansing activities together with the business teams, with corresponding reporting
* Guides the involved team members before/during/after cleansing activities
* Supports the Cutover Lead to ensure data migration activities are aligned
* Ensures compliance with data protection and privacy regulations relevant to the organization
Must Haves:
* Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
* 5+ years of hands-on experience in SAP data migration/data clean-up activities, preferably in the pharma/healthcare industry
* Proven experience leading E2E SAP data migration projects, with in-depth knowledge of SAP modules and associated data migration tools
* Strong project management skills, including planning, execution, and risk management
* Competency in understanding S/4 data structures and setup in various SAP modules, MDG
* Knowledge of the design and implementation of data mapping, transformation, and validation processes using data migration toolsets
* Strong knowledge of Material, Customer, and Vendor Master Data objects and conversions
* Ability to conduct and facilitate workshops with business representatives and team members
* Knowledge of GxP and CSV
* Coaching/supporting teams
* SAP or 3rd-party data migration toolsets
* Project management tools (Smartsheet, Jira, Solman)
* Presentation, reporting, office apps
* English fluent (written/spoken); German is a plus
* Team player with excellent interpersonal skills to work in a multicultural environment
* Strong analytical thinking; a result-oriented team player with the ability to prioritize and organize work effectively
* Hands-on attitude and solution finder
* Excellent communication and presentation skills, with the ability to communicate at any hierarchical level
* Ability to establish a solid network and work effectively across organizational boundaries
Nice to Have:
* Certifications in SAP modules or data migration tools
Reference Nr.: 923401SDA
Role: Local SAP Data Migration Lead (m/f/d)
Industry: Pharma
Workplace: Basel (main) and Kaiseraugst
Pensum: 100%
Start: 01.05.2024 (latest start date: 01.06.2024)
Duration: 12 months
Deadline: 06.05.2024
If you are interested in this position, please send us your complete dossier via the link in this advertisement.
About us: ITech Consult is an ISO 9001:2015 certified Swiss company with offices in Germany and Ireland. ITech Consult specialises in the placement of highly qualified candidates for recruitment in the fields of IT, Life Science & Engineering. We offer staff leasing & payroll services. For our candidates this is free of charge; for payroll we do not charge you any additional fees.
29/04/2024
Project-based
Data DevOps Engineer - DevOps, Big Data - Permanent - Gloucestershire
Location: Gloucestershire/Bristol (full-time onsite)
Salary: £65-95K per annum, negotiable DOE
Benefits: Flexible working hours, career opportunities, private medical, excellent pension, and social benefits
Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.
The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.
The Candidate: We are looking for a bright, driven, customer-focused professional to join our client's Hybrid Cloud Delivery team and work alongside Enterprise Data Engineering Consultants to accelerate and drive data engineering opportunities. This is a fantastic opportunity for a dynamic individual with big ambitions who is an established technologist with both outstanding technical ability and a consultative mindset. It would suit an open-minded, personable self-starter who relishes the fluidity and collaborative nature of consultancy.
The Role: This role sits on our client's Advisory and Professional Services delivery team, who provide thought leadership, industry know-how and technical excellence to consultative engagements, helping customers to reap maximum business benefit from their technical investments by leveraging best-in-class vendor and partner technologies to create relevant and effective business-valued technical solutions. The Data DevOps Engineer role is all about the detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling and self-healing systems.
Duties:
Participating in the full life cycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
Providing technical thought leadership and advisory on technologies and processes at the core of the data domain, as well as data-domain-adjacent technologies
Engaging and collaborating with both internal and external teams, as a confident participant as well as a leader
Assisting with solution improvement activities driven either by the project or the service
Essential Requirements:
Excellent knowledge of Linux operating system administration and implementation
Broad understanding of the containerisation domain and adjacent technologies/services, such as Docker, OpenShift, Kubernetes etc.
Infrastructure as Code and CI/CD paradigms and systems such as Ansible, Terraform, Jenkins, Bamboo, Concourse etc.
Monitoring utilising products such as Prometheus, Grafana, ELK, Filebeat etc.
Observability - SRE
Big Data solutions (ecosystems) and technologies such as Apache Spark and the Hadoop ecosystem
Edge technologies, e.g. NGINX, HAProxy etc.
Excellent knowledge of YAML or similar languages
Desirable Requirements:
JupyterHub awareness
MinIO or similar S3 storage technology
Trino/Presto
RabbitMQ or other common queue technology, e.g. ActiveMQ
NiFi
Rego
Familiarity with code development and Shell Scripting in Python, Bash etc.
To apply for this Data DevOps Engineer permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
29/04/2024
Full time
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation

Job Title: Cloud Technical Consultant/Architect
Location: Gloucestershire/Bristol
Salary: £65 - £95K Per Annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships - or for a dynamic and seasoned Technologist looking for new and exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.

The Role: This is a brand-new opportunity for a bright, driven, customer-focused professional to join our client's 'Cloud Delivery' team and work alongside their Enterprise Cloud specialists to drive forward the design, deployment and operations of Cloud Infrastructure, Automation and Containerisation projects for the end-client. The delivery team help deliver valued clients the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support them to exploit maximum business benefit from Cloud solutions, leveraging best-in-class internal and Partner technologies to create relevant and engaging experiences.

Duties:
* Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and producing high-level and low-level design documentation.
* Provide Cloud engineering capability to leverage Public Cloud platforms using automated build processes deployed via Infrastructure as Code.
* Provide technical challenge and assurance throughout development and delivery of work.
* Develop reusable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership.
* Work independently and/or within a team using a DevOps way of working.

Required Technical Skills & Experience:
* Experienced in Cloud-native technologies in AWS.
* Experienced in deploying IaaS/PaaS in Multi-Cloud Environments.
* Experienced in Cloud and Infrastructure Engineering: building and testing new capabilities, and supporting the development of new solutions and common templates.
* Able to act as a bridge from the infrastructure through to user-facing systems.

Desirable Technical Skills & Experience:
* Experienced in Kubernetes Containers.
* Experienced in the use of Automation tools, eg Terraform, Ansible, Foreman, Puppet and Python.
* Experienced in different flavours of Linux platforms and services.

To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications, however this may not always be possible during periods of high volume. Thank you for your patience.
Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
29/04/2024
Full time
We have partnered with a revolutionary SaaS business that has created a platform aimed at providing solutions to save people money! They're seeking a Senior Full Stack Engineer with a Back End leaning to join their growing business. You will be a critical part of the R&D Team, where you'll need to strike a balance between swift execution and maintaining a high standard of work quality, mentoring and guiding juniors whilst owning features end-to-end. Experience in working collaboratively and seeing the bigger picture is essential, as you'll be thinking through everything from user experience, data models, scalability and operability to ongoing metrics.

Tech Stack: Node | NestJS | Vue | TypeScript | Mongo | NoSQL
Digital Ecosystem: AWS
Salary: up to £75k
Location: Nottingham - Hybrid, 2 days a week in the office

Would you be interested in hearing more? Reach me at (see below)
25/04/2024
Full time
A leading Software Company are on the lookout for an experienced Senior Software Engineer (skilled with .NET & Angular) to join their team of talented developers. The company has been operating successfully for over 11 years and has seen massive growth since their humble beginnings in Glasgow. They work with the Microsoft tech stack to create solutions that increase online sales for their clients. They work with a host of household names and their client list is continually growing, so you'll have the chance to work with some of the biggest companies across the globe. You'd be joining their team of 7 skilled developers, some of whom have been with the company for over 10 years (their retention is great here, which is a massive testament to the company culture). You'll be helping to deliver a brand-new platform whilst remaining hands-on with their core product. Ideally, you'll have a few years of commercial experience with .NET (C#/.NET Core, .NET Framework) and some exposure to Front End technologies (preferably Angular), as you'll be working across the full stack here. Some knowledge/experience of working in a cloud-hosted environment is definitely a plus as well.

You'll have commercial experience with the following:
* .NET framework (C#/.NET Core)
* Databases (SQL, MySQL)
* Front End exposure (Angular)

Experience with the following would definitely get you some bonus points:
* Leadership experience
* Cloud Services (Azure/AWS)
* Digital Agency experience
* Exposure to Docker & Kubernetes

The company work fully remotely but do meet up once a month in Glasgow, and they like the whole team to be there. The salary on offer for this role is up to £60k on top of a good list of benefits (private health care, bonus, pension plan). There's also major opportunity for personal development through training here as well. If you think you'd be a good fit for this role, then please apply and/or reach out to Max at Cathcart Technology for more information.
23/04/2024
Full time
Red - The Global SAP Solutions Provider
Oslo, Oslo
Data Modeller/Oslo 3 days per week onsite/12 months +/Start ASAP

Responsibilities:
* Work closely with different data domains, business areas and initiatives to define and develop certified data products and information flows.
* Build relationships and show direction for how new solutions can affect our existing services and deliveries.
* Hold the "Data Engineering Community of Practice" within our Tech Family.
* Establish new communication channels for technical knowledge sharing of modern data engineering.

Experience:
* Experience with data modelling.
* Experience with data warehouses, data lakes, data fabric, Mesh etc.
* Experience with both data and software engineering, and how to combine those to build data products.
* Experience with DevOps methods.
* Experience with Middleware, ETL/ELT, SQL, Apache Kafka, StreamSets, dbt, Apache Airflow, Snowflake or a similar tooling stack.
* Experience with building cloud solutions (AWS/Azure, Serverless, cost engineering etc.)
* Strong in automation: use of metadata-driven design, CI/CD, event-driven architecture.
* Experience from big and complex organisations with high demands on security and availability across a broad portfolio of different technologies.

Qualifications:
* Master's (M.Sc.) or Bachelor's in informatics, computer engineering, cybernetics or similar.
* Minimum 3-5 years relevant work experience.
* Good communication skills, both verbal and written.
23/04/2024
Project-based