FPGA Verification Engineer. This role requires weekly access to the London offices and is with one of the biggest tech companies globally. Description: Have you ever built out FPGA verification infrastructure and processes from scratch? They need an RTL verification expert to build up a UVM environment and implement RTL simulations for system-level functional verification of their FPGA designs. Ideally, this candidate would be proficient with Cadence Xcelium, as this is the tool they use. Skills: RTL Verification, UVM, FPGA. Job Title: FPGA Verification Engineer. Location: London, UK. Job Type: Contract.
Trading as TEKsystems. Allegis Group Limited, Bracknell, RG12 1RT, United Kingdom. Allegis Group Limited operates as an Employment Business and Employment Agency as set out in the Conduct of Employment Agencies and Employment Businesses Regulations 2003. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, Talentis Solutions, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and, as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us.
To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. We adhere to our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield.
14/05/2024
Project-based
*Azure Data Engineer - Freelance & Remote contract* One of our international customers at RED is searching for an experienced Azure Data Engineer for one of their European projects. Role: Azure Data Engineer. Industry: Pharmaceuticals. Start Date: ASAP. Duration: initial contract until 31.12.2024 (with possible renewal; long-term project). Contract: Freelance (no permanent position; no visa sponsorship can be offered). Workload: Full Time. Location: Fully remote (EU time zone). Rate: Negotiable, depending on experience.
About the role: The Data Engineer will provide end-to-end support of the data project life cycle (Acquire, Organize, Analyze and Deliver), including Big Data projects, and participate in the implementation of new data platform solutions and tools.
Skills and Experience: Python, PySpark, SQL and a good understanding of ML libraries. Expertise in Azure and its services, such as Azure Databricks, Azure Data Factory, Azure DevOps, etc. Good hands-on work experience with Databricks (critical required skill). Knowledge of CI/CD principles. Good understanding of and working knowledge with API/web service security and data security practices and methods. Experience working with relational databases (SQL Server, Oracle, MySQL, etc.). Excellent oral and written communication skills. Excellent collaboration, troubleshooting and problem-solving skills. If this is something you are interested in and have the experience for, then please apply/send your updated CV to (see below)
14/05/2024
Project-based
| Senior Data Scientist/Senior Data Engineer Transaction Monitoring | Utrecht | We are in search of a Senior Data Scientist/Senior Data Engineer for Transaction Monitoring. In this role, you'll have the opportunity to make a significant social impact by leveraging analytics models to analyze transactions for our client's 9+ million customers. Preventing money laundering and terrorist financing is a crucial task for our society, and our client plays a pivotal role in this field. We've developed a data-driven solution within our client's organization to analyze over 11 million transactions daily, detecting customers involved in money laundering or financing terrorism. Your role will involve enhancing and improving our Transaction Monitoring (TM) solution by introducing advanced analytics methodologies such as Machine Learning.
| Your Responsibilities |
- Develop and build scalable components for data processing and generation on Big Data platforms.
- Analyze source and output data to build knowledge and identify potential data-related issues.
- Anticipate future challenges of the current infrastructure and design/implement solutions.
- Test and monitor the performance of the TM solution.
- Build a Machine Learning platform and orchestrate workflows for different models (MLOps).
- Collaborate with platform teams for code deployment into production systems.
| Your Qualifications |
- Hold an academic degree (MSc/PhD) in Computer Science, Data Science, Econometrics, or a related STEM field.
- Possess 5+ years of experience in software development, data engineering, advanced analytics, or Machine Learning, ideally with large datasets.
- Proficient in Python and PySpark (experience with Azure Databricks is a plus).
- Have strong software development skills and practices, including Git, release management, and unit testing.
- Demonstrate an analytical mindset, critical thinking, and excellent communication skills in English.
- Motivated to combat financial crime by enhancing the effectiveness of the TM system.
- Structured, precise, communicative and proactive, with a business focus and a natural curiosity to make a difference in fighting financial crime. Thrive in a high-impact, dynamic, and fast-paced environment.
If this sounds like the right opportunity for you, don't hesitate to apply! Michael Bailey International is acting as an Employment Business in relation to this vacancy.
14/05/2024
Project-based
Global Technology Solutions Ltd
Aldermaston, Berkshire
JOB TITLE: Application Packager
LOCATION: Aldermaston
SALARY: £56,252
WORKING HOURS: Standard office hours, 9-day working fortnight, every other Friday off
Holding SC or DV clearance is a MUST. Due to the clearance required, we can only progress with British nationals.
DETAILED JOB DESCRIPTION:
Purpose of the role: We are looking for a customer-focused and enthusiastic candidate for our client's Software Discovery and Packaging Team, who has a genuine interest in solving IT issues and is empathetic to our client's needs and requirements. The applicant should have a very good understanding of software application packaging, possess good written and verbal communication skills, and be willing to collaborate with the wider IT support teams. The Application Packaging Team is responsible for managing the end-to-end delivery of applications and Operating System gold builds, and the ongoing life cycle management for those applications.
Behaviours:
* Demonstrate the ability to methodically work through issues
* Identify issues end users might be facing and drive improvements and simplifications to the end-user process
* Maintain good working relationships with key stakeholders and support teams
* Must be able to deal directly with clients in a friendly and highly confident manner, demonstrating excellent internal and external customer communication skills
* Experience of Application Lifecycle Management in an enterprise environment as part of a desktop transformation project
* Strong application packaging experience using Flexera AdminStudio, InstallShield, IS Recapture & ORCA
* Expert understanding of MSI technology, including transforms
* Document package configuration
* Package defect remediation
* Experience of large OS migration projects (preferably Windows 7 and Windows 10)
* Data analysis and reporting skills
* Good communication skills (customer facing, as you may need to speak to people in the business)
* Experience of managing and maintaining an SCCM 2012 (or higher) production environment
* SCCM deployment/design experience
* Some Operating System Deployment (OSD), including tools such as MDT
* Experience of SCCM/WSUS patching technology
* Microsoft Active Directory server
* Microsoft Group Policy Objects (GPOs)
* Software deployment and 3rd line support and troubleshooting
* Understand and input into desktop engineering standards, processes and best practices
* Involvement in delivering large-scale deployments of software, e.g. Microsoft Office
* Highlight changes to processes or errors to the process owners; maintain own process and working instruction documents
* Work within the contractual guidelines and Statement of Work, or highlight any local shadow IT agreements
* Be politically savvy and understand the concerns and priorities of our customer and our own support teams
ESSENTIAL SKILLS/QUALIFICATIONS:
* Basic understanding of IT project management methodologies, including agile and waterfall
* Basic knowledge of project management tools and techniques
* Microsoft Active Directory server
* Microsoft Group Policy Objects (GPOs)
* Understand cloud technologies
* Understand network topology
* Must have packaged applications for Windows 7 and Windows 10 OS platforms
* Must have experience of complex application packaging (e.g. Autodesk, MS Office, etc.)
* Software deployment and 3rd line support and troubleshooting
* Strong communication skills, both written and verbal
* Self-motivated with a can-do attitude and comfortable working with ambiguity
* Strong MSI technology skills
* Application layering (VMware App Volumes)
* Scripting experience using BAT, PowerShell, VB and C# scripts
* Ability to create and run reports
* Awareness of Change and Release Management
* Excellent written and verbal communication skills with a genuine enthusiasm towards IT service management
* Excellent organisational skills and able to take a methodical approach
* Excellent customer service skills
* Strong and confident presentation skills
* Professional verbal and written communication skills
* Strong SCCM 2012 knowledge
DESIRABLE SKILLS/QUALIFICATIONS:
* ServiceNow
* ITIL Foundation
* Application virtualization (Microsoft App-V)
* Fundamental knowledge across Windows Operating Systems
* MS SCCM 2012
As an employee you will benefit from:
* Flexible benefits including private medical and health insurance, basic cover paid by employer
* Free eye test vouchers
* Company pension scheme
* Income protection after 6 months' service should you be off work due to serious illness
* 23 days holiday, rising by 1 day per year to a maximum of 25
* Option to purchase/sell additional holiday
* Life insurance
* Employee Assistance Programme: free confidential advice covering a range of areas including mental health and financial support
If you have the skills required, apply now! In applying for this position, you consent to your personal data being shared with the specified employer and for your details to remain with GTS for as long as is necessary to process your application. See our Privacy Notice for full information. Global Technology Solutions is acting as an Employment Agency in relation to this vacancy.
13/05/2024
Full time
We have partnered with a revolutionary SaaS business that has created a platform aimed at providing solutions to save people money! They're seeking a Senior Backend Engineer to join their growing Development Team. You will be a critical part of the R&D Team, where you'll need to strike a balance between swift execution and maintaining a high standard of work quality, mentoring and guiding juniors whilst owning features end-to-end. Experience in working collaboratively and seeing the bigger picture is essential, as you'll be thinking through everything from user experience, data models, scalability and operability to ongoing metrics. Tech Stack: Node | NestJS | Mongo | NoSQL. Digital Ecosystem: AWS. Salary: up to £75k. Location: Nottingham - Hybrid, 2 days a week in the office. Would you be interested in hearing more? Reach me at (see below)
13/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.
You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment
The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus
Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Salary: £125-150k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.
You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment
The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus
Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Salary: £75-100k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Graffana, Prometheus Role: REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You will join a team 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script- Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. 
Salary: £100-125k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following are DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Salary: £125-150k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
- Strong experience as an SRE/Site Reliability Engineer
- Excellent AWS
- Kubernetes clustering
- Good Python, JavaScript, Java or Go
- Terraform
- SRE experience in an enterprise-scale environment

The following are DESIRABLE, not essential:
- SRE for big data
- Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
- Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.
Senior SAN Storage Engineer
Start Date: ASAP
Contract Length: 12 Months
Location/Remote Working: Luxembourg

Trust in Soda has formed a strategic partnership with a renowned consultancy company. They are actively seeking an accomplished Senior SAN Storage Engineer to ensure the stability, integrity, and efficient operation of SAN arrays and data fabrics, out-of-band managed storage arrays, as well as any array- or appliance-based replication.

Responsibilities:
- Brings prior experience to organize and define work for complex or ambiguous situations
- Resolves issues, manages workload, and balances priorities through frequent interruptions while meeting specific, time-sensitive deadlines
- Supports the release and life cycle process for SAN fabric and storage array installs, upgrades, and decommissions
- Provides thought leadership for overall SAN fabric and storage array infrastructure support at the enterprise level
- Contributes to operational readiness of platforms with dedicated/shared teams and consults on the resources and skills required, process documentation creation, and updates to guidelines, policies, change, and audit procedures
- Participates in troubleshooting efforts for storage issues and leads in major incidents, root cause analysis, and performance analysis and tuning
- ITIL-compliant champion for incident, request, and change management, with a particular focus on problem management
- Partners with the monitoring team to develop new event and performance monitors/alerts and analysis as needed for new and/or existing systems
- Participates in the modernization and automation of storage infrastructure
- Deploys new SAN/switch infrastructure
- Maintains an accurate CMDB
- Participates in problem management to proactively review open client and infrastructure problems and known errors, minimizing the time spent firefighting and troubleshooting and supporting quicker resolution of incidents and events when they do arise
- Assists and provides technical input to the storage solution architect and others on complex solution design, configuration, integration, and installation of new services

Essential Skill Set:
- A minimum of 10 years of related experience with a Bachelor's degree; or 8 years and a Master's degree; or a PhD with 5 years of experience; or equivalent work experience
- Experience with the following platforms: Hitachi Storage, EMC, NetApp, IBM, Pure, Brocade
- Experience with Python, Ansible, NetApp Cloud Insights, Linux/Windows/VMware, and a basic understanding of TCP/IP networks and Firewalls

Based in Luxembourg
13/05/2024
Project-based
Data DevOps Engineer - DevOps, Big Data - Permanent - Gloucestershire
Location: Gloucestershire/Bristol (full-time onsite)
Salary: £65-95K per annum, negotiable DOE
Benefits: Flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: We are looking for a bright, driven, customer-focussed professional to join our client's Hybrid Cloud Delivery team and work alongside Enterprise Data Engineering Consultants to accelerate and drive data engineering opportunities. This is a fantastic opportunity for a dynamic individual with big ambitions who is an established technologist with both outstanding technical ability and a consultative mindset. It would suit an open-minded, personable self-starter who relishes the fluidity and collaborative nature of consultancy.

The Role: This role sits in our client's Advisory and Professional Services delivery team, who provide thought leadership, industry know-how and technical excellence to consultative engagements, helping customers reap maximum business benefit from their technical investments and leveraging best-in-class vendor and partner technologies to create relevant and effective business-valued technical solutions. The Data DevOps Engineer role is all about the detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling and self-healing systems.

Duties:
- Participating in the full life cycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
- Providing technical thought leadership and advisory on technologies and processes at the core of the data domain, as well as data-domain-adjacent technologies
- Engaging and collaborating with both internal and external teams, as a confident participant as well as a leader
- Assisting with solution improvement activities driven either by the project or the service

Essential Requirements:
- Excellent knowledge of Linux operating system administration and implementation
- Broad understanding of the containerisation domain and adjacent technologies/services, such as Docker, OpenShift, Kubernetes etc.
- Infrastructure as Code and CI/CD paradigms and systems, such as Ansible, Terraform, Jenkins, Bamboo, Concourse etc.
- Monitoring utilising products such as Prometheus, Grafana, ELK, Filebeat etc.
- Observability - SRE
- Big Data solutions (ecosystems) and technologies, such as Apache Spark and the Hadoop ecosystem
- Edge technologies, eg NGINX, HAProxy etc.
- Excellent knowledge of YAML or similar languages

Desirable Requirements:
- JupyterHub awareness
- MinIO or similar S3 storage technology
- Trino/Presto
- RabbitMQ or other common queue technology, eg ActiveMQ
- NiFi
- Rego
- Familiarity with code development and Shell Scripting in Python, Bash etc.

To apply for this Data DevOps Engineer permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
13/05/2024
Full time
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation
Job Title: Cloud Technical Consultant/Architect
Location: Gloucestershire/Bristol
Salary: £65-95K per annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships, or for a dynamic and seasoned technologist looking for new and exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.

The Role: This is a brand-new opportunity for a bright, driven, customer-focussed professional to join our client's Cloud Delivery team and work alongside our Enterprise Cloud specialists to drive forward the design, deployment and operations of Cloud Infrastructure, Automation and Containerisation projects for the end-client. The delivery team help deliver to valued clients the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support clients to exploit maximum business benefit from Cloud solutions, leveraging best-in-class internal and Partner technologies to create relevant and engaging experiences.

Duties:
- Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and producing high-level and low-level design documentation
- Provide Cloud engineering capability to leverage Public Cloud platforms using automated build processes deployed with Infrastructure as Code
- Provide technical challenge and assurance throughout the development and delivery of work
- Develop re-usable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership
- Work independently and/or within a team using a DevOps way of working

Required Technical Skills & Experience:
- Experienced in Cloud-native technologies in AWS
- Experienced in deploying IaaS/PaaS in multi-cloud environments
- Experienced in Cloud and Infrastructure Engineering, building and testing new capabilities, and supporting the development of new solutions and common templates
- Able to act as a bridge from the infrastructure through to user-facing systems

Desirable Technical Skills & Experience:
- Experienced with Kubernetes and containers
- Experienced in the use of automation tools, eg Terraform, Ansible, Foreman, Puppet and Python
- Experienced with different flavours of Linux platforms and services

To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience.

Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
13/05/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent, Full time role*

A prestigious company is looking for a Director, Software Engineering - QRM. This director will manage 6 people and will help develop software applications and solutions for the quantitative management platform. This director will need hands-on experience with Java, DevOps, CI/CD, AWS, containers, Terraform, etc.

Responsibilities:
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk, and stress testing of financial products and derivatives
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources
- Develop CI/CD pipelines
- Configure, execute, and monitor execution pipelines for model testing, backtesting, and monitoring
- Contribute to the development of QRM's databases and ETLs
- Integrate model prototypes, the model library, and model testing tools using best industry practices and innovations
- Create unit and integration tests; build and enhance test automation tools
- Participate in code reviews and demo accomplishments
- Write technical documentation and user manuals
- Provide production support and perform troubleshooting
- Provide hands-on technical leadership and active coordination of tasks and priorities
- Provide guidance and support for the team and reporting for management

Qualifications:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
- 10+ years of experience as a software developer with exposure to cloud or high-performance computing
- Strong programming skills; able to read and/or write code in a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices
- DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness)
- Experience with containerized deployment in cloud environments
- Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes)
- Experience with logging, profiling, monitoring, and telemetry (eg Splunk, OpenTelemetry)
- Good command of database technology and query languages (SQL), non-relational databases, and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers)
- Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest)
- Experience with high-performance and distributed computing
- Experience with productivity tools such as Jira, Confluence, and MS Office
10/05/2024
Full time
Business Development Manager - French-speaking We have teamed up with one of the biggest IT distributors in the UK, which is looking for a French-speaking sales professional to join their growing team. Responsibilities: Generate qualified leads for our clients by confidently using SPIN and other selling techniques; develop specific and extensive client and product knowledge for each campaign to ensure client needs are met; identify new business opportunities and quick-win situations, and nurture the database. French-speaking.
10/05/2024
Full time
Director, Software Engineering - Quantitative Risk Management Applications SALARY: $200k - $230k flex plus 27% bonus LOCATION: Chicago, IL Hybrid, 3 days onsite, 2 days remote You will manage six-plus people and help build the framework within the quantitative risk management platform, developing software applications and solutions. Java, C++, Python, automation, DevOps, CI/CD, AWS, Terraform, Kubernetes, SQL, Docker, Helm; Master's or PhD. This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production, and the environments and infrastructure used in model implementation and testing. This role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Qualifications: Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.)
in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in cloud environments. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.). Strong problem-solving skills: be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes). Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization protocols (eg, Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest). Experience with high-performance and distributed computing.
Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 10+ years of experience as a software developer with exposure to cloud or high-performance computing.
10/05/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Director of Risk Management Software Engineering. Candidate will be responsible for functions within Quantitative Risk Management for developing and maintaining risk models for margin, clearing fund and stress testing with the focus on developing and maintaining risk model software in production, and environments and infrastructure used in model implementation and testing. Responsibilities: Collaborate with other developers, quantitative analysts, business users, data & technology staff to expand QRM's technical capabilities for model development, back-testing and monitoring. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, back-testing and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Provide hands-on technical leadership and active coordination of tasks and priorities. Provide guidance and support for the team and reporting for the management. Qualifications: Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. 
Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in cloud environments. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.). Strong problem-solving skills: be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 10+ years of experience as a software developer with exposure to cloud or high-performance computing. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes). Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization protocols (eg, Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest). Experience with high-performance and distributed computing.
Experience with productivity tools such as Jira, Confluence and MS Office. Experience with scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus.
09/05/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Java Risk Management Software Engineer. Candidate will develop and maintain risk models for margin, clearing fund and stress testing with the focus on developing and maintaining risk model software in production, and environments and infrastructure used in model implementation and testing. Candidate will collaborate with other developers, quantitative analysts, business users, data & technology staff to expand the technical capabilities for model development, back-testing and monitoring. Responsibilities: Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, back-testing and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Qualifications: Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise level software, including in the cloud environment. 
Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.). Strong problem-solving skills: be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 10+ years of experience as a software developer with exposure to cloud or high-performance computing. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes). Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization protocols (eg, Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest). Experience with high-performance and distributed computing. Experience with productivity tools such as Jira, Confluence and MS Office. Experience with scripting languages such as Python is a plus.
Experience with numerical libraries and/or scientific computing is a plus.
09/05/2024
Full time
We are a global IT recruitment specialist providing support to clients across the UK, Europe and Australia. We have an excellent job opportunity for you. Role: Debt Manager, Tallyman platforms Duration: through the end of 2024 Location: Knutsford - hybrid position, 2 days in office Mandatory Skills: Proven experience in running large data migrations for complex business services, with both big-bang and phased approaches. Strong background in presenting migration approaches, getting buy-in and executing migration plans. Experience in managing and communicating with a variety of business, operations and technical stakeholders. Demonstrates a high level of personal responsibility, pragmatism and autonomy, planning own work to meet given objectives within a defined framework. Excellent leadership and communication skills; ability to navigate internal hierarchies and agendas and deliver what is required. Ability to work effectively under tight deadlines. Desired Skills: Experience with the Debt Manager and Tallyman platforms. Roles and Responsibilities: This role will lead and manage the data migration delivery for the BFA Cards portfolio, working closely with the business, change and engineering teams. They will collaborate with cross-functional teams to define migration strategies, engage all key stakeholders and manage all migration events.
09/05/2024
Project-based
On behalf of our client, an international financial service provider located in Prague, we are looking for an external resource with the skills and abilities stated below: IT Functional Analyst - System Requirements, UML, BPMN (f/m/x), financial area, Prague Tasks and responsibilities: Understand business concepts and requirements and translate them into clear system specifications, in the required format, for our client's new risk system. Communicate ideas clearly and act as a liaison between various business and IT teams. Provide advice on expected functionality to the test teams. Work independently on tasks but also cooperate within the analytical and project teams. Clearly communicate, challenge and be challenged about proposed solutions. Mandatory skills and experience: Experience in IT business and functional analysis. Business and system requirements management, including gathering and elicitation. Analytical and logical thinking to find the best-fitting long-term solutions. Working proficiency and communication skills in English on a daily basis. Knowledge of financial markets (bonds, equities, interest rate swaps, futures, options). Knowledge of modelling languages, mainly UML and BPMN. Ability to work with relational databases and basic knowledge of SQL. A degree in a business subject, a technical/quantitative subject (Computer Science, Maths/Physics, Engineering), or equivalent experience. Ability to learn quickly and self-study challenging topics. Optional skills: Awareness of IT architecture, data modelling and cloud technologies. Knowledge of big data management and NoSQL database technologies. Experience with JIRA and Confluence. Knowledge of Enterprise Architect, Bizzdesign Horizzon or similar tools. Knowledge of the ArchiMate methodology. Positive attitude to analytical and statistical/mathematical work. Additional information: Start date of assignment: ASAP. Initial contract duration: until 31.12.2024. Degree of employment:
Full-time. Location: Prague. Please let us know if this project is of interest to you and when you could be available. We are looking forward to your reply. Best regards, Andy GDPR: Are you interested in this project and would like to send us your CV? Due to the General Data Protection Regulation (GDPR), we would like to ask you to give us your written consent, in your email, to the permanent storage of your data. We use your data exclusively for the purpose of our staffing activities. Of course, you have the right to information, correction, blocking or deletion of your data at any time. Template: "I agree to the permanent storage of my data. I know that I have the right to information, correction, blocking or deletion and can revoke this consent at any time."
08/05/2024
Project-based
French-Speaking Data Cloud Full Stack Solution Architect/Paris, hybrid, 3 days per week onsite/8 months/Start ASAP

Role & responsibilities: In the context of a large Data Transformation initiative covering our complete set of data capabilities (data architecture and engineering, data modelling, storage for data & analytics, data visualisation, data science, data integration, metadata management, data storage and warehousing), support the technical architecture activities of the future Data Foundation platform, including:
- Provide technical guidance and establish best practices for Snowflake account setup and configuration
- Manage Infrastructure-as-Code and maximise automation
- Manage enhancements and deployments to support a fully federated platform
- Act as Subject Matter Expert (SME) for all Snowflake-related questions on the project
- Own platform-specific Snowflake documentation (decisions, best practices, features)
- Communicate and demonstrate new features
- Design the cloud environment from a holistic point of view, ensuring it meets all functional and non-functional requirements
- Carry out deployment, maintenance, monitoring and management tasks
- Oversee cloud security for the account
- Complete the integration of new applications into the cloud environment

Education:
- Completed higher education, with a degree in Computer Science required

Experience:
- 5 to 10 years' experience
- Experience putting data platforms in place in a cloud environment

Skills:
- Fluent in French and English (must)
- Deep Snowflake expertise
- Platform architecture
- DBA experience
- Cloud database administration
- AWS architecture
- Cloud networking specialist
- Excellent communication, coordination and collaboration, stakeholder and risk management, with drive and leadership
- Open-minded and willing to accept challenges
- Highly motivated, adaptable and flexible; willing to join an existing environment and project team
08/05/2024
Project-based