*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent full-time role*

A prestigious company is looking for a Director, Software Engineering - QRM. This director will manage six people and will help develop software applications and solutions for the quantitative management platform. Hands-on experience with Java, DevOps, CI/CD, AWS, containers, Terraform, etc. is required.

Responsibilities:
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
- Contribute to development of QRM's databases and ETLs.
- Integrate model prototypes, model library and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.
- Provide hands-on technical leadership and active coordination of tasks and priorities.
- Provide guidance and support for the team and reporting to management.

Qualifications:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 10+ years of experience as a software developer with exposure to cloud or high-performance computing.
- Strong programming skills: able to read and write code in a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes).
- Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization formats (eg, Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.
- Experience with productivity tools such as Jira, Confluence and MS Office.
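To give a flavor of the automated quality assurance frameworks this posting names (eg, PyTest), here is a minimal sketch of a PyTest-style unit test. The `initial_margin` function and its parameters are invented for illustration and do not come from the posting.

```python
# Minimal sketch of PyTest-style unit tests for a hypothetical risk
# calculation. `initial_margin` is an invented example function.

def initial_margin(position_value: float, margin_rate: float) -> float:
    """Return the initial margin requirement for a position."""
    if position_value < 0 or not 0 <= margin_rate <= 1:
        raise ValueError("invalid position value or margin rate")
    return position_value * margin_rate

# PyTest discovers functions named test_*; plain asserts serve as checks.
def test_initial_margin_basic():
    assert initial_margin(100_000.0, 0.05) == 5_000.0

def test_initial_margin_rejects_bad_rate():
    try:
        initial_margin(100_000.0, 1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Run with `pytest` on the command line; each `test_*` function is collected and executed automatically.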
10/05/2024
Full time
GCP Data Streaming Engineer - 3 months - Inside IR35 - Hybrid

Are you passionate about leveraging data to drive transformational change? Here's your chance to make a meaningful impact as a GCP Data Streaming Engineer with a renowned global consultancy. Join a market-leading consultancy for a 3-month contract (with prospects for extension) where you'll play a pivotal role in shaping the future of data streaming solutions. As the GCP Data Streaming Engineer, you'll be at the forefront of harnessing the power of Google Cloud Platform (GCP) to design, develop, and implement cutting-edge data streaming solutions, leveraging your expertise in GCP technologies including Pub/Sub, Dataflow, and BigQuery.

Key Responsibilities:
- Develop scalable solutions: lead the creation of scalable and dependable data streaming solutions on GCP using Apache Kafka and associated technologies.
- Optimize Kafka setup: tune Kafka brokers, topics, partitions, and replication to guarantee the highest performance and reliability of data streams.
- Configure Kafka connectors: set up Kafka connectors for batch processing, managing both source and sink connectors to seamlessly integrate data.
- Apply Python and Apache Beam: use Python and Apache Beam to craft tailored data processing logic and transformations within pipelines, enabling swift and effective data analysis.
- Ensure security: implement SSL/TLS encryption, SASL authentication, and ACL-based authorization to fortify Kafka clusters and communication channels, ensuring data integrity and privacy.

What You Will Ideally Bring:
- Hands-on Kafka configuration: proven expertise in configuring Kafka connectors for batch processing, optimizing their number for improved performance.
- Python and Dataflow/Apache Beam proficiency: skilled at developing custom data processing logic within pipelines.
- Streaming data management: demonstrated ability in handling streaming data, ensuring timely processing and real-time analysis using techniques such as windowing and buffering.
- Secured cloud environment experience: experienced in deploying Kafka in secure cloud environments, implementing SSL/TLS encryption, SASL authentication, and ACL-based authorization.
- Kafka configuration and governance: proficient in configuring Kafka brokers, managing security, and enforcing schema governance to ensure reliability, scalability, and compliance.

Contract Details:
- Duration: 3 months
- Day rate: up to £550 per day (all inclusive)
- Location: Cardiff/Remote
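The windowing mentioned above is something Dataflow/Apache Beam provides natively (fixed, sliding, and session windows). As a rough illustration of the underlying idea only, here is a pure-Python sketch of assigning timestamped events to fixed (tumbling) windows; the event format and 60-second window size are assumptions for illustration.

```python
from collections import defaultdict

def tumbling_windows(events, window_size_s):
    """Group (timestamp_s, value) events into fixed-size windows.

    Mimics the idea behind fixed windows in a streaming pipeline:
    each event is assigned to the window containing its timestamp.
    """
    windows = defaultdict(list)
    for ts, value in events:
        # Window start = timestamp rounded down to a window boundary.
        window_start = (ts // window_size_s) * window_size_s
        windows[window_start].append(value)
    return dict(windows)

# Example: 60-second windows over a few events.
events = [(5, "a"), (42, "b"), (61, "c"), (119, "d"), (130, "e")]
print(tumbling_windows(events, 60))
# {0: ['a', 'b'], 60: ['c', 'd'], 120: ['e']}
```

In a real Beam pipeline the equivalent step would be a `WindowInto(FixedWindows(60))` transform, with watermarks and triggers handling late data rather than a simple dictionary.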
10/05/2024
Project-based
Director, Software Engineering - Quantitative Risk Management Applications

SALARY: $200k-$230k flex plus 27% bonus
LOCATION: Chicago, IL; hybrid, 3 days onsite, 2 days remote

You will manage six-plus people and help build the framework within the quantitative management platform, developing software applications and solutions. Key skills: Java, C++, Python, automation, DevOps, CI/CD, AWS, Terraform, Kubernetes, SQL, Docker, Helm; Master's or PhD.

This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production, along with the environments and infrastructure used in model implementation and testing. This role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring.

Responsibilities:
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
- Contribute to development of QRM's databases and ETLs.
- Integrate model prototypes, model library and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
- Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in cloud environments.
- Proficiency in technical and/or scientific documentation (eg, white papers, user guides).
- Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
- Experience with Agile/Scrum or another rapid development framework.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes).
- Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization formats (eg, Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.

Education and/or Experience:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 10+ years of experience as a software developer with exposure to cloud or high-performance computing.
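To give a flavor of the derivatives-pricing background listed above as a plus, here is a minimal sketch of Monte Carlo pricing for a European call under geometric Brownian motion. All parameters are illustrative assumptions, not taken from the posting, and a production risk system would use far more sophisticated models.

```python
import math
import random

def mc_european_call(spot, strike, rate, vol, maturity, n_paths, seed=0):
    """Price a European call by Monte Carlo under geometric Brownian motion.

    Simulates terminal prices S_T = S0 * exp((r - vol^2/2)*T + vol*sqrt(T)*Z)
    and discounts the average payoff max(S_T - K, 0).
    """
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * maturity
    diffusion = vol * math.sqrt(maturity)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        s_t = spot * math.exp(drift + diffusion * z)
        total += max(s_t - strike, 0.0)
    return math.exp(-rate * maturity) * total / n_paths

# Illustrative parameters; with enough paths the estimate approaches the
# Black-Scholes value (about 10.45 for these inputs).
price = mc_european_call(spot=100.0, strike=100.0, rate=0.05,
                         vol=0.2, maturity=1.0, n_paths=200_000)
print(round(price, 2))
```

Backtesting and model monitoring in a QRM context would compare such model outputs against realized market moves over time rather than a single closed-form benchmark.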
10/05/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

A prestigious financial institution is currently seeking a Director of Risk Management Software Engineering. The candidate will be responsible for functions within Quantitative Risk Management, developing and maintaining risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production, along with the environments and infrastructure used in model implementation and testing.

Responsibilities:
- Collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring.
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
- Contribute to development of QRM's databases and ETLs.
- Integrate model prototypes, model library and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.
- Provide hands-on technical leadership and active coordination of tasks and priorities.
- Provide guidance and support for the team and reporting to management.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
- Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in cloud environments.
- Proficiency in technical and/or scientific documentation (eg, white papers, user guides).
- Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
- Experience with Agile/Scrum or another rapid development framework.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 10+ years of experience as a software developer with exposure to cloud or high-performance computing.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes).
- Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization formats (eg, Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.
- Experience with productivity tools such as Jira, Confluence and MS Office.
- Experience with scripting languages such as Python is a plus.
- Experience with numerical libraries and/or scientific computing is a plus.
09/05/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

A prestigious financial institution is currently seeking a Principal Java Risk Management Software Engineer. The candidate will develop and maintain risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production, along with the environments and infrastructure used in model implementation and testing. The candidate will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand the technical capabilities for model development, backtesting and monitoring.

Responsibilities:
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
- Contribute to development of QRM's databases and ETLs.
- Integrate model prototypes, model library and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (eg, Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
- Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in cloud environments.
- Proficiency in technical and/or scientific documentation (eg, white papers, user guides).
- Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
- Experience with Agile/Scrum or another rapid development framework.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 10+ years of experience as a software developer with exposure to cloud or high-performance computing.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (eg, Terraform), and managing and orchestrating containerized workloads (eg, Kubernetes).
- Experience with logging, profiling, monitoring and telemetry (eg, Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases and other Big Data technology, including efficient storage and serialization formats (eg, Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.
- Experience with productivity tools such as Jira, Confluence and MS Office.
- Experience with scripting languages such as Python is a plus.
- Experience with numerical libraries and/or scientific computing is a plus.
09/05/2024
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible*

A prestigious financial institution is currently seeking a Principal Java Risk Management Software Engineer. The candidate will develop and maintain risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production, along with the environments and infrastructure used in model implementation and testing. The candidate will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand the technical capabilities for model development, back-testing and monitoring.

Responsibilities:
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, back-testing and monitoring.
- Contribute to development of QRM's databases and ETLs.
- Integrate model prototypes, the model library and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
- Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments.
- Proficiency in technical and/or scientific documentation (eg white papers, user guides).
- Strong problem-solving skills: able to accurately identify a problem's source, severity and impact to determine possible solutions and needed resources.
- Experience with Agile/Scrum or another rapid development framework.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics.
- 10+ years of experience as a software developer with exposure to cloud or high-performance computing.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).
- Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.
- Experience with productivity tools such as Jira, Confluence and MS Office.
- Experience with scripting languages such as Python is a plus.
- Experience with numerical libraries and/or scientific computing is a plus.
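The risk-engineering listing above asks for unit and integration tests built with automated QA frameworks. As a minimal illustration of that pattern, here is a hypothetical margin-style calculation with PyTest-style tests; the function, its inputs and the rates are invented for the sketch (the role prefers Java/JUnit, where the same pattern applies).

```python
# Hypothetical margin calculation used only to illustrate automated unit
# testing; nothing here comes from QRM's actual models.

def portfolio_margin(positions, margin_rates):
    """Sum of |notional| * margin rate per asset class (illustrative only)."""
    return sum(abs(qty) * margin_rates[asset] for asset, qty in positions.items())

# PyTest-style tests: any function named test_* is collected and run by `pytest`.
def test_margin_is_sum_of_weighted_notionals():
    positions = {"equity": 1000.0, "rates": -500.0}
    rates = {"equity": 0.15, "rates": 0.02}
    assert portfolio_margin(positions, rates) == 1000 * 0.15 + 500 * 0.02  # 160.0

def test_empty_portfolio_has_zero_margin():
    assert portfolio_margin({}, {}) == 0.0
```

The short position (-500.0) is margined on its absolute notional, which is why the tests pin that behaviour down explicitly.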
Our client is seeking a Data Architect with MDM and Data Governance experience in the banking domain. This is a hybrid role in London, UK.

Experience Details:
- Must have 12 years' experience in architecting data analytics platforms (cloud, big data, EDWs).
- Must have 5 years in data governance solutions: metadata management, data quality, data lineage, data catalogue.
- Must have 5 years' solution experience in reporting and visualization platform modernization: canned, self-service and real-time reporting.
- Must have 3 years' experience in cloud migration solution proposals, including migration strategy, FinOps and cloud governance.
- Must have a strong understanding of banking regulations and their applicability to data analytics platforms.
- Must have 8 years' experience with relational databases, NoSQL databases and/or big data technologies (eg Oracle, SQL Server, Postgres, Spark, Hadoop, other open source).
- Must have experience in data security solutions, identity and access management, and data security access management.
- Must have 3 years' experience of DevOps and CI/CD.

Role Details:
- Define all architecture patterns (Data Mesh, Data Fabric, EventHub, etc) and, for each pattern, the data analytics platform functional blueprint: logical layers and each layer's capabilities.
- Identify all ingestion patterns (real-time, batch, structured, unstructured, images, PDF) for each data analytics platform.
- Analyze the existing data analytics platform architecture playbook, identify gaps, and enrich the playbook with a guided tooling framework and a scorecard-based approach.
- Identify potential engineering frameworks for structured and semi-structured data; define intake frameworks for unstructured data (OCR intake).
- Define a data governance capabilities framework and its integration with the rest of the ecosystem.
- Design and recommend a data traceability, lineage, DQ reconciliation and observability platform.
- Understanding of GCP and AWS cloud service offerings.
- Deep understanding of banking regulations and the solutions required to adopt them in a hybrid cloud ecosystem: data analytics platform governance, FinOps, regulatory conduct, MVP/POC.
09/05/2024
Full time
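The Data Architect brief above calls for DQ reconciliation and lineage capabilities. A minimal, stdlib-only sketch of the core idea, comparing a source and target extract by key and reporting discrepancies; the table shapes and key names are hypothetical, and a production platform would do this at scale with dedicated tooling:

```python
# Minimal data-quality reconciliation sketch (illustrative; the extracts and
# key column are hypothetical, not from the client's platform).

def reconcile(source_rows, target_rows, key):
    """Compare two extracts by a key column and report DQ findings."""
    src_keys = {row[key] for row in source_rows}
    tgt_keys = {row[key] for row in target_rows}
    return {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "missing_in_target": sorted(src_keys - tgt_keys),
        "unexpected_in_target": sorted(tgt_keys - src_keys),
    }

source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [{"id": 1}, {"id": 3}, {"id": 4}]
report = reconcile(source, target, key="id")
# report["missing_in_target"] == [2]; report["unexpected_in_target"] == [4]
```

Equal row counts alone would not have caught the mismatch here, which is why reconciliation checks compare keys rather than counts.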
On behalf of our client, an international financial service provider located in Prague, we are looking for an external resource with skills and abilities as stated below:

IT Functional Analyst - System Requirements UML - BPMN (f/m/x), financial area, Prague

Tasks and responsibilities:
- Understand business concepts and requirements and translate them into clear system specifications, in the required format, for our client's new risk system.
- Communicate ideas clearly and act as a liaison between various business and IT teams.
- Provide advice on expected functionality to the test teams.
- Work independently on tasks but also cooperate within the analytical and project team.
- Clearly communicate, challenge and be challenged about proposed solutions.

Mandatory skills and experience:
- Experience in IT business and functional analysis.
- Business and system requirements management, including gathering and elicitation.
- Analytical and logical thinking to find the best-fitting long-term solutions.
- Working proficiency and communication skills in English on a daily basis.
- Knowledge of financial markets (bonds, equities, interest rate swaps, futures, options).
- Knowledge of modelling languages, mainly UML and BPMN.
- Ability to work with relational databases and basic knowledge of SQL.
- A degree in a business subject, a technical/quantitative subject (Computer Science, Math/Physics, Engineering), or equivalent experience.
- Ability to learn quickly and self-study challenging topics.

Optional skills:
- Awareness of IT architecture, data modelling and cloud technologies.
- Knowledge of big data management and NoSQL database technologies.
- Experience with JIRA and Confluence.
- Knowledge of Enterprise Architect, Bizzdesign Horizzon or similar tools.
- Knowledge of the ArchiMate methodology.
- Positive attitude to analytical and statistical/mathematical work.

Additional information:
- Start date of assignment: ASAP
- Initial contract duration: 31.12.2024
- Degree of employment: Full-time
- Location: Prague

Please let us know if this project is of interest to you and when you could be available. We are looking forward to your reply. Best regards, Andy

GDPR: You are interested in this project and would like to send us your CV? Due to the General Data Protection Regulation (GDPR), we would like to ask you to give us your written consent to the permanent storage of your data in your email. We use your data exclusively for the purpose of our staffing activities. Of course, you have the right to information, correction, blocking or deletion of your data at any time. Template: "I agree to the permanent storage of my data. I know that I have the right to information, correction, blocking or deletion and can revoke this consent at any time."
08/05/2024
Project-based
French Speaking Data Cloud Full Stack Solution Architect/Paris Hybrid 3 days per week onsite/8 months/Start ASAP

Role & Responsibilities: In the context of a big data transformation initiative across the complete set of our data capabilities (data architecture and engineering, data modelling, storage for data & analytics, data visualisation, data science, data integration, metadata management, data storage and warehousing), support the future Data Foundation platform technical architecture activities, including:
- Provide technical guidance and establish best practices for Snowflake account setup and configuration.
- Manage Infrastructure-as-Code and maximise automation.
- Manage enhancements and deployments to support a fully federated platform.
- Act as Subject Matter Expert (SME) for all Snowflake-related questions on the project.
- Own platform-specific Snowflake documentation (decisions, best practices, features).
- Communicate and demonstrate new features.
- Design the cloud environment from a holistic point of view, ensuring it meets all functional and non-functional requirements.
- Carry out deployment, maintenance, monitoring and management tasks.
- Oversee cloud security for the account.
- Complete the integration of new applications into the cloud environment.

Education:
- Higher education completed, with a degree in Computer Science (required).

Experience:
- 5 to 10 years' experience.
- Experience in putting in place data platforms in a cloud environment.

Skills:
- Fluent in French & English (must).
- Deep Snowflake expertise.
- Platform architecture.
- DBA experience.
- Cloud database administration.
- AWS architecture.
- Cloud networking specialist.
- Excellence in communication, coordination & collaboration, stakeholder & risk management, and especially drive & leadership.
- Open-minded and accepting of challenges.
- Highly motivated, adaptable and flexible; willing to integrate into an existing environment and an existing project team.
08/05/2024
Project-based
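The Snowflake architect brief above pairs account setup with Infrastructure-as-Code and automation. One common pattern is rendering repeatable DDL from declarative config so every team's environment is provisioned identically; a minimal sketch follows (the team names, sizes and naming convention are hypothetical, and a real deployment would more likely use Terraform's Snowflake provider or a similar tool):

```python
# Render repeatable Snowflake DDL from declarative config, a minimal
# illustration of the Infrastructure-as-Code idea (names are hypothetical).

TEAMS = [
    {"name": "finance", "warehouse_size": "XSMALL"},
    {"name": "marketing", "warehouse_size": "SMALL"},
]

def render_ddl(team):
    name = team["name"].upper()
    return [
        f"CREATE DATABASE IF NOT EXISTS {name}_DB;",
        f"CREATE WAREHOUSE IF NOT EXISTS {name}_WH "
        f"WAREHOUSE_SIZE = '{team['warehouse_size']}' AUTO_SUSPEND = 60;",
    ]

statements = [stmt for team in TEAMS for stmt in render_ddl(team)]
# statements[0] == "CREATE DATABASE IF NOT EXISTS FINANCE_DB;"
```

Because the DDL uses IF NOT EXISTS, the script is idempotent: re-running it against an account that is already configured changes nothing, which is the property federated platform automation relies on.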
Rust Programmer - Brussels - English speaking (Rust, AWS, Lambda, Jenkins, Linux)

One of our blue-chip clients is urgently looking for a Rust Programmer. Please find some details below:

We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Expertise in Linux environments and web development using React.js is also essential for this role. The candidate should demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable.

Key Responsibilities:
1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation.
2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
3. Manage databases used in production systems, ensuring data integrity, performance, and security.
4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing.
8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms.
10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions.
11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and Shell Scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and ability to work effectively in a collaborative team environment.
- Strong communication skills and ability to document technical solutions effectively.

Preferred Qualifications:
- Certification in AWS or relevant cloud computing technologies.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of DevOps practices and infrastructure-as-code tools like Terraform.
- Understanding of cybersecurity principles and best practices for securing web applications.

Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
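The Rust listing above stresses observability through proper logging, monitoring and alerting. The role itself is Rust (where crates such as `tracing` fill this niche); as a language-neutral illustration, here is a minimal logging sketch in Python that routes records through an in-memory handler so the output can be asserted on in tests. The logger name and message fields are invented for the example:

```python
import io
import logging

# Route log records through an in-memory stream so output can be asserted on
# (a pattern used when testing logging behaviour; illustrative only).
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))

log = logging.getLogger("image_pipeline")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("processed batch=%d images=%d", 7, 512)
log.warning("retrying upload bucket=%s key=%s", "raw-images", "batch-7.parquet")

output = buffer.getvalue()
# output contains "INFO image_pipeline processed batch=7 images=512"
```

Emitting stable key=value pairs, as above, is what later lets a monitoring stack parse the logs and drive alerts from them.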
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation

Job Title: Cloud Technical Consultant/Architect
Location: Gloucestershire/Bristol
Salary: £65 - £95K Per Annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships - or for a dynamic & seasoned technologist who is looking for new & exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.

The Role: This is a brand new opportunity for a bright, driven, customer-focussed professional to join our client's 'Cloud Delivery' team and work alongside their Enterprise Cloud specialists to drive forward the design, deployment & operations of cloud infrastructure, automation and containerisation projects for the end-client. The delivery team help valued clients adopt the most effective cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support them to exploit maximum business benefit from cloud solutions, leveraging best-in-class internal and partner technologies to create relevant and engaging experiences.

Duties:
- Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and providing high-level and low-level design documentation.
- Provide cloud engineering capability to leverage public cloud platforms using automated build processes deployed via Infrastructure as Code.
- Provide technical challenge and assurance throughout development and delivery of work.
- Develop re-usable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership.
- Work independently and/or within a team using a DevOps way of working.

Required Technical Skills & Experience:
- Experienced in cloud-native technologies in AWS.
- Experienced in deploying IaaS/PaaS in multi-cloud environments.
- Experienced in cloud and infrastructure engineering: building and testing new capabilities, and supporting the development of new solutions and common templates.
- Able to act as a bridge from the infrastructure through to user-facing systems.

Desirable Technical Skills & Experience:
- Experienced in Kubernetes containers.
- Experienced in the use of automation tools, eg Terraform, Ansible, Foreman, Puppet and Python.
- Experienced in different flavours of Linux platforms and services.

To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications, however this may not always be possible during periods of high volume. Thank you for your patience.

Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
06/05/2024
Full time
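The cloud consultant role above combines Kubernetes with automated, Infrastructure-as-Code build processes. A common building block is generating manifests from a few parameters instead of hand-editing YAML; here is a minimal sketch that renders a Kubernetes Deployment as a plain dict (the app name, image and replica count are hypothetical, and a real pipeline would feed the result to kubectl, Helm or Terraform):

```python
import json

# Render a minimal Kubernetes Deployment manifest from a few parameters
# (app name, image and replica count are invented examples).

def deployment_manifest(app, image, replicas=2):
    labels = {"app": app}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": app, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("demo-api", "registry.example/demo-api:1.0", replicas=3)
rendered = json.dumps(manifest, indent=2)  # kubectl accepts JSON as well as YAML
```

Deriving the selector and pod-template labels from the same `labels` dict guarantees they match, which is a common source of errors when Deployment manifests are written by hand.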