Global Technology Solutions Ltd
Aldermaston, Berkshire
JOB TITLE: Application Packager
LOCATION: Aldermaston
SALARY: £56,252
WORKING HOURS: Standard office hours; 9-day working fortnight, every other Friday off.
Holding SC or DV clearance is a MUST. Due to the clearance required we can only progress with British nationals.

DETAILED JOB DESCRIPTION:
Purpose of the role
We are looking for a customer-focused and enthusiastic candidate for our client's Software Discovery and Packaging Team, who has a genuine interest in solving IT issues and is empathetic to our client's needs and requirements. The applicant should have a very good understanding of software application packaging, possess good written and verbal communication skills, and be willing to collaborate with the wider IT support teams. The Application Packaging Team is responsible for managing the end-to-end delivery of applications and Operating System gold builds, and the ongoing life cycle management of those applications.

Behaviours
* Demonstrate the ability to methodically work through issues
* Identify issues end users might be facing and drive improvements and simplifications to help end-user processes
* Maintain good working relationships with key stakeholders and support teams
* Must be able to deal directly with clients in a friendly and highly confident manner, demonstrating excellent internal and external customer communication skills
* Experience of Application Lifecycle Management in an Enterprise environment as part of a desktop transformation project
* Strong application packaging experience using Flexera AdminStudio, InstallShield, IS Recapture and Orca
* Expert understanding of MSI technology, including transforms
* Document package configuration
* Package defect remediation
* Experience of large OS migration projects (preferably Windows 7 and Windows 10)
* Data analysis and reporting skills
* Good communication skills (customer facing, as you may need to speak to people in the business)
* Experience of managing and maintaining an SCCM 2012 (or higher) production environment
* SCCM deployment/design experience
* Some Operating System Deployment (OSD), including tools such as MDT
* Experience of SCCM/WSUS patching technology
* Microsoft Active Directory server
* Microsoft Group Policy Objects (GPOs)
* Software deployment and 3rd line support and troubleshooting
* Understand and input into desktop engineering standards, processes and best practices
* Involvement in delivering large-scale deployments of software, eg Microsoft Office
* Highlight changes to processes or errors to the process owners; maintain own process and working instruction documents
* Work within the contractual guidelines and Statement of Work, or highlight any local shadow IT agreements
* Be politically savvy and understand the concerns and priorities of our customer and our own support teams

ESSENTIAL SKILLS/QUALIFICATIONS:
* Basic understanding of IT project management methodologies, including agile and waterfall
* Basic knowledge of project management tools and techniques
* Microsoft Active Directory server
* Microsoft Group Policy Objects (GPOs)
* Understand cloud technologies
* Understand network topology
* Must have packaged applications for Windows 7 and Windows 10 OS platforms
* Must have experience of complex application packaging (eg Autodesk, MS Office)
* Software deployment and 3rd line support and troubleshooting
* Strong communication skills, both written and verbal
* Self-motivated with a can-do attitude and comfortable working with ambiguity
* Strong MSI technology skills
* Application layering (VMware App Volumes)
* Scripting experience using BAT, PowerShell, VB and C#
* Ability to create and run reports
* Awareness of Change and Release Management
* Excellent written and verbal communication skills with a genuine enthusiasm for IT service management
* Excellent organisational skills and able to take a methodical approach
* Excellent customer service skills
* Strong and confident presentation skills
* Strong SCCM 2012 knowledge

DESIRABLE SKILLS/QUALIFICATIONS:
* ServiceNow
* ITIL Foundation
* Application virtualization (Microsoft App-V)
* Fundamental knowledge across Windows Operating Systems

As an employee you will benefit from:
* Flexible benefits including private medical and health insurance, basic cover paid by employer
* Free eye test vouchers
* Company pension scheme
* Income protection after 6 months' service should you be off work due to serious illness
* 23 days' holiday, rising by 1 day per year to a maximum of 25
* Option to purchase/sell additional holiday
* Life insurance
* Employee Assistance Programme: free confidential advice covering a range of areas including mental health and financial support

If you have the skills required, apply now! In applying for this position, you consent to your personal data being shared with the specified employer and for your details to remain with GTS for as long as is necessary to process your application. See our Privacy Notice for full information. Global Technology Solutions is acting as an Employment Agency in relation to this vacancy.
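Much of the packaging work described above centres on silent MSI installs driven by transforms. As a minimal sketch (the file names are hypothetical, not from the ad), the command an SCCM deployment would typically run can be assembled in a script like this:

```shell
# Hypothetical file names, for illustration only; real MSI/MST files come
# from the vendor and from tools such as InstallShield or Orca.
MSI="ExampleApp.msi"               # base installer (hypothetical)
MST="ExampleApp_Config.mst"        # transform carrying site-specific settings
LOG="ExampleApp_install.log"

# Assemble the silent-install command:
# /qn = no UI, /norestart = suppress reboot, /l*v = verbose log.
CMD="msiexec /i ${MSI} TRANSFORMS=${MST} /qn /norestart /l*v ${LOG}"
echo "$CMD"
```

The transform (`.mst`) is where the packager captures customisations (licence keys, feature selections, shortcut removal) without modifying the vendor's MSI, which is what keeps upgrades and defect remediation manageable.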
13/05/2024
Full time
We have partnered with a revolutionary SaaS business that has created a platform aimed at providing solutions to save people money! They're seeking a Senior Backend Engineer to join their growing Development Team. You will be a critical part of the R&D Team, where you'll need to strike a balance between swift execution and maintaining a high standard of work quality, mentoring and guiding juniors whilst owning features end-to-end. Experience of working collaboratively and seeing the bigger picture is essential, as you'll be thinking through everything from user experience, data models, scalability and operability to ongoing metrics.

Tech Stack: Node | NestJS | Mongo | NoSQL
Digital Ecosystem: AWS
Salary: up to £75k
Location: Nottingham - Hybrid, 2 days a week in the office

Would you be interested in hearing more? Reach me at (see below)
13/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £125-150k + 15% guaranteed bonus + 10% pension
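Improving "overall resiliency of the suite in production", as the role describes, is usually formalised as SLOs with an error budget. A back-of-the-envelope sketch (the 99.9% target is illustrative, not stated in the ad):

```shell
# Illustrative only: the ad names no SLO; 99.9% is a common example target.
SLO="99.9"                 # availability target, percent
MINUTES_PER_MONTH=43200    # 30-day month
# Allowed downtime per month = (100 - SLO)% of total minutes.
BUDGET=$(awk -v slo="$SLO" -v m="$MINUTES_PER_MONTH" \
  'BEGIN { printf "%.1f", (100 - slo) / 100 * m }')
echo "Error budget at ${SLO}%: ${BUDGET} minutes/month"
```

At 99.9% that budget is roughly 43 minutes a month, which is the kind of figure an SRE uses to decide when to freeze releases versus keep shipping.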
13/05/2024
Full time
Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund) required by our asset management client in London.

You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £100-125k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £75-100k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £100-125k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London.

You MUST have the following:
* Strong experience as an SRE/Site Reliability Engineer
* Excellent AWS
* Kubernetes clustering
* Good Python, JavaScript, Java or Go
* Terraform
* SRE experience in an enterprise-scale environment

The following is DESIRABLE, not essential:
* SRE for big data
* Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite
* Grafana, Prometheus

Role: You will join a team of 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team, responsible for pipeline optimisation, the production environment, establishing ground rules for the team and the department from an SRE standpoint, and improving the overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script; Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office.

Salary: £125-150k + 15% guaranteed bonus + 10% pension
13/05/2024
Full time
REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You MUST have the following: Strong experience as an SRE/Site Reliability Engineer Excellent AWS Kubernetes clustering Good Python, JavaScript, Java or Go Terraform SRE experience in an enterprise scale environment The following is DESIRABLE, not essential: SRE for big data Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite Graffana, Prometheus Role: REMOTE Site Reliability Engineer (SRE DevOps Infrastructure AWS Python Java Go JavaScript Big Data Lake Data Mesh Kubernetes Terraform Finance Trading Glue Athena Dremio Iceberg Snowflake DBT Arrow gRPC protobuf Airflow Ignite Asset Manager Investment Management Financial Services Hedge Fund DevOps) required by our asset management client in London. You will join a team 6 data engineers who are responsible for core engineering of a big data environment on AWS. You will be the first SRE within the team and responsible for pipeline optimisation, the production environment, establishing ground rules for this team and the department from an SRE standpoint and improving overall resiliency of the suite in production. The ideal candidate will have worked as an SRE in a big data environment. AWS is imperative. You will have the ability to script- Python, Java or JavaScript would be ideal. Terraform and clustered Kubernetes are essential. An understanding of, or exposure to, the following would also be very desirable: Glue, Athena, Dremio, Iceberg, Snowflake, DBT, Arrow, gRPC, protobuf, Airflow, Ignite. This role can be remote as long as you are in the UK. There is no expectation to be regularly in the office. 
Senior SAN Storage Engineer
Start Date: ASAP
Contract Length: 12 Months
Location/Remote Working: Luxembourg

Trust in Soda has formed a strategic partnership with a renowned consultancy company. They are actively seeking an accomplished Senior SAN Storage Engineer to ensure the stability, integrity, and efficient operation of SAN arrays and data fabrics, out-of-band managed storage arrays, and any array- or appliance-based replication.

Responsibilities:
* Brings prior experience to organize and define work for complex or ambiguous situations
* Resolves issues, manages workload, and balances priorities through frequent interruptions while meeting specific, time-sensitive deadlines
* Supports release and life cycle processes: SAN fabric and storage array installs, upgrades, and decommissions
* Provides thought leadership for overall SAN fabric and storage array infrastructure support at the enterprise level
* Contributes to operational readiness of platforms with dedicated/shared teams and consults on the resources and skills required, process documentation creation, and updates to guidelines, policies, change, and audit procedures
* Participates in troubleshooting efforts for storage issues and leads major incidents, root cause analysis, and performance analysis and tuning
* Acts as an ITIL-compliant champion for incident, request, and change management, with a particular focus on problem management
* Partners with the monitoring team to develop new event and performance monitors/alerts and analysis as needed for new and/or existing systems
* Participates in the modernization and automation of storage infrastructure
* Deploys new SAN/switch infrastructure
* Maintains an accurate CMDB
* Participates in problem management to proactively review open client and infrastructure problems and known errors, minimising time spent firefighting and troubleshooting and supporting quicker resolution of incidents and events when they do arise
* Assists and provides technical input to the storage solution architect and others on the complex solution design, configuration, integration, and installation of new services

Essential Skill Set:
* A minimum of 10 years of related experience with a Bachelor's degree; or 8 years and a Master's degree; or a PhD with 5 years' experience; or equivalent work experience
* Experience with the following platforms: Hitachi Storage, EMC, NetApp, IBM, Pure, Brocade
* Experience with Python, Ansible, NetApp Cloud Insights, Linux/Windows/VMware, and a basic understanding of TCP/IP networks and Firewalls

Based in Luxembourg
13/05/2024
Project-based
Data DevOps Engineer - DevOps, Big Data - Permanent - Gloucestershire
Location: Gloucestershire/Bristol (full-time onsite)
Salary: £65 - £95K per annum, negotiable DOE
Benefits: Flexible working hours, career opportunities, private medical, excellent pension, and social benefits
Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: We are looking for a bright, driven, customer-focused professional to join our client's Hybrid Cloud Delivery team and work alongside Enterprise Data Engineering Consultants to accelerate and drive data engineering opportunities. This is a fantastic opportunity for a dynamic individual with big ambitions who is an established technologist with both outstanding technical ability and a consultative mindset. This would suit an open-minded, personable self-starter who relishes the fluidity and collaborative nature of consultancy.

The Role: This role sits in our client's Advisory and Professional Services delivery team, who provide thought leadership, industry know-how and technical excellence to consultative engagements, helping customers to reap maximum business benefit from their technical investments and leveraging best-in-class Vendor & Partner technologies to create relevant, effective, business-valued technical solutions. The Data DevOps Engineer role is all about the detailed development and implementation of scalable clustered Big Data solutions, with a specific focus on automated dynamic scaling and self-healing systems.

Duties:
* Participating in the full life cycle of data solution development, from requirements engineering through to continuous optimisation engineering and all the typical activities in between
* Providing technical thought leadership and advisory on technologies and processes at the core of the data domain, as well as data-domain-adjacent technologies
* Engaging and collaborating with both internal and external teams, as a confident participant as well as a leader
* Assisting with solution improvement activities driven either by the project or the service

Essential Requirements:
* Excellent knowledge of Linux operating system administration and implementation
* Broad understanding of the containerisation domain and adjacent technologies/services, such as Docker, OpenShift, Kubernetes etc.
* Infrastructure as Code and CI/CD paradigms and systems, such as Ansible, Terraform, Jenkins, Bamboo, Concourse etc.
* Monitoring utilising products such as Prometheus, Grafana, ELK, Filebeat etc.
* Observability - SRE
* Big Data solutions (ecosystems) and technologies, such as Apache Spark and the Hadoop ecosystem
* Edge technologies, eg NGINX, HAProxy etc.
* Excellent knowledge of YAML or similar languages

Desirable Requirements:
* JupyterHub awareness
* MinIO or similar S3 storage technology
* Trino/Presto
* RabbitMQ or other common queue technology, eg ActiveMQ
* NiFi
* Rego
* Familiarity with code development and Shell Scripting in Python, Bash etc.

To apply for this Data DevOps Engineer permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience. Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
13/05/2024
Full time
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation
Job Title: Cloud Technical Consultant/Architect
Location: Gloucestershire/Bristol
Salary: £65 - £95K Per Annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits
Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships - or for a dynamic and seasoned technologist who is looking for new and exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.

The Role: This is a brand new opportunity for a bright, driven, customer-focused professional to join our client's Cloud Delivery team and work alongside our Enterprise Cloud specialists to drive forward the design, deployment and operations of Cloud Infrastructure, Automation and Containerisation projects for the end client. The delivery team help deliver to valued clients the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support them to exploit maximum business benefit from Cloud solutions, leveraging best-in-class internal and Partner technologies to create relevant and engaging experiences.

Duties:
* Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and producing high-level and low-level design documentation.
* Provide Cloud engineering capability to leverage Public Cloud platforms using automated build processes deployed with Infrastructure as Code.
* Provide technical challenge and assurance throughout the development and delivery of work.
* Develop reusable common solutions and patterns to reduce development lead times, improve commonality and lower Total Cost of Ownership.
* Work independently and/or within a team using a DevOps way of working.

Required Technical Skills & Experience:
* Experienced in Cloud-native technologies in AWS.
* Experienced in deploying IaaS/PaaS in multi-cloud environments.
* Experienced in Cloud and Infrastructure Engineering, building and testing new capabilities, and supporting the development of new solutions and common templates.
* Able to act as a bridge from the infrastructure through to user-facing systems.

Desirable Technical Skills & Experience:
* Experienced in Kubernetes and containers.
* Experienced in the use of automation tools, eg Terraform, Ansible, Foreman, Puppet and Python.
* Experienced in different flavours of Linux platforms and services.

To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience.
Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
13/05/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent full-time role*

A prestigious company is looking for a Director, Software Engineering - QRM. This director will manage 6 people and will help develop software applications and solutions for the quantitative management platform. This director will need hands-on experience with Java, DevOps, CI/CD, AWS, containers, Terraform, etc.

Responsibilities:
* Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
* Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
* Develop CI/CD pipelines.
* Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
* Contribute to the development of QRM's databases and ETLs.
* Integrate model prototypes, the model library and model testing tools using best industry practices and innovations.
* Create unit and integration tests; build and enhance test automation tools.
* Participate in code reviews and demo accomplishments.
* Write technical documentation and user manuals.
* Provide production support and perform troubleshooting.
* Provide hands-on technical leadership and active coordination of tasks and priorities.
* Provide guidance and support for the team and reporting for management.

Qualifications:
* Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics.
* 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
* Strong programming skills: able to read and/or write code using a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
* Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
* DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments.
* Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).
* Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry).
* Good command of database technology and query languages (SQL), non-relational DBs and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers).
* Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest).
* Experience with high-performance and distributed computing.
* Experience with productivity tools such as Jira, Confluence, MS Office.
10/05/2024
Full time
Business Development Manager - French-speaking
We have teamed up with one of the biggest IT distributors in the UK, which is looking for a French-speaking sales professional to join their growing team.

Responsibilities:
* Generate qualified leads for our clients by confidently using SPIN and other selling techniques
* Develop specific and extensive client and product knowledge, depending on each campaign, to ensure client needs are met
* Identify new business opportunities and quick-win situations, and nurture the database
* French speaking
10/05/2024
Full time
Director, Software Engineering - Quantitative Risk Management Applications
SALARY: $200k - $230k flex plus 27% bonus
LOCATION: Chicago, IL (Hybrid: 3 days onsite, 2 days remote)

You will manage six-plus people and help build the framework within the quantitative management platform, developing software applications and solutions. Keywords: Java, C++, Python, automation, DevOps, CI/CD, AWS, Terraform, Kubernetes, SQL, Docker, Helm, Master's or PhD.

This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production and the environments and infrastructure used in model implementation and testing. This role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring.

Responsibilities:
* Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
* Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
* Develop CI/CD pipelines.
* Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
* Contribute to the development of QRM's databases and ETLs.
* Integrate model prototypes, the model library and model testing tools using best industry practices and innovations.
* Create unit and integration tests; build and enhance test automation tools.
* Participate in code reviews and demo accomplishments.
* Write technical documentation and user manuals.
* Provide production support and perform troubleshooting.

Qualifications:
* Strong programming skills: able to read and/or write code using a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
* Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in cloud environments.
* Proficiency in technical and/or scientific documentation (eg white papers, user guides).
* Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
* Experience with Agile/Scrum or another rapid development framework.
* Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
* Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
* Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
* DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments.
* Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).
* Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry).
* Good command of database technology and query languages (SQL), non-relational DBs and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers).
* Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest).
* Experience with high-performance and distributed computing.

Education and/or Experience:
* Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics.
* 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
10/05/2024
Full time
Director, Software Engineering - Quantitative Risk Management Applications. SALARY: $200k-$230k flex plus 27% bonus. LOCATION: Chicago, IL; hybrid, 3 days onsite, 2 days remote. You will manage six-plus people and help build the framework within the quantitative risk management platform, developing software applications and solutions. Java, C++, Python, automation, DevOps, CI/CD, AWS, Terraform, Kubernetes, SQL, Docker, Helm; Master's or PhD. This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production and the environments and infrastructure used in model implementation and testing. This role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to the development of QRM's databases and ETLs. Integrate model prototypes, the model library and model testing tools using industry best practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Qualifications: Strong programming skills; able to read and write code in a language such as Java, C++ or Python in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments. Proficiency in technical and/or scientific documentation (eg white papers, user guides). Strong problem-solving skills: be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/Scrum or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest). Experience with high-performance and distributed computing.
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Director of Risk Management Software Engineering. The candidate will be responsible for functions within Quantitative Risk Management (QRM), developing and maintaining risk models for margin, clearing fund and stress testing, with a focus on maintaining risk model software in production and the environments and infrastructure used in model implementation and testing. Responsibilities:
* Collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, back-testing and monitoring
* Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives
* Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources
* Develop CI/CD pipelines; configure, execute, and monitor execution pipelines for model testing, back-testing and monitoring
* Contribute to the development of QRM's databases and ETLs
* Integrate model prototypes, the model library and model testing tools using industry best practices and innovations
* Create unit and integration tests; build and enhance test automation tools
* Participate in code reviews and demo accomplishments
* Write technical documentation and user manuals
* Provide production support and perform troubleshooting
* Provide hands-on technical leadership and active coordination of tasks and priorities
* Provide guidance and support for the team and reporting for management
Qualifications:
* Strong programming skills: able to read and write code in a language such as Java, C++ or Python in a collaborative software development setting; the role requires advanced coding, database and environment manipulation skills
* Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments
* Proficiency in technical and/or scientific documentation (eg white papers, user guides)
* Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources
* Experience with Agile/Scrum or another rapid development framework
* Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products
* Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra
* Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
* 10+ years of experience as a software developer, with exposure to cloud or high-performance computing
Technical Skills:
* Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices
* DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness)
* Experience with containerized deployment in cloud environments
* Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes)
* Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry)
* Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers)
* Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest)
* Experience with high-performance and distributed computing
* Experience with productivity tools such as Jira, Confluence, MS Office
* Experience with scripting languages such as Python is a plus
* Experience with numerical libraries and/or scientific computing is a plus
09/05/2024
Full time
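The listing above asks for experience with automated quality assurance frameworks (JUnit, TestNG, PyTest) alongside quantitative risk software. As a minimal sketch of that style of test-driven workflow, here is a hedged Python example; the `stressed_loss` function and all figures are invented for illustration and are not taken from the posting or any real model.

```python
# Hypothetical example: a tiny risk calculation with PyTest-style tests.
# The function and numbers are illustrative only, not from any real model.

def stressed_loss(position_value: float, shock_pct: float) -> float:
    """Loss on a position under a fractional price shock (positive = loss)."""
    if not 0.0 <= shock_pct <= 1.0:
        raise ValueError("shock_pct must be between 0 and 1")
    return position_value * shock_pct

def portfolio_stress(positions: dict[str, float], shock_pct: float) -> float:
    """Total stressed loss across a portfolio of position values."""
    return sum(stressed_loss(v, shock_pct) for v in positions.values())

# PyTest discovers functions named test_*; bare asserts serve as the checks.
def test_single_position():
    assert stressed_loss(1_000_000.0, 0.25) == 250_000.0

def test_portfolio_sums_positions():
    book = {"ES_FUT": 2_000_000.0, "CL_FUT": 500_000.0}
    assert portfolio_stress(book, 0.125) == 312_500.0

test_single_position()
test_portfolio_sums_positions()
```

Under PyTest the two `test_*` functions would be collected and run automatically (`pytest -q`); calling them directly at the bottom just makes the sketch self-checking as a plain script.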
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Java Risk Management Software Engineer. The candidate will develop and maintain risk models for margin, clearing fund and stress testing, with a focus on maintaining risk model software in production and the environments and infrastructure used in model implementation and testing, and will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand the technical capabilities for model development, back-testing and monitoring. Responsibilities:
* Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives
* Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources
* Develop CI/CD pipelines; configure, execute, and monitor execution pipelines for model testing, back-testing and monitoring
* Contribute to the development of QRM's databases and ETLs
* Integrate model prototypes, the model library and model testing tools using industry best practices and innovations
* Create unit and integration tests; build and enhance test automation tools
* Participate in code reviews and demo accomplishments
* Write technical documentation and user manuals
* Provide production support and perform troubleshooting
Qualifications:
* Strong programming skills: able to read and write code in a language such as Java, C++ or Python in a collaborative software development setting; the role requires advanced coding, database and environment manipulation skills
* Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments
* Proficiency in technical and/or scientific documentation (eg white papers, user guides)
* Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources
* Experience with Agile/Scrum or another rapid development framework
* Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products
* Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra
* Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
* 10+ years of experience as a software developer, with exposure to cloud or high-performance computing
Technical Skills:
* Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices
* DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness)
* Experience with containerized deployment in cloud environments
* Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes)
* Experience with logging, profiling, monitoring and telemetry (eg Splunk, OpenTelemetry)
* Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers)
* Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest)
* Experience with high-performance and distributed computing
* Experience with productivity tools such as Jira, Confluence, MS Office
* Experience with scripting languages such as Python is a plus
* Experience with numerical libraries and/or scientific computing is a plus
09/05/2024
Full time
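The role above calls for a good command of database technology and query languages (SQL). As a minimal, hedged sketch of that skill using only Python's built-in sqlite3 module; the table, columns, and figures are invented for illustration:

```python
# Hypothetical sketch of basic SQL usage with Python's built-in sqlite3.
# Table and column names ("trades", "account", "notional") are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (account TEXT, product TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("ACC1", "IRS", 5_000_000.0),
     ("ACC1", "FUT", 1_250_000.0),
     ("ACC2", "IRS", 2_000_000.0)],
)

# Aggregate notional per account, largest first -- the kind of query used
# when feeding a margin or stress-testing pipeline from a relational store.
rows = conn.execute(
    "SELECT account, SUM(notional) AS total "
    "FROM trades GROUP BY account ORDER BY total DESC"
).fetchall()
print(rows)  # [('ACC1', 6250000.0), ('ACC2', 2000000.0)]
```

The same `GROUP BY`/`ORDER BY` pattern carries over directly to the enterprise databases the posting has in mind; sqlite3 is used here only because it needs no server.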
We are a global IT recruitment specialist providing support to clients across the UK, Europe and Australia. We have an excellent job opportunity for you. Role: Debt Manager, Tallyman platforms. Duration: through to the end of 2024. Location: Knutsford; hybrid position, 2 days in office. Mandatory Skills:
* Proven experience in running large data migrations for complex business services, with both big-bang and phased approaches
* Strong background in presenting migration approaches, getting buy-in and executing migration plans
* Experience in managing and communicating with a variety of business, operations and technical stakeholders
* Demonstrates a high level of personal responsibility, pragmatism and autonomy, planning own work to meet given objectives within a defined framework
* Excellent leadership and communication skills; ability to navigate internal hierarchies and agendas and deliver what is required
* Ability to work effectively under tight deadlines
Desired Skills:
* Experience on Debt Manager, Tallyman platforms
Roles and Responsibilities: This role will lead and manage the data migration delivery for the BFA Cards portfolio, working closely with the business, change and engineering teams. The role holder will collaborate with cross-functional teams to define migration strategies, engaging all key stakeholders, and manage all migration events.
09/05/2024
Project-based
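The "big bang and phased approach" distinction the Debt Manager role asks about can be pictured with a toy batching sketch; everything here (function name, account IDs, wave size) is invented for illustration, and real migration planning of course involves far more than batching:

```python
# Hypothetical sketch contrasting the two migration approaches the listing
# names: "big bang" moves everything in one wave; "phased" moves fixed-size
# waves so each can be verified (and rolled back) independently.

def migrate(accounts, batch_size=None):
    """Yield waves of account IDs to migrate.

    batch_size=None models a big-bang migration (a single wave);
    a positive batch_size models a phased migration.
    """
    if batch_size is None:
        yield list(accounts)
        return
    for i in range(0, len(accounts), batch_size):
        yield list(accounts[i:i + batch_size])

accounts = [f"CARD-{n:04d}" for n in range(10)]
big_bang = list(migrate(accounts))               # one wave of 10
phased = list(migrate(accounts, batch_size=4))   # waves of 4, 4, 2
print(len(big_bang), [len(w) for w in phased])   # 1 [4, 4, 2]
```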
Our client is seeking a Senior Consultant in Data Engineering with extensive experience in MDM. This is a one-year FTE, hybrid role in London, UK. Experience details:
* Must have 12+ years' experience in architecting Data & Analytics platforms
* Minimum 5+ years' experience in banking MDM implementation, with at least 2 full implementations
* Must have 5+ years in Data Governance solutions
* Must have a strong understanding of banking regulations and their applicability to Data & Analytics platforms
* Must have 8+ years' experience with relational databases, NoSQL databases and/or big data technologies (eg Oracle, SQL Server, Postgres, Spark, Hadoop, other open source)
* Must have experience in data security solutions (Identity and Access Management and Data Security Access Management)
* Must have 3+ years' experience of DevOps (CI/CD)
* Certifications: MDM certified
* Must have SDLC experience (Agile/Waterfall)
* Drive the architecture of a project, including authoring functional and design specifications, scalability, testing, quality data flow, and interfaces
* Ability to lead and manage a team and interact with end-user clients
* Has worked in an onsite/offshore model
* Demonstrated excellent communication, presentation, and problem-solving skills
* Experience in project governance and enterprise customer management
Role details:
* Design Customer/Party MDM solutions
* Understanding of market-leading MDM platforms, with a comparative view of capabilities/offerings/limitations and accuracy
* Understanding of out-of-the-box AI/ML solutions of COTS products and their limitations; design to address MDM limitations
* Set up Customer 360
* Set up a single global customer ID for historic customers where multiple customer IDs were generated per line of business due to the siloed operations of the retail and wholesale businesses
* Design integrated ecosystems (CRM, KYC, screening, third party) with Customer MDM/Customer 360
* Define integration patterns of surrounding systems with MDM
* Understanding of customer screening and KYC requirements from a banking perspective
* Conduct MVP/POC
09/05/2024
Full time
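The "single global customer ID" requirement above is the classic MDM golden-record problem: the same customer holds different local IDs in the retail and wholesale lines of business. A minimal Python sketch of the idea follows; the matching rule, record layout, and ID format are all invented, and real MDM platforms use far richer probabilistic matching and survivorship rules.

```python
# Hypothetical MDM sketch: collapse per-line-of-business customer records
# into one global customer ID by matching on a normalised key. The exact-
# match rule and the "GCID-" format are illustrative assumptions only.

def match_key(record: dict) -> tuple:
    """Normalised key used to decide two records are the same customer."""
    return (record["name"].strip().lower(), record["email"].strip().lower())

def assign_global_ids(records: list[dict]) -> dict[str, str]:
    """Map each LOB-local customer ID to a single global customer ID."""
    global_ids: dict[tuple, str] = {}
    mapping: dict[str, str] = {}
    for rec in records:
        key = match_key(rec)
        if key not in global_ids:
            global_ids[key] = f"GCID-{len(global_ids) + 1:06d}"
        mapping[rec["local_id"]] = global_ids[key]
    return mapping

records = [
    {"local_id": "RET-001", "name": "Ada Lovelace", "email": "ada@example.com"},
    {"local_id": "WHL-942", "name": "ada lovelace ", "email": "ADA@example.com"},
    {"local_id": "RET-002", "name": "Alan Turing", "email": "alan@example.com"},
]
mapping = assign_global_ids(records)
print(mapping)
# {'RET-001': 'GCID-000001', 'WHL-942': 'GCID-000001', 'RET-002': 'GCID-000002'}
```

The retail and wholesale records for the same person resolve to one global ID while distinct customers keep distinct IDs, which is exactly the Customer 360 consolidation the role describes.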
On behalf of our client, an international financial service provider located in Prague, we are looking for an external resource with the skills and abilities stated below: IT Functional Analyst - System Requirements, UML, BPMN (f/m/x), financial area, Prague. Tasks and responsibilities:
* Understand business concepts and requirements and translate them into clear system specifications, in the required format, for our client's new risk system
* Communicate ideas clearly and act as a liaison between various business and IT teams
* Provide advice on expected functionality to the test teams
* Work independently on tasks but also cooperate within analytical and project teams
* Clearly communicate, challenge and be challenged about proposed solutions
Mandatory skills and experience:
* Experience in IT business and functional analysis
* Business and system requirements management, including gathering and elicitation
* Analytical and logical thinking to find the best-fitting long-term solutions
* Working proficiency and communication skills in English on a daily basis
* Knowledge of financial markets (bonds, equities, interest rate swaps, futures, options)
* Knowledge of modelling languages, mainly UML and BPMN
* Ability to work with relational databases and basic knowledge of SQL
* A degree in a business subject, a technical/quantitative subject (Computer Science, Math/Physics, Engineering), or equivalent experience
* Ability to learn quickly and self-study challenging topics
Optional skills:
* Awareness of IT architecture, data modelling, cloud technologies
* Knowledge of big data management and NoSQL database technologies
* Experience with JIRA, Confluence
* Knowledge of Enterprise Architect, Bizzdesign Horizzon or similar tools
* Knowledge of the Archimate methodology
* Positive attitude to analytical and statistical/mathematical work
Additional information: Start date of assignment: ASAP. Initial contract duration: until 31.12.2024. Degree of employment: Full-time. Location: Prague. Please let us know if this project is of interest to you and when you could be available. We are looking forward to your reply. Best regards, Andy. GDPR: Are you interested in this project and would like to send us your CV? Due to the General Data Protection Regulation (GDPR), we would like to ask you to give us your written consent, in your email, to the permanent storage of your data. We use your data exclusively for the purpose of our staffing activities. Of course, you have the right to information, correction, blocking or deletion of your data at any time. Template: "I agree to the permanent storage of my data. I know that I have the right to information, correction, blocking or deletion and can revoke this consent at any time."
08/05/2024
Project-based
French-Speaking Data Cloud Full Stack Solution Architect/Paris hybrid, 3 days per week onsite/8 months/start ASAP. Role & Responsibilities: In the context of a big data transformation initiative covering the complete set of our data capabilities (data architecture and engineering, data modelling, storage for data & analytics, data visualisation, data science, data integration, metadata management, data storage and warehousing), support the future Data Foundation platform technical architecture activities, including:
* Provide technical guidance and establish best practices for Snowflake account setup and configuration
* Manage infrastructure-as-code and maximise automation
* Manage enhancements and deployments to support a fully federated platform
* Act as Subject Matter Expert (SME) for all Snowflake-related questions on the project
* Own platform-specific Snowflake documentation (decisions, best practices, features)
* Communicate and demonstrate new features
* Design the cloud environment from a holistic point of view, ensuring it meets all functional and non-functional requirements
* Carry out deployment, maintenance, monitoring, and management tasks
* Oversee cloud security for the account
* Complete the integration of new applications into the cloud environment
Education: * Higher education completed, preferably with a degree in computer science. Experience: * 5 to 10 years' experience * Experience in putting in place data platforms in a cloud environment. Skills:
* Fluent in French & English (must)
* Deep Snowflake expertise
* Platform architecture
* DBA experience
* Cloud database administration
* AWS architecture
* Cloud networking specialist
* Excellence in communication, coordination & collaboration, stakeholder & risk management, and especially drive & leadership
* Open-minded and accepting of challenges
* Highly motivated, adaptable and flexible; willing to integrate into an existing environment and an existing project team
08/05/2024
Project-based
Rust Programmer - Brussels - English speaking (Rust, AWS, Lambda, Jenkins, Linux) One of our Blue Chip Clients is urgently looking for a Rust Programmer. Please find some details below: We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and in web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable. Key Responsibilities: 1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation. 2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs. 3. Manage databases used in production systems, ensuring data integrity, performance, and security. 4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication. 5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows. 6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js. 7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing. 8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment. 9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms. 10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions. 11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members. 12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, or a related field. - Extensive experience in the Rust programming language, with a focus on large-scale data processing applications. - Proficiency in machine learning techniques and GPU acceleration for image processing tasks. - Strong background in Linux environments and Shell Scripting. - Solid understanding of web development principles, with hands-on experience in React.js. - Experience with code deployment tools such as Jenkins and version control systems like Git. - In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management. - Familiarity with observability tools for monitoring and logging production systems. - Experience with database management systems and website authentication mechanisms. - Excellent problem-solving skills and the ability to work effectively in a collaborative team environment. - Strong communication skills and the ability to document technical solutions effectively. Preferred Qualifications: - Certification in AWS or relevant cloud computing technologies. - Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes. - Knowledge of DevOps practices and infrastructure-as-code tools like Terraform. - Understanding of cybersecurity principles and best practices for securing web applications.
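To give candidates a flavour of the first and fifth responsibilities above (high-throughput pixel processing in Rust), here is a minimal, self-contained sketch using only the standard library. It is illustrative, not a client requirement: the function name is hypothetical, and a production pipeline would typically reach for crates such as `rayon` and `image` rather than raw std threads.

```rust
use std::thread;

/// Convert an interleaved RGB byte buffer to grayscale using the
/// Rec. 601 luma weights, splitting the work across worker threads.
/// Illustrative sketch only; real pipelines would use `rayon`/`image`.
fn rgb_to_gray_parallel(rgb: &[u8], workers: usize) -> Vec<u8> {
    assert!(rgb.len() % 3 == 0, "buffer must contain whole RGB pixels");
    let pixels: Vec<[u8; 3]> = rgb.chunks_exact(3).map(|c| [c[0], c[1], c[2]]).collect();
    // Ceiling division so every pixel lands in some worker's chunk.
    let chunk = ((pixels.len() + workers.max(1) - 1) / workers.max(1)).max(1);
    let mut out = Vec::with_capacity(pixels.len());
    // Scoped threads (Rust 1.63+) let the workers borrow `pixels` directly.
    thread::scope(|s| {
        let handles: Vec<_> = pixels
            .chunks(chunk)
            .map(|part| {
                s.spawn(move || {
                    part.iter()
                        .map(|&[r, g, b]| {
                            // Integer Rec. 601: Y = 0.299 R + 0.587 G + 0.114 B
                            ((299 * r as u32 + 587 * g as u32 + 114 * b as u32) / 1000) as u8
                        })
                        .collect::<Vec<u8>>()
                })
            })
            .collect();
        for h in handles {
            out.extend(h.join().unwrap());
        }
    });
    out
}

fn main() {
    // Two pixels: pure red and pure white.
    let rgb = [255u8, 0, 0, 255, 255, 255];
    let gray = rgb_to_gray_parallel(&rgb, 2);
    println!("{:?}", gray); // red -> 76, white -> 255
}
```

The chunk-per-worker split mirrors the shape of larger jobs (e.g. EMR partitions): each worker owns a contiguous slice, results are concatenated in order, and no locking is needed because the input is only read.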
Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based