Data Analyst/Data Engineer - Cork, Ireland (Hybrid Working) - Contract *Must live in Cork

TEKsystems is thrilled to offer an exciting opportunity for a Data Engineer/Analyst (2-4 years' experience) to join our dynamic team of software developers and data scientists in the Business Analytics team for one of the world's largest technology companies.

Why This Role Is Exciting:
- Innovative Culture and Collaboration: Our client fosters a creative and collaborative environment. Their visionary leadership, commitment to innovation, and unique culture contribute to employee contentment.
- Consumer-Centric Approach: Our client's focus on simplicity and consumer-first attitude sets it apart. In a world filled with complex features and gadgets, our client stands out by prioritising what truly matters.

Key Requirements for Success:
We are seeking a Data Engineer to support innovative data pipeline projects working across a broad, modern tech stack. You must have experience working with modern databases such as Snowflake and MySQL, and an interest in visualisation tools such as Tableau and Power BI. Any skills in Big Data and process orchestration are beneficial.

Role Details:
- Job Title: Data Engineer
- Location: Cork City, Ireland
- Job Type: Contract
- Office Days: 3 days a week
- Experience: 2-4 years

If you're a Data Analyst/Data Engineer seeking your next opportunity, apply directly or reach out.

Trading as TEKsystems. Allegis Group Limited, Bracknell, RG12 1RT, United Kingdom. Allegis Group Limited operates as an Employment Business and Employment Agency as set out in the Conduct of Employment Agencies and Employment Businesses Regulations 2003. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group").
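The pipeline work described above typically means extracting data from an operational store (such as MySQL) and loading it into a warehouse (such as Snowflake). Purely as an illustration of that extract-and-load pattern, and not part of the advert, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for both databases so it runs anywhere; all table and column names are hypothetical:

```python
import sqlite3

# Stand-ins for an operational source (e.g. MySQL) and a warehouse
# (e.g. Snowflake). sqlite3 is used only so this sketch is self-contained.
source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])

warehouse.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL)")

# Extract from the source, then load into the warehouse in one batch.
rows = source.execute("SELECT id, amount FROM orders").fetchall()
warehouse.executemany("INSERT INTO fact_orders VALUES (?, ?)", rows)

total = warehouse.execute("SELECT SUM(amount) FROM fact_orders").fetchone()[0]
print(total)  # 35.5
```

In a real engagement the two connections would point at different systems and the load step would usually be a bulk operation, but the extract/load shape stays the same.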
Aerotek, Aston Carter, EASi, Talentis Solutions, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice, available on our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and, as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. We process personal data in line with our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield.
08/05/2024
Project-based
On behalf of our client, an international financial service provider located in Prague, we are looking for an external resource with the skills and abilities stated below:

IT Functional Analyst - System Requirements, UML, BPMN (f/m/x) - financial area - Prague

Tasks and responsibilities:
- Understand business concepts and requirements and translate them into clear system specifications, in the required format, for our client's new risk system
- Communicate ideas clearly and act as a liaison between various business and IT teams
- Provide advice on expected functionality to the test teams
- Work independently on tasks while also cooperating within the analytical and project teams
- Clearly communicate, challenge and be challenged about proposed solutions

Mandatory skills and experience:
- Experience in IT business and functional analysis
- Business and system requirements management, including gathering and elicitation
- Analytical and logical thinking to find the best-fitting long-term solutions
- Working proficiency and communication skills in English on a daily basis
- Knowledge of financial markets (bonds, equities, interest rate swaps, futures, options)
- Knowledge of modelling languages, mainly UML and BPMN
- Ability to work with relational databases and basic knowledge of SQL
- A degree in a business subject, a technical/quantitative subject (Computer Science, Maths/Physics, Engineering), or equivalent experience
- Ability to learn quickly and self-study challenging topics

Optional skills:
- Awareness of IT architecture, data modelling, cloud technologies
- Knowledge of big data management and NoSQL database technologies
- Experience with JIRA, Confluence
- Knowledge of Enterprise Architect, Bizzdesign Horizzon or similar tools
- Knowledge of the ArchiMate methodology
- Positive attitude to analytical and statistical/mathematical work

Additional information:
- Start date of assignment: ASAP
- Initial contract duration: until 31.12.2024
- Degree of employment: Full-time
- Location: Prague

Please let us know if this project is of interest to you and when you could be available. We look forward to your reply.

Best regards,
Andy

GDPR: Interested in this project and would like to send us your CV? Due to the General Data Protection Regulation (GDPR), we would like to ask you to give us your written consent, in your email, to the permanent storage of your data. We use your data exclusively for the purpose of our staffing activities. Of course, you have the right to information, correction, blocking or deletion of your data at any time. Template: "I agree to the permanent storage of my data. I know that I have the right to information, correction, blocking or deletion and can revoke this consent at any time."
08/05/2024
Project-based
Our client in the luxury retail sector is looking for an English-speaking Data Engineer to join an exciting long-term project. We are looking for someone with at least 4 years' data engineering expertise and knowledge of, or certification in, Google Cloud Platform (GCP).

Experience needed in the following - this must be detailed in your CV!
- Data engineering
- dbt (data build tool)
- BigQuery
- GCP

Start: ASAP
Duration: 36 months
Location: 100% remote
Languages: English
Rate: €275 per day
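For context on the dbt/BigQuery combination mentioned above: a dbt model is essentially a SELECT statement that `dbt run` materialises as a table or view in the warehouse. As an illustration only (not part of the advert), the sketch below runs a hypothetical model against Python's built-in sqlite3 instead of BigQuery so it is self-contained; all table names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO raw_sales VALUES (?, ?)",
                [("EU", 100.0), ("EU", 50.0), ("US", 75.0)])

# A dbt model is a SELECT; dbt wraps it in CREATE TABLE/VIEW ... AS
# according to the configured materialisation. We do that step by hand here.
model_sql = """
    SELECT region, SUM(amount) AS total_amount
    FROM raw_sales
    GROUP BY region
"""
con.execute(f"CREATE TABLE sales_by_region AS {model_sql}")

rows = con.execute(
    "SELECT region, total_amount FROM sales_by_region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 150.0), ('US', 75.0)]
```

With real dbt the SELECT would live in a `.sql` model file, references to upstream tables would use `ref()`/`source()`, and BigQuery would execute the generated DDL.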
08/05/2024
Project-based
French-Speaking Data Cloud Full Stack Solution Architect/Paris Hybrid 3 days per week onsite/8 months/Start ASAP

Role & Responsibilities:
In the context of a major data transformation initiative covering the complete set of our data capabilities (data architecture and engineering, data modelling, storage for data & analytics, data visualisation, data science, data integration, metadata management, data storage and warehousing), support the future Data Foundation platform technical architecture activities, including:
- Provide technical guidance and establish best practices for Snowflake account setup and configuration
- Manage Infrastructure-as-Code and maximise automation
- Manage enhancements and deployments to support a fully federated platform
- Act as Subject Matter Expert (SME) for all Snowflake-related questions on the project
- Own platform-specific Snowflake documentation (decisions, best practices, features)
- Communicate and demonstrate new features
- Design the cloud environment from a holistic point of view, ensuring it meets all functional and non-functional requirements
- Carry out deployment, maintenance, monitoring, and management tasks
- Oversee cloud security for the account
- Complete the integration of new applications into the cloud environment

Education:
- Higher education completed; a degree in Computer Science is required.

Experience:
- 5 to 10 years' experience
- Experience in putting in place data platforms in a cloud environment

Skills:
- Fluent in French & English (must)
- Deep Snowflake expertise
- Platform architecture
- DBA experience
- Cloud database administration
- AWS architecture
- Cloud networking specialist
- Excellence in communication, coordination & collaboration, and stakeholder & risk management, with drive & leadership
- Open-minded and accepting of challenges
- Highly motivated, adaptable and flexible; willing to integrate into an existing environment and an existing project team
08/05/2024
Project-based
NO SPONSORSHIP

Associate Principal, Software Engineering - QRM
SALARY: $135k - $145k, up to roughly $150k, plus 15% bonus
LOCATION: CHICAGO, IL (Hybrid: 3 days onsite, 2 days remote)

SELLING POINTS: Develops and maintains risk models for margin, clearing fund and stress testing, and risk model software in production. AWS; CI/CD pipelines; Java, C#, Python; Agile/Scrum; financial products knowledge a plus (markets, financial derivatives, equities, interest rates, commodity products); Java preferred; infrastructure as code, Kubernetes, Terraform, Splunk, OpenTelemetry, SQL, big data; scripting in Python.

This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production and the environments and infrastructure used in model implementation and testing. This role will collaborate with other developers, quantitative analysts, business users, and data & technology staff to expand QRM's technical capabilities for model development, backtesting and monitoring.

Primary Duties and Responsibilities:
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
- Contribute to the development of QRM's databases and ETLs.
- Integrate model prototypes, the model library and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
- Track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments.
- Proficiency in technical and/or scientific documentation (e.g., white papers, user guides).
- Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
- Experience with Agile/Scrum or another rapid development framework.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (e.g., Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure as code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes).
- Experience with logging, profiling, monitoring, and telemetry (e.g., Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (e.g., Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (e.g., JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.
- Experience with productivity tools such as Jira, Confluence, MS Office.
- Experience with scripting languages such as Python is a plus.
- Experience with numerical libraries and/or scientific computing is a plus.

Education and/or Experience:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
- 7+ years of experience as a software developer with exposure to cloud or high-performance computing

Certificates or Licenses:
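The role above pairs model software with unit and integration tests (JUnit/TestNG/PyTest are named). As a purely illustrative sketch of that practice, here is a pytest-style pair of tests around a simple pricing function; the function itself is hypothetical and is not the client's model:

```python
import math

def forward_price(spot: float, rate: float, years: float) -> float:
    """Continuously compounded forward price: F = S * exp(r * t).

    Illustrative only; real risk-model code would be far richer.
    """
    return spot * math.exp(rate * years)

def test_zero_rate_forward_equals_spot():
    # With a zero rate the forward must equal the spot price.
    assert forward_price(100.0, 0.0, 1.0) == 100.0

def test_positive_rate_forward():
    # Check against the closed-form value within floating-point tolerance.
    expected = 100.0 * math.exp(0.05 * 2.0)
    assert abs(forward_price(100.0, 0.05, 2.0) - expected) < 1e-9

# pytest would discover and run these automatically; we call them directly
# here so the sketch is self-contained.
test_zero_rate_forward_equals_spot()
test_positive_rate_forward()
print("tests passed")
```

The same pattern scales up: each pricing or margin routine gets deterministic unit tests with known closed-form or tolerance-bounded expectations, which CI/CD pipelines then run on every change.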
08/05/2024
Full time
Job Description: Data Engineer

Job Description for Data Journey USBIE:
- Experience developing and deploying application code using SQL, Python, and Spark.
- 3-5 years' experience developing and deploying data pipelines in the cloud.
- 3-5 years' experience in AWS using Glue, Athena, Lambda, Secrets Manager, Redshift, Redshift Spectrum, PostgreSQL, CloudFormation, Step Functions, S3, EC2, and boto3.
- Securing data using IAM and Active Directory.
- Experience developing Linux/UNIX shell scripts.
- Developer experience in a big data or warehouse (Hadoop, DB2 LUW) environment.
- Experience with monitoring and logging techniques used in conjunction with CloudWatch and Splunk.
- Working knowledge of version control tools and branching techniques using Artifactory, Jenkins, and Bitbucket.
- 3-5 years' experience with Power BI.

Additional information on background:
- Agile experience expected.
- Leadership/soft skills: collaborative team player; self-starter; curious about technology.
- Overall experience: seeking developers with Python, Spark, PySpark, and SQL developer experience in AWS.
- AWS Cloud Practitioner certification, or another AWS certification, is a plus.
- Industries: financial services tech experience is a plus.

Duration: 4-10 months
Target Start Date: ASAP
Years of Experience: 3 to 5
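Much of the AWS stack listed above (Lambda, Step Functions, Glue) revolves around small event-driven handlers that validate and reshape records before writing them downstream. As an illustration only, here is a hypothetical Lambda-style handler in plain Python; the event shape and field names are invented, and no AWS SDK is used so the sketch runs anywhere:

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda-style handler: drop incomplete records and
    normalise amounts before they would be written downstream
    (e.g. to S3 or Redshift)."""
    records = event.get("records", [])
    cleaned = [
        {"id": r["id"], "amount": round(float(r["amount"]), 2)}
        for r in records
        if r.get("amount") is not None  # discard records missing an amount
    ]
    return {"statusCode": 200, "body": json.dumps(cleaned)}

# Simulate an invocation with one good and one incomplete record.
result = handler({"records": [{"id": 1, "amount": "10.456"},
                              {"id": 2, "amount": None}]})
print(result["body"])  # [{"id": 1, "amount": 10.46}]
```

In a real deployment this function would be packaged and invoked by AWS Lambda (the `event`/`context` signature matches Lambda's Python handler convention), with Step Functions or Glue triggers orchestrating it within the wider pipeline.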
07/05/2024
Project-based
Alexander Ash are currently working with a global firm who are looking for a Data Architect to join their multi-disciplinary team. This is an exciting opportunity for any Data Architect to join a team of skilled and experienced consultants and seek to identify improvements and efficiencies, while utilising new technologies and existing tools, as the organisation takes on one of its biggest bodies of work.

Skill Requirements & Responsibilities:
- Creating the blueprint for how data is stored, accessed, and used within an organization.
- Specific Microsoft technology awareness (Data Fabric and Data Mesh a priority).
- Establish data governance policies and procedures to ensure data quality, consistency, privacy, and compliance with relevant regulations.
- Define the overall data architecture, including data warehouses, data lakes, data marts, and other data repositories.
- Collaborate with cross-functional teams, including data analysts, data engineers, and business stakeholders, to understand data needs and ensure alignment with business objectives.
- Consulting experience.
07/05/2024
Full time
Rust Programmer - Remote - 7-8 months+ (Rust, AWS, Lambda, Jenkins, Linux)

One of our Blue Chip Clients is urgently looking for a Rust Programmer. For this role you can work remotely. Please find some details below:

We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable.

Key Responsibilities:
1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation.
2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
3. Manage databases used in production systems, ensuring data integrity, performance, and security.
4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing.
8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms.
10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions.
11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and shell scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and the ability to work effectively in a collaborative team environment.
- Strong communication skills and the ability to document technical solutions effectively.

Preferred Qualifications:
- Certification in AWS or relevant cloud computing technologies.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of DevOps practices and infrastructure-as-code tools like Terraform.
- Understanding of cybersecurity principles and best practices for securing web applications.
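As a rough, hedged illustration of the AWS Lambda and S3 responsibilities listed above: a common pattern is an S3 `ObjectCreated` notification triggering a Lambda function that dispatches each new object to an image-processing step. The sketch below is a minimal Python handler under assumed conditions; the `process_image` stub, bucket names, and event contents are illustrative, not part of the role description.

```python
import json
import logging

logger = logging.getLogger(__name__)

def process_image(bucket: str, key: str) -> dict:
    # Placeholder for the real processing step (in this role's stack,
    # presumably a call into a Rust binary or service).
    return {"bucket": bucket, "key": key, "status": "processed"}

def handler(event, context=None):
    """Minimal AWS Lambda handler for S3 ObjectCreated notification events."""
    results = []
    # S3 notifications deliver one or more records, each naming the
    # bucket and object key that triggered the event.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        logger.info("processing s3://%s/%s", bucket, key)
        results.append(process_image(bucket, key))
    return {"statusCode": 200, "body": json.dumps(results)}
```

In production this would be deployed behind an S3 event notification; here the handler only parses the standard record shape and returns a JSON summary.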
Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
F5 WAF Engineer

Whitehall Resources are looking for an F5 WAF Engineer. This is an initial 6-month contract, working onsite 2 days per week in Sheffield. *Inside IR35 - You will be required to use an FCSA Accredited Umbrella Company*

Job Description: As an Automation Engineer, you will play a pivotal role in enhancing our IT infrastructure by designing, creating, and maintaining bespoke Continuous Integration/Continuous Deployment (CI/CD) pipelines tailored to specific project needs. This role will have an initial focus on leveraging F5 technologies alongside a broad spectrum of automation and DevOps practices to deliver our automation use cases; however, once the F5 automation work is complete, work will progress to other WAF platforms and use cases. You will be responsible for the integration of CI/CD pipelines with solutions developed by other teams, scripting, and the creation of Infrastructure as Code (IaC) manifests using tools like Terraform and Ansible. Your expertise in Jenkins, JIRA, GitHub, Python, and other relevant technologies will be essential. You should have a solid background in building CI/CD pipelines and a comprehensive understanding of DevOps practices. The ideal candidate should not only have technical proficiency in data structures, automation technologies, API interactions, and cloud services, but also exhibit a strong drive to research, investigate, and collaborate effectively within the organization.

Key Responsibilities:
- Developing and Delivering Automation for the F5 WAF Platform: In the first instance, developing and delivering automation solutions specifically for our F5 Web Application Firewall (WAF) platform, aligned with our specific use cases. This involves scripting, configuring, and deploying automation workflows that enhance the security, manageability, and operational efficiency of the F5 WAF environment.
- CI/CD Pipeline Development: Create, enhance, and implement new, customized CI/CD pipelines tailored for specific project use cases, ensuring efficient, automated workflows.
- Pipeline Maintenance: Regularly update and maintain existing CI/CD pipelines to ensure they are efficient, secure, and up to date with the latest technology standards.
- Integration of Solutions: Work collaboratively with other teams to integrate their solutions and tools into the CI/CD pipelines effectively, enhancing overall workflow and productivity.
- IaC Manifests Creation: Develop and maintain Infrastructure as Code (IaC) manifests, predominantly using Terraform, to manage and provision IT infrastructure in a consistent and repeatable manner.
- Tool Proficiency: Utilize and demonstrate expertise in tools such as Jenkins, JIRA, GitHub, and Python, effectively integrating them into the CI/CD processes.
- Script Writing: Write and maintain scripts to automate various aspects of the infrastructure and deployment processes, improving efficiency and reducing the potential for human error.
- Collaboration and Communication: Collaborate with cross-functional teams, including software development, operations, and quality assurance, to ensure seamless integration and implementation of DevOps practices.
- Proactive Research and Collaboration: Eager to research and utilize company resources like Confluence, find relevant contacts, and reach out to other teams for unknowns. Prepared to independently investigate and resolve challenges.

Required F5 Experience - one or more of the following:
- F5 ASM/AWAF Knowledge & Experience: Understanding and practical experience with F5's Application Security Manager (ASM) and Advanced WAF (AWAF), including configuration, management, and troubleshooting of application security policies and web application firewalls.
- F5 with API Gateway: Experience integrating F5 solutions with API Gateway technologies, demonstrating the ability to secure and manage APIs effectively. Experience in using F5 with Kong API Gateway, managing and optimizing API traffic through F5 systems.
- F5 GTM and Proxy Technologies: Knowledge and experience with F5's Global Traffic Manager (GTM), as well as experience with proxy technologies, including forward and reverse proxies.
- Basic Certificate Management: Knowledge of SSL/TLS certificate management processes, including issuance, renewal, and deployment, within F5 environments.
- F5 AS3: Experience with AS3 (Application Services 3 Extension) for declarative automation and orchestration of F5 BIG-IP services. Proficiency in automating the deployment and management of F5 configurations using AS3.

Key Experience - Ideal Candidate Profile:
- Technical Expertise in CI/CD Tools: Proficiency in Continuous Integration and Continuous Deployment tools such as Jenkins, CircleCI, Travis CI, GitLab CI, and Bamboo. Ability to configure, manage, and optimize these tools for various project requirements.
- Proficiency in Scripting Languages: Strong skills in scripting languages such as Python, Bash, and PowerShell. Ability to write and maintain scripts to automate routine tasks and deployments.
- Infrastructure as Code (IaC): Extensive experience in creating and managing infrastructure using code. Proficiency in IaC tools like Terraform, Ansible, Chef, or Puppet.
- Data Structuring and Management: Advanced skills in managing data using formats like JSON, YAML, XML, and others. Capable of parsing, creating, and maintaining complex data structures for configuration and automation purposes.
- API Integration and Management: Expertise in querying, integrating, and managing APIs. Capable of constructing and executing API calls for data retrieval, updates, and inter-service communication.
- Version Control Systems: In-depth knowledge of version control systems like Git, including branching strategies, repository management, and integration with CI/CD pipelines.
- Containerization and Orchestration: Experience with containerization tools such as Docker and orchestration platforms like Kubernetes or Docker Swarm. Understanding of containerized environments and their integration into CI/CD pipelines.
- Cloud Platforms: Familiarity with major cloud platforms like AWS, Azure, or GCP; understanding of cloud-specific services and how to integrate them into CI/CD processes.
- Monitoring and Logging: Knowledge of monitoring and logging tools such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), or Splunk. Ability to set up and maintain monitoring and logging for applications and infrastructure.
- Security Practices in DevOps (DevSecOps): Understanding of security practices in a DevOps environment. Familiarity with security scanning tools, implementing secure coding practices, and ensuring compliance with industry standards.
- Agile and Scrum Methodologies: Experience with Agile and Scrum methodologies. Ability to work in fast-paced, iterative development environments and adapt to changing requirements.
- Networking and Security Fundamentals: Knowledge of networking concepts (e.g., TCP/IP, DNS, HTTP/S) and basic security concepts (e.g., firewalls, VPNs, IDS/IPS).
- Problem-Solving and Analytical Skills: Strong problem-solving skills and the ability to analyze complex systems and workflows to propose effective automation solutions.
- Collaboration and Communication: Excellent collaboration and communication skills. Ability to work effectively in a team and communicate complex technical concepts to both technical and non-technical stakeholders.
- Project Management Skills: Basic project management skills with the ability to manage timelines, dependencies, and deliverables in a cross-functional environment.
- Research and Investigative Skills: Motivated to self-educate and explore company resources and external knowledge bases.
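For context on the AS3 experience mentioned above: AS3 configures BIG-IP declaratively, with the whole configuration expressed as one JSON document that a pipeline POSTs to the BIG-IP's `/mgmt/shared/appsvcs/declare` endpoint. The sketch below builds a deliberately simplified declaration in Python; the tenant, application, virtual address, and pool member values are illustrative assumptions, and a real declaration would follow the full AS3 schema for the target version.

```python
import json

def build_as3_declaration(tenant: str, vip: str, servers: list[str]) -> dict:
    """Build a simplified AS3 declaration for one HTTP virtual server.

    Follows the nested AS3 shape (AS3 -> ADC -> Tenant -> Application);
    all names and addresses here are placeholders for illustration.
    """
    return {
        "class": "AS3",
        "action": "deploy",
        "declaration": {
            "class": "ADC",
            "schemaVersion": "3.0.0",
            tenant: {
                "class": "Tenant",
                "web_app": {
                    "class": "Application",
                    "template": "http",
                    "serviceMain": {
                        "class": "Service_HTTP",
                        "virtualAddresses": [vip],
                        "pool": "web_pool",
                    },
                    "web_pool": {
                        "class": "Pool",
                        "monitors": ["http"],
                        "members": [
                            {"servicePort": 80, "serverAddresses": servers}
                        ],
                    },
                },
            },
        },
    }

# In a CI/CD job this JSON body would be POSTed (with credentials) to
# https://<big-ip>/mgmt/shared/appsvcs/declare; here we only serialize it.
payload = json.dumps(build_as3_declaration("Demo", "192.0.2.10", ["10.0.0.1"]))
```

Keeping the declaration as data like this is what makes AS3 pipeline-friendly: the document can be templated, diffed in Git, and redeployed idempotently.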
All of our opportunities require that applicants are eligible to work in the specified country/location, unless otherwise stated in the job description. Whitehall Resources are an equal opportunities employer who value a diverse and inclusive working environment. All qualified applicants will receive consideration for employment without regard to race, religion, gender identity or expression, sexual orientation, national origin, pregnancy, disability, age, veteran status, or other characteristics.
07/05/2024
Project-based
Your opportunity
- To work on our mission to empower every person and every business unit in the group to achieve more thanks to the Microsoft Power Platform
- To support everyone to build great solutions in Microsoft PowerApps, Power Automate and Power BI with high business value
- To work with internal Zurich teams and external IT suppliers on a variety of initiatives and global projects
- To join the experienced Power Platform Center for Enablement of one of the biggest Power Platform consumers in the world

As a Power Platform Solution Architect your main responsibilities will involve:
- Empowerment Program: Identification of teams and individuals interested in learning more about the Power Platform; delivery of tailored Power Platform trainings internally to empower our collaborators to deliver better value to internal and external customers
- Reusability: Identification of successful solutions built internally which could be reused across the organization to further increase the related ROI; implementation of improvements on such solutions to support scale and roll-out to a wider population
- Power Pages Governance: Assessment of Power Pages technology, definition and implementation of a suitable governance strategy for the organization; identification of a leading use case to implement and showcase the product
- Mentoring lower-level colleagues
- Working in an Agile methodology (Scrum, Kanban) using Azure DevOps

Your Experience
As a Power Platform Solution Architect your skills and qualifications will ideally include:
- Deep knowledge of Power Platform technologies, with experience in 3 or more of the following: SharePoint Online, Microsoft Teams, Dynamics 365, Power BI, Power Apps, Power Automate, Dataverse, Power Pages
- Preferably some experience in IT Governance
- Preferably a Software Engineering degree - Informatics and Computer Engineering
- Good negotiating skills, performance management, good practice and techniques, as well as fluent written and spoken English
- A very good team player who is skilled at building and managing stakeholder relationships successfully
- Ideally, you already hold Power Platform certifications

Your Technical Skills
- Power Platform products (PowerApps, Power Automate, AI Builder, etc.)
- Microsoft Office 365 (SharePoint Online, MS Teams, MS Forms, Outlook, etc.)
- Azure Cloud Services

Job Title: Microsoft Power Platform Solution Architect
Location: Zürich, Switzerland
Job Type: Contract

TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request.
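The skills list in the posting above names Dataverse among the Power Platform technologies. As a rough, hedged illustration of the automation that often sits alongside governance work: Dataverse exposes an OData Web API, and an inventory script might query it for solution components. The sketch below only builds the request using Python's standard library; the environment URL and table name are illustrative assumptions, and a real call additionally requires a valid Azure AD bearer token for the environment.

```python
import urllib.parse
import urllib.request

def build_dataverse_request(env_url: str, table: str, select: list[str], token: str):
    """Build (but do not send) a Dataverse Web API OData query request.

    env_url and table are placeholders; the headers follow the standard
    OData v4 conventions used by the Dataverse Web API.
    """
    query = urllib.parse.urlencode({"$select": ",".join(select), "$top": "10"})
    url = f"{env_url}/api/data/v9.2/{table}?{query}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",  # token acquisition not shown
            "Accept": "application/json",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
        },
    )

# Hypothetical environment and table, for illustration only.
req = build_dataverse_request(
    "https://contoso.crm4.dynamics.com", "workflows", ["name", "category"], "<token>"
)
```

Sending the request (for example with `urllib.request.urlopen`) is deliberately omitted, since it would need real credentials and a real environment.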
07/05/2024
Project-based
Rust Programmer - Brussels - English speaking (Rust, AWS, Lambda, Jenkins, Linux)

One of our Blue Chip Clients is urgently looking for a Rust Programmer. Please find some details below:

We are seeking a highly skilled Senior Rust Programmer with extensive experience in large-scale image data processing and automation. The ideal candidate will possess a strong background in the Rust programming language, coupled with proficiency in machine learning, GPU acceleration, and cloud computing technologies, particularly AWS EMR. Additionally, expertise in Linux environments and web development using React.js is essential for this role. The candidate should also demonstrate proficiency in AWS services, particularly AWS S3, AWS Lambda, networking, permissions management, and observability tools. The role involves not only developing robust, efficient code but also ensuring seamless deployment, maintenance, and support of production systems. Experience in database management, website authentication, HTTPS certificates, and adherence to best practices for data archiving is highly desirable.

Key Responsibilities:
1. Collaborate in developing, improving, and maintaining high-performance Rust applications for large-scale image data processing and automation.
2. Implement best practices for data archiving, ensuring compliance with regulatory requirements and business needs.
3. Manage databases used in production systems, ensuring data integrity, performance, and security.
4. Implement website authentication mechanisms and manage HTTPS certificates for secure communication.
5. Utilize machine learning techniques and GPU acceleration to optimize image processing workflows.
6. Collaborate with cross-functional teams to integrate image processing modules into web applications using React.js.
7. Deploy, configure, and manage production systems on AWS, with a focus on AWS EMR for big data processing.
8. Implement continuous integration and deployment pipelines using Jenkins for efficient code deployment.
9. Ensure observability of systems through proper logging, monitoring, and alerting mechanisms.
10. Manage AWS resources including S3 buckets, Lambda functions, networking configurations, and permissions.
11. Document production code and architectural decisions to facilitate knowledge sharing and onboarding of new team members.
12. Provide support and maintenance for production systems, troubleshooting issues and implementing timely resolutions.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience in the Rust programming language, with a focus on large-scale data processing applications.
- Proficiency in machine learning techniques and GPU acceleration for image processing tasks.
- Strong background in Linux environments and shell scripting.
- Solid understanding of web development principles, with hands-on experience in React.js.
- Experience with code deployment tools such as Jenkins and version control systems like Git.
- In-depth knowledge of AWS services, particularly EMR, S3, Lambda, networking, and permissions management.
- Familiarity with observability tools for monitoring and logging production systems.
- Experience with database management systems and website authentication mechanisms.
- Excellent problem-solving skills and the ability to work effectively in a collaborative team environment.
- Strong communication skills and the ability to document technical solutions effectively.

Preferred Qualifications:
- Certification in AWS or relevant cloud computing technologies.
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes.
- Knowledge of DevOps practices and infrastructure-as-code tools like Terraform.
- Understanding of cybersecurity principles and best practices for securing web applications.
Please send CV for full details and immediate interviews. We are a preferred supplier to the client.
07/05/2024
Project-based
Job Title: Delivery Engineer - F5 LTM Replacements Programme
Contract: Inside IR35
Day Rate: £400
Location: Flexible/On-site at Data Centers

About the Role: We are seeking a mid-level Delivery Engineer to join our established team, managing the physical replacement of legacy F5 LTM hardware as part of our critical infrastructure upgrade programme. This role is ideal for a professional with experience in highly regulated financial environments and a strong background in delivering technical projects from design through to completion.

Key Responsibilities:
- Own the replacement process for F5 LTM hardware, ensuring successful migration from legacy systems (including F5-BIG-LTM-4200V, F5-BIG-LTM-5250V, and others) to new platforms.
- Manage pre-build activities such as discovering existing/new patching requirements, rack identification and allocation, and preparation of patching sheets.
- Oversee device build and configuration activities, including setting up console and management connectivity, extending data and VLANs, configuring rSeries, and deploying new certificates.
- Coordinate the migration phase, including service mapping, scheduling migration dates, conducting pre-migration testing, and executing migrations with adherence to the change management process.
- Handle incident management and resolution of any issues arising during or after the migration.

Requirements:
- Proven experience in managing IT hardware replacements, preferably within financial services or other highly regulated sectors.
- Familiarity with F5 hardware platforms, particularly the BIG-IP and LTM series.
- Strong understanding of data center operations, including device and network configuration, and compliance with security standards.
- Excellent project management skills and the ability to manage multiple tasks simultaneously.
- Strong communication skills for coordinating with internal teams and external vendors.
07/05/2024
Project-based
Subject: Cloud Consultant/Architect - On-Site - Gloucestershire/Bristol - £65 to £95K - AWS - IaaS - PaaS - Kubernetes - Automation
Job Title: Cloud Technical Consultant/Architect
Location: Gloucestershire/Bristol
Salary: £65 - £95K Per Annum
Benefits: Bonus, flexible working hours, career opportunities, private medical, excellent pension, and social benefits

Active DV Clearance is highly desirable. Please note that candidates will need to be eligible to undergo DV Clearance.

The Client: Curo are collaborating with a global edge-to-cloud company advancing the way people live and work. They help companies connect, protect, analyse, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today's complex world.

The Candidate: This is a fantastic opportunity for someone who has big ambitions and an outstanding ability to create strong relationships, or for a dynamic and seasoned Technologist who is looking for new and exciting opportunities to make a difference. Your focus will be to provide clients with the optimal consultative service and experience, resulting in business outcomes that meet core client values and business requirements. If you are looking for challenges in a fast-paced, thriving, international work environment, then we definitely want to hear from you.

The Role: This is a brand-new opportunity for a bright, driven, customer-focussed professional to join our client's 'Cloud Delivery' team and work alongside our Enterprise Cloud specialists to drive forward the design, deployment and operations of Cloud Infrastructure, Automation and Containerisation projects for the end-client. The delivery team helps valued clients adopt the most effective Cloud solution to suit the organisational requirements of a dynamic and fast-paced business. They support them to exploit maximum business benefit from Cloud solutions, leveraging best-in-class internal and Partner technologies to create relevant and engaging experiences.

Duties:
- Support the design and development of new capabilities: preparing solution options, investigating technology, designing and running proofs of concept, providing assessments, advice and solution options, and producing high-level and low-level design documentation.
- Provide Cloud engineering capability to leverage Public Cloud platforms using automated build processes deployed with Infrastructure as Code.
- Provide technical challenge and assurance throughout the development and delivery of work.
- Develop re-usable common solutions and patterns to reduce development lead times, improve commonality, and lower Total Cost of Ownership.
- Work independently and/or within a team using a DevOps way of working.

Required Technical Skills & Experience:
- Experienced in Cloud-native technologies in AWS.
- Experienced in deploying IaaS/PaaS in multi-Cloud environments.
- Experienced in Cloud and Infrastructure Engineering: building and testing new capabilities, and supporting the development of new solutions and common templates.
- Able to act as a bridge from the infrastructure through to user-facing systems.

Desirable Technical Skills & Experience:
- Experienced with Kubernetes containers.
- Experienced in the use of automation tools, e.g. Terraform, Ansible, Foreman, Puppet, and Python.
- Experienced with different flavours of Linux platforms and services.

To apply for this Cloud Consultant/Architect permanent job, please click the button below and submit your latest CV. Curo Services endeavours to respond to all applications; however, this may not always be possible during periods of high volume. Thank you for your patience.
Curo Services is a trading name of Curo Resourcing Ltd and acts as an Employment Business for contract and temporary recruitment as well as an Employment Agency in relation to permanent vacancies.
06/05/2024
Full time
Well, here's a refreshing break from all those Java Developer jobs you'll see advertised by the big financial institutions in the Glasgow area.

Client Details: This role is with a small, long-established technology business that is a leader in its field. You'll join a small team of developers to develop and enhance product features in line with the needs of their ever-expanding customer list. In return for your Java development skills you'll be rewarded with a salary of up to £45k plus benefits, which include a generous pension contribution, life assurance, and hybrid and flexible working (3 days a week on site).

Description: As an experienced Java Software Engineer you'll need the following skills and experience to be considered for this opportunity:
- Expert knowledge of Java, specifically with microservices
- Previous experience working with Spring Boot, Docker, etc.
- Additional knowledge of Maven will be advantageous
- Experience of working with databases (Oracle, Postgres, etc.)
- Any knowledge of AWS cloud technology is a bonus
- Demonstrable experience of working in an Agile development environment
- A positive, "can do" attitude and a keen eye for detail

Profile: While this is a snapshot of the full job description, it represents the basic requirements for the role. The business will invest in your long-term career development, so a good attitude and the willingness to contribute to the team are important to the client.

Job Offer: If this sounds like your ideal next move, and you're keen to be a bigger wheel in a smaller machine and enjoy the recognition you deserve for your efforts, then please apply NOW with your up-to-date CV.
03/05/2024
Full time