Job Description: My client is seeking a Data Engineer to join our dynamic team. In this role, you will play a crucial part in developing and maintaining our existing data infrastructure while exploring and implementing new cloud services to enhance efficiency. This position offers the opportunity to grow into a data architect over time, with hands-on experience and mentorship from our seasoned team.
Key Responsibilities:
-Develop and maintain the existing data architecture, ensuring reliability, scalability, and efficiency.
-Design and implement ETL processes and data pipelines, primarily within the AWS environment.
-Enhance and optimise our infrastructure using Python coding skills, particularly in Airflow.
-Collaborate with cross-functional teams to understand data requirements and implement solutions.
-Gain proficiency in AWS services such as Lambda, Glue, and S3 to improve data processing capabilities.
-Demonstrate proficiency in SQL for ETL processes, including Athena and Glue.
-Utilize NoSQL databases, particularly MongoDB, for production data storage and retrieval.
Qualifications:
-Bachelor's degree in Computer Science, Engineering, or a related field.
-0-3 years of experience in data engineering or related roles.
-Strong proficiency in Python programming; experience with Airflow preferred.
-Familiarity with AWS services such as Lambda, Glue, and S3 is a plus.
-Solid understanding of SQL for ETL processes, including experience with Athena and Glue.
-Experience with NoSQL databases, particularly MongoDB, is required.
-Ability to work collaboratively in a fast-paced environment and adapt to evolving requirements.
-Strong problem-solving skills and attention to detail.
Benefits:
-Competitive salary and benefits package.
-Opportunity for career growth and advancement within a leading research firm.
-Mentorship and learning opportunities from experienced data professionals.
-Dynamic and collaborative work environment with a focus on innovation and excellence.
04/07/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*
Prestigious Financial Institution is currently seeking a Java Software Engineer. The candidate will support and work collaboratively with business analysts, team leads, and the development team, contributing to the development of scalable and resilient hybrid and cloud-based data solutions supporting critical financial market clearing and risk activities, and collaborating with other developers, architects, and product owners to support the enterprise's transformation into a data-driven organization. The Application Developer will be a team player and work well with business, technical, and non-technical professionals in a project environment.
Responsibilities:
-Support the application development of Real Time and batch applications for business requirements in the agreed architecture framework and Agile environment
-Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation
-Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, and audit requirements; that security rules are upheld; and that external-facing reporting is properly represented
-Perform application and project risk analysis and recommend quality improvements
-Assist Production Support by providing advice on system functionality and fixes as required
-Communicate all time delays or defects in the software clearly, concisely, and immediately to the appropriate team members and management
-Experience with resolving security vulnerabilities
Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
[Required] 3+ years of experience in building high-speed, Real Time and batch solutions
[Required] 3+ years of experience in Java
[Preferred] Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
[Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
[Preferred] Experience with cloud technologies and migrations; experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI, and IAM
[Preferred] Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google
[Required] Experience writing unit and integration tests with testing frameworks like JUnit, Citrus
[Required] Experience working with various types of databases, eg relational and NoSQL
[Required] Experience working with Git
[Preferred] Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
[Preferred] Familiarity with monitoring-related tools and frameworks like Splunk, Elasticsearch, Prometheus, AppDynamics
[Required] Hands-on experience with Java version 8 onwards, Spring, Spring Boot, REST APIs
Technical Skills:
[Required] Java-based software development experience, including a deep understanding of Java fundamentals like data structures, concurrency, and multithreading
[Required] Experience in object-oriented design and software design patterns
Education and/or Experience:
[Required] BS degree in Computer Science or a similar technical field required
03/07/2024
Full time
NO SPONSORSHIP
Java Software Engineer
Chicago based, hybrid
$110-140K
Financial services, event-driven, or streaming work. Must have a degree and 3+ years of experience, but not more than about 8 years. Must communicate clearly and effectively:
-Re: Java - do you understand multithreading, and are you able to explain the concepts? Where and when did you use it? What is your level of experience in Spring? Are you able to explain some concepts to show at least beginner-level mastery?
-Re: Kafka - can you answer basic user/developer questions, and can you point to work done in Kafka?
-Re: Flink - do you have any experience, and are you able to explain your projects to date in a clear manner?
-Do you have any skills or understanding of Big O notation? (y/n)
-Re: JUnit testing and Linux commands - how familiar are you, and where did you get to use these skills?
-Re: CI/CD tools - can you explain in a way that indicates your familiarity, from basic to far above basic?
Looking for Java Developers with 2-8 years of solid Back End Java development: sharp go-getters with good communication skills; Kafka streaming and financial experience a big plus. Experience writing unit tests and integration tests; high-speed real-time and batch solutions; cloud-based data solutions; any DevOps tools like Terraform, Ansible, Jenkins preferred; relational and NoSQL databases; data structures; concurrency; multithreading; OOD; BS degree; AWS preferred.
Qualifications:
[Required] 3+ years of experience in building high-speed, Real Time and batch solutions
[Required] 3+ years of experience in Java
[Preferred] Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
[Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
[Preferred] Experience with cloud technologies and migrations; experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI, and IAM
[Required] Java-based software development experience, including a deep understanding of Java fundamentals like data structures, concurrency, and multithreading
[Required] Experience in object-oriented design and software design patterns
03/07/2024
Full time
Pontoon is an employment consultancy. We put expertise, energy, and enthusiasm into improving everyone's chance of being part of the workplace. We respect and appreciate people of all ethnicities, generations, religious beliefs, sexual orientations, gender identities, and more. We do this by showcasing their talents, skills, and unique experiences in an inclusive environment that helps them thrive.
DLP Cloud Security Engineer
Dublin, Leopardstown - Hybrid (2 to 3 days in office)
6 months contract
Euros 650 - Euros 700 per day
Are you a seasoned Cloud Security Engineer with a passion for protecting digital assets and data within cloud environments? Our client, a highly reputable organisation awarded the title of the world's best financial institution, is seeking a talented individual like you to join their team as a Cloud Security Engineer. In this role, you will have the exciting opportunity to design, implement, and maintain robust DLP security measures that align with the organisation's goals and compliance requirements. Get ready to collaborate with cross-functional teams, identify vulnerabilities, and develop proactive security solutions! You'll also have the chance to provide expert consultation, create impactful presentations, and contribute to the execution of long-term strategic plans. Skills required for this role include extensive experience in information security, proficiency in cloud platforms such as AWS, Azure, or Google Cloud, and a strong understanding of network security, DLP, and encryption. Join our client's dynamic team and make a significant impact by safeguarding their digital assets and data. You'll have the opportunity to work with cutting-edge technologies, collaborate with a diverse group of professionals, and thrive in an environment that encourages innovation. Don't miss out on this exciting opportunity! Apply now to become our client's Cloud Security Engineer.
Responsibilities:
-Develop and implement a comprehensive cloud DLP security strategy that aligns with the organisation's goals and compliance requirements.
-Design and architect secure cloud solutions, considering aspects such as network security, identity and access management (IAM), encryption, and data protection.
-Continuously monitor cloud environments for security threats, vulnerabilities, and anomalies, and take charge in implementing proactive measures to address concerns effectively.
-Ensure that cloud security practices adhere to industry standards and regulatory compliance requirements, such as GDPR, HIPAA, or SOC 2.
-Establish reporting routines that provide visibility into the effective execution of long-term maturity/strategic plans.
-Provide expert consultation to control owners and stakeholders in developing complete and repeatable control processes, including documentation of controls and metrics.
-Develop impactful presentations tailored for executives and stakeholders, highlighting key security measures.
Required Skills:
-Extensive experience in information security, with a strong focus on cloud security. Prior experience as a Security Architect or in a similar role is highly desirable.
-Proficiency in cloud platforms such as AWS, Azure, or Google Cloud, including knowledge of their security services and best practices.
-Strong understanding of security principles, protocols, and technologies, with expertise in areas like network security, DLP, and encryption.
-Excellent analytical and problem-solving abilities, with the capacity to assess complex security risks and devise effective solutions even in ambiguous situations.
-Experience in administration of a DLP tool, including configuration, upgrades, and disaster recovery planning.
-Understanding of technical and organisational security vulnerabilities, threats, and risks.
-Exceptional communication skills, with the ability to articulate complex concepts concisely and accurately.
-Ability to troubleshoot and solve complex problems rapidly.
-Certified Cloud Security Professional (CCSP) preferred, or the ability to obtain it within 3 months.
Desired Skills:
-Experience operating and tuning DLP technologies.
-Familiarity with CASB solutions, Microsoft Purview, Proofpoint, M365.
-Cloud platform familiarity as it relates to DLP solutions (AWS, Azure).
-Operating systems (Windows/Mac/Linux).
-Basic networking - VPN, TCP/UDP protocols.
-Basic encryption - SSL, AES, IPsec, key management, certificates.
-Ancillary services - DNS, web servers, LDAP/AD, database technologies.
-Intermediate-level scripting - eg, Python, PowerShell.
If you feel you have the skills and experience and want to hear more about this role, 'apply now' to declare your interest in this opportunity with our client. Your application will be reviewed by our dedicated team. We will respond to all successful applicants as soon as possible; however, please be advised that we will always look to contact you further from this time should we need further applicants or if other opportunities arise relevant to your skill set.
03/07/2024
Project-based
Principal Software Engineer
Amsterdam - Hybrid
€140 - €160 per hour
Initially until end of October, dependent on start date
Are you a Principal Software Engineer? Do you live in the Netherlands, and are you seeking a new contract role? Brookwood Recruitment is working with a global company that puts the customer first and is heavily focused on sustainability.
The successful Principal Software Engineer will need the following skills: AWS, Kubernetes, Java, Terraform, the GitLab stack, Apache Spark, Flink; Snowflake runtimes ideal.
What you can expect to be doing as a Principal Software Engineer:
Building software applications
-Is responsible to build software applications by using relevant development languages and applying knowledge of systems, services, and tools appropriate for the business area, and is the go-to person on this topic for the area.
-Is responsible to write readable and reusable code by applying standard patterns and using standard libraries, and is the go-to person on this topic for the area.
-Is responsible to refactor and simplify code by introducing design patterns when necessary, and is the go-to person on this topic for the area.
-Is responsible to ensure the quality of the application by following standard testing techniques and methods that adhere to the test strategy, and is the go-to person on this topic for the area.
-Is responsible to maintain data security, integrity, and quality by effectively following company standards and best practices, and is the go-to person on this topic for the area.
Architectural Guidance
-Is responsible to advise product teams towards a technical solution that meets the functional, non-functional, and architectural requirements by challenging the rationale for an application design and providing context in the wider architectural landscape.
-Is responsible to set a clear direction for a technical capability by evaluating and aligning the target architecture improvements, reframing architectural designs and decisions for varied stakeholders.
-Is responsible to own a service end to end by actively monitoring application health and performance, setting and monitoring relevant metrics, and acting accordingly when they are violated, and is the go-to person on this topic for the area.
-Is responsible to reduce business continuity risks and bus factor by applying state-of-the-art practices and tools and writing the appropriate documentation, such as runbooks and OpDocs, and is the go-to person on this topic for the area.
-Is responsible to reduce risk and obtain customer feedback by using continuous delivery and experimentation frameworks, and is the go-to person on this topic for the area.
-Is responsible to independently manage an application or service by working through deployment and operations in production, and is the go-to person on this topic for the area.
-Is responsible to evaluate possible architecture solutions by taking into account cost, business requirements, technology requirements, and emerging technologies, and guides more junior members of the team in this topic.
-Is responsible to describe the implications of changing an existing system or adding a new system to a specific area by having a broad, high-level understanding of the infrastructure and architecture of our systems, and guides more junior members of the team in this topic.
-Is responsible to help grow the business and/or accelerate software development by applying engineering techniques (eg prototyping, spiking, and vendor evaluation) and standards, and guides more junior members of the team in this topic.
-Is responsible to meet business needs by designing solutions that meet current requirements and are adaptable for future enhancements, and guides more junior members of the team in this topic.
Technical Incident Management
-Is responsible to address and resolve live production issues by mitigating the customer impact within SLA.
-Is responsible to improve the overall reliability of systems by producing long-term solutions through root cause analysis.
-Is responsible to keep track of incidents by contributing to postmortem processes and logging live issues.
Coaching/Mentoring
-Is responsible to coach, guide, and improve the overall performance of stakeholders and colleagues at all levels, when appropriate, by sharing experience, knowledge, and approaches to work.
Critical Thinking
-Is responsible to systematically identify patterns and underlying issues in complex situations, and to find solutions by applying logical and analytical thinking, and guides more junior members of the team in this topic.
-Is responsible to constructively evaluate and develop ideas, plans, and solutions by reviewing them, objectively taking into account external knowledge, initiating 'SMART' improvements, and articulating their rationale, and guides more junior members of the team in this topic.
Continuous Quality and Process Improvement
-Is responsible to identify opportunities for process, system, and structural improvements (ie performance gains) by examining and evaluating current process flows, methods, and standards.
-Is responsible to design and implement relevant improvements by defining adapted/new process flows, standards, and practices that enable business performance.
Effective Communication
-Is responsible to deliver clear, well-structured, and meaningful information to a target audience by using suitable communication mediums and language tailored to the audience.
-Is responsible to achieve mutually agreeable solutions by staying adaptable, communicating ideas in clear, coherent language, and practising active listening.
-Is responsible to ask relevant (follow-up) questions to properly engage with the speaker and really understand what they are saying, by applying listening and reflection techniques.
If this contract Principal Software Engineer role in Amsterdam motivates and inspires you, please apply with Brookwood Recruitment today. We'd love to help you get your next role. Brookwood has a consultative and inclusive approach to business. We take time to understand our clients' needs, structure, and culture to enable a fully tailored service that delivers time and time again.
03/07/2024
Project-based
Principal Software Engineer
Amsterdam - Hybrid
€140 - €160 per hour
Contract initially until end of October, dependent on start date

Are you a Principal Software Engineer? Do you live in the Netherlands and are you seeking a new contract role? Brookwood Recruitment is working with a global company that puts the customer first and is heavily focused on sustainability.

The successful Principal Software Engineer will need the following skills:
- AWS
- Kubernetes
- Java
- Terraform
- GitLab stack
- Apache Spark
- Flink
- Snowflake runtimes (ideal)

What you can expect to be doing as a Principal Software Engineer (you will be the go-to person for the area on each of these topics):

Building software applications
- Build software applications using relevant development languages, applying knowledge of the systems, services and tools appropriate for the business area.
- Write readable and reusable code by applying standard patterns and using standard libraries.
- Refactor and simplify code by introducing design patterns when necessary.
- Ensure the quality of the application by following standard testing techniques and methods that adhere to the test strategy.
- Maintain data security, integrity and quality by effectively following company standards and best practices.

Architectural Guidance
- Advise product teams towards a technical solution that meets the functional, non-functional and architectural requirements by challenging the rationale for an application design and providing context in the wider architectural landscape.
- Set a clear direction for a technical capability by evaluating and aligning target architecture improvements, and reframing architectural designs and decisions for varied stakeholders.
- Own a service end to end by actively monitoring application health and performance, setting and monitoring relevant metrics, and acting accordingly when they are violated.
- Reduce business continuity risks and bus factor by applying state-of-the-art practices and tools, and writing appropriate documentation such as runbooks and OpDocs.
- Reduce risk and obtain customer feedback by using continuous delivery and experimentation frameworks.
- Independently manage an application or service through deployment and operations in production.
- Evaluate possible architecture solutions, taking into account cost, business requirements, technology requirements and emerging technologies, and guide more junior members of the team on this.
- Describe the implications of changing an existing system or adding a new one to a specific area, drawing on a broad, high-level understanding of the infrastructure and architecture of our systems.
- Help grow the business and/or accelerate software development by applying engineering techniques (e.g. prototyping, spiking and vendor evaluation) and standards.
- Meet business needs by designing solutions that satisfy current requirements and are adaptable for future enhancements.

Technical Incident Management
- Address and resolve live production issues, mitigating the customer impact within SLA.
- Improve the overall reliability of systems by producing long-term solutions through root-cause analysis.
- Keep track of incidents by contributing to post-mortem processes and logging live issues.

Coaching/Mentoring
- Coach, guide and improve the overall performance of stakeholders and colleagues at all levels, when appropriate, by sharing experience, knowledge and approaches to work.

Critical Thinking
- Systematically identify patterns and underlying issues in complex situations, and find solutions by applying logical and analytical thinking.
- Constructively evaluate and develop ideas, plans and solutions by reviewing them objectively, taking into account external knowledge, initiating 'SMART' improvements and articulating their rationale.

Continuous Quality and Process Improvement
- Identify opportunities for process, system and structural improvements (i.e. performance gains) by examining and evaluating current process flows, methods and standards.
- Design and implement relevant improvements by defining adapted or new process flows, standards and practices that enable business performance.

Effective Communication
- Deliver clear, well-structured and meaningful information to a target audience, using suitable communication mediums and language tailored to the audience.
- Achieve mutually agreeable solutions by staying adaptable, communicating ideas in clear, coherent language and practising active listening.
- Ask relevant (follow-up) questions to properly engage with the speaker and really understand what they are saying, by applying listening and reflection techniques.

If this contract Principal Software Engineer role in Amsterdam motivates and inspires you, please apply with Brookwood Recruitment today. We'd love to help you get your next role. Brookwood has a consultative and inclusive approach to business. We take time to understand our clients' needs, structure and culture to enable a fully tailored service that delivers time and time again.
ARM (Advanced Resource Managers)
Manchester, Lancashire
Full-Stack Python Developer
6 months | Remote/Slough, Welwyn, Manchester - 1 day per week on-site
£550 p/d - INSIDE IR35

Skills and experience:
- Significant commercial experience implementing full-stack solutions using Django and Python
- Experience with load testing and performance testing
- Understanding of databases and SQL query optimisation
- Experience building and consuming REST APIs
- Familiarity with AWS
- Familiarity with Docker, Docker Compose and Kubernetes

Disclaimer: This vacancy is being advertised by either Advanced Resource Managers Limited, Advanced Resource Managers IT Limited or Advanced Resource Managers Engineering Limited ("ARM"). ARM is a specialist talent acquisition and management consultancy. We provide technical contingency recruitment and a portfolio of more complex resource solutions. Our specialist recruitment divisions cover the entire technical arena, including some of the most economically and strategically important industries in the UK and the world today. We will never send your CV without your permission. Where the role is marked as Outside IR35 in the advertisement, this is subject to receipt of a final Status Determination Statement from the end client and may be subject to change.
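The "SQL query optimisation" skill above usually boils down to reading query plans before and after adding an index. A minimal, self-contained sketch of that workflow using Python's stdlib sqlite3 (the table and data here are hypothetical, purely for illustration):

```python
import sqlite3

# Hypothetical table and query, to illustrate checking a query
# plan before and after adding an index.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, float(i)) for i in range(1000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, SQLite scans the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_before)  # the detail column reports a full scan of orders

# Adding an index lets SQLite seek directly to the matching rows.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print(plan_after)  # the detail column now reports use of the index
```

The same habit carries over to production databases (Postgres's `EXPLAIN ANALYZE`, for example), where filtered and joined columns are the usual index candidates.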
03/07/2024
Project-based
React Developer (Software Engineer Programmer Developer React TypeScript Redux Saga Ag-Grid Python Fixed Income JavaScript Node Credit Rates Bonds ABS Agile AWS GCP Buy Side Asset Manager Investment Management Performance Risk Attribution Finance Front Office Trading Financial Services UI Front End) required by our asset management client in London.

You MUST have the following:
- Strong experience as a React Developer/Software Engineer/Programmer
- Excellent JavaScript, TypeScript and React
- Fixed Income experience on the buy-side (bonds, credit, rates products)
- Excellent stakeholder interaction skills
- Agile

The following is DESIRABLE, not essential:
- Redux Saga, Ag-Grid
- AWS or GCP
- Finance/trading
- Python

Role: You will join a team of 8 that is responsible for an in-house-built Fixed Income analysis application. It is entirely hosted on AWS and runs a React Front End. You will be one of two Front End developers, supported by other engineers in the team who are full-stack. You can also contribute to the Back End, which is built in Python and Node. On the Front End they are also working with TypeScript, Ag-Grid and Redux Saga, though Ag-Grid and Redux Saga are only desirable. Any UX experience is desirable but not essential, as there is little work to do here. Experience in data-intensive applications is desirable. Other technology in the stack includes Node, gRPC, protobuf, Apache Ignite, Apache Airflow and AWS. They have a hybrid-working setup that requires the team to be in the office 1-2 times a week.

This is an environment that has been described as the only corporate environment with a start-up/fintech attitude towards technology. Hours are 9-5. Salary: £90k - £115k + 15% Bonus + 10% Pension.
03/07/2024
Full time
We are looking for Python Developers who are passionate about building robust Back End systems and thrive in collaborative environments. The team is closely associated with Biomedical Research. They support scientists with solutions to identify hits in large libraries of samples, from testing in cellular assays or with the help of DNA-encoded library technologies, at any scale. Their focus is on software solutions for streamlining scientific processes and workflows end to end, which includes capturing and processing raw data, metadata and derived data generated by instruments during screening or obtained from intermediate analyses. They aim to apply and leverage leading-edge architecture and software technologies. Their goal is to combine use-case-specific applications with optimised technical Back End platforms to compose final solutions that help scientists accelerate drug discovery.

Role 1: Python Backend Developer (Data)
Strong Python skills with a knack for data manipulation and analysis. Your role will involve leveraging libraries like NumPy, Pandas and Dask to transform data into actionable insights. Proficiency in ORM tools, particularly SQLAlchemy, is highly desired.

Role 2: Python Backend Developer (Workflow)
A Python expert specialised in workflow management systems. You'll be instrumental in developing DAGs with Apache Airflow, managing concurrent tasks at scale, and working with the AWS stack. A general understanding of KV-stores, wide-column stores or graphs will be beneficial.

Both roles offer the opportunity to work on cutting-edge projects, contribute to a dynamic team, and develop solutions that make a real-world impact.

Employee Value Proposition: Work for one of the most prestigious pharmaceutical companies in the world.
Job Title: Python Backend Engineer
Location: Basel, Switzerland
Job Type: Contract

TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. We adhere to our commitments under the UK Data Protection Act, EU-U.S. Privacy Shield or the Swiss-U.S. Privacy Shield.
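Role 2 centres on developing DAGs with Apache Airflow. As a rough illustration of the dependency-ordering idea Airflow automates (this is not Airflow's API; it uses Python's stdlib graphlib, and the task names are hypothetical screening-pipeline steps):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks and their dependencies, illustrating
# the kind of DAG an Airflow workflow declares: each task runs only
# after every task it depends on has finished.
dag = {
    "capture_raw_data": set(),
    "extract_metadata": {"capture_raw_data"},
    "process_assay_results": {"capture_raw_data"},
    "derive_insights": {"extract_metadata", "process_assay_results"},
}

# TopologicalSorter yields a valid execution order for the DAG;
# Airflow's scheduler does the same ordering for real tasks, adding
# scheduling, retries and parallel execution on top.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

In a real Airflow DAG the same structure is expressed with operators and `>>` dependencies, but the scheduling problem underneath is exactly this topological sort.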
03/07/2024
Project-based
Hybrid working (1 day per week at HQ)
Up to £50,000 per annum + 10% bonus & excellent pension contribution
Market leaders in traffic management systems

An exciting new opportunity to join an award-winning company building road management solutions to drive greener, cleaner inner-city transportation systems. A company with a global remit to help deliver clean-city government initiatives. As part of a large new project across the UK, their development team is growing quickly, so they are on the lookout for a senior engineer to join the team. This role is working with one of the company's largest and most prestigious customers, delivering R&D solutions for inner-city transportation. The best part about this role, alongside the great public services you will help to build, is the work-life balance, flexible working and investment in your development and training. This is a company invested in making you the best version of your professional self whilst maintaining your physical and mental well-being.

Key Skills:
- Experience with Angular, C# and .NET
- Proficiency with SQL or similar relational databases
- Experience working with GitLab, Jira or Docker
- An agile mindset and working methodology
- Excellent stakeholder engagement

Beneficial Skills:
- AWS experience
- Experience working in an agile/scrum environment
- Commitment to being hands-on in coding practices

Benefits:
- Death in Service up to 6 x annual salary
- Private health insurance
- Excellent bonus system
- Flexible core-hours working
- Up to 10% pension contribution

To apply for this role please click the "apply" button, or to hear more about the position please contact (see below). Spectrum IT Recruitment (South) Limited is acting as an Employment Agency in relation to this vacancy.
03/07/2024
Full time
We are currently partnered with a climate consultancy who are on a mission to drive the sustainability transition. They do quantitative modelling which gives their clients actionable information on strategy in pursuit of Net Zero.

This role involves:
- Collaborating with teams to design, prioritise, develop and implement software solutions.
- Being second-in-line to the Head of Engineering, helping to establish the standards of the engineering team.
- Implementing software and infrastructure.
- Ensuring data governance, compliance and privacy are upheld across all data-related operations.

Tech stack: Python, TypeScript, ETL pipelines, AWS, Terraform.

Package:
- Salary c. £90-100k
- 4 days onsite, 1 day work from home
- Private healthcare & pension
- Work-from-anywhere allowance
- Share options

Apply here or send your CV to (see below).
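The stack above centres on ETL pipelines. As a generic, stdlib-only sketch of the extract-transform-load shape such a pipeline takes (not this client's actual pipeline; the column names and emissions figures are made up):

```python
import csv
import io
import sqlite3

# Extract: read rows from a CSV source (an in-memory stand-in here).
raw_csv = io.StringIO(
    "site,year,tonnes_co2e\n"
    "plant_a,2023,120.5\n"
    "plant_b,2023,98.0\n"
)
rows = list(csv.DictReader(raw_csv))

# Transform: normalise the string fields into typed records.
records = [
    (r["site"], int(r["year"]), float(r["tonnes_co2e"]))
    for r in rows
]

# Load: write into a relational store (SQLite standing in for a warehouse).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emissions (site TEXT, year INTEGER, tonnes_co2e REAL)")
db.executemany("INSERT INTO emissions VALUES (?, ?, ?)", records)

total = db.execute("SELECT SUM(tonnes_co2e) FROM emissions").fetchone()[0]
print(total)  # 218.5
```

In production the same three stages would typically be orchestrated as pipeline tasks, with S3 as the source, a proper warehouse as the sink, and Terraform provisioning the AWS resources around them.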
03/07/2024
Full time
What's the opportunity? (Role)
We are currently seeking an ambitious, self-motivated and enthusiastic individual to join our ever-growing Master Systems Integrator workforce as a Smart Network Engineer (Niagara Programmer).

Benefits of joining us in the Smart Network Engineer role:
- We pride ourselves on exceptional and motivated people, and you will be joining a professional, warm, welcoming and enthusiastic team
- The successful candidate will have the opportunity to work with a broad and diverse range of technologies
- Hybrid working options
- Pension scheme
- Income Protection and Death in Service scheme
- Membership of an Employee Assistance Programme
- Excellent opportunity to advance your career and progress within the Group
- Competitive salary based on experience and qualifications

What will you be doing? (Responsibilities)
Working as part of a highly technical team, you will be responsible for developing and programming formats and protocols such as Modbus, MQTT, BACnet and JSON through Tridium Niagara 4 middleware. You will report to the MSI Director.
- Work with the team to supply input and feedback on all technical aspects of the MSI Service
- Self-documentation of all technical development work
- Integration with third-party API systems
- Ensure project milestones, programmes and targets are met
- Full compliance with company and customer security and safety systems

What do you need? (Requirements)
- BMS experience essential
- Tridium N4 experience essential
- Niagara 4 certification
- Good understanding of networking principles
- Experience working in AWS, GCP or Azure, and migrating data from building outputs to a data lake, an advantage
- Previous experience working with a Master Systems Integrator is an advantage
- Experience with secure MQTT
- Excellent verbal and written communication skills
- Strong interpersonal skills and an ability to deal with both internal and external customers
- An understanding of, or an ability to adopt, the principles of: BrickSchema, Project Haystack, Google Digital Buildings

Training for the right candidate shall be offered.
03/07/2024
Full time
ASSOCIATE PRINCIPAL, SOFTWARE ENGINEERING (JAVA)
SALARY: $160k - $170k plus 15% bonus
LOCATION: Chicago, IL - Hybrid, 3 days onsite and 2 days remote
NO SPONSORSHIP

Looking for a candidate with 5+ years of Back End Java development (version 8 or above); financial experience is a big plus. Must have experience with event-driven systems and cloud-based AWS data solutions. Also big pluses: any DevOps experience (Terraform, Ansible, Jenkins); the Java memory model, data structures, concurrency and multithreading; strong testing; and streaming frameworks such as Flink, Apache Spark and Kafka Streams.

Screening questions:
- Java: do you understand multithreading? What is your level of experience with Spring?
- Kafka: can you answer basic user/developer questions?
- Flink: do you have any experience?
- Do you have any understanding of Big-O notation?

This role supports and works collaboratively with business analysts, team leads and the development team. You will contribute to developing scalable and resilient hybrid and cloud-based data solutions supporting critical financial market clearing and risk activities, and collaborate with other developers, architects and product owners to support the enterprise's transformation into a data-driven organization. The Specialist, Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment.

Primary Duties and Responsibilities: To perform this job successfully, an individual must be able to perform each primary duty satisfactorily.
- Support the application development of big data applications for business requirements in the agreed architecture framework and Agile environment
- Thoroughly analyze requirements; develop, test and document software to ensure proper implementation
- Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements and security rules, and that external-facing reporting is properly represented
- Perform application and project risk analysis and recommend quality improvements
- Assist Production Support by providing advice on system functionality and fixes as required
- Communicate all time delays or defects in the software clearly, concisely and immediately to appropriate team members and management
- Experience with resolving security vulnerabilities

Qualifications: The requirements listed are representative of the knowledge, skill and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
- 5+ years of experience in building high-speed, data-centric solutions
- 5+ years of experience in Java
- Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
- Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- Experience with cloud technologies and migrations
- Experience developing and delivering technical solutions using public cloud service providers like Amazon and Google
- Experience writing unit and integration tests with testing frameworks like JUnit and Citrus
- Experience following Git workflows
- Working knowledge of DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines
- Familiarity with monitoring tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics

Technical Skills:
- Java-based software development experience, including multithreading
- Fluent in object-oriented design
- Strong testing experience
- Experience working with two or more of the following: Unix/Linux environments, event-driven systems, transaction processing systems, distributed and parallel systems, large software system development, security software development, public-cloud platforms
- Hands-on experience with Java version 8 onwards, Spring, Spring Boot, Microservices, REST APIs
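The screening questions above probe multithreading and event-driven basics. A minimal producer-consumer sketch of that pattern (shown in Python for brevity rather than the ad's Java/Kafka stack; the event fields are hypothetical):

```python
import queue
import threading

# A thread-safe queue stands in for a message broker topic.
events = queue.Queue()
SENTINEL = None  # marker telling the consumer to stop
processed = []

def producer():
    # Publishes a handful of hypothetical trade events.
    for i in range(5):
        events.put({"trade_id": i, "qty": 10 * i})
    events.put(SENTINEL)

def consumer():
    # Drains the queue until the sentinel arrives; queue.Queue handles
    # the locking, so no explicit synchronisation is needed here.
    while True:
        event = events.get()
        if event is SENTINEL:
            break
        processed.append(event["qty"])

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(processed)  # [0, 10, 20, 30, 40]
```

In the Java world the same shape appears as a `BlockingQueue` between threads, or at system scale as a Kafka producer and consumer group, which is the level the ad's questions are aimed at.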
02/07/2024
Full time
ASSOCIATE PRINCIPAL, SOFTWARE ENGINEERING (JAVA) SALARY: $160k - $170k plus 15% bonus LOCATION: Chicago, IL Hybrid: 3 days onsite, 2 days remote NO SPONSORSHIP Looking for a candidate with 5+ years of Back End Java development, version 8 or above; financial industry experience is a big plus. Must have experience with event-driven systems and cloud-based AWS data solutions; any DevOps experience (Terraform, Ansible, Jenkins) is a big plus, as are the Java memory model, data structures, concurrency and multithreading, strong testing skills, and Flink, Apache Spark, Kafka Streams, etc. Screening questions: Re Java - do you understand multithreading, and what is your level of experience with Spring? Re Kafka - can you answer basic user/developer questions? Re Flink - do you have any experience? Do you have any understanding of Big-O notation? This role supports and works collaboratively with business analysts, team leads and the development team: a contributor in developing scalable and resilient hybrid and cloud-based data solutions supporting critical financial market clearing and risk activities, collaborating with other developers, architects and product owners to support enterprise transformation into a data-driven organization. The Specialist, Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment. Primary Duties and Responsibilities: To perform this job successfully, an individual must be able to perform each primary duty satisfactorily.
Support the development of big data applications for business requirements within the agreed architecture framework and an Agile environment. Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation. Follow agreed-upon SDLC procedures to ensure that all information system products and services meet both explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements and audit requirements, that security rules are upheld, and that external-facing reporting is properly represented. Perform application and project risk analysis and recommend quality improvements. Assist Production Support by providing advice on system functionality and fixes as required. Communicate all time delays or defects in the software clearly, concisely and immediately to the appropriate team members and management. Experience with resolving security vulnerabilities. Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions. 5+ years of experience building high-speed, data-centric solutions. 5+ years of experience in Java. Experience with high-speed distributed computing frameworks such as Flink, Apache Spark and Kafka Streams. Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ and Amazon Kinesis. Experience with cloud technologies and migrations.
Experience developing and delivering technical solutions using public cloud service providers such as Amazon and Google. Experience writing unit and integration tests with frameworks such as JUnit and Citrus. Experience following Git workflows. Working knowledge of DevOps tools such as Terraform, Ansible, Jenkins, Kubernetes and Helm, and of CI/CD pipelines. Familiarity with monitoring tools and frameworks such as Splunk, Elasticsearch, Prometheus and AppDynamics. Technical Skills: Java-based software development experience, including multithreading. Fluent in object-oriented design. Strong testing experience. Experience working with two or more of the following: Unix/Linux environments, event-driven systems, transaction processing systems, distributed and parallel systems, large software system development, security software development, public-cloud platforms. Hands-on experience with Java 8 onwards, Spring, Spring Boot, microservices and REST APIs.
NO SPONSORSHIP Software Engineering - Quantitative Risk Automation Modelers Key skills: Python, Java, Terraform, DevOps, containerization and financial industry experience. Looking for hard-core developers who want to work within quantitative risk management and develop applications and solutions for the QRM team. They do not build models; they automate models. Candidates need to come from an industry company (financial institution, trading company, exchange, etc.) and develop hard-core applications. Need to have CI/CD pipelines, IaC, Kubernetes, Terraform. This role is responsible for one or more functions within Quantitative Risk Management (QRM), which develops and maintains risk models for margin, clearing fund and stress testing, with a focus on developing and maintaining risk model software in production, and the environments and infrastructure used in model implementation and testing. Qualifications: Strong programming skills: able to read and/or write code in a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD processes and tools. Experience with containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes).
Experience with scripting languages such as Python. Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 7+ years of experience as a software developer with exposure to cloud or high-performance computing.
02/07/2024
Full time
NO SPONSORSHIP Principal, Software Engineering Enterprise Cloud Monitoring - Splunk SALARY: $200k - $215k base with up to 30% bonus LOCATION: Dallas, TX 3 days onsite, 2 days remote This role is all about on-premises monitoring and cloud monitoring. The products they are looking for besides Splunk are Datadog, Dynatrace and New Relic. Heavy cloud: AWS, EC2, automation, application performance monitoring and enterprise monitoring; any BMC Patrol, Tivoli and regulatory experience is a plus. Responsibilities: Translate middle and senior management strategic directives into workable technical directives. Monitor project status and take remedial action on projects behind schedule and/or over budget. Provide subject matter expertise for ongoing support of third-party tools like Splunk. Provide expert-level technical mentoring to more junior members of the team. Resolve complex support issues in non-production and production environments. Understand cloud-native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them. Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data. Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they monitor. Write complex automation scripts using common automation tools, such as Jenkins, Ansible and Terraform, for the installation, configuration and/or upgrade of monitoring systems. Qualifications: Systems administration and change management practices. Enterprise monitoring and reporting tools. Experience scripting and/or coding against APIs. In-depth knowledge of commonly used management and monitoring technology. Internet/web-based technologies. ITIL best practices. Experience with technologies used to support microservices. Network technologies. AWS log collection such as CloudTrail, CloudWatch and VPC Flow Logs. Monitoring and reporting using SNMP. CI/CD tools such as Artifactory, Jenkins and Git.
Cloud-native applications, including Terraform experience. Technologies used to support microservices. Encryption technologies (SSL/TLS, PKI infrastructure management). Security controls as applied to software technologies. Bachelor's degree. 10+ years of related experience. Minimum 10 years of experience working in a distributed multi-platform environment. Minimum 3 years of experience working with cloud-native applications. Minimum 3 years of experience managing technical projects.
02/07/2024
Full time
AWS Cloud-Based Performance Testing Chicago - Hybrid, 3 days onsite - Long-term contract role, C2C or W2 Must be AWS certified, with heavy cloud experience in the setup and maintenance of a cloud-based performance system to automate and troubleshoot environmental issues. Performance testing and automation testing required; financial experience strongly preferred. Python scripting, including converting Java to Python; candidates don't have to be application developers as such. As much DevOps and containerization experience as possible. Splunk, Confluence, Jira, API testing, UC4 or similar. This role is all about a cloud testing system: they are migrating from an old system to a new system, and Kafka is a HUGE plus. WORK TO BE PERFORMED: Performance testing with open-source tools like JMeter and Gatling. Perl scripting, PowerShell scripting, solid Python scripting and Java. Setting up parallel testing environments that will be used to compare existing system business processes and data against a new cloud-based system/platform. The goal is to ensure that the new system is producing correct results and performing as expected before it can become the official system of record. The ability to take raw data, mask it, and create algorithms and solutions that increase the data load feeding into our new Clearing System with no duplicates or other data issues that would cause it to be rejected. Assist in the setup and maintenance of cloud-based performance and functional test environments in the cloud (AWS) and define the steps to automate the process for continuous testing and iterative cycles. SKILL AND EXPERIENCE REQUIRED: Python scripting - familiarity with creating modules that multiply transactional data and other data-multiplier strategies to be used in test cycles of the Real Time Clearing System. SDET automation testing skills/QA automation engineering. Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor.
Solid utility building with Python, Perl and PowerShell. Test automation using CI/CD concepts. AWS Certified SysOps Administrator or Certified Developer (required). Languages and technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP web services (JAX-WS), RESTful web services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics. Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and real-time and historical monitoring tools on-prem and in the cloud. Web servers/app servers/containers experience. Database technologies: DB2, PostgreSQL. Operating systems experience. Methodologies: Agile, iterative Waterfall.
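For candidates unfamiliar with the "data multiplier" idea this posting keeps returning to, it could be sketched roughly as follows. This is a minimal illustration only, with all names (`mask_account`, `multiply_transactions`, the record fields) hypothetical, not the client's actual tooling: mask a sensitive field deterministically, then replicate each record with fresh transaction IDs so the inflated load contains no duplicates that downstream validation would reject.

```python
import hashlib
import itertools

def mask_account(account_id: str, salt: str = "test-salt") -> str:
    """Deterministically mask a sensitive identifier (hypothetical scheme)."""
    return hashlib.sha256((salt + account_id).encode()).hexdigest()[:12]

def multiply_transactions(records, factor):
    """Replicate masked records `factor` times, re-keying each copy so the
    multiplied load contains no duplicate transaction IDs."""
    out = []
    counter = itertools.count(1)
    for rec in records:
        # Mask the sensitive field once per source record.
        masked = dict(rec, account=mask_account(rec["account"]))
        # Emit `factor` copies, each with a unique derived transaction ID.
        for _ in range(factor):
            out.append(dict(masked, txn_id=f"{rec['txn_id']}-{next(counter):06d}"))
    return out

sample = [{"txn_id": "T1", "account": "ACC-123", "amount": 100.0}]
load = multiply_transactions(sample, factor=3)
```

A real implementation would of course plug into the actual clearing-system feed formats; the point is simply masking plus collision-free replication.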
02/07/2024
Project-based
NO SPONSORSHIP Associate Principal, Software Engineering - Quantitative Risk Management (Automating Risk Models) Chicago - Onsite 3 days a week Salary: $185K - $195K + bonus Looking for a hard-core developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc., and develop hard-core applications. You will need to have CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python or C++. Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to the development of QRM's databases and ETLs. Integrate model prototypes, the model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills: able to read and/or write code in a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills in a cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD processes and tools (e.g., Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics or physics. 7+ years of experience as a software developer with exposure to cloud or high-performance computing.
02/07/2024
Full time
NO SPONSORSHIP Principal, Software Engineering Enterprise Monitoring - Splunk SALARY: $200k - $215k base with up to 30% bonus LOCATION: Chicago, IL 3 days onsite, 2 days remote Looking for a technical team lead for the enterprise Splunk monitoring system. You will be the SME in Splunk monitoring and in cloud-native applications running on Kubernetes within AWS. Responsibilities: Translate middle and senior management strategic directives into workable technical directives. Monitor project status and take remedial action on projects behind schedule and/or over budget. Provide subject matter expertise for ongoing support of third-party tools like Splunk. Provide expert-level technical mentoring to more junior members of the team. Resolve complex support issues in non-production and production environments. Understand cloud-native applications running on Kubernetes within AWS and how exposed APIs may be used to monitor them. Assist production support and development staff in debugging environment defects using logging monitors and/or APM-related profiling data. Create procedural and troubleshooting documentation related to enterprise monitoring systems and the applications they monitor. Write complex automation scripts using common automation tools, such as Jenkins, Ansible and Terraform, for the installation, configuration and/or upgrade of monitoring systems. Qualifications: Systems administration and change management practices. Enterprise monitoring and reporting tools. Experience scripting and/or coding against APIs. In-depth knowledge of commonly used management and monitoring technology. Internet/web-based technologies. ITIL best practices. Experience with technologies used to support microservices. Network technologies. AWS log collection such as CloudTrail, CloudWatch and VPC Flow Logs. Monitoring and reporting using SNMP. CI/CD tools such as Artifactory, Jenkins and Git. Cloud-native applications, including Terraform experience. Technologies used to support microservices. Encryption technologies (SSL/TLS, PKI infrastructure management). Security controls as applied to software technologies. Bachelor's degree. 10+ years of related experience. Minimum 10 years of experience working in a distributed multi-platform environment. Minimum 3 years of experience working with cloud-native applications. Minimum 3 years of experience managing technical projects.
02/07/2024
Full time
Contract - UC4 Automation Engineer Rate: Open Location: Chicago, IL Hybrid: 3 days onsite, 2 days remote Qualifications: Python scripting. SDET automation testing skills/QA automation engineering. Experience with performance engineering concepts and methodologies, as well as cloud technologies and migrations using a public cloud vendor, preferably using cloud foundational services like AWS VPCs. Solid utility building with Python, Perl and PowerShell. Test automation using CI/CD concepts. Languages & technologies: Java, Kafka, Docker, Kubernetes, DB2, CyberArk, Harness, JIRA, Jenkins, Splunk, Confluence, Git, JSON, API testing, Cucumber, Selenium, Terraform, Ansible, Veracode, Virtualan, UC4, Change Data Capture, AWS/Google/Azure Cloud, OpenAPI/Swagger, SOAP web services (JAX-WS), RESTful web services (JAX-RS), Apache CXF, Spring Core, Spring WS, Spring Transaction, Spring Integration, JDBC, shell scripting, XML, JavaScript, SQL, Python, JMeter, Gatling, Perl, PowerShell, SignalFx, AppDynamics. Software tools and utilities: Jenkins, Kubernetes, Enterprise Architect (EA), Enterprise Manager-UM, SQL Developer, JConsole, Visual Studio, JMeter, Bitbucket, Git, CVS, SVN, PuTTY, Microsoft Visio, TOAD, SourceTree, JIRA, Confluence, Sonar, Bamboo, Splunk, Automic (UC4), Apache Kafka, LogicMonitor, BMC MainView, and real-time and historical monitoring tools on-prem and in the cloud. Web servers/app servers/containers experience. Database technologies: DB2, PostgreSQL. Responsibilities: Performance testing with open-source tools like JMeter and Gatling. Perl scripting, PowerShell scripting, solid Python scripting and Java. Setting up parallel testing environments that will be used to compare existing system business processes and data against a new cloud-based system/platform. The goal is to ensure that the new system is producing correct results and performing as expected before it can become the official system of record.
The ability to take raw data, mask it, and create algorithms and solutions that increase the data load feeding into our new Clearing System with no duplicates or other data issues that would cause it to be rejected. Assist in the setup and maintenance of cloud-based performance and functional test environments in the cloud (AWS) and define the steps to automate the process for continuous testing and iterative cycles.
02/07/2024
Project-based
Enterprise Data Architect
£50,000 - £65,000
Hybrid - 1 day a week in Bath
Our client is embarking on an exciting digital transformation journey to become a data-driven organisation. The Data and Insights Project aims to enhance decision-making capabilities, improving user and staff experiences. Join the budding DDaT department, a community of technical experts dedicated to delivering secure, relevant, and accessible digital services.
Role: We are seeking a skilled Data Architect, or a Data Engineer/Data Analyst with some data architecture experience, eager to transition into a full-fledged Data Architect role. You will lead the architecture of the Enterprise Data Hub, ensuring it serves as a robust single source of quality-assured data. Working closely with the Chief Data & Technology Officer and stakeholders, you will help shape and measure the organisation's data strategy.
Key Responsibilities:
Lead data architecture initiatives and set data standards.
Develop and maintain the Enterprise Data Hub.
Collaborate with stakeholders to align data solutions with organisational goals.
Provide architectural direction and evaluate engineering designs.
Mentor and guide data governance and engineering teams.
What We Offer:
Opportunity to work with cutting-edge technologies and cloud platforms such as Azure and AWS.
Impactful involvement in transformative projects that drive strategic goals.
A collaborative and innovative work environment.
Professional development and growth opportunities.
Apply Today: Join today and help shape the future of data. If you are a skilled Data Architect, Data Engineer, or Data Analyst looking to transition into a Data Architect role and make a meaningful impact, we want to hear from you. Apply now to be part of the dynamic team!
02/07/2024
Full time
If you are a back-end Scala consultant and you are available now, I have a great opportunity for you. The role is for 6+ months and the position is fully remote.
Job Title: Backend Scala Engineer
Job Description: As a Backend Scala Engineer, you will be responsible for designing, developing, and maintaining microservices-based applications with a strong focus on data handling and cloud integration. Your primary language will be Scala, and you will leverage AWS services to deliver scalable and efficient solutions.
Key Responsibilities:
Microservices Development: Design, develop, and maintain back-end microservices using Scala, ensuring the high performance and scalability of the microservices architecture.
Cloud Integration: Use AWS services to deploy, manage, and scale applications, following best practices for cloud-native development.
Data Handling: Work with various data storage solutions, optimising data retrieval and storage, and ensure efficient data processing within the back-end system.
Must-Have Skills:
Scala: Strong proficiency in Scala programming, with experience in functional programming paradigms.
Microservices: In-depth knowledge of microservices architecture and design patterns, with proven experience building and deploying microservices in production.
AWS: Proficiency with AWS services (e.g. EC2, S3, Lambda, RDS), plus experience with cloud-based architecture and best practices.
Nice-to-Have Skills:
Redis: Knowledge of Redis for caching and in-memory data storage, and experience integrating Redis with back-end applications.
DynamoDB: Experience with DynamoDB or other NoSQL databases, and an understanding of designing scalable, performant data models in DynamoDB.
Join our team as a Backend Scala Engineer and use your expertise in Scala, microservices, and AWS to develop robust and scalable solutions. Your skills in Redis and DynamoDB will be a valuable asset in our innovative and dynamic environment.
Apply now to contribute to our back-end infrastructure and data handling capabilities! Darwin Recruitment is acting as an Employment Business in relation to this vacancy.
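The Redis "nice-to-have" in the listing above typically means the cache-aside pattern: check the cache first, fall back to the backing store on a miss, then populate the cache with a TTL. The role's stack is Scala, so the following is purely an illustrative Python sketch; the in-memory dict stands in for a real Redis client, and `fetch_from_db` is a hypothetical placeholder for a DynamoDB or RDS lookup.

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: cache hit -> return cached value;
    cache miss or expired entry -> query the backing store, then
    repopulate the cache with a time-to-live."""

    def __init__(self, fetch_from_db, ttl_seconds=60):
        self._fetch = fetch_from_db   # hypothetical DB lookup callable
        self._ttl = ttl_seconds
        self._cache = {}              # stand-in for a Redis connection

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires = entry
            if time.monotonic() < expires:
                return value          # cache hit: no DB round trip
        value = self._fetch(key)      # miss or expired: hit the DB
        self._cache[key] = (value, time.monotonic() + self._ttl)
        return value
```

With a real Redis client the dict operations become `GET`/`SETEX` calls, and the TTL is enforced server-side rather than checked in application code.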
01/07/2024
Project-based