DevOps/Platform Engineer
Permanent / Financial Services (Investment Bank, Hedge Fund, Asset Management)
Central London / Full Time On-Site
Salary c£100K (negotiable) plus bonus and full benefits package

Overview
An opportunity to work within the Front Office Platform team of this global financial services organisation. You will deliver technology solutions to Quantitative Development teams, who partner with portfolio managers and trading teams to provide technology and data-analysis consultancy and development. You should offer a minimum of 5-10 years' experience as a Platform Engineer, DevOps Engineer or SRE who has worked extensively on advanced systems within Investment Banks, Hedge Funds, Asset Managers or other Financial Services firms, or within organisations with high-performance platforms such as Energy firms or Media companies with programmatic trading.

You will:
- Be involved in the build-out of key platform features for Front Office
- Contribute to a modern, automated, cloud-native, continuously delivered stack
- Collaborate closely with quant development teams on the requirements for strategic internal products
- Improve the security, reliability and cost efficiency of the Front Office platform and infrastructure
- Automate as much as possible, where appropriate
- Understand the nuances and challenges of developer experience for internal developer platforms

We require individuals who are strong on the design/architecture side of platforms, as opposed to being predominantly Ops focussed.

Your experience should include:
- Expertise with any standard CI/CD technology, with an understanding of different deployment units: servers, containers, serverless
- Strong cloud experience of AWS, plus GCP or Azure
- Automation scripting
- Highly proficient in Linux
- Broad database knowledge
- Kubernetes, Docker, Serverless Framework, AWS SAM
- GitHub Actions, Argo CD
- Python, Packer, PostgreSQL
- Knowledge of distributed computing and accelerator frameworks (Ray/Dask/JAX)

Networking People (UK) is acting as an Employment Agency in relation to this vacancy.
19/09/2024
Full time
Description
Methods Business and Digital Technology Limited
Methods is a £100M+ IT Services Consultancy which has partnered with a range of central government departments and agencies to transform the way the public sector operates in the UK. Established over 30 years ago and UK-based, we apply our skills in transformation, delivery, and collaboration from across the Methods Group to create end-to-end business and technical solutions that are people-centred, safe, and designed for the future.

Our human touch sets us apart from other consultancies, system integrators and software houses. With people, technology, and data at the heart of who we are, we believe in creating value and sustainability through everything we do for our clients, staff, communities, and the planet. We support our clients in the success of their projects while working collaboratively to share skill sets and solve problems. At Methods we have fun while working hard; we are not afraid of making mistakes and learning from them. Predominantly focused on the public sector, Methods is now building a significant private sector client portfolio. Methods was acquired by the Alten Group in early 2022.

Requirements
- The development, management and support of the infrastructure that underpins the platforms, applications, and data which support the business
- Automating where possible to facilitate the rapid, secure delivery of approved capabilities to their respective environments
- Good experience developing Infrastructure as Code to automate the creation of infrastructure from development all the way to production
- Passion for improving ways of working and best practices by understanding the customer and market trends
- Understanding the needs of stakeholders and conveying these to the target audience
- Testing and examining code written by others and providing approval as part of the governance and review process
- Ensuring that systems are safe and secure against cybersecurity threats, keeping in mind that systems must be secure by design
- Familiarity with the NCSC secure design principles
- Familiarity with managing the security of platforms, whether cloud or on-premises, including administration of secrets, tokens, and certificates
- Working with the team (business, architecture, engineering, security, data) to ensure that development and delivery follow established processes and work as intended
- Planning out projects and being involved in project management decisions
- Responsibility for the design, security, and maintenance of on-prem/cloud infrastructure
- Making and guiding effective decisions, explaining clearly how each decision has been reached, with the ability to understand and resolve technical disputes across varying levels of complexity and risk
- Communicating effectively across organisational, technical, and political boundaries to understand the context, and making complex technical information and language simple and accessible for non-technical audiences
- Understanding how to expose data from systems (for example through APIs), link data from multiple systems, and deliver streaming services
- Ensuring that risks associated with deployment are adequately understood and documented
- Integrating security features into the software development life cycle: identifying probable security risks and their mitigation strategies, implementing security controls, monitoring the infrastructure and threats to its security, and ensuring regulatory compliance with security standards. This enables early detection of security vulnerabilities, faster deployment of secure software, better compliance with security standards and regulations, and greater visibility into security risks and threats
- Experience or familiarity with working in an agile delivery methodology

Ideal candidates will demonstrate:
- Experience working with many teams; experience with security teams especially would be beneficial
- Solid infrastructure design experience for on-prem environments, implementing or migrating applications and databases
- Experience with hybrid designs spanning on-premises and cloud
- Solid experience across a range of technologies, with the ability to assess what is best for the project and the organisation, and to suggest and develop innovative approaches within constrained projects and environments
- Strong experience in software development change/release management processes and technical governance, with a full understanding of the typical life cycle and maintenance of live systems
- Ability to work with containerisation platforms such as Kubernetes, PKS and Docker; provisioning software including Ansible, Terraform and YAML; and application/infrastructure/data performance analysis and monitoring
- Experience of functional and non-functional testing
- Experience with automated deployment of applications, databases and infrastructure
- Understanding of the Government Digital Service (GDS) manual and standards across Discovery/Alpha/Beta/Live phases
- Understanding of SaaS, PaaS and IaaS technologies, and the implications of their use compared with bespoke development
- Ability to provide training, support, and mentoring to the wider business
- Knowledge of how to ensure that risks associated with deployment are adequately understood and documented

Desirable Skills & Experience:
- Worked as part of a system support team managing live systems and triaging incidents through to resolution, including management of known defects and issues
- Worked as part of a multi-disciplinary project team
- Experience with Terraform and YAML to deploy on-prem/cloud infrastructure
- Experience with automation tools to build and deploy containerised applications
- Experience implementing effective instrumentation to monitor applications
- Experience implementing SAST and DAST tooling, such as Trivy and SonarQube, in deployment pipelines
- Experience with on-prem DevOps tooling

This role will require you to have, or be willing to go through, Security Clearance. As part of the onboarding process candidates will be asked to complete a Baseline Personnel Security Standard check; details of the evidence required may be found on the government website Gov.UK. If you are unable to meet this and any associated criteria, your employment may be delayed or rejected. Details of this will be discussed with you at interview.

Benefits
Methods is passionate about its people; we want our colleagues to develop the things they are good at and enjoy. By joining us you can expect:
- Autonomy to develop and grow your skills and experience
- Exciting project work that is making a difference in society
- Strong, inspiring and thought-provoking leadership
- A supportive and collaborative environment
- Development - access to LinkedIn Learning, a management development programme, and training
- Wellness - 24/7 confidential employee assistance programme
- Flexible Working - including home working and part time
- Social - office parties, breakfast Tuesdays, monthly pizza Thursdays, Thirsty Thursdays, and commitment to charitable causes
- Time Off - 25 days of annual leave a year, plus bank holidays, with the option to buy 5 extra days each year
- Volunteering - 2 paid days per year to volunteer in our local communities or within a charity organisation
- Pension - Salary Exchange Scheme with 4% employer contribution and 5% employee contribution
- Discretionary Company Bonus - based on company and individual performance
- Life Assurance - 4 times base salary
- Private Medical Insurance - non-contributory (spouse and dependants included)
- Worldwide Travel Insurance - non-contributory (spouse and dependants included)
- Enhanced Maternity and Paternity Pay
- Travel - season ticket loan, cycle to work scheme

*SOLE BRITISH NATIONALS ONLY* *SC REQUIRED*
19/09/2024
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

Prestigious Financial Institution is currently seeking a Senior Java Software Engineer. The candidate will support and work collaboratively with business analysts, team leads and the development team; contribute to developing scalable and resilient hybrid and cloud-based data solutions supporting critical financial market clearing and risk activities; and collaborate with other developers, architects and product owners to support the enterprise transformation into a data-driven organization. The Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment.

Responsibilities:
- Support the application development of Real Time and batch applications for business requirements in the agreed architecture framework and Agile environment
- Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation
- Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, and security rules, and that external-facing reporting is properly represented
- Perform application and project risk analysis and recommend quality improvements
- Assist Production Support by providing advice on system functionality and fixes as required
- Communicate all time delays or defects in the software clearly, concisely and immediately to appropriate team members and management
- Experience with resolving security vulnerabilities

Qualifications:
The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
- [Required] 3+ years of experience in building high-speed, Real Time and batch solutions
- [Required] 3+ years of experience in Java
- [Preferred] Experience with high-speed distributed computing frameworks such as Flink, Apache Spark, Kafka Streams, etc.
- [Preferred] Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- [Preferred] Experience with cloud technologies and migrations; experience preferred with AWS foundational services such as VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM
- [Preferred] Experience developing and delivering technical solutions using public cloud service providers such as Amazon and Google
- [Required] Experience writing unit and integration tests with testing frameworks such as JUnit and Citrus
- [Required] Experience working with various types of databases (relational, NoSQL)
- [Required] Experience working with Git
- [Preferred] Working knowledge of DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines
- [Preferred] Familiarity with monitoring-related tools and frameworks such as Splunk, Elasticsearch, Prometheus and AppDynamics
- [Required] Hands-on experience with Java version 8 onwards, Spring, Spring Boot and REST APIs

Technical Skills:
- [Required] Java-based software development experience, including a deep understanding of Java fundamentals such as data structures, concurrency and multithreading
- [Required] Experience in object-oriented design and software design patterns

Education and/or Experience:
- [Required] BS degree in Computer Science or a similar technical field
19/09/2024
Full time
NO SPONSORSHIP
Software Engineering - Python, Java, Terraform, DevOps, Containerization

Candidates do not necessarily have to have worked within a QRM (Quantitative Risk Management) team, but they must understand the industry and come from a highly regulated background, preferably financial. We are looking for a hardcore developer who can work within quantitative risk management, developing applications and solutions for the QRM team. The team does not build models; it automates them. Candidates will typically hold a master's degree in mathematics, statistics, physics, or computer science, and may even have a PhD. They need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python or C++. Key technologies: AWS, CI/CD, Java, C#, Python, Agile/Scrum, Infrastructure as Code, Kubernetes, Terraform, Splunk, OpenTelemetry, SQL, big data, scripting in Python. Knowledge of financial products (markets and financial derivatives across equities, interest rates and commodity products) is a plus.

Responsibilities:
- Develop and maintain risk model software in production for managing the clearing fund and stress testing
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives
- Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources
- Develop CI/CD pipelines
- Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring
- Contribute to the development of QRM's databases and ETLs
- Integrate model prototypes, the model library and model testing tools using best industry practices and innovations
- Create unit and integration tests; build and enhance test automation tools
- Participate in code reviews and demo accomplishments
- Write technical documentation and user manuals
- Provide production support and perform troubleshooting

Qualifications:
- Strong programming skills: able to read and write code in a programming language (e.g. Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills
- A track record of complex production implementations and a demonstrated ability to develop and maintain enterprise-level software, including in cloud environments
- Proficiency in technical and/or scientific documentation (e.g. white papers, user guides)
- Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources
- Experience with Agile/Scrum or another rapid development framework
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices
- DevOps experience, with a good command of the CI/CD process and tools (e.g. Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness)
- Experience with containerized deployment in cloud environments
- Experience with cloud technology (AWS preferred), infrastructure as code (e.g. Terraform), and managing and orchestrating containerized workloads (e.g. Kubernetes)
- Experience with logging, profiling, monitoring and telemetry (e.g. Splunk, OpenTelemetry)
- Good command of database technology and query languages (SQL), non-relational databases and other big data technology, including efficient storage and serialization protocols (e.g. Parquet, Avro, Protocol Buffers)
- Experience with automated quality assurance frameworks (e.g. JUnit, TestNG, PyTest)
- Experience with high-performance and distributed computing
- Experience with productivity tools such as Jira, Confluence and MS Office
- Experience with scripting languages such as Python is a plus
- Experience with numerical libraries and/or scientific computing is a plus

Education and/or Experience:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics
- 7+ years of experience as a software developer with exposure to cloud or high-performance computing
19/09/2024
Full time
REMOTE DevOps Engineer (Senior DevOps Engineer SRE Site Reliability Engineer Java Python Automation Data Lake Datalake Data Mesh CI/CD Kafka Big Data AWS SQL Finance Trading Contract Contractor Consultant Financial Services Banking Remote Working Trading Cloud Projects Ireland Dublin Banking) required by our financial services client in Dublin, Ireland. You MUST have the following: Good experience as a DevOps Engineer/SRE/Site Reliability Engineer AWS EKS - Kubernetes Docker Terraform Good Scripting (Python, Java, Golang, Bash, Shell etc) The following is DESIRABLE, not essential: Experience in large data environments Role: REMOTE DevOps Engineer required by our financial services client in Dublin, Ireland. You will join a central data engineering team of 8 that is working on a project to migrate their AWS-based data lake to a data mesh architecture. You will join two other DevOps Engineers and be tasked, over the next 12-24 months, with helping to build the greenfield automation environment for their new data mesh setup. You will also work to modernize the existing data platform and the automation required for that. That will mean setting up Terraform and EKS. To build the environment for the data mesh, you will be working with Docker, Kubernetes (EKS), Terraform, GitLab and GitHub Actions. For this role, you will need to have worked in a large enterprise environment. If you have also worked in a very data-intensive environment, that would be beneficial but is not essential. You will need to bring good AWS experience, including EKS, Scripting, Terraform and Docker. Any additional overlap is desirable. This role is 100% remote but you will need to work roughly around Irish/UK hours.
You will also have to be based in Ireland. This will likely begin as a 12-month contract and continue long-term. Duration: 12-24 months. Rate: €450-525/day
19/09/2024
Contractor
Full Stack Python Developer - Front Office - SOLE AGENT Our client, a leading global investment firm, requires a talented Python Developer to join their team. This is an on-site position, in our client's London office. You will provide first-class support for Deal Teams, Portfolio Managers and other business functions locally, as well as for other key regions, as part of a global team. Sitting with the trading team, you will build strong relationships with key business stakeholders; supporting and developing trading, analytics and reporting systems; an opportunity to participate in all aspects of the application development life cycle, including requirements analysis, application development, and devising test cases, while working closely with a spectrum of business functions like operations, finance, compliance, etc. Ideally you will have prior experience of working directly with financial investment professionals, and experience in full-stack development using modern technology frameworks. YOUR SKILLS Strong Python experience Knowledge of relational databases, and other data storage solutions, experience with SQL Excellent communication and relationship building skills. 5+ years of programming experience Understanding of programming design concepts, data structures, and algorithms Experience with modern development methodologies Familiarity with Front End libraries/frameworks Understanding of API development with HTTP, REST and JSON (Python-Flask/Django preferred) Strong troubleshooting and analytical skills; detail oriented Strong cultural fit - Teamwork, proactive/self-starter, results oriented and integrity ADDITIONAL BENEFICIAL SKILLS/KNOWLEDGE Experience in one or more of bank loans/leveraged loans, fixed-income products, CLOs, derivatives, ABS and CMBS products Working knowledge of Linux, Docker/Kubernetes Experience in or readiness to learn building applications using the modern technology stack: Cloud/AWS, DevOps, etc.
WHAT WILL YOU BE DOING Acting as a first point of contact for business teams to provide timely assistance with data queries, system enhancements, and other technical requests. Work directly with business users to perform requirements analysis, application design and implementation. In collaboration with the wider engineering team, develop systems ranging from larger multi-tier applications and frameworks to simpler reports. Ensure a strong focus on the SDLC, with automated unit and regression tests. Create and maintain a professional-level internal knowledge base. Provide system training to business users and new joiners. Align with and add to the culture and overall vision/mission of the team. This represents an excellent opportunity to join one of the world's leading investment firms. Please send your CV for full details.
19/09/2024
Full time
Senior Site Reliability Engineer - FinTech - Azure, Docker, Kubernetes, Terraform, CI/CD Oliver Bernard are currently seeking a Senior SRE to join a well-established team for a FinTech company in Poland. This hire is part of a period of transformation across the business, focused on expanding their global product and instilling a strong DevOps culture whilst driving transformation and innovation. Having grown and acquired new business in the last year, they require a Senior-level Engineer to support their DevOps team in their efforts to scale through a series of greenfield projects focused on Azure, Terraform, CI/CD, Monitoring, Automation & more. The ideal candidate will have at least 3-4 years' SRE/DevOps experience, ideally operating in a Senior capacity in their current role, and be able to work across the following technologies: Experience with Azure Cloud & Azure Services Container work with Docker and Kubernetes IaC with Terraform, alongside Automation with Ansible Strong CI/CD knowledge, with hands-on work across Azure DevOps Prior work with tools such as TeamCity, Octopus Deploy etc This is a remote opening for EU candidates, and can offer €60-80K for the right profile. Please apply here if this opportunity could be of interest.
18/09/2024
Full time
NO SPONSORSHIP Associate Principal, Software Programming Quantitative Risk Management Area Associate Principal, Software Engineering Automating Risk Models Chicago - On site 3 days a week Salary - $185-$195K + Bonus Looking for a hardcore developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. Develop hardcore applications. You will need to have CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc. Preferably having Java, Python, C++. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills, including in the cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
17/09/2024
Full time
NO SPONSORSHIP AWS Cloud Engineer SALARY: $115K-$120K and a 10% Bonus LOCATION: Chicago, IL. Hybrid - 2 days remote and 3 days onsite SELLING POINTS: Bash Python Scripting AWS Kubernetes CI/CD GitHub Jenkins Artifactory Docker Compose K8s Kafka RabbitMQ Amazon Kinesis Terraform Ansible Jenkins Helm Linux Linux Shell Scripting Splunk Infrastructure as Code (IaC) Qualifications: Programming/Scripting experience in languages like Java, Bash, Python or Go Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (examples - GitHub, Jenkins, Artifactory, Docker, Compose, K8s) Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Experience with cloud technologies and migrations Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google, etc. Familiarity with monitoring related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics Experience with RESTful APIs and JSON RPC Experience following Git workflows Technical Skills: Experience with Linux and Linux Shell Scripting. Jenkins job setup and execution analysis - including Splunk log review for Root Cause Analysis (RCA). Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks. Experience with automation, configuration management and orchestration, infrastructure as code. Experience with Golang or Python is a plus.
BS degree in Computer Science, similar technical field, or equivalent experience 1+ years of experience in building large scale, data-centric solutions 3+ years of experience (recent) participating on a DevOps team or as product owner for DevOps team
17/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* A prestigious company is on the search for a Senior Associate, Cloud Engineer. This company is looking for a cloud engineer with 3+ years' experience with Bash, Python, AWS, Kubernetes, CI/CD, Ansible, Terraform, Linux Shell, IaC, etc. Responsibilities: Enable development teams to self-service build and deployment processes through process automation. Assist in designing process improvements across the build, deployment, and monitoring of Clearing applications. Support the maintenance and configuration of development environments in Kubernetes and AWS. Support Terraform, Ansible, Harness, and Jenkins jobs used to instantiate and manage development environments. Qualifications: BS degree in Computer Science, similar technical field, or equivalent experience 1+ years of experience in building large scale, data-centric solutions 3+ years of experience (recent) participating on a DevOps team or as product owner for a DevOps team Programming/Scripting experience in languages like Java, Bash, Python or Go Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (examples - GitHub, Jenkins, Artifactory, Docker, Compose, K8s) Experience with distributed message brokers Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipeline etc. Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google, etc. Familiarity with monitoring related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics Experience with RESTful APIs and JSON RPC Experience following Git workflows Experience with Linux and Linux Shell Scripting. Jenkins job setup and execution analysis - including Splunk log review for Root Cause Analysis (RCA).
Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io. Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks. Experience with automation, configuration management and orchestration, and infrastructure as code.
17/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* *NO CONTRACTORS OR CONSULTANTS* A prestigious company is looking for an Associate Principal, Backend Java Developer. This company needs someone with 7-10 years of experience focused on Back End Java development: Java 11, Kafka, Golang, Multithreading, AWS, etc. They will be working in a real-time and highly regulated financial environment. Responsibilities: Actively participates in design of high-performing, scalable, secure, reliable and cost-optimized solutions. Primary responsibility is application design and development of next-gen clearing applications for business requirements in an agreed architecture framework and Agile environment. Thoroughly analyzes requirements, develops, tests, and documents software quality to ensure proper implementation. Follows agreed-upon SDLC procedures to ensure that all information system products and services meet both explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, and audit requirements; that security rules are upheld; and that external-facing reporting is properly represented. Participates in code reviews based on high engineering standards. Writes unit and integration tests based on chosen test frameworks. Assists Production Support by providing advice on system functionality and fixes as required. Qualifications: BS degree in Computer Science or similar technical field required; Master's preferred. 7-10 years of experience in building large scale, compute and event-driven solutions. Experience (including internal workings of Java) in Java 11+ is required. Experience with app development in Golang. Experience developing software using Object Oriented Designs, advanced patterns (like AOP) and multi-threading is required. Experience with distributed message brokers like Kafka, IBM MQ, Amazon Kinesis, etc. is desirable.
Experience with cloud technologies and migrations. Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM etc. Must be able to write good quality code with 80% or above unit and integration test coverage. Experience with testing frameworks like JUnit, Citrus is desirable. Experience working with various types of databases like Relational, NoSQL, Object-based, Graph. Experience following Git workflows is required. Familiarity with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Docker, Helm and CI/CD pipeline etc. is a plus. Experience with performance optimization, profiling, and memory management.
17/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* *NO CONTRACTORS OR CONSULTANTS* A prestigious company is looking for an Associate Principal, Backend Java Developer. This company needs someone with 7-10 years of experience focused on Back End Java development, Java 11, Kafka, Golang, Multithreading, AWS, etc. They will be working in a Real Time and highly regulated financial environment. Responsibilities: Actively participates in design of highly performing, scalable, secure, reliable and cost optimized solutions. Primary responsibility is application design and development of next gen clearing applications for business requirements in agreed architecture framework and Agile environment. Thoroughly analyzes requirements, develops, tests, and documents software quality to ensure proper implementation. Follows agreed upon SDLC procedures to ensure that all information system products and services meet: both explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements, security rules are upheld, and external facing reporting is properly represented. Participates in code-reviews based on high engineering standards Writes unit and integration tests based on chosen test frameworks. Assists Production Support by providing advice on system functionality and fixes as required. Qualifications: BS degree in Computer Science, similar technical field required. Masters preferred. 7-10 years of experience in building large scale, compute and event-driven solutions. Experience (including internal workings of Java) in Java 11+ is required. Experience with app development in Golang. Experience developing software using Object Oriented Designs, advance patterns (like AOP) and multi-threading is required. Experience with distributed message brokers like Kafka, IBM MQ, Amazon Kinesis, etc. is desirable. 
Experience with cloud technologies and migrations. Experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, the AWS CLI and IAM. Must be able to write good-quality code with 80% or higher unit and integration test coverage. Experience with testing frameworks like JUnit and Citrus is desirable. Experience working with various types of databases: relational, NoSQL, object-based, graph. Experience following Git workflows is required. Familiarity with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Docker, Helm and CI/CD pipelines, is a plus. Experience with performance optimization, profiling, and memory management.
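The 80%-coverage expectation above can be made concrete with a small test sketch. The listing's stack is Java/JUnit; shown here is a Python `unittest` analogue of the same idea, and `apply_fee` is a hypothetical stand-in for a business function, not part of the role:

```python
import unittest

def apply_fee(balance: float, fee_rate: float) -> float:
    """Hypothetical business function: deduct a percentage fee from a balance."""
    if fee_rate < 0 or fee_rate > 1:
        raise ValueError("fee_rate must be between 0 and 1")
    return round(balance * (1 - fee_rate), 2)

class ApplyFeeTest(unittest.TestCase):
    # Cover the happy path, a boundary case, and the error path --
    # the mix of cases a coverage target like 80% pushes you toward.
    def test_typical_fee(self):
        self.assertEqual(apply_fee(100.0, 0.02), 98.0)

    def test_zero_fee(self):
        self.assertEqual(apply_fee(50.0, 0.0), 50.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            apply_fee(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```

In practice, coverage itself would be measured by a tool (JaCoCo for Java, coverage.py for Python) rather than counted by hand.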
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. 
Responsibilities: Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques. Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proofs of Concept for infrastructure, applications and services that impact the Technology Roadmap. Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT. Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems. Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security. Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection. Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery. Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high-level Solution Designs. Qualifications: Ability to think strategically and map architectural decisions/recommendations to business needs. Advanced problem-solving skills and a logical approach to solving problems. [Required] Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc. [Preferred] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines. [Preferred] Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Technical Skills: In-depth knowledge of on-premises, cloud and hybrid networking concepts. Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager. Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes. [Preferred] Familiarity with security standards such as the NIST CSF. Education and/or Experience: [Preferred] Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience. [Required] 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment. [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem solutions. Certificates or Licenses: [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator or similar. [Desired] Technical security certifications such as AWS Certified Security or Microsoft Azure Security Engineer. [Desired] CCNA, Network+ or other relevant networking certifications
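The "automation scripts for backup and recovery" responsibility above can be sketched with a stdlib-only Python example; all paths and names are hypothetical, and a real Lab automation script would add logging, retention policies, and error handling:

```python
import tarfile
from datetime import datetime, timezone
from pathlib import Path

def backup_directory(source: Path, dest_dir: Path) -> Path:
    """Create a timestamped .tar.gz archive of `source` inside `dest_dir`."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = dest_dir / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store the directory under its own name so restores are self-describing.
        tar.add(source, arcname=source.name)
    return archive

def restore_backup(archive: Path, target_dir: Path) -> None:
    """Unpack an archive produced by backup_directory into `target_dir`."""
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(target_dir)
```

A scheduler (cron, systemd timer, or a pipeline job) would typically invoke `backup_directory` on a cadence, with `restore_backup` exercised regularly to prove recovery actually works.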
16/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
16/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming platform. They will need expert-level knowledge of Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: Bachelor's or Master's degree in an engineering discipline. 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. 10+ years of experience with Java. 5+ years of specific Kafka and Flink experience. 5+ years of Kubernetes experience. Expert-level knowledge of Kafka. Expert-level knowledge of Flink. Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines.
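The windowed, low-latency aggregations that a Kafka/Flink stack performs can be illustrated at toy scale with a pure-Python tumbling-window count. This uses no Flink APIs and is purely conceptual; the event shape `(timestamp_ms, key)` is an assumption for the sketch:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Group (timestamp_ms, key) events into fixed-size tumbling windows
    and count occurrences per key -- a toy model of the keyed windowed
    aggregations an engine like Flink performs at scale."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to exactly one window: [window_start, window_start + window_ms).
        window_start = (ts // window_ms) * window_ms
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Example: four events across three 10ms windows.
result = tumbling_window_counts(
    [(5, "a"), (15, "a"), (15, "b"), (25, "a")], window_ms=10
)
```

A real streaming engine adds what this sketch omits: out-of-order handling via watermarks, checkpointed state for fault tolerance, and parallel execution keyed across partitions.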
16/09/2024
Full time
Methods Business and Digital Technology Limited
Gloucester, Gloucestershire
Senior Back End Developer (Cyber) Location: On-site 5 days (Worcester/Ebbw Vale/Gloucester/Great Malvern) Company: Methods Business and Digital Technology Limited About Us: Methods is a leading £100M+ IT Services Consultancy with a rich history of transforming the public sector in the UK. With over 30 years of experience, we collaborate with central government departments and agencies to create innovative, people-centred solutions. Now expanding into the private sector, we continue to drive success through our commitment to technology, data, and a human touch. Role Overview: We are seeking a highly skilled Senior Back End Developer to join our dynamic team. The ideal candidate will have strong expertise in Python and SQL, with a proven track record of developing and maintaining robust Back End systems. You will collaborate closely with Front End developers, data engineers, and product managers to build scalable, efficient applications that meet user needs. Key Responsibilities: Design, develop, and maintain reliable Back End systems using Python and SQL. Utilize frameworks like Django, Flask, FastAPI, Asyncio, Aiohttp, and SQLAlchemy. Develop and document RESTful APIs, WebSocket, and GraphQL services. Manage and optimize data stores (PostgreSQL, NATS, Redis, MinIO). Implement cloud-based solutions using Microsoft Azure services. Ensure security protocols with OAuth and KeyCloak. Conduct testing with SonarQube, Pytest, isort, black, and bandit. Use Git for version control. Implement containerization and orchestration with Docker, Kubernetes, and Helm. Develop CI/CD pipelines with GitHub Actions and Azure DevOps Pipelines. Collaborate using Jira and Confluence. Monitor and enhance system performance with Prometheus and Grafana. Requirements: Extensive experience as a Senior Back End Developer. Proficient in Python and SQL. Skilled with frameworks and libraries: Django, Flask, FastAPI, Asyncio, Aiohttp, SQLAlchemy. 
Experience in developing/managing RESTful APIs, WebSocket, and GraphQL services. Data store management expertise (PostgreSQL, NATS, Redis, MinIO). Hands-on with Microsoft Azure services. Security implementation knowledge (OAuth, KeyCloak). Testing proficiency (SonarQube, Pytest, isort, black, bandit). Version control with Git. Experience with Docker, Kubernetes, Helm. CI/CD processes familiarity (GitHub Actions, Azure DevOps Pipelines). Excellent collaboration and communication skills. Problem-solving abilities. Security Clearance: This role will require you to have, or be willing to go through, Security Clearance. As part of the onboarding process candidates will be asked to complete a Baseline Personnel Security Standard; details of the evidence required to apply may be found on the government website Gov.UK. If you are unable to meet this and any associated criteria, your employment may be delayed or your application rejected. Details of this will be discussed with you at interview. Benefits: Methods is passionate about its people; we want our colleagues to develop the things they are good at and enjoy. 
By joining us you can expect Autonomy to develop and grow your skills and experience Be part of exciting project work that is making a difference in society Strong, inspiring and thought-provoking leadership A supportive and collaborative environment Development - access to LinkedIn Learning, a management development programme, and training Wellness - 24/7 confidential employee assistance programme Flexible Working - including home working and part time Social - office parties, breakfast Tuesdays, monthly pizza Thursdays, Thirsty Thursdays, and commitment to charitable causes Time Off - 25 days of annual leave a year, plus bank holidays, with the option to buy 5 extra days each year Volunteering - 2 paid days per year to volunteer in our local communities or within a charity organisation Pension - Salary Exchange Scheme with 4% employer contribution and 5% employee contribution Discretionary Company Bonus - based on company and individual performance Life Assurance - of 4 times base salary Private Medical Insurance - which is non-contributory (spouse and dependants included) Worldwide Travel Insurance - which is non-contributory (spouse and dependants included) Enhanced Maternity and Paternity Pay Travel - season ticket loan, cycle to work scheme For a full list of benefits please visit our website
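The Asyncio work this listing names can be sketched minimally with the standard library alone. The fetch targets below are hypothetical stand-ins for the real database or HTTP calls an Aiohttp/SQLAlchemy back end would make:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an async DB or HTTP call; the names are hypothetical.
    await asyncio.sleep(delay)
    return f"{name}:done"

async def gather_sources() -> list:
    # Run several I/O-bound calls concurrently rather than sequentially --
    # total latency approaches the slowest call, not the sum of all calls.
    return await asyncio.gather(
        fetch("rates", 0.01),
        fetch("positions", 0.01),
        fetch("limits", 0.01),
    )

results = asyncio.run(gather_sources())
```

This concurrency pattern is the core reason frameworks like FastAPI and Aiohttp are chosen for I/O-heavy back ends.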
16/09/2024
Full time
Full Stack Python Developer - Front Office - SOLE AGENT Our client, a leading global investment firm, requires a talented Python Developer to join their team. This is an on-site position in our client's London office. You will provide first-class support for Deal Teams, Portfolio Managers and other business functions locally, as well as for other key regions, as part of a global team. Sitting with the trading team, you will build strong relationships with key business stakeholders; support and develop trading, analytics and reporting systems; and participate in all aspects of the application development life cycle, including requirements analysis, application development, and devising test cases, while working closely with a spectrum of business functions like operations, finance, compliance, etc. Ideally you will have prior experience of working directly with financial investment professionals, and experience in full-stack development using modern technology frameworks. YOUR SKILLS Strong Python experience. Knowledge of relational databases and other data storage solutions; experience with SQL. Excellent communication and relationship-building skills. 5+ years of programming experience. Understanding of programming design concepts, data structures, and algorithms. Experience with modern development methodologies. Familiarity with Front End libraries/frameworks. Understanding of API development with HTTP, REST and JSON (Python-Flask/Django preferred). Strong troubleshooting and analytical skills; detail oriented. Strong cultural fit - teamwork, proactive/self-starter, results oriented and integrity. ADDITIONAL BENEFICIAL SKILLS/KNOWLEDGE Experience in one or more of bank loans/leveraged loans, fixed-income products, CLOs, derivatives, ABS and CMBS products. Working knowledge of Linux, Docker/Kubernetes. Experience in, or readiness to learn, building applications using the modern technology stack: Cloud/AWS, DevOps, etc. 
WHAT WILL YOU BE DOING Acting as a first point of contact for business teams to provide timely assistance with data queries, system enhancements, and other technical requests. Work directly with business users to perform requirements analysis, application design and implementation. In collaboration with the wider engineering team, develop systems ranging from larger multi-tier applications and frameworks to simpler reports. Maintain a strong focus on the SDLC, including automated unit and regression tests. Create and maintain a professional-level internal knowledge base. Provide system training to business users and new joiners. Align with, and add to, the culture and overall vision/mission of the team. This represents an excellent opportunity to join one of the world's leading investment firms. Please send your CV for full details.
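The HTTP/REST/JSON API development this listing asks for can be sketched with only the Python standard library. The `/positions` route and its data are hypothetical; a production service would use Flask or Django as the listing suggests:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

POSITIONS = {"AAPL": 100, "MSFT": 250}  # hypothetical in-memory data

class PositionsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/positions":
            body = json.dumps(POSITIONS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Silence per-request logging for this sketch.
        pass

def serve_in_background(port: int = 0) -> HTTPServer:
    """Start the server on an ephemeral port in a daemon thread."""
    server = HTTPServer(("127.0.0.1", port), PositionsHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A framework like Flask would collapse the handler to a few lines, but the request/response mechanics are the same.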
16/09/2024
Full time
Full Stack Python Developer - Front Office - SOLE AGENT Our client, a global leading investment firm, requires a talented Python Developer to join their team. This is an on-site position, in our client's London office. You will provide first class support for Deal Teams, Portfolio Managers and other business functions locally, as well as for other key regions, as part of a global team. Sitting with the trading team, you will build strong relationships with key business stakeholders; supporting and developing trading, analytics and reporting systems; an opportunity to participate in all aspects of the application development life cycle, including requirements analysis, application development, and devising test cases, while working closely with a spectrum of business functions like operations, finance, compliance, etc. Ideally you will have prior experience of working directly with financial investment professionals, and experience in full-stack development using modern technology frameworks. YOUR SKILLS Strong Python experience Knowledge of relational databases, and other data storage solutions, experience with SQL Excellent communication and relationship building skills. 5+ years of programming experience Understanding of programming design concepts, data structures, and algorithms Experience with modern development methodologies Familiarity with Front End libraries/frameworks Understanding of the API development with HTTP, REST and JSON (Python-Flask/Django preferred) Strong troubleshooting and analytical skills; detail oriented Strong cultural fit - Teamwork, proactive/self-starter, results oriented and integrity ADDITIONAL BENEFICIAL SKILLS/KNOWLEDGE Experience in one or more ofbank loans/leveraged loans, fixed-income products, CLOs, derivatives, ABS and CMBS products Working knowledge of Linux, Docker/Kubernetes Experience in or readiness to learn building applications using the modern technology stack: Cloud/AWS, DevOps, etc. 
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. Candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive architectural vision, implementation and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms such as Kafka, Flink and Kubernetes in a hybrid environment. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Collaborate with DevOps teams to define deployment strategies and manage scalability. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Maintain vendor relationships and participate in escalation sessions and postmortems. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: [Required] Strong communication skills to collaborate with technical stakeholders and evangelize best practices. [Required] Advanced problem-solving skills and a logical approach to solving problems [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
[Required] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines. Technical Skills: Expert-level knowledge of Kafka Expert-level knowledge of Flink In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes Education and/or Experience: [Preferred] Bachelor's or Master's degree in an engineering discipline [Required] 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures [Required] 10+ years of experience with Java [Required] 5+ years of specific Kafka and Flink experience [Preferred] 5+ years of Kubernetes experience Certificates or Licenses: [Preferred] Confluent Certified Developer for Apache Kafka [Preferred] AWS certifications (eg Solutions Architect Associate) [Preferred] Certified Kubernetes Application Developer
13/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*Hybrid, 3 days onsite, 2 days remote* A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role is focused on engineering and maintaining lab environments in public cloud and data centers using IaC techniques. This person will need experience with DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, AWS, etc. This person will also need experience developing tools and automating tasks using languages such as Python, PowerShell, Bash. Responsibilities: Engineer and maintain Lab environments in Public Cloud and Data Centers using Infrastructure as Code techniques Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proofs of Concept for company infrastructure, applications and services that impact the Technology Roadmap Document Technology design decisions and conduct Technology assessments as part of a centralized Demand Management process within IT Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems Identify opportunities to improve existing infrastructure architecture for better performance, supportability, scalability, reliability, and security Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high-level Solution Designs Qualifications: Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment 3+ years of experience contributing to the architecture of Cloud and On-Prem Solutions Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. In-depth knowledge of on-premises, cloud and hybrid networking concepts Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
13/09/2024
Full time
Associate Principal, Software Programming - Quantitative Risk Management Area - Associate Principal, Software Engineering - Automating Risk Models On site 3 days a week Salary - $185 - $195K + Bonus Looking for a hard-core developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. Develop hard-core applications You will need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc. Preferably with Java, Python, C++ Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills in a cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience with containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics 7+ years of experience as a software developer with exposure to cloud or high-performance computing
12/09/2024
Full time