Senior Site Reliability Engineer - FinTech - Azure, Docker, Kubernetes, Terraform, CI/CD

Oliver Bernard are currently seeking a Senior SRE to join a well-established team at a FinTech company in Poland. This hire is part of a period of transformation across the business, focused on expanding their global product and instilling a strong DevOps culture whilst driving innovation. Having grown and acquired new business in the last year, they require a senior-level engineer to support their DevOps team as it scales through a series of greenfield projects centred on Azure, Terraform, CI/CD, monitoring, automation and more.

The ideal candidate will have at least 3-4 years of SRE/DevOps experience, ideally operating in a senior capacity in their current role, and be able to work across the following technologies:
- Azure Cloud and Azure services
- Container work with Docker and Kubernetes
- IaC with Terraform, alongside automation with Ansible
- Strong CI/CD knowledge, with hands-on work in Azure DevOps
- Prior work with tools such as TeamCity, Octopus Deploy, etc.

This is a remote opening for EU candidates and can offer €60-80K for the right profile. Please apply here if this opportunity could be of interest.
18/09/2024
Full time
NO SPONSORSHIP
Associate Principal, Software Engineering - Automating Risk Models (Quantitative Risk Management Area)
Chicago - on site 3 days a week
Salary: $185-195K + bonus

We are looking for a strong developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate them. You will need to come from a financial institution, trading company, exchange, or similar, with experience across CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc. Java, Python, or C++ is preferred.

Responsibilities:
- Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Contribute to the development of QRM's databases and ETLs.
- Integrate model prototypes, the model library, and model testing tools using best industry practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills in a cloud environment.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments.
- Experience with cloud technology (AWS preferred), infrastructure as code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).

Education and/or Experience:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 7+ years of experience as a software developer with exposure to cloud or high-performance computing.
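To illustrate the ETL responsibility named in the listing, here is a minimal, hypothetical sketch of an extract-transform-load step using only the Python standard library. The trade schema, column names, and transformation are illustrative assumptions, not details of QRM's actual systems.

```python
import csv
import io
import sqlite3

def load_trades(csv_text: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV text, normalize the symbol, and load into SQLite."""
    conn.execute("CREATE TABLE IF NOT EXISTS trades (symbol TEXT, qty INTEGER)")
    rows = [
        (r["symbol"].upper(), int(r["qty"]))        # transform step
        for r in csv.DictReader(io.StringIO(csv_text))  # extract step
    ]
    conn.executemany("INSERT INTO trades VALUES (?, ?)", rows)  # load step
    return len(rows)

conn = sqlite3.connect(":memory:")
n = load_trades("symbol,qty\naapl,100\nmsft,50\n", conn)
print(n)  # 2 rows loaded
```

In a production pipeline the same shape holds, with the in-memory database swapped for the real target and the transform step carrying the actual business rules.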
17/09/2024
Full time
NO SPONSORSHIP
AWS Cloud Engineer
SALARY: $115-120K + 10% bonus
LOCATION: Chicago, IL - hybrid, 2 days remote and 3 days onsite
SELLING POINTS: Bash, Python scripting, AWS, Kubernetes, CI/CD, GitHub, Jenkins, Artifactory, Docker Compose, K8s, Kafka, RabbitMQ, Amazon Kinesis, Terraform, Ansible, Helm, Linux, Linux shell scripting, Splunk, Infrastructure as Code (IaC)

Qualifications:
- Programming/scripting experience in languages like Java, Bash, Python, or Go.
- Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (eg GitHub, Jenkins, Artifactory, Docker, Compose, K8s).
- Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- Experience with cloud technologies and migrations.
- Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
- Experience preferred with AWS foundational services: VPCs, security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, IAM, etc.
- Experience developing and delivering technical solutions using public cloud providers such as Amazon or Google.
- Familiarity with monitoring tools and frameworks such as Splunk, Elasticsearch, Prometheus, and AppDynamics.
- Experience with RESTful APIs and JSON-RPC.
- Experience following Git workflows.

Technical Skills:
- Experience with Linux and Linux shell scripting.
- Jenkins job setup and execution analysis, including Splunk log review for root cause analysis (RCA).
- Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io.
- Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks.
- Experience with automation, configuration management, orchestration, and infrastructure as code.
- Experience with Golang or Python is a plus.

Education and Experience:
- BS degree in Computer Science, a similar technical field, or equivalent experience.
- 1+ years of experience building large-scale, data-centric solutions.
- 3+ years of recent experience participating on a DevOps team or as product owner for a DevOps team.
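The "Splunk log review for root cause analysis" skill in this listing amounts to triaging build logs quickly. A minimal sketch, assuming a hypothetical log format with the level in square brackets (not a real Jenkins or Splunk schema):

```python
import re
from collections import Counter

# Summarize error/warning frequencies from a build console log so the
# most common failure surfaces first. The "[LEVEL] message" format here
# is an illustrative assumption.
LOG_LINE = re.compile(r"\[(?P<level>ERROR|WARN)\]\s+(?P<message>.+)")

def summarize_failures(log_text: str) -> Counter:
    """Count ERROR/WARN lines, keyed by (level, message)."""
    counts = Counter()
    for line in log_text.splitlines():
        match = LOG_LINE.search(line)
        if match:
            counts[(match["level"], match["message"])] += 1
    return counts

sample = """\
[INFO] Checking out revision abc123
[ERROR] Connection refused: artifactory.internal:8081
[WARN] Retrying upload (attempt 2)
[ERROR] Connection refused: artifactory.internal:8081
"""
top = summarize_failures(sample).most_common(1)
print(top)  # the most frequent error is the usual RCA starting point
```

In practice Splunk's own query language does this aggregation server-side; the script shows the same idea for logs pulled locally.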
17/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*

A prestigious company is searching for a Senior Associate, Cloud Engineer. The company is looking for a cloud engineer with at least 3 years of experience across Bash, Python, AWS, Kubernetes, CI/CD, Ansible, Terraform, Linux shell, IaC, etc.

Responsibilities:
- Enable development teams to self-service build and deployment processes through process automation.
- Assist in designing process improvements across the build, deployment, and monitoring of Clearing applications.
- Support the maintenance and configuration of development environments in Kubernetes and AWS.
- Support Terraform, Ansible, Harness, and Jenkins jobs used to instantiate and manage development environments.

Qualifications:
- BS degree in Computer Science, a similar technical field, or equivalent experience.
- 1+ years of experience building large-scale, data-centric solutions.
- 3+ years of recent experience participating on a DevOps team or as product owner for a DevOps team.
- Programming/scripting experience in languages like Java, Bash, Python, or Go.
- Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (eg GitHub, Jenkins, Artifactory, Docker, Compose, K8s).
- Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
- Experience preferred with AWS foundational services: VPCs, security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, IAM, etc.
- Experience developing and delivering technical solutions using public cloud providers such as Amazon or Google.
- Familiarity with monitoring tools and frameworks such as Splunk, Elasticsearch, Prometheus, and AppDynamics.
- Experience with RESTful APIs and JSON-RPC.
- Experience following Git workflows.
- Experience with Linux and Linux shell scripting.
- Jenkins job setup and execution analysis, including Splunk log review for root cause analysis (RCA).
- Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io.
- Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks.
- Experience with automation, configuration management, orchestration, and infrastructure as code.
17/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor, as this is a permanent full-time role*
*NO CONTRACTORS OR CONSULTANTS*

A prestigious company is looking for an Associate Principal, Backend Java Developer. The company needs someone with 7-10 years of experience focused on backend Java development: Java 11, Kafka, Golang, multithreading, AWS, etc. They will be working in a Real Time and highly regulated financial environment.

Responsibilities:
- Actively participate in the design of highly performing, scalable, secure, reliable, and cost-optimized solutions.
- Primary responsibility is application design and development of next-gen clearing applications for business requirements, within the agreed architecture framework and an Agile environment.
- Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation.
- Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, and audit requirements; that security rules are upheld; and that external-facing reporting is properly represented.
- Participate in code reviews based on high engineering standards.
- Write unit and integration tests based on chosen test frameworks.
- Assist Production Support by providing advice on system functionality and fixes as required.

Qualifications:
- BS degree in Computer Science or a similar technical field required; Master's preferred.
- 7-10 years of experience building large-scale, compute- and event-driven solutions.
- Experience in Java 11+ (including the internal workings of Java) is required.
- Experience with app development in Golang.
- Experience developing software using object-oriented design, advanced patterns (like AOP), and multithreading is required.
- Experience with distributed message brokers like Kafka, IBM MQ, Amazon Kinesis, etc. is desirable.
- Experience with cloud technologies and migrations. Experience preferred with AWS foundational services: VPCs, security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, IAM, etc.
- Must be able to write good-quality code with 80% or higher unit and integration test coverage. Experience with testing frameworks like JUnit and Citrus is desirable.
- Experience working with various types of databases: relational, NoSQL, object-based, graph.
- Experience following Git workflows is required.
- Familiarity with DevOps tools (eg Terraform, Ansible, Jenkins, Kubernetes, Docker, Helm, and CI/CD pipelines) is a plus.
- Experience with performance optimization, profiling, and memory management.
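The multithreaded, event-driven processing style this role describes can be sketched with a work queue standing in for a message broker such as Kafka. This is an analogy in Python's standard library, not the role's actual Java stack; the worker count and the doubling "business logic" are placeholders.

```python
import queue
import threading

def run_pipeline(messages, workers=4):
    """Fan messages out to worker threads and collect the processed results."""
    inbox, results = queue.Queue(), queue.Queue()
    for m in messages:
        inbox.put(m)

    def worker():
        # Drain the inbox; once it is empty no new work will arrive,
        # because all messages were enqueued up front.
        while True:
            try:
                m = inbox.get_nowait()
            except queue.Empty:
                return
            results.put(m * 2)  # placeholder for real event-handling logic
            inbox.task_done()

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results.queue)

print(run_pipeline([1, 2, 3]))  # [2, 4, 6]
```

With a real broker the inbox never drains, so workers block on a consumer poll instead of exiting on empty; the fan-out/collect shape is the same.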
17/09/2024
Full time
Senior Cloud DevOps Engineer

Assignment description: We are looking for a Senior DevOps Engineer for our client in the financial industry.

Job description: The engineer will work on CI/CD pipelines, Kubernetes, and cloud to enable the development team to deliver in a timely fashion. We also maintain Redis, RabbitMQ, and various Linux servers.

Required skills:
- Linux
- Bamboo
- Kubernetes
- Cloud (AWS, Azure)
- Redis (nice to have)
- RabbitMQ (nice to have)

Start date: 01/10/2024
End date: 30/04/2025
Location: Copenhagen, Denmark. Onsite work at the client's offices is required for this position, and candidates will ideally already be based in Copenhagen.
Languages: English (proficient)
17/09/2024
Contractor
*We are unable to sponsor for this permanent full-time role*
*Position is bonus eligible*

A prestigious financial company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. The candidate will be part of a small Innovation team of engineers that collaborates with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive, requirements-driven solution design. The candidate will develop technology engineering requirements, work on proof-of-concept and laboratory testing efforts using modern approaches to process and automation, and build, deploy, document, and manage lab environments within on-prem and cloud data centers to be used for proof-of-concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap.

Responsibilities:
- Engineer and maintain lab environments in the public cloud and the data centers using Infrastructure as Code techniques.
- Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate proofs of concept for infrastructure, applications, and services that impact the Technology Roadmap.
- Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT.
- Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems.
- Find opportunities to improve existing infrastructure architecture for performance, support, scalability, reliability, and security.
- Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection.
- Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery.
- Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments that will be used to validate assumptions within high-level solution designs.

Qualifications:
- Ability to think strategically and map architectural decisions/recommendations to business needs.
- Advanced problem-solving skills and a logical approach to solving problems.
- [Required] Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, or Ruby.
- [Preferred] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
- [Preferred] Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.

Technical Skills:
- In-depth knowledge of on-premises, cloud, and hybrid networking concepts.
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager.
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
- [Preferred] Familiarity with security standards such as the NIST CSF.

Education and/or Experience:
- [Preferred] Bachelor's or master's degree in computer science or a related field, or equivalent experience.
- [Required] 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment.
- [Required] 3+ years of experience contributing to the architecture of cloud and on-prem solutions.

Certificates or Licenses:
- [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator, or similar.
- [Desired] Technical security certifications such as AWS Certified Security, Microsoft Azure Security Engineer, or similar.
- [Desired] CCNA, Network+, or other relevant networking certifications.
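The "automation scripts to streamline routine tasks such as scaling, patching, backup, and recovery" duty can be illustrated with a small retention-policy helper. The policy (keep the newest seven daily snapshots) and all names are illustrative assumptions, not a known in-house standard.

```python
from datetime import date, timedelta

def backups_to_delete(backup_dates, keep_daily=7):
    """Return snapshot dates older than the newest `keep_daily` snapshots,
    oldest first, so a cleanup job can prune them."""
    newest_first = sorted(backup_dates, reverse=True)
    return sorted(newest_first[keep_daily:])

# Ten daily snapshots ending today; the three oldest fall outside retention.
today = date(2024, 9, 17)
backups = [today - timedelta(days=n) for n in range(10)]
stale = backups_to_delete(backups, keep_daily=7)
print(len(stale))  # 3
```

A real recovery-oriented script would list snapshots from the storage API, pass the dates through a function like this, and delete (or archive) only what it returns, keeping the policy decision separate from the destructive action.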
16/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Company is currently seeking a Cloud Automation and Tools Software Engineer with strong Python/PowerShell automation experience. Candidate will be part of a small Innovation team of Engineers that will collaborate with stakeholders, partner teams, and Solutions Architects to research and engineer emerging technologies as part of a comprehensive requirements-driven solution design. Candidate will be developing technology engineering requirements and working on Proof-of-Concept and laboratory testing efforts using modern approaches to process and automation. Candidate will build/deploy/document/manage Lab environments within On-Prem/Cloud Datacenters to be used for Proof-of-Concepts and rapid prototyping. In this engineering role, you will use your technology background to evaluate emerging technologies and help OTSI Leadership make informed decisions on changes to the Technology Roadmap. 
Responsibilities:
- Engineer and maintain Lab environments in Public Cloud and the Data Centers using Infrastructure as Code techniques
- Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proof of Concepts for infrastructure, applications and services that impact the Technology Roadmap
- Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT
- Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems
- Find opportunities to improve existing infrastructure architecture for performance, support, scalability, reliability, and security
- Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
- Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
- Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high-level Solution Designs

Qualifications:
- Ability to think strategically and map architectural decisions/recommendations to business needs
- Advanced problem-solving skills and a logical approach to solving problems
- [Required] Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc.
- [Preferred] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
- [Preferred] Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Technical Skills:
- In-depth knowledge of on-premises, cloud and hybrid networking concepts
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
- [Preferred] Familiarity with security standards such as the NIST CSF

Education and/or Experience:
- [Preferred] Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience
- [Required] 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment
- [Required] 3+ years of experience contributing to the architecture of Cloud and On-Prem solutions

Certificates or Licenses:
- [Preferred] Cloud computing certification such as AWS Solutions Architect Associate, Azure Administrator, or similar
- [Desired] Technical security certifications such as AWS Certified Security, Microsoft Azure Security Engineer, or similar
- [Desired] CCNA, Network+ or other relevant networking certifications
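The posting stresses automating routine tasks (scaling, patching, backup) with scripting languages such as Python. As a minimal, hypothetical sketch of that kind of automation — the `patch_host` task and the retry policy are invented for illustration and are not from the employer's environment:

```python
import time
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("lab-automation")

def run_with_retry(task: Callable[[], str], retries: int = 3, delay: float = 0.1) -> str:
    """Run a routine ops task, retrying on failure with a fixed backoff."""
    for attempt in range(1, retries + 1):
        try:
            result = task()
            log.info("task succeeded on attempt %d", attempt)
            return result
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == retries:
                raise  # exhausted retries: surface the failure to the caller
            time.sleep(delay)

# Example: a flaky "patch" task that succeeds on the second attempt.
calls = {"n": 0}

def patch_host() -> str:
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("host busy")
    return "patched"

print(run_with_retry(patch_host))  # prints "patched" after one retry
```

In practice the task body would wrap a cloud SDK or CLI call; the retry wrapper is the reusable part.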
16/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming computing. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments.

Responsibilities:
- Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
- Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
- Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
- Drive implementation of best practices for efficient data serialization, compression, and network communication.
- Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
- Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
- Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
- Bachelor's or Master's degree in an engineering discipline
- 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures
- 10+ years of experience with Java
- 5+ years of specific Kafka and Flink experience
- 5+ years of Kubernetes experience
- Expert-level knowledge of Kafka
- Expert-level knowledge of Flink
- Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
- Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
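The posting's emphasis on efficient data serialization and compression can be illustrated with a small stdlib-only sketch (the event shape is synthetic, invented purely for illustration): repetitive field names in JSON event batches compress very well, which is why streaming pipelines routinely enable payload compression on the wire.

```python
import gzip
import json

# A batch of synthetic market events (hypothetical shape, for illustration only).
events = [{"symbol": "ABC", "price": 100.0 + i, "seq": i} for i in range(1000)]

# Serialize the batch as JSON, then compress it as a Kafka producer might
# when compression is enabled on the topic.
raw = json.dumps(events).encode("utf-8")
compressed = gzip.compress(raw)

ratio = len(compressed) / len(raw)
print(f"raw={len(raw)}B gzip={len(compressed)}B ratio={ratio:.2f}")
```

The same trade-off shows up with binary formats (Avro, Protobuf), which remove the repeated field names from the payload itself.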
16/09/2024
Full time
Methods Business and Digital Technology Limited
Gloucester, Gloucestershire
Senior Back End Developer (Cyber) Location: On-site 5 days (Worcester/Ebbw Vale/Gloucester/Great Malvern) Company: Methods Business and Digital Technology Limited About Us: Methods is a leading £100M+ IT Services Consultancy with a rich history of transforming the public sector in the UK. With over 30 years of experience, we collaborate with central government departments and agencies to create innovative, people-centred solutions. Now expanding into the private sector, we continue to drive success through our commitment to technology, data, and a human touch. Role Overview: We are seeking a highly skilled Senior Back End Developer to join our dynamic team. The ideal candidate will have strong expertise in Python and SQL, with a proven track record of developing and maintaining robust Back End systems. You will collaborate closely with Front End developers, data engineers, and product managers to build scalable, efficient applications that meet user needs.

Key Responsibilities:
- Design, develop, and maintain reliable Back End systems using Python and SQL.
- Utilize frameworks like Django, Flask, FastAPI, Asyncio, Aiohttp, and SQLAlchemy.
- Develop and document RESTful APIs, WebSocket, and GraphQL services.
- Manage and optimize databases (PostgreSQL, NATS, Redis, MinIO).
- Implement cloud-based solutions using Microsoft Azure services.
- Ensure security protocols with OAuth and Keycloak.
- Conduct testing with SonarQube, Pytest, isort, black, and bandit.
- Use Git for version control.
- Implement containerization and orchestration with Docker, Kubernetes, and Helm.
- Develop CI/CD pipelines with GitHub Actions and Azure DevOps Pipelines.
- Collaborate using Jira and Confluence.
- Monitor and enhance system performance with Prometheus and Grafana.

Requirements:
- Extensive experience as a Senior Back End Developer.
- Proficient in Python and SQL.
- Skilled with frameworks and libraries: Django, Flask, FastAPI, Asyncio, Aiohttp, SQLAlchemy.
- Experience in developing/managing RESTful APIs, WebSocket, and GraphQL services.
- Database management expertise (PostgreSQL, NATS, Redis, MinIO).
- Hands-on with Microsoft Azure services.
- Security implementation knowledge (OAuth, Keycloak).
- Testing proficiency (SonarQube, Pytest, isort, black, bandit).
- Version control with Git.
- Experience with Docker, Kubernetes, Helm.
- Familiarity with CI/CD processes (GitHub Actions, Azure DevOps Pipelines).
- Excellent collaboration and communication skills.
- Problem-solving abilities.

Security Clearance: This role will require you to have, or be willing to go through, Security Clearance. As part of the onboarding process, candidates will be asked to complete a Baseline Personnel Security Standard; details of the evidence required to apply may be found on the government website Gov.UK. If you are unable to meet this and any associated criteria, your employment may be delayed or rejected. Details of this will be discussed with you at interview.

Benefits: Methods is passionate about its people; we want our colleagues to develop the things they are good at and enjoy.
By joining us you can expect:
- Autonomy to develop and grow your skills and experience
- Being part of exciting project work that is making a difference in society
- Strong, inspiring and thought-provoking leadership
- A supportive and collaborative environment
- Development - access to LinkedIn Learning, a management development programme, and training
- Wellness - 24/7 confidential employee assistance programme
- Flexible Working - including home working and part time
- Social - office parties, breakfast Tuesdays, monthly pizza Thursdays, Thirsty Thursdays, and commitment to charitable causes
- Time Off - 25 days of annual leave a year, plus bank holidays, with the option to buy 5 extra days each year
- Volunteering - 2 paid days per year to volunteer in our local communities or within a charity organisation
- Pension - Salary Exchange Scheme with 4% employer contribution and 5% employee contribution
- Discretionary Company Bonus - based on company and individual performance
- Life Assurance - 4 times base salary
- Private Medical Insurance - non-contributory (spouse and dependants included)
- Worldwide Travel Insurance - non-contributory (spouse and dependants included)
- Enhanced Maternity and Paternity Pay
- Travel - season ticket loan, cycle to work scheme

For a full list of benefits please visit our website.
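The role's stack leans on asyncio-style concurrency (Asyncio, Aiohttp). As a minimal stdlib-only sketch of that pattern — not Methods' code, and the task names are invented — three I/O-bound calls run concurrently, so total time roughly equals the slowest call rather than the sum:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (database query, HTTP request, etc.).
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main():
    # gather() runs the coroutines concurrently and preserves argument order.
    return await asyncio.gather(
        fetch("db", 0.05), fetch("cache", 0.02), fetch("api", 0.03)
    )

results = asyncio.run(main())
print(results)  # ['db:done', 'cache:done', 'api:done']
```

With a real HTTP client like Aiohttp, `fetch` would await a session request instead of `asyncio.sleep`.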
16/09/2024
Full time
Full Stack Python Developer - Front Office - SOLE AGENT Our client, a leading global investment firm, requires a talented Python Developer to join their team. This is an on-site position in our client's London office. You will provide first-class support for Deal Teams, Portfolio Managers and other business functions locally, as well as for other key regions, as part of a global team. Sitting with the trading team, you will build strong relationships with key business stakeholders, supporting and developing trading, analytics and reporting systems. This is an opportunity to participate in all aspects of the application development life cycle, including requirements analysis, application development, and devising test cases, while working closely with a spectrum of business functions like operations, finance, compliance, etc. Ideally you will have prior experience of working directly with financial investment professionals, and experience in full-stack development using modern technology frameworks.

YOUR SKILLS
- Strong Python experience
- Knowledge of relational databases and other data storage solutions; experience with SQL
- Excellent communication and relationship-building skills
- 5+ years of programming experience
- Understanding of programming design concepts, data structures, and algorithms
- Experience with modern development methodologies
- Familiarity with Front End libraries/frameworks
- Understanding of API development with HTTP, REST and JSON (Python Flask/Django preferred)
- Strong troubleshooting and analytical skills; detail oriented
- Strong cultural fit - teamwork, proactive/self-starter, results oriented, and integrity

ADDITIONAL BENEFICIAL SKILLS/KNOWLEDGE
- Experience in one or more of bank loans/leveraged loans, fixed-income products, CLOs, derivatives, ABS and CMBS products
- Working knowledge of Linux, Docker/Kubernetes
- Experience in, or readiness to learn, building applications using the modern technology stack: Cloud/AWS, DevOps, etc.
WHAT WILL YOU BE DOING
- Act as a first point of contact for business teams to provide timely assistance with data queries, system enhancements, and other technical requests.
- Work directly with business users to perform requirements analysis, application design and implementation.
- In collaboration with the wider engineering team, develop systems ranging from larger multi-tier applications and frameworks to simpler reports.
- Ensure a high focus on the SDLC, with an emphasis on automated unit and regression tests.
- Create and maintain a professional-level internal knowledge base.
- Provide system training to business users and new joiners.
- Align with and add to the culture and overall vision/mission of the team.

This represents an excellent opportunity to join one of the world's leading investment firms. Please send your CV for full details.
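The SQL and reporting side of the role can be sketched with Python's built-in sqlite3 module. The trade schema and figures below are hypothetical, chosen only to show the kind of aggregation query that sits behind a simple reporting endpoint:

```python
import sqlite3

# In-memory database standing in for a trade-reporting store (schema is invented).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER, price REAL)"
)
conn.executemany(
    "INSERT INTO trades (symbol, qty, price) VALUES (?, ?, ?)",
    [("ABC", 100, 10.5), ("ABC", 50, 10.75), ("XYZ", 200, 5.0)],
)

# Aggregate notional per symbol - the typical shape of a reporting query.
rows = conn.execute(
    "SELECT symbol, SUM(qty * price) FROM trades GROUP BY symbol ORDER BY symbol"
).fetchall()
print(rows)  # [('ABC', 1587.5), ('XYZ', 1000.0)]
```

Against PostgreSQL the query would be identical; only the driver and connection string change.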
16/09/2024
Full time
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. Candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms such as Kafka, Flink and K8s in a hybrid environment.

Responsibilities:
- Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
- Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
- Collaborate with DevOps teams to define deployment strategies and manage scalability.
- Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
- Drive implementation of best practices for efficient data serialization, compression, and network communication.
- Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
- Maintain vendor relationships and participate in escalation sessions and postmortems.
- Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
- Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
- [Required] Effective communication skills to collaborate and evangelize best practices with technical stakeholders.
- [Required] Advanced problem-solving skills and a logical approach to solving problems.
- [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
- [Required] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.

Technical Skills:
- Expert-level knowledge of Kafka
- Expert-level knowledge of Flink
- In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure
- Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes

Education and/or Experience:
- [Preferred] Bachelor's or Master's degree in an engineering discipline
- [Required] 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures
- [Required] 10+ years of experience with Java
- [Required] 5+ years of specific Kafka and Flink experience
- [Preferred] 5+ years of Kubernetes experience

Certificates or Licenses:
- [Preferred] Confluent Certified Developer for Apache Kafka
- [Preferred] AWS certifications (e.g. Solutions Architect Associate)
- [Preferred] Certified Kubernetes Application Developer
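The streaming aggregation work this posting describes centers on patterns like tumbling windows. As a conceptual, pure-Python sketch of that pattern (this is illustrative only, not Flink code; Flink adds event time, watermarks, checkpointing and fault tolerance on top):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms: int):
    """Group (timestamp_ms, key) events into fixed-size tumbling windows
    and count occurrences per key within each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event belongs to exactly one window, aligned to window_ms.
        window_start = ts // window_ms * window_ms
        windows[window_start][key] += 1
    return {start: dict(counts) for start, counts in sorted(windows.items())}

# Synthetic event stream: (timestamp in ms, key).
events = [(10, "a"), (20, "b"), (120, "a"), (130, "a"), (250, "b")]
print(tumbling_window_counts(events, 100))
# {0: {'a': 1, 'b': 1}, 100: {'a': 2}, 200: {'b': 1}}
```

In a real pipeline the same logic would be expressed as a keyed window operator, with state stored in checkpoints so a restart does not lose partial counts.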
13/09/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible*
Prestigious Financial Institution is currently seeking a Principal Financial IT Infrastructure Architect. The candidate will be part of a small Innovation team of Architects that will collaborate with development teams, Solutions Architects, vendors, and other stakeholders to define and drive the architectural vision, implementation, and continuous improvement of solutions running on the core Real Time data streaming and compute infrastructure platforms such as Kafka, Flink, and Kubernetes in a hybrid environment.
Responsibilities:
Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
Collaborate with DevOps teams to define deployment strategies and manage scalability.
Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
Drive implementation of best practices for efficient data serialization, compression, and network communication.
Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
Maintain vendor relationships and participate in escalation sessions and postmortems.
Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
Stay informed about industry trends related to Kafka, Flink, and Kubernetes.
Qualifications:
[Required] Effective communication skills to collaborate and evangelize best practices with technical stakeholders.
[Required] Advanced problem-solving skills and a logical approach to solving problems.
[Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
[Required] Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines.
Technical Skills:
Expert-level knowledge of Kafka.
Expert-level knowledge of Flink.
In-depth knowledge of on-premises networking as well as hybrid connectivity to AWS and/or Azure.
Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services.
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager.
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
Education and/or Experience:
[Preferred] Bachelor's or Master's degree in an engineering discipline.
[Required] 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures.
[Required] 10+ years of experience with Java.
[Required] 5+ years of specific Kafka and Flink experience.
[Preferred] 5+ years of Kubernetes experience.
Certificates or Licenses:
[Preferred] Confluent Certified Developer for Apache Kafka.
[Preferred] AWS certifications (eg Solutions Architect Associate).
[Preferred] Certified Kubernetes Application Developer.
13/09/2024
Full time
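The posting above calls out "best practices for efficient data serialization, compression, and network communication" as a core duty. As a minimal, self-contained sketch of that trade-off (stdlib only; the tick-record layout and field names are invented for illustration, not from any specific Kafka/Flink deployment):

```python
import json
import struct
import zlib

# Hypothetical tick record, as a streaming job might publish to Kafka.
record = {"symbol": "ABC", "price": 101.25, "qty": 500, "ts": 1700000000}

# Naive approach: JSON text on the wire.
json_bytes = json.dumps(record).encode("utf-8")

# More compact approach: a fixed binary layout ("!" = network byte order,
# no padding; 4s = 4-byte symbol, d = float64 price, i = int32 qty,
# q = int64 timestamp) -- 24 bytes total, vs ~65 for the JSON form.
packed = struct.pack("!4sdiq", record["symbol"].encode(),
                     record["price"], record["qty"], record["ts"])

# Compression pays off most on batches of similar records, not single messages.
batch = json.dumps([record] * 100).encode("utf-8")
compressed_batch = zlib.compress(batch)

assert len(packed) < len(json_bytes)       # binary layout is smaller
assert len(compressed_batch) < len(batch)  # repetitive batches compress well
```

In a real pipeline the same idea shows up as schema-based formats (Avro, Protobuf) plus producer-side compression; the sketch only illustrates why those are worth the complexity.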
*Hybrid, 3 days onsite, 2 days remote*
A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role is focused on engineering and maintaining lab environments in public cloud and data centers using IaC techniques. This person will need experience with DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, AWS, etc., as well as experience developing tools and automating tasks using languages such as Python, PowerShell, and Bash.
Responsibilities:
Engineer and maintain Lab environments in Public Cloud and Data Centers using Infrastructure as Code techniques.
Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proofs of Concept for company infrastructure, applications and services that impact the Technology Roadmap.
Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT.
Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems.
Find opportunities to improve existing infrastructure architecture in terms of performance, support, scalability, reliability, and security.
Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection.
Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery.
Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the Lab environments that will be used to validate assumptions within high-level Solution Designs.
Qualifications:
Bachelor's or Master's degree in computer science or a related field, or equivalent experience.
7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment.
3+ years of experience contributing to the architecture of Cloud and On-Prem solutions.
Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc.
Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines.
Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
In-depth knowledge of on-premises, cloud and hybrid networking concepts.
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager.
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.
13/09/2024
Full time
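The "automation scripts ... such as scaling, patching, backup, and recovery" duty in the posting above often boils down to small policy helpers like the following sketch. The function name and the keep-the-newest-N retention policy are illustrative assumptions, not the company's actual tooling:

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, keep_latest=7):
    """Return snapshot IDs that fall outside the retention window.

    `snapshots` is a list of (snapshot_id, created_at) tuples. The
    `keep_latest` most recent snapshots are retained; the rest are flagged
    for deletion. created_at only needs to be orderable (datetime, epoch...).
    """
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    return [snap_id for snap_id, _ in ordered[keep_latest:]]

# Example: ten daily snapshots, keep the newest seven.
now = datetime(2024, 9, 13)
snaps = [(f"snap-{i}", now - timedelta(days=i)) for i in range(10)]
doomed = snapshots_to_delete(snaps, keep_latest=7)
assert doomed == ["snap-7", "snap-8", "snap-9"]
```

Keeping the policy as a pure function like this (no cloud calls inside) is what makes such scripts unit-testable; the boto3/Azure SDK calls that list and delete real snapshots would wrap around it.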
Associate Principal, Software Engineering - Quantitative Risk Management Area - Automating Risk Models
On site 3 days a week
Salary - $185-195K + Bonus
Looking for a hardcore developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. You will need CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++.
Responsibilities:
Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
Develop CI/CD pipelines.
Contribute to development of QRM's databases and ETLs.
Integrate model prototypes, model library and model testing tools using best industry practices and innovations.
Create unit and integration tests; build and enhance test automation tools.
Participate in code reviews and demo accomplishments.
Write technical documentation and user manuals.
Provide production support and perform troubleshooting.
Qualifications:
Strong programming skills: able to read and/or write code using a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills, including in the cloud environment.
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.
Technical Skills:
Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
Experience in containerized deployment in cloud environments.
Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).
Education and/or Experience:
Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
12/09/2024
Full time
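The "create unit and integration tests; build and enhance test automation tools" duty above has a standard shape. A minimal sketch follows; the margin formula and names are hypothetical, purely to show the testing pattern, not QRM's actual models:

```python
import unittest

def margin_requirement(notional: float, rate: float) -> float:
    """Hypothetical margin calculation: a flat rate applied to notional."""
    if notional < 0 or not 0 <= rate <= 1:
        raise ValueError("invalid inputs")
    return notional * rate

class MarginTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertAlmostEqual(margin_requirement(1_000_000, 0.05), 50_000)

    def test_rejects_bad_rate(self):
        with self.assertRaises(ValueError):
            margin_requirement(1_000_000, 1.5)

# Run the suite programmatically (avoids unittest.main()'s sys.exit so the
# snippet can be embedded in larger automation scripts).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(MarginTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The same pattern scales up to the role's "execution pipelines for model testing, backtesting and monitoring": deterministic functions under test, a suite per model component, and a CI job that fails the pipeline when `wasSuccessful()` is false.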
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role*
A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming computing. They will need expert-level expertise with Kafka and Flink, and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments.
Responsibilities:
Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases.
Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
Drive implementation of best practices for efficient data serialization, compression, and network communication.
Create and maintain architecture documentation, including system diagrams, data flow, and component interactions.
Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
Stay informed about industry trends related to Kafka, Flink, and Kubernetes.
Qualifications:
Bachelor's or Master's degree in an engineering discipline.
10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures.
10+ years of experience with Java.
5+ years of specific Kafka and Flink experience.
5+ years of Kubernetes experience.
Expert-level knowledge of Kafka.
Expert-level knowledge of Flink.
Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines.
12/09/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role*
A prestigious financial firm is looking for a Principal Software Engineer. This engineer will build software solutions to test systems for financial products, and will need heavy experience with Java, Python, Terraform, CI/CD, DevOps, and containerization. The ideal candidate will have experience working in a highly regulated financial environment.
Responsibilities:
Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives.
Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
Develop CI/CD pipelines.
Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring.
Contribute to development of QRM's databases and ETLs.
Integrate model prototypes, model library and model testing tools using best industry practices and innovations.
Create unit and integration tests; build and enhance test automation tools.
Participate in code reviews and demo accomplishments.
Write technical documentation and user manuals.
Provide production support and perform troubleshooting.
Qualifications:
Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
Strong programming skills: able to read and/or write code using a programming language (eg Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database and environment manipulation skills.
Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in the cloud environment.
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
DevOps experience, with a good command of CI/CD processes and tools (eg Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
Experience in containerized deployment in cloud environments.
Experience with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes).
Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry).
Good command of database technology and query languages (SQL) plus non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers).
Experience with automated quality assurance frameworks (eg JUnit, TestNG, PyTest).
Experience with productivity tools such as Jira, Confluence, MS Office.
Experience with Scripting languages such as Python is a plus.
12/09/2024
Full time
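The "database technology and query languages (SQL)" skill the posting above asks for can be sketched end-to-end with an in-memory SQLite table (the `trades` schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("ABC", 100, 10.0), ("ABC", 50, 12.0), ("XYZ", 200, 5.0)],
)

# Aggregate notional (qty * price) per symbol, largest first.
rows = conn.execute(
    "SELECT symbol, SUM(qty * price) AS notional "
    "FROM trades GROUP BY symbol ORDER BY notional DESC"
).fetchall()

assert rows == [("ABC", 1600.0), ("XYZ", 1000.0)]
```

The parameterized `executemany` (placeholders instead of string formatting) is the habit interviewers for a regulated-environment role tend to probe, since it rules out SQL injection by construction.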
Job Title: Tech Lead
Job Summary: We are seeking a highly skilled Tech Lead to design, develop, and maintain serverless applications using Python and AWS technologies. The ideal candidate will have extensive experience in building scalable, high-performance Back End systems and a deep understanding of AWS serverless services such as Lambda, DynamoDB, SNS, SQS, S3, and others. This role requires a strong technical leader who can guide teams, architect solutions, and contribute to the overall success of our fintech products.
Key Responsibilities:
Architect and Develop Solutions: Design and implement robust, scalable, and secure Back End services using Python and AWS serverless technologies.
Serverless Application Development: Build and maintain serverless applications leveraging AWS Lambda, DynamoDB, API Gateway, S3, SNS, SQS, and other AWS services.
Leadership: Provide technical leadership and mentorship to a team of engineers, promoting best practices in software development, testing, and DevOps.
Collaboration: Work closely with cross-functional teams including Front End developers, product managers, and DevOps engineers to deliver high-quality solutions that meet business needs.
Automation and CI/CD: Implement and manage CI/CD pipelines, automated testing, and monitoring to ensure high availability and rapid deployment of services.
Performance Optimization: Optimize Back End services for performance, scalability, and cost-effectiveness, ensuring the efficient use of AWS resources.
Security: Ensure that all solutions adhere to industry best practices for security, including data protection, access controls, and encryption.
Documentation: Create and maintain comprehensive technical documentation, including architecture diagrams, API documentation, and deployment guides.
Problem Solving: Diagnose and resolve complex technical issues in production environments, ensuring minimal downtime and disruption.
Continuous Improvement: Stay updated with the latest trends and best practices in Python, AWS serverless technologies, and fintech/banking technology stacks, and apply this knowledge to improve our systems.
Qualifications:
Experience: Minimum of 10 years of experience in Back End software development, with at least 6 years of hands-on experience in Python. Extensive experience with AWS serverless technologies, including Lambda, DynamoDB, API Gateway, SNS, SQS, S3, ECS, EKS and other related services. Proven experience in leading technical teams and delivering complex, scalable cloud-based solutions in the fintech or banking sectors.
Technical Skills: Strong proficiency in Python and related frameworks (eg Flask, Django). Deep understanding of AWS serverless architecture and best practices. Experience with infrastructure as code (IaC) tools such as AWS CloudFormation or Terraform. Familiarity with RESTful APIs, microservices architecture, and event-driven systems. Knowledge of DevOps practices, including CI/CD pipelines, automated testing, and monitoring using AWS services (eg CodePipeline, CloudWatch, X-Ray).
Leadership: Demonstrated ability to lead and mentor engineering teams, fostering a culture of collaboration, innovation, and continuous improvement.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve complex technical issues in a fast-paced environment.
Communication: Excellent verbal and written communication skills, with the ability to effectively convey technical concepts to both technical and non-technical stakeholders.
Preferred Qualifications: Experience with other cloud platforms (eg Azure, GCP) and containerization technologies like Docker and Kubernetes. Familiarity with financial services industry regulations and compliance requirements. Relevant certifications such as AWS Certified Solutions Architect, AWS Certified Developer, or similar.
12/09/2024
Full time
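The Lambda-backed Back End services described above all follow the same handler contract, which is worth knowing cold for this kind of role. A minimal local sketch (the event shape assumes API Gateway proxy integration; the account lookup is a stub standing in for a DynamoDB read, and all names are illustrative):

```python
import json

def lambda_handler(event, context):
    """Hypothetical API Gateway-backed handler: returns an account balance."""
    try:
        body = json.loads(event.get("body") or "{}")
        account = body["account_id"]
    except (KeyError, json.JSONDecodeError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "bad request"})}

    # A real service would query DynamoDB here; stubbed for the sketch.
    balance = {"acct-1": 250.0}.get(account, 0.0)
    return {"statusCode": 200,
            "body": json.dumps({"account_id": account, "balance": balance})}

# Lambda handlers are plain functions, so they can be exercised locally
# without deploying -- useful for the CI/CD and automated-testing duties above.
resp = lambda_handler({"body": json.dumps({"account_id": "acct-1"})}, None)
assert resp["statusCode"] == 200
assert json.loads(resp["body"])["balance"] == 250.0
```

Returning the `{"statusCode": ..., "body": ...}` shape is what API Gateway proxy integration expects; keeping the handler free of deployment concerns is what makes it testable in a plain pytest run.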
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

A prestigious financial institution is currently seeking a Senior Java Software Engineer. The candidate will support and work collaboratively with business analysts, team leads, and the development team, contributing to scalable and resilient hybrid and cloud-based data solutions that support critical financial market clearing and risk activities, and collaborating with other developers, architects, and product owners to support the enterprise's transformation into a data-driven organization. The Application Developer will be a team player who works well with business, technical, and non-technical professionals in a project environment.

Responsibilities:
- Support the application development of real-time and batch applications for business requirements within the agreed architecture framework and an Agile environment.
- Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation.
- Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, and audit requirements; that security rules are upheld; and that external-facing reporting is properly represented.
- Perform application and project risk analysis and recommend quality improvements.
- Assist Production Support by providing advice on system functionality and fixes as required.
- Communicate all time delays or defects in the software clearly, concisely, and immediately to appropriate team members and management.
- Resolve security vulnerabilities.

Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.

- [Required] 3+ years of experience in building high-speed, real-time and batch solutions
- [Required] 3+ years of experience in Java
- [Preferred] Experience with high-speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc.
- [Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
- [Preferred] Experience with cloud technologies and migrations, preferably with AWS foundational services like VPCs, security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, and IAM
- [Preferred] Experience developing and delivering technical solutions using public cloud service providers like Amazon and Google
- [Required] Experience writing unit and integration tests with testing frameworks like JUnit and Citrus
- [Required] Experience working with various types of databases (relational, NoSQL)
- [Required] Experience working with Git
- [Preferred] Working knowledge of DevOps tools, e.g., Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
- [Preferred] Familiarity with monitoring-related tools and frameworks like Splunk, Elasticsearch, Prometheus, and AppDynamics
- [Required] Hands-on experience with Java version 8 onwards, Spring, Spring Boot, and REST APIs

Technical Skills:
- [Required] Java-based software development experience, including a deep understanding of Java fundamentals like data structures, concurrency, and multithreading
- [Required] Experience in object-oriented design and software design patterns

Education and/or Experience:
- [Required] BS degree in Computer Science or a similar technical field
11/09/2024
Full time
NO SPONSORSHIP

Software Engineering - Python, Java, Terraform, DevOps, Containerization

Overview: The team is looking for a hands-on developer to work within quantitative risk management (QRM), developing applications and solutions for the QRM team. Candidates do not necessarily need to have worked within a QRM portal, but they must understand the industry and come from a highly regulated background, preferably financial. The team does not build models; it automates them. Candidates will typically hold a master's degree (possibly a PhD) in mathematics, statistics, physics, or computer science, and must have experience with CI/CD pipelines, infrastructure as code, Kubernetes, Terraform, and related tooling, preferably with Java, Python, or C++. Key technologies include AWS, CI/CD pipelines, Java, C#, Python, Agile/Scrum, Kubernetes, Terraform, Splunk, OpenTelemetry, SQL, big data, and Python scripting. Financial products knowledge (markets, financial derivatives, equities, interest rates, commodity products) is a plus.

Responsibilities:
- Develop and maintain risk model software in production for managing the clearing fund and stress testing.
- Develop and maintain software and environments used to implement and test systems for pricing, margin risk, and stress testing of financial products and derivatives.
- Configure and manage resources in local and AWS cloud environments and deploy QRM's software on these resources.
- Develop CI/CD pipelines.
- Configure, execute, and monitor execution pipelines for model testing, backtesting, and monitoring.
- Contribute to the development of QRM's databases and ETLs.
- Integrate model prototypes, the model library, and model testing tools using industry best practices and innovations.
- Create unit and integration tests; build and enhance test automation tools.
- Participate in code reviews and demo accomplishments.
- Write technical documentation and user manuals.
- Provide production support and perform troubleshooting.

Qualifications:
- Strong programming skills: able to read and write code in a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills.
- Track record of complex production implementations and demonstrated ability developing and maintaining enterprise-level software, including in cloud environments.
- Proficiency in technical and/or scientific documentation (e.g., white papers, user guides).
- Strong problem-solving skills: able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources.
- Experience with Agile/Scrum or another rapid development framework.
- Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
- Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.

Technical Skills:
- Proficiency in Java (preferred) or another object-oriented language, including effective application of design patterns and best coding practices.
- DevOps experience, with a good command of CI/CD processes and tools (e.g., Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
- Experience with containerized deployment in cloud environments: cloud technology (AWS preferred), infrastructure as code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes).
- Experience with logging, profiling, monitoring, and telemetry (e.g., Splunk, OpenTelemetry).
- Good command of database technology and query languages (SQL), non-relational databases, and other big data technology, including efficient storage and serialization protocols (e.g., Parquet, Avro, Protocol Buffers).
- Experience with automated quality assurance frameworks (e.g., JUnit, TestNG, PyTest).
- Experience with high-performance and distributed computing.
- Experience with productivity tools such as Jira, Confluence, and MS Office.
- Experience with scripting languages such as Python is a plus.
- Experience with numerical libraries and/or scientific computing is a plus.

Education and/or Experience:
- Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
- 7+ years of experience as a software developer with exposure to cloud or high-performance computing.
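The posting above asks for unit and integration tests around pricing and risk software. As a rough illustration of that kind of test automation (not taken from the posting), the sketch below tests a hypothetical pricing function in the PyTest style the posting names; `forward_price` and its continuous-compounding formula F = S·exp(r·T) are assumptions chosen for the example.

```python
import math

def forward_price(spot, rate, time_years):
    """Hypothetical pricing function: forward price under continuous
    compounding, F = S * exp(r * T). Purely illustrative."""
    if spot <= 0 or time_years < 0:
        raise ValueError("spot must be positive and time non-negative")
    return spot * math.exp(rate * time_years)

# Test functions in the PyTest naming convention; `pytest` would
# discover and run any function whose name starts with test_.
def test_zero_time_forward_equals_spot():
    assert forward_price(100.0, 0.05, 0.0) == 100.0

def test_forward_grows_with_rate():
    assert forward_price(100.0, 0.05, 1.0) > forward_price(100.0, 0.01, 1.0)

def test_rejects_invalid_inputs():
    try:
        forward_price(-1.0, 0.05, 1.0)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError for negative spot")
```

In a CI/CD pipeline of the kind the role describes, a suite like this would run on every commit (e.g., a `pytest` stage in Jenkins), gating deployment of the pricing library.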
11/09/2024
Full time