Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent full-time role* *Position is bonus eligible*

A prestigious financial institution is seeking a Principal Financial IT Infrastructure Architect. The candidate will join a small Innovation team of architects that collaborates with development teams, solutions architects, vendors, and other stakeholders to define and drive the architectural vision, implementation, and continuous improvement of solutions running on the core real-time data streaming and compute infrastructure platforms, such as Kafka, Flink, and Kubernetes, in a hybrid environment.

Responsibilities:
- Collaborate with cross-functional teams to design, create, and review software application architectures tailored for streaming use cases.
- Ensure fault tolerance, scalability, and low-latency processing in streaming applications.
- Collaborate with DevOps teams to define deployment strategies and manage scalability.
- Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks.
- Drive implementation of best practices for efficient data serialization, compression, and network communication.
- Create and maintain architecture documentation, including system diagrams, data flows, and component interactions.
- Maintain vendor relationships and participate in escalation sessions and postmortems.
- Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems.
- Stay informed about industry trends related to Kafka, Flink, and Kubernetes.

Qualifications:
- [Required] Strong communication skills to collaborate with technical stakeholders and evangelize best practices.
- [Required] Advanced problem-solving skills and a logical approach to solving problems.
- [Required] Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink.
- [Required] Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.

Technical Skills:
- Expert-level knowledge of Kafka
- Expert-level knowledge of Flink
- In-depth knowledge of on-premises networking and hybrid connectivity to AWS and/or Azure
- Knowledge of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), compute, storage, database, network, content distribution, security/IAM, microservices, management, and serverless services
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes

Education and/or Experience:
- [Preferred] Bachelor's or Master's degree in an engineering discipline
- [Required] 10+ years of experience architecting mission-critical cloud and on-prem real-time data streaming and event-driven architectures
- [Required] 10+ years of experience with Java
- [Required] 5+ years of specific Kafka and Flink experience
- [Preferred] 5+ years of Kubernetes experience

Certificates or Licenses:
- [Preferred] Confluent Certified Developer for Apache Kafka
- [Preferred] AWS certifications (e.g. Solutions Architect Associate)
- [Preferred] Certified Kubernetes Application Developer
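The "efficient data serialization, compression, and network communication" responsibility can be sketched with a small Python example. This is illustrative only, not part of the posting: in practice a Kafka producer would usually enable broker-aware compression via the `compression.type` setting rather than hand-rolling it, and the record shape here is hypothetical. The point is the payload-size trade-off that the architect is expected to reason about.

```python
import gzip
import json

def encode_record(record: dict) -> bytes:
    """Serialize a record to compact JSON (no whitespace) and
    gzip-compress it, mimicking producer-side payload optimization."""
    raw = json.dumps(record, separators=(",", ":")).encode("utf-8")
    return gzip.compress(raw)

def decode_record(payload: bytes) -> dict:
    """Reverse the encoding on the consumer side."""
    return json.loads(gzip.decompress(payload).decode("utf-8"))

# Hypothetical, highly repetitive tick-style payload; real market data
# streams often compress well for the same reason.
record = {"symbol": "ABC", "trades": [{"px": 100.0, "qty": 10}] * 50}
payload = encode_record(record)

assert decode_record(payload) == record            # lossless round-trip
assert len(payload) < len(json.dumps(record).encode("utf-8"))  # smaller on the wire
```

The same measurement (compressed vs. raw bytes per record) is what you would monitor when tuning serialization formats or compression codecs on a real streaming platform.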
04/10/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*

A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role focuses on engineering and maintaining lab environments in the public cloud and data centers using Infrastructure as Code (IaC) techniques. The candidate will need experience with DevOps tools such as Terraform, Ansible, Jenkins, Kubernetes, and AWS, as well as experience developing tools and automating tasks in languages such as Python, PowerShell, and Bash.

Responsibilities:
- Engineer and maintain lab environments in the public cloud and data centers using Infrastructure as Code techniques
- Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate proofs of concept for company infrastructure, applications, and services that impact the technology roadmap
- Document technology design decisions and conduct technology assessments as part of a centralized demand management process within IT
- Apply expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems
- Find opportunities to improve existing infrastructure architecture in terms of performance, support, scalability, reliability, and security
- Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection
- Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery
- Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments used to validate assumptions within high-level solution designs

Qualifications:
- Bachelor's or Master's degree in Computer Science or a related field, or equivalent experience
- 7+ years of experience as a system or cloud engineer with hands-on implementation, security, and standards experience within a hybrid technology environment
- 3+ years of experience contributing to the architecture of cloud and on-prem solutions
- Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, or Ruby
- Experience with DevOps tools, e.g. Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines
- Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, or Amazon Kinesis
- In-depth knowledge of on-premises, cloud, and hybrid networking concepts
- Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager
- Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
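The IaC-style lab engineering this posting describes rests on deterministic, repeatable provisioning. As a hedged illustration (the naming convention and role names are hypothetical, not from the posting), a small Python helper can show the idea of expanding a declarative spec into concrete resource names, the way an IaC module turns variables into instances:

```python
def plan_lab_instances(env: str, roles: dict) -> list:
    """Expand a role -> count spec into deterministic instance names.
    Sorting the roles makes the plan reproducible run-to-run, which is
    the property IaC tooling relies on for clean diffs."""
    return [
        f"{env}-{role}-{i:02d}"
        for role, count in sorted(roles.items())
        for i in range(1, count + 1)
    ]

# Hypothetical lab spec: two web nodes, one database node.
names = plan_lab_instances("lab", {"web": 2, "db": 1})
assert names == ["lab-db-01", "lab-web-01", "lab-web-02"]
```

In a real lab, the equivalent expansion would live in Terraform `count`/`for_each` blocks or an Ansible inventory plugin; the sketch just makes the determinism requirement concrete.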
04/10/2024
Full time
Lead Software Engineer
Location: Manchester
Working pattern: Hybrid
Salary: £70K - £100K

We're seeking a passionate Lead Software Engineer to drive technical excellence and innovation in tech projects. You'll lead the design, development, and quality assurance of impactful solutions, working with cutting-edge technologies. This role offers growth opportunities and the chance to shape innovative tech that contributes to a smarter, safer, and greener world. You'll collaborate with a dynamic team, solving complex digital transformation challenges for high-profile clients, while advancing your leadership and technical skills.

About Us: We're a global digital transformation consultancy, delivering award-winning solutions across civil defence, healthcare, sustainability, and more.

Why Join Us?
- Work on impactful projects with real-world applications.
- Innovative environment with opportunities to shape tech solutions.
- Tailored career growth and leadership development.
- A dynamic, supportive culture.

Key Responsibilities:
- Lead design, development, and integration of high-quality software solutions.
- Ensure technical excellence and guide best practices.
- Collaborate with DevOps engineers to implement CI/CD pipelines.
- Build strong client relationships and provide strategic, technical guidance.
- Mentor team members and assist with recruitment efforts.

Tools & Technologies:
- Languages: JavaScript, Python, Java, C#, and other modern programming languages.
- Frameworks: React, Node.js, Django, Spring, .NET.
- Cloud Platforms: AWS, Azure, GCP.
- DevOps Tools: Docker, Kubernetes, Jenkins, Terraform.
- CI/CD: GitLab, GitHub Actions, CircleCI.
- Version Control: Git.
- Testing: Selenium, JUnit, Cypress, Jest.

Requirements:
- Proven experience leading technical teams and delivering innovative solutions.
- Expertise in software engineering best practices, modern languages, and cloud platforms.
- Strong Agile development experience and CI/CD knowledge.
- Excellent leadership, problem-solving, and communication skills.

Benefits:
- Career growth through award-winning programs.
- Comprehensive health and wellbeing support.
- Hybrid working, private healthcare, profit sharing, gym membership, and more.
- Company pension contribution.
- Profit share scheme.
04/10/2024
Full time
Description: SRE/Platform Engineer

Join our Cloud Platform team as a Site Reliability Engineer at our client in Baden and become part of a leading energy company where innovation meets expertise. If you are passionate about technology, automation, and continuous improvement, this is the role for you. You'll have the opportunity to design, build, and maintain systems crucial to our operations, while ensuring they are both scalable and reliable.

What you will do:
- Design, build, and maintain scalable, reliable, and highly available infrastructure and services.
- Develop automation tools to streamline deployment, configuration, and maintenance tasks.
- Collaborate closely with our largest internal customer to support and advise them on their cloud infrastructure.
- Directly support software engineering teams in implementing reliable and scalable solutions.
- Contribute to technical assessments, solution design, and implementation.
- Engage in incident response, root cause analysis, and post-mortem reviews.
- Stay abreast of industry trends through workshops and conferences, sharing knowledge within the company.

What you bring & who you are:
- 3+ years of professional experience with Microsoft Azure and its services.
- Hands-on knowledge of CI/CD platforms like Azure DevOps or GitHub Actions.
- Experience with Docker, Kubernetes, and Azure container services.
- Excellent knowledge of Infrastructure as Code tools such as Ansible, Terraform, or Bicep.
- Strong programming skills in scripting languages like PowerShell or Bash.
- Good understanding of networking topics like firewalls and DNS, on-prem and in the cloud.
- Optionally, knowledge of Azure security and policies as well as data platforms like ADF, Airflow, or Databricks.
- Certifications in one or more technologies are a big plus (e.g. Terraform or Azure Solutions Architect).
- Proven problem-solving skills, with a proactive approach to potential issues.
- Excellent communication and collaboration skills, able to thrive in a dynamic environment.
- Fluency in English; German and/or Spanish is a plus.

Skills: cloud platform, SRE, platform engineering, design, build, automation, support, Microsoft Azure, CI/CD, Azure DevOps, GitHub, Docker, Kubernetes, Azure container services, infrastructure, IaC, Ansible, Terraform, Bicep, PowerShell, Bash, firewall, DNS, on-prem, cloud, Azure security, ADF, Airflow, Databricks, Azure solution architecture, communication, collaboration, English, Spanish, German, energy

Employee Value Proposition: biggest provider of renewable energy

Job Title: SRE/Platform Engineer
Location: Baden, Switzerland
Job Type: Contract

TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request. These steps support our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield, and the Swiss-U.S. Privacy Shield.
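The SRE/Platform Engineer posting above centers on Infrastructure as Code, whose core mechanic is comparing the current state of a resource against a declared desired state and applying only the difference. As a hedged, tool-agnostic sketch (the setting names are invented; real tools like Terraform or Bicep do this against live cloud APIs), the plan step can be modeled in a few lines of Python:

```python
def plan_changes(current: dict, desired: dict) -> dict:
    """Compute the minimal change set to move 'current' settings to
    'desired' -- the declarative, idempotent core of IaC tooling.
    Running it twice against an already-converged state plans nothing."""
    return {
        "create": sorted(set(desired) - set(current)),
        "update": sorted(k for k in desired
                         if k in current and current[k] != desired[k]),
        "delete": sorted(set(current) - set(desired)),
    }

# Hypothetical VM settings: resize the VM, add a backup policy.
current = {"vm-size": "B2s", "dns": "10.0.0.4"}
desired = {"vm-size": "D4s", "dns": "10.0.0.4", "backup": "daily"}

assert plan_changes(current, desired) == {
    "create": ["backup"], "update": ["vm-size"], "delete": []}
assert plan_changes(desired, desired) == {
    "create": [], "update": [], "delete": []}  # idempotent when converged
```

The second assertion is the property interviewers for roles like this usually probe: applying the same configuration repeatedly must be a no-op.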
04/10/2024
Contractor
Description: Description: Senior OpenShift Administrator/Engineer General Information: Start date: ASAP Latest start date: can wait a couple of months for the right person Duration: 12 months + extension possible Work location: Basel Workload: 100% Team: OpenShift - relatively new team, knowledgeable, good collaboration around 4 members with this role Background : We are looking for an OpenShift Administrator/Engineer to join the Core Infrastructure team, to work as a member of the OpenShift team. The primary responsibilities will include design, implementation and support for the container orchestration for Hybrid Cloud infrastructure. This role is a replacement. We would be open to receive candidates at a late professional to senior level (at least five years' work experience ). The role will combine operational work (60%) with project assignment responsibilities (40%). This would be a great opportunity to join a diverse team to run activities with great impact at the whole organization level with possibility to learn and work with latest technologies. Perfect candidate: Experienced OpenShift administrator/engineer with very good technical knowledge and great communication and collaboration skills. Please note that the role involves collaboration with the business areas, therefore interpersonal skills would be important. Tasks & Responsibilities: Implement and maintain orchestration technology for container-based workloads. Integrate the platform within a Hybrid cloud environment. Integrate security controls, container scanning, and operational monitoring. Plan and implement cluster availability and manage software life cycle. Enable automation across modern compute workloads and integrate DevOps principles as part of technology modernization and operations Ensure that activities are undertaken in accordance with the Bank's high security standards in alignment with corporate policies, release and change management, and compliance. 
Incorporate resilience practices so that solutions are adequately protected and can be sustained during times of adversity. Must haves: At least five years' work experience administering and operating an OpenShift platform (*) SME in OpenShift (*) Fluency with code management solutions, in particular Git and CI/CD tools (*) Logging experience with Grafana Loki (*) Install, configure, and maintain OpenShift clusters, whether on-premises or in the cloud. Monitor cluster performance and troubleshoot issues to ensure optimal operation. Manage user access and permissions within the OpenShift environment. Configure authentication and authorization for users and applications, and protect network traffic with network policies. Implement and manage CI/CD pipelines to automate application deployment. Manage OpenShift cluster updates and Kubernetes operator updates. Provide support to developers and troubleshoot any issues that arise within the cluster. Interpersonal skills: Excellent command of English and very good communication and collaboration skills. Nice to have: Experience with Ceph storage; systems monitoring, ideally Prometheus; broad understanding of ITIL fundamentals and their application in a business environment; consistent track record in provision of third-level support and design input in an enterprise environment. Skills: OpenShift SME, containerization, Git, CI/CD, English, communication and people skills, 3rd-level support, hybrid cloud, OpenShift platform, Loki, administration, engineering, cluster setup and management, monitoring tools, user management, deployment, troubleshooting, Linux, Checkmk, Prometheus, Grafana, Grafana Loki, ITIL, Ceph storage. Employee Value Proposition: A great opportunity to join a diverse team, run activities with organization-wide impact, and learn and work with the latest technologies. Hybrid way of working. 
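As a rough sketch of one responsibility named above - protecting network traffic with network policies - the following hypothetical Python snippet assembles a minimal default-deny-ingress Kubernetes NetworkPolicy manifest as a plain dictionary and serializes it to JSON. The namespace name is illustrative only; in practice such manifests are usually written in YAML and applied with `oc apply`.

```python
import json

def default_deny_ingress(namespace):
    """Build a minimal Kubernetes NetworkPolicy manifest that denies all
    ingress traffic in the given namespace (an empty podSelector matches
    every pod, and listing Ingress with no rules denies all ingress)."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": namespace},
        "spec": {
            "podSelector": {},           # empty selector: applies to every pod
            "policyTypes": ["Ingress"],  # no ingress rules listed => deny all
        },
    }

# Illustrative namespace; any real namespace name would be used here.
manifest = default_deny_ingress("lab-apps")
print(json.dumps(manifest, indent=2))
```

Applying the resulting manifest makes the namespace deny-by-default, after which per-application allow rules are layered on top.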
Job Title: Senior OpenShift Administrator. Location: Basel, Switzerland. Job Type: Contract. TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area, subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us. To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request, in accordance with our commitments under the UK Data Protection Act, EU-U.S. Privacy Shield or the Swiss-U.S. Privacy Shield.
04/10/2024
Contractor
Must have active enhanced DV (West/North/South) Clearance. Up to £75k DoE plus bonuses and benefits. 3 days on site per week in Manchester. Skills required in AWS/Azure, Containerisation, Orchestration, CI/CD, Automation. Who are we? We are recruiting a Cloud and DevOps Consultant with enhanced DV Clearance for a prestigious client to work on a portfolio of public and private sector projects. Our client is a global leader in technology, consulting, and engineering services at the forefront of innovation. You'll experience excellent career progression opportunities to develop your skill set and personal profile in an inclusive culture. What will the Cloud and DevOps Consultant be doing? You'll lead clients through their cloud journey, aiding in the creation of robust, scalable platforms for critical services. Your role involves designing and implementing agile cloud environments, covering everything from architecture to operations, prototyping, and testing. You'll have exclusive chances to learn, grow, and lead in cutting-edge areas like infrastructure-as-code, DevOps, containers, platform-as-a-service, CI/CD, and microservices. Key Skills and Requirements: Active Enhanced DV Clearance. Proficient in offering clear, practical solutions to intricate business challenges. Effective communication of technical concepts to non-technical audiences. Assessing applications for migration to the cloud, based on technical suitability. Technical skills: Cloud platforms (AWS/Azure/GCP). Container technologies (eg, Docker, Kubernetes). Infrastructure as code (Terraform, CloudFormation). Cloud automation and CI/CD. DevOps practices and toolchains. Linux and Windows system management with automation frameworks like Ansible, Puppet, and PowerShell. TO BE CONSIDERED, please either apply by clicking online or email me directly at (see below). For further information please call me - I can make myself available outside of normal working hours, from 7 am until 10 pm. 
If unavailable, please leave a message and I or one of my colleagues will respond. By applying for this role, you give express consent for us to process and submit (subject to required skills) your application to our client in conjunction with this vacancy only. I look forward to hearing from you. KEY SKILLS: DevOps Engineer/Cloud Consultant/DevOps Consultant/AWS/Azure/Kubernetes/CI/CD/Ansible/Terraform/Docker/Python/Cheltenham/Security Cleared/DV/DV Cleared/Enhanced Clearance
04/10/2024
Full time
Associate Principal, Software Programming - Quantitative Risk Management Area - Associate Principal, Software Engineering - Automating Risk Models. On site 3 days a week. Salary - $185-$195K + Bonus. Looking for a hardcore developer who works within the quantitative risk management area and can develop applications and solutions for the QRM team. You will not build models; you will automate them. You will need to come from a financial institution, trading company, exchange, etc. You will need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills, including in the cloud environment. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. 
DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
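To give a flavour of the "create unit and integration tests" responsibility above, here is a hypothetical Python sketch: a toy initial-margin calculation with a plain-assert unit test of the kind a CI pipeline would run via pytest. The function name, inputs, and flat-rate formula are illustrative only; real QRM margin models are far richer.

```python
def initial_margin(notional, margin_rate):
    """Toy initial-margin calculation: notional * flat rate.
    Validates inputs so malformed data fails fast in tests."""
    if notional < 0 or not (0 <= margin_rate <= 1):
        raise ValueError("notional must be >= 0 and margin_rate in [0, 1]")
    return notional * margin_rate

# Plain-assert unit tests, as a CI pipeline (eg pytest under Jenkins) would run.
assert initial_margin(1_000_000, 0.05) == 50_000.0
assert initial_margin(0, 0.5) == 0.0
```

The point is less the arithmetic than the habit: every automated model component ships with tests that the pipeline executes on every commit.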
04/10/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* A prestigious company is looking for an Associate Principal, Application/Cloud Engineering. This role is focused on engineering and maintaining lab environments in public cloud and data centers using IaC techniques. This person will need experience with DevOps tools like Terraform, Ansible, Jenkins, Kubernetes, AWS, etc. This person will also need experience developing tools and automating tasks using languages such as Python, PowerShell, and Bash. Responsibilities: Engineer and maintain lab environments in Public Cloud and Data Centers using Infrastructure as Code techniques. Collaborate with Engineering, Architecture and Cloud Platform Engineering teams to evaluate, document, and demonstrate Proofs of Concept for company infrastructure, applications and services that impact the Technology Roadmap. Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT. Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new/innovative solutions to business problems. Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security. Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection. Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery. Create and maintain operational documentation, runbooks, and Standard Operating Procedures (SOPs) for the lab environments that will be used to validate assumptions within high-level Solution Designs. Qualifications: Bachelor's or master's degree in computer science or a related field, or equivalent experience. 7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment. 3+ years of experience 
contributing to the architecture of Cloud and On-Prem solutions. Ability to develop tools and automate tasks using Scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc. Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines. Experience with distributed message brokers such as Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. In-depth knowledge of on-premises, cloud and hybrid networking concepts. Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager. Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes
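As an illustration of the "automation scripts for routine tasks such as backup and recovery" item above, here is a hypothetical Python sketch of a backup-retention helper that selects which old backups to prune, keeping only the newest N. The backup names and dates are made up; a real script would list objects in storage and delete them, with logging and dry-run support.

```python
from datetime import date

def backups_to_delete(backups, keep=7):
    """Given (name, date) pairs, return the names of all but the `keep`
    most recent backups, oldest first, so they can be pruned."""
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)  # newest first
    return [name for name, _ in ordered[keep:]][::-1]            # oldest first

# Illustrative data: ten daily backups of a hypothetical lab environment.
backups = [(f"lab-{d}", date(2024, 10, d)) for d in range(1, 11)]
print(backups_to_delete(backups, keep=7))  # the three oldest: lab-1, lab-2, lab-3
```

Separating the "decide what to delete" logic from the deletion itself keeps the policy trivially unit-testable, which matters when a bug would destroy backups.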
03/10/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious financial firm is looking for a Principal Software Engineer. This engineer will build software solutions to test systems for financial products and will need heavy experience using Java, Python, Terraform, CI/CD, DevOps, and containerization. The ideal candidate will have experience working in a highly regulated financial environment. Responsibilities: Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Qualifications: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics. 10+ years of experience as a software developer with exposure to the cloud or high-performance computing areas. Strong programming skills. Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: the role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise-level software, including in the cloud environment. 
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. DevOps experience, with a good command of CI/CD processes and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), and managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) as well as non-relational DBs and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest, etc.). Experience with productivity tools such as Jira, Confluence, MS Office. Experience with Scripting languages such as Python is a plus.
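As a small, self-contained illustration of the SQL fluency the posting asks for, here is a hypothetical Python snippet using the standard-library sqlite3 module: an in-memory table of made-up trade records and a per-product aggregate query. The table shape and data are invented for illustration; production systems would of course use an enterprise database.

```python
import sqlite3

# In-memory database so the example is fully self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (product TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?)",
    [("equity", 100.0), ("equity", 250.0), ("rates", 500.0)],  # illustrative rows
)

# Aggregate notional per product - the bread-and-butter GROUP BY pattern.
rows = conn.execute(
    "SELECT product, SUM(notional) FROM trades GROUP BY product ORDER BY product"
).fetchall()
print(rows)  # [('equity', 350.0), ('rates', 500.0)]
conn.close()
```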
03/10/2024
Full time
*Hybrid, 3 days onsite, 2 days remote* *We are unable to sponsor as this is a permanent Full time role* A prestigious company is looking for a Principal Kafka/Flink Infrastructure Architect. This architect will drive the architectural vision of the company's Real Time data streaming and compute platforms. They will need expert-level expertise with Kafka and Flink and a heavy Java application development background. This architect will work on streaming in both on-prem and AWS cloud environments. Responsibilities: Collaborate with cross-functional teams to design, create and review software application architectures specifically tailored for streaming use cases. Ensure fault tolerance, scalability, and low-latency processing in streaming applications. Drive optimization of streaming application performance by fine-tuning configurations, monitoring resource utilization, and identifying bottlenecks. Drive implementation of best practices for efficient data serialization, compression, and network communication. Create and maintain architecture documentation, including system diagrams, data flow, and component interactions. Evaluate and recommend tools and frameworks that enhance the performance and reliability of our streaming systems. Stay informed about industry trends related to Kafka, Flink, and Kubernetes. Qualifications: Bachelor's or Master's degree in an engineering discipline. 10+ years of experience architecting mission-critical Cloud and On-Prem Real Time data streaming and event-driven architectures. 10+ years of experience with Java. 5+ years of specific Kafka and Flink experience. 5+ years of Kubernetes experience. Expert-level knowledge of Kafka and Flink. Ability to execute spikes and provide code samples demonstrating best practices when developing solutions on Kafka and Flink. Experience with DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
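The posting's "efficient data serialization, compression, and network communication" item can be made concrete with a hypothetical Python sketch using only the standard library: it serializes a made-up market-data event to JSON and compresses a batch of such events with zlib, showing why streaming platforms batch and compress before hitting the network. The event shape is invented; real Kafka/Flink deployments would typically use a compact schema format such as Avro or Protocol Buffers with a schema registry rather than raw JSON.

```python
import json
import zlib

# Hypothetical market-data event; field names are illustrative only.
event = {"symbol": "ABC", "price": 101.25, "qty": 500, "ts": 1700000000000}
raw = json.dumps(event).encode("utf-8")

# Simulate a producer batch of 100 similar events; repetitive payloads
# compress very well, which is why Kafka compresses per record batch.
batch = raw * 100
packed = zlib.compress(batch)
print(len(raw), len(batch), len(packed))
```

The compressed batch comes out far smaller than the raw batch, and the round trip (`zlib.decompress(packed)`) recovers the bytes exactly; the same trade-off (CPU for bandwidth) is what Kafka's producer-side compression settings tune.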
03/10/2024
Full time
Description: If you are a software engineer with a genuine interest in technology, sound C#.NET experience, and a passion to deliver cutting-edge products, we have the perfect job for you. In our international team you can further advance your skills and deliver products that generate value for our trading activities. Working as a software engineer: This company follows a startup-like approach promoting agile development practices. Testing and adopting new technologies and ideas is considered an essential part of software craftsmanship, allowing the company to seize new business opportunities; the company encourages and sponsors training courses, lab time, conferences etc. Developers gain valuable business knowledge by working closely with many different teams within the company across Europe. We mainly work with the following technologies/tools: C#.NET/.NET Core, MS SQL, Angular, Azure DevOps and Octopus. Responsibilities: Build best-in-class products as part of an effective, highly motivated and agile development team responsible for a medium-sized portfolio of business applications. Deliver new features in line with customers' expectations, while maintaining a high level of coding standards (clean code, automated tests). Promote a good coding culture and contribute to continuous improvement of our DevOps culture (CI/CD, pull requests, pair programming). Support the team by contributing to continuous improvement of DevOps culture and tooling. Take over end-to-end responsibility for some applications (operations and first-line support are covered by an operations team). Profile Must have: University degree in computer science or a quantitative subject. Strong experience with development of business applications using C#.NET and REST APIs; 7+ years of OOP experience (of that, at least 3+ years in C#.NET). 3+ years of experience with automated testing (unit, integration, regression tests). Some experience building frontends for web or desktop (Angular, WPF). Willingness to participate in team workshops at our headquarters in Switzerland. Very good command of written and spoken English. Nice-to-have experience: Databases (MS SQL, Cassandra), message brokers (Kafka, RabbitMQ), service-oriented architectures: microservices, domain services, containers (Docker, Kubernetes), Azure DevOps, Jenkins, Octopus, observability frameworks (Grafana, ELK), financial or commodity industry background. Compensation: We offer a highly competitive compensation package and a bonus scheme linked to performance. Benefits: food tickets, free language courses, gym discounts, option to work mostly remotely. Job Title: Senior Fullstack Software Engineer (C#.NET) Location: Madrid, Spain Job Type: Permanent TEKsystems, an Allegis Group company. Allegis Group AG, Aeschengraben 20, CH-4051 Basel, Switzerland. Registration No. CHE-101.865.121. TEKsystems is a company within the Allegis Group network of companies (collectively referred to as "Allegis Group"). Aerotek, Aston Carter, EASi, TEKsystems, Stamford Consultants and The Stamford Group are Allegis Group brands. If you apply, your personal data will be processed as described in the Allegis Group Online Privacy Notice available at our website. To access our Online Privacy Notice, which explains what information we may collect, use, share, and store about you, and describes your rights and choices about this, please go to our website. We are part of a global network of companies and as a result, the personal data you provide will be shared within Allegis Group and transferred and processed outside the UK, Switzerland and the European Economic Area subject to the protections described in the Allegis Group Online Privacy Notice. We store personal data in the UK, EEA, Switzerland and the USA. If you would like to exercise your privacy rights, please visit the "Contacting Us" section of our Online Privacy Notice on our website for details on how to contact us.
To protect your privacy and security, we may take steps to verify your identity, such as a password and user ID if there is an account associated with your request, or identifying information such as your address or date of birth, before proceeding with your request, in line with our commitments under the UK Data Protection Act, the EU-U.S. Privacy Shield and the Swiss-U.S. Privacy Shield.
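The posting above requires 3+ years of automated testing experience (unit, integration, regression tests). The arrange-act-assert shape those tests take is the same across stacks; here is a minimal, self-contained sketch in Python's unittest (the posting's own stack would use an xUnit-style C# framework, and `apply_fee` is a hypothetical business rule invented for the example).

```python
import unittest

def apply_fee(amount, fee_rate):
    """Hypothetical business rule: add a proportional fee; reject negatives."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + fee_rate), 2)

class ApplyFeeTest(unittest.TestCase):
    def test_nominal(self):
        # arrange-act-assert: each unit test pins down one behavior
        self.assertEqual(apply_fee(100.0, 0.01), 101.0)

    def test_rejects_negative_amount(self):
        # edge cases are explicit test cases, not comments
        with self.assertRaises(ValueError):
            apply_fee(-1.0, 0.01)

# Run programmatically so the example is self-contained
suite = unittest.TestLoader().loadTestsFromTestCase(ApplyFeeTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

Integration and regression tests follow the same shape at a larger scope: they exercise a composed system (API plus database, for instance) and pin previously observed behavior so refactoring cannot silently change it.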
03/10/2024
Full time
Request Technology - Craig Johnson
Chicago, Illinois
*We are unable to sponsor for this permanent Full time role* *Position is bonus eligible* Prestigious Financial Institution is currently seeking a Senior Java Software Engineer. Candidate will support and work collaboratively with business analysts, team leads and the development team, contributing to the development of scalable and resilient hybrid and Cloud-based data solutions supporting critical financial market clearing and risk activities, and collaborating with other developers, architects and product owners to support the enterprise's transformation into a data-driven organization. The Application Developer will be a team player and work well with business, technical and non-technical professionals in a project environment. Responsibilities: Support the application development of Real Time and batch applications for business requirements in the agreed architecture framework and Agile environment. Thoroughly analyze requirements; develop, test, and document software to ensure proper implementation. Follow agreed-upon SDLC procedures to ensure that all information system products and services meet explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, audit requirements and security rules, and that external-facing reporting is properly represented. Perform application and project risk analysis and recommend quality improvements. Assist Production Support by providing advice on system functionality and fixes as required. Communicate clearly and concisely all time delays or defects in the software immediately to appropriate team members and management. Experience with resolving security vulnerabilities. Qualifications: The requirements listed are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the primary functions.
[Required] 3+ years of experience in building high speed, Real Time and batch solutions [Required] 3+ years of experience in Java [Preferred] Experience with high speed distributed computing frameworks like Flink, Apache Spark, Kafka Streams, etc [Preferred] Experience with distributed message brokers like Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc. [Preferred] Experience with cloud technologies and migrations; experience preferred with AWS foundational services like VPCs, Security Groups, EC2, RDS, S3 ACLs, KMS, AWS CLI and IAM [Preferred] Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google [Required] Experience writing unit and integration tests with testing frameworks like JUnit, Citrus [Required] Experience working with various types of databases like Relational, NoSQL [Required] Experience working with Git [Preferred] Working knowledge of DevOps tools, eg Terraform, Ansible, Jenkins, Kubernetes, Helm and CI/CD pipelines [Preferred] Familiarity with monitoring tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics [Required] Hands-on experience with Java version 8 onwards, Spring, SpringBoot, REST APIs Technical Skills: [Required] Java-based software development experience, including deep understanding of Java fundamentals like data structures, concurrency and multithreading [Required] Experience in object-oriented design and software design patterns Education and/or Experience: [Required] BS degree in Computer Science or a similar technical field
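The qualifications above pair message brokers (Kafka, RabbitMQ) with Java fundamentals like concurrency and multithreading, and the two are related: at its core a broker coordinates producers and consumers over a queue. A minimal sketch of that pattern follows (Python stdlib here for self-containment; the doubling "processing" step is a stand-in for real work).

```python
import threading
import queue

def run_pipeline(n_messages, n_workers=4):
    """Toy producer/consumer pipeline: the coordination a message broker
    like Kafka or RabbitMQ provides at scale, in miniature."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            msg = q.get()
            if msg is None:            # poison pill: shut this worker down
                q.task_done()
                return
            with lock:                 # protect the shared results list
                results.append(msg * 2)
            q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for i in range(n_messages):        # producer side
        q.put(i)
    for _ in threads:                  # one poison pill per worker
        q.put(None)
    q.join()                           # wait until every message is processed
    for t in threads:
        t.join()
    return sorted(results)             # arrival order is nondeterministic

processed = run_pipeline(10)
```

The same concerns the role tests for (backpressure, ordering, clean shutdown, shared-state safety) show up here in miniature: results arrive out of order, so the sketch sorts them, just as a real consumer must decide whether per-key ordering matters.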
02/10/2024
Full time
NO SPONSORSHIP Software Engineering - Python, Java, Terraform, DevOps, Containerization. Candidates do not necessarily have to have worked within a QRM portal, but they must understand the industry and come from a highly regulated background, preferably financial. We are looking for a hard-core developer who can work within quantitative risk management, developing applications and solutions for the QRM team. They do not build models; they automate models and develop hardcore applications. Candidates will typically have a master's degree in mathematics, statistics, physics, or computer science, possibly even a PhD. They need experience with CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++. The team develops and maintains risk model software in production for managing the clearing fund and stress testing. Key technologies and context: AWS; CI/CD pipelines; Java, C#, Python; Agile/Scrum; financial products knowledge a plus (markets, financial derivatives, equities, interest rates, commodity products); Java preferred; Infrastructure as Code; Kubernetes; Terraform; Splunk; OpenTelemetry; SQL; Big Data; scripting in Python. Develop and maintain software and environments used to implement and test systems for pricing, margin risk and stress testing of financial products and derivatives. Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources. Develop CI/CD pipelines. Configure, execute, and monitor execution pipelines for model testing, backtesting and monitoring. Contribute to development of QRM's databases and ETLs. Integrate model prototypes, model library and model testing tools using best industry practices and innovations. Create unit and integration tests; build and enhance test automation tools. Participate in code reviews and demo accomplishments. Write technical documentation and user manuals. Provide production support and perform troubleshooting. Strong programming skills.
Able to read and/or write code using a programming language (eg, Java, C++, Python, etc.) in a collaborative software development setting: The role requires advanced coding, database and environment manipulation skills. Track record of complex production implementations and a demonstrated ability in developing and maintaining enterprise level software, including in the cloud environment. Proficiency in technical and/or scientific documentation (eg, white papers, user guides, etc.) Strong problem-solving skills: Be able to accurately identify a problem's source, severity, and impact to determine possible solutions and needed resources. Experience with Agile/SCRUM or another rapid development framework. Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products. Background in Financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra. Technical Skills: Proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices. DevOps experience, with a good command of CI/CD process and tools (eg, Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness). Experience in containerized deployment in cloud environments. Experienced with cloud technology (AWS preferred), infrastructure-as-code (eg Terraform), managing and orchestrating containerized workloads (eg Kubernetes). Experience with logging, profiling, monitoring, telemetry (eg Splunk, OpenTelemetry). Good command of database technology and query languages (SQL) and non-relational DB and other Big Data technology, including efficient storage and serialization protocols (eg Parquet, Avro, Protocol Buffers). Experience with automated quality assurance frameworks (eg, JUnit, TestNG, PyTest, etc.). Experience with high performance and distributed computing.
Experience with productivity tools such as Jira, Confluence, MS Office. Experience with scripting languages such as Python is a plus. Experience with numerical libraries and/or scientific computing is a plus. Education and/or Experience: Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, physics. 7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas
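The QRM role above centers on automating model testing and backtesting rather than building models. One of the simplest backtests is counting how often realized losses breached a model's predicted risk threshold; the sketch below is a toy illustration with invented numbers, not the team's actual methodology.

```python
def backtest_exceedances(predicted_var, realized_pnl):
    """Count days where the realized loss breached the model's VaR estimate.
    An automated pipeline would run this per model/portfolio on a schedule
    and alert when the breach rate exceeds the confidence level's tolerance."""
    if len(predicted_var) != len(realized_pnl):
        raise ValueError("series must align day by day")
    # A breach: the loss (-pnl) was larger than the predicted VaR
    return sum(1 for var, pnl in zip(predicted_var, realized_pnl) if -pnl > var)

# Hypothetical 5-day series: daily VaR estimates vs realized P&L
breaches = backtest_exceedances([10, 10, 12, 11, 9], [-4, -15, 3, -11.5, -2])
# Days 2 and 4 breached (losses of 15 and 11.5 against VaR of 10 and 11)
```

Wrapping checks like this in unit tests and CI/CD pipelines is exactly the "automate models, don't build models" distinction the posting draws.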
02/10/2024
Full time
For one of our main banking clients based in Utrecht we are looking for a Solution Architect to carry out the infra automation of solutions. Responsibilities Infrastructure Automation: Design and implement automated infrastructure solutions to enhance operational efficiency. Develop frameworks for automating deployment, monitoring, and scaling of infrastructure (using tools like Ansible, Terraform, or similar). Optimize the use of containerization (Docker, Kubernetes) for infrastructure services. Collaborate with DevOps teams to drive continuous integration and delivery (CI/CD) practices across the infrastructure. Storage Solutions: Architect scalable, secure, and high-performing storage solutions to meet the bank's data needs. Design storage architectures (SAN, NAS, Object Storage) that can handle the demands of modern banking services. Implement backup, disaster recovery, and business continuity solutions for storage. Data Lake Development: Architect and implement a scalable, resilient, and secure data lake infrastructure to support data analytics and machine learning initiatives. Integrate the data lake with various data sources, ensuring Real Time and batch processing capabilities. Network Integration: Ensure the infrastructure and storage solutions are seamlessly integrated into the bank's network architecture. Collaborate with network engineers to design network topologies that support low-latency, high-availability access to storage and data lake resources. Collaboration & Stakeholder Management: Work closely with various stakeholders, including IT operations, data engineering, and security teams, to align architectural strategies with business objectives. Serve as the technical advisor on infrastructure projects, providing guidance on best practices, tools, and technologies. Security & Compliance: Ensure all infrastructure solutions comply with relevant banking regulations and industry standards (ISO, GDPR, etc.).
Implement security best practices across automation, storage, and data lake solutions, ensuring data integrity and protection. Requirements Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 7+ years of experience in IT architecture, with a strong focus on infrastructure automation, storage solutions, and data lake architectures. Proven experience in designing and implementing automated infrastructure using tools like Terraform, Ansible, Puppet, or Chef. Expertise in containerization technologies (Docker, Kubernetes). Deep understanding of storage architectures (SAN, NAS, Object Storage) and their integration with cloud and on-premise systems. Hands-on experience with data lake technologies (eg, Hadoop, Spark, AWS S3, Azure Data Lake). Familiarity with network design, including VPNs, firewalls, load balancers, and network security. Strong knowledge of cloud platforms (AWS, Azure, Google Cloud) and hybrid cloud architecture. Excellent problem-solving and troubleshooting skills. Strong communication skills and the ability to influence and guide technical and non-technical teams. Job Details Start Date: ASAP Permanent Position Hybrid Setting For inquiries, contact Alexander Mungkorn, Delivery Consultant. Michael Bailey International is acting as an Employment Agency in relation to this vacancy.
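The automation tools the requirements name (Terraform, Ansible, Puppet, Chef) share one core idea: declare the desired state and let the tool compute an idempotent change plan against the actual state. A minimal sketch of that plan/diff step follows; the resource names and attributes are hypothetical, and real tools add dependency ordering and provider APIs on top.

```python
def plan(desired, actual):
    """Compute an idempotent change plan, Terraform-style:
    create what's missing, update what differs, delete what's extra."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))
    return sorted(actions)

# Hypothetical storage resources: desired config vs what currently exists
desired = {"bucket-a": {"versioning": True}, "bucket-b": {"versioning": False}}
actual = {"bucket-a": {"versioning": False}, "bucket-c": {"versioning": True}}
changes = plan(desired, actual)
```

Idempotence is the property that makes this safe to run repeatedly from CI/CD: applying the plan and then re-planning against the converged state yields no further actions.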
02/10/2024
Full time
NO SPONSORSHIP
Associate Principal, Software Engineering - Quantitative Risk Management (QRM), Automating Risk Models
Chicago - on site 3 days a week
Salary: $185-195K + bonus

Looking for a hardcore developer who works within quantitative risk management and can develop applications and solutions for the QRM team. You will not build models; you will automate models. You will need to come from a financial institution, trading company, exchange, etc. You will need to have CI/CD pipelines, Infrastructure as Code, Kubernetes, Terraform, etc., preferably with Java, Python, or C++.

Responsibilities:
Configure and manage resources in the local and AWS cloud environments and deploy QRM's software on these resources.
Develop CI/CD pipelines.
Contribute to development of QRM's databases and ETLs.
Integrate model prototypes, the model library, and model testing tools using best industry practices and innovations.
Create unit and integration tests; build and enhance test automation tools.
Participate in code reviews and demo accomplishments.
Write technical documentation and user manuals.
Provide production support and perform troubleshooting.

Qualifications:
Strong programming skills: able to read and/or write code using a programming language (e.g., Java, C++, Python) in a collaborative software development setting. The role requires advanced coding, database, and environment manipulation skills in a cloud environment.
Financial products knowledge is a plus: understanding of markets and financial derivatives in equities, interest rate, and commodity products.
Background in financial mathematics is a plus: derivatives pricing models, stochastic calculus, statistics and probability theory, linear algebra.
Technical skills: proficiency in Java (preferred) or another object-oriented language is required, including effective application of design patterns and best coding practices.
DevOps experience, with a good command of CI/CD processes and tools (e.g., Git, GitHub, Gradle, Jenkins, Docker, Helm, Harness).
Experience in containerized deployment in cloud environments.
Experience with cloud technology (AWS preferred), infrastructure-as-code (e.g., Terraform), and managing and orchestrating containerized workloads (e.g., Kubernetes).

Education and/or Experience:
Master's degree or equivalent in a computational or numerical field such as computer science, information systems, mathematics, or physics.
7+ years of experience as a software developer with exposure to the cloud or high-performance computing areas.
01/10/2024
Full time
NO SPONSORSHIP
AWS Cloud Engineer
SALARY: $115K-120K and a 10% bonus
LOCATION: Chicago, IL - hybrid, 2 days remote and 3 days onsite
SELLING POINTS: Bash, Python scripting, AWS, Kubernetes, CI/CD, GitHub, Jenkins, Artifactory, Docker Compose, K8s, Kafka, RabbitMQ, Amazon Kinesis, Terraform, Ansible, Helm, Linux, Linux shell scripting, Splunk, Infrastructure as Code (IaC)

Qualifications:
Programming/Scripting experience in languages like Java, Bash, Python, or Go.
Knowledge of Continuous Integration and Continuous Delivery (CI/CD) tools (examples: GitHub, Jenkins, Artifactory, Docker, Compose, K8s).
Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Experience with cloud technologies and migrations.
Working knowledge of DevOps tools, e.g., Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
Experience preferred with AWS foundational services like VPCs, security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, and IAM.
Experience developing and delivering technical solutions using public cloud service providers like Amazon, Google, etc.
Familiarity with monitoring-related tools and frameworks like Splunk, ElasticSearch, Prometheus, AppDynamics.
Experience with RESTful APIs and JSON-RPC.
Experience following Git workflows.

Technical Skills:
Experience with Linux and Linux shell scripting.
Jenkins job setup and execution analysis, including Splunk log review for Root Cause Analysis (RCA).
Ability to manage Kubernetes deployments with Helm charts, using continuous deployment tools like Harness.io.
Ability to manage AWS deployments using Terraform, Ansible, or similar Infrastructure as Code (IaC) frameworks.
Experience with automation, configuration management and orchestration, and infrastructure as code.
Experience with Golang or Python is a plus.

Education and/or Experience:
BS degree in Computer Science, a similar technical field, or equivalent experience.
1+ years of experience in building large-scale, data-centric solutions.
3+ years of recent experience participating on a DevOps team or as product owner for a DevOps team.
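Since the qualifications above list RESTful APIs and JSON-RPC, here is a minimal standard-library sketch of building and parsing a JSON-RPC 2.0 envelope; the method name and parameters are invented for illustration, not taken from the posting:

```python
import json

def make_jsonrpc_request(method, params, request_id):
    """Build a JSON-RPC 2.0 request envelope as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

def parse_jsonrpc_response(raw):
    """Return the result field of a JSON-RPC 2.0 response, raising on an error object."""
    payload = json.loads(raw)
    if "error" in payload:
        raise RuntimeError(payload["error"].get("message", "JSON-RPC error"))
    return payload["result"]

# Hypothetical broker-status call, round-tripped locally.
request = make_jsonrpc_request("broker.status", {"cluster": "prod"}, 1)
response = parse_jsonrpc_response('{"jsonrpc": "2.0", "result": "ok", "id": 1}')
```

In practice the request string would be POSTed to a service endpoint; the sketch only shows the envelope shape the standard defines.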
01/10/2024
Full time
*Hybrid, 3 days onsite, 2 days remote*
*We are unable to sponsor as this is a permanent Full time role*
*NO CONTRACTORS OR CONSULTANTS*

A prestigious company is looking for an Associate Principal, Backend Java Developer. This company needs someone with 7-10 years of experience focused on Back End Java development: Java 11, Kafka, Golang, multithreading, AWS, etc. They will be working in a real-time and highly regulated financial environment.

Responsibilities:
Actively participates in design of highly performing, scalable, secure, reliable, and cost-optimized solutions.
Primary responsibility is application design and development of next-gen clearing applications for business requirements in the agreed architecture framework and Agile environment.
Thoroughly analyzes requirements, develops, tests, and documents software quality to ensure proper implementation.
Follows agreed-upon SDLC procedures to ensure that all information system products and services meet both explicit and implicit quality standards, end-user functional requirements, architectural standards, performance requirements, and audit requirements; that security rules are upheld; and that external-facing reporting is properly represented.
Participates in code reviews based on high engineering standards.
Writes unit and integration tests based on chosen test frameworks.
Assists Production Support by providing advice on system functionality and fixes as required.

Qualifications:
BS degree in Computer Science or a similar technical field required; Master's preferred.
7-10 years of experience in building large-scale, compute- and event-driven solutions.
Experience in Java 11+ (including the internal workings of Java) is required.
Experience with app development in Golang.
Experience developing software using object-oriented designs, advanced patterns (like AOP), and multithreading is required.
Experience with distributed message brokers like Kafka, IBM MQ, Amazon Kinesis, etc. is desirable.
Experience with cloud technologies and migrations.
Experience preferred with AWS foundational services like VPCs, security groups, EC2, RDS, S3 ACLs, KMS, AWS CLI, and IAM.
Must be able to write good-quality code with 80% or above unit and integration test coverage.
Experience with testing frameworks like JUnit and Citrus is desirable.
Experience working with various types of databases: relational, NoSQL, object-based, graph.
Experience following Git workflows is required.
Familiarity with DevOps tools (e.g., Terraform, Ansible, Jenkins, Kubernetes, Docker, Helm, and CI/CD pipelines) is a plus.
Experience with performance optimization, profiling, and memory management.
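The posting above expects 80%+ unit and integration test coverage. As a hedged illustration of that expectation (the function and its tests are invented for this sketch, and shown in Python rather than the Java the role prefers), a minimal unit-tested example:

```python
import unittest

def net_position(trades):
    """Sum signed trade quantities: positive for buys, negative for sells."""
    return sum(qty if side == "BUY" else -qty for side, qty in trades)

class NetPositionTest(unittest.TestCase):
    def test_buys_and_sells_offset(self):
        trades = [("BUY", 100), ("SELL", 40), ("BUY", 10)]
        self.assertEqual(net_position(trades), 70)

    def test_empty_book_is_flat(self):
        self.assertEqual(net_position([]), 0)

# Run the tests programmatically so the sketch is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(NetPositionTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Coverage tools (JaCoCo for Java, coverage.py for Python) then report what fraction of lines such tests exercise.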
01/10/2024
Full time
Senior Engineer, Cloud/Infrastructure Security
Salary: Open + bonus
Location: Chicago, IL
Hybrid: 3 days onsite, 2 days remote
*We are unable to provide sponsorship for this role*

Qualifications:
Bachelor's degree in Computer Science or a related field.
7+ years of experience as a System or Cloud Engineer with hands-on implementation, security, and standards experience within a hybrid technology environment.
3+ years of experience contributing to the architecture of cloud and on-prem solutions.
Ability to develop tools and automate tasks using scripting languages such as Python, PowerShell, Bash, Perl, Ruby, etc.
In-depth knowledge of on-premises, cloud, and hybrid networking concepts.
Knowledge of Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Azure Resource Manager.
Knowledge of containerization technologies like Docker and orchestration tools like Kubernetes.

Preferred:
Experience with DevOps tools, e.g., Terraform, Ansible, Jenkins, Kubernetes, Helm, and CI/CD pipelines.
Experience with distributed message brokers: Kafka, RabbitMQ, ActiveMQ, Amazon Kinesis, etc.
Familiarity with security standards such as the NIST CSF.
Related certifications.

Responsibilities:
Engineer and maintain lab environments in public cloud and data centers using Infrastructure as Code techniques.
Collaborate with Engineering, Architecture, and Cloud Platform Engineering teams to evaluate, document, and demonstrate proofs of concept for company infrastructure, applications, and services that impact the Technology Roadmap.
Document technology design decisions and conduct technology assessments as part of a centralized Demand Management process within IT.
Apply your expertise in compute, storage, database, serverless, monitoring, microservices, and event management to pilot new and innovative solutions to business problems.
Find opportunities to improve existing infrastructure architecture for better performance, support, scalability, reliability, and security.
Incorporate security best practices, Identity and Access Management, and encryption mechanisms for data protection.
Develop automation scripts and processes to streamline routine tasks such as scaling, patching, backup, and recovery.
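The responsibilities above include automating routine tasks such as backup and recovery. A small self-contained sketch of that kind of automation; the retention window and snapshot names are assumptions for illustration, and a real script would call the cloud provider's API to list and delete snapshots:

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, now, retention_days=30):
    """Given (name, created_at) pairs, return names older than the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created_at in snapshots if created_at < cutoff]

# Hypothetical snapshot inventory evaluated against a 30-day policy.
now = datetime(2024, 10, 1)
snapshots = [
    ("db-backup-2024-07-01", datetime(2024, 7, 1)),
    ("db-backup-2024-09-25", datetime(2024, 9, 25)),
]
stale = snapshots_to_delete(snapshots, now)
```

Keeping the policy logic separate from the provider API calls, as here, makes it straightforward to unit-test before wiring it into a scheduled job.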
01/10/2024
Full time
Position: Senior Full Stack Developer
Location: Hybrid - Zug
Contract type: freelance - 8 months

Requirements:
6+ years of proven experience in full stack development with Angular (Front End) and Node.js (Back End).
Strong proficiency in JavaScript and TypeScript.
In-depth experience developing RESTful APIs and integrating them with Front End components.
Advanced knowledge of web technologies including HTML5, CSS3, SCSS, and Bootstrap.
Solid understanding of relational and non-relational databases (e.g., MongoDB, MySQL).
Strong experience in implementing and maintaining best security practices, including OWASP standards.
Proficient in version control using Git.
Strong problem-solving skills and ability to work effectively in a cooperative environment.
Experience in optimizing web application performance for both client and server sides.

Plus skills:
Experience with DevOps tools and containerization technologies (e.g., Docker, Kubernetes).
Familiarity with CI/CD tools like Jenkins, GitLab CI, or CircleCI.
Experience with microservices architecture and cloud computing platforms.

If this sounds like the role for you, please apply with your updated CV. Michael Bailey International is acting as an Employment Business in relation to this vacancy.
01/10/2024
Contractor