Job Description
You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.
As an Ab Initio and Python Data Engineer II at JPMorgan Chase within the Corporate Technology Group, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job Responsibilities
- Execute standard software solutions, design, development, and technical troubleshooting.
- Write secure and high-quality code using the syntax of at least one programming language with limited guidance.
- Design, develop, code, and troubleshoot with consideration of upstream and downstream systems and technical implications.
- Apply knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation.
- Apply technical troubleshooting to break down solutions and solve technical problems of basic complexity.
- Gather, analyze, and draw conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development.
- Learn and apply system processes, methodologies, and skills for the development of secure, stable code and systems.
Required Qualifications, Capabilities, and Skills
- Formal training or certification on software engineering concepts and 2+ years of applied experience.
- Minimum of six years of experience with the Ab Initio suite of products, including expertise in developing ETL processes using Ab Initio graphs, continuous flows, plans, reusable components, and metaprogramming.
- Hands-on practical experience in system design, application development, testing, and operational stability.
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.
- Experience writing Python code for data-flow pipelines; proficiency in SQL and PL/SQL, and familiarity with databases such as Oracle and Cloud SQL.
- Hands-on exposure to containerization with Docker and Kubernetes deployments, and to job-orchestration technologies such as Control-M and Airflow.
- Exposure to methodologies such as CI/CD (e.g., Jenkins), application resiliency, and security, as well as monitoring tools like Splunk.
Preferred Qualifications, Capabilities, and Skills
- Familiarity with modern big data processing frameworks such as PySpark.
- Exposure to cloud technologies, specifically AWS, with a minimum of an AWS Associate-level certification.
- Databricks Certified Data Engineer certification or equivalent.