You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you.
As a PySpark Developer Software Engineer II at JPMorgan Chase within the Commercial & Investment Bank Payments Technology team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
- Executes standard software solutions across design, development, and technical troubleshooting
- Writes secure and high-quality code using the syntax of at least one programming language with limited guidance
- Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications
- Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation
- Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity
- Gathers, analyzes, and draws conclusions from large, diverse data sets to identify problems and contribute to decision-making in service of secure, stable application development
- Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 2+ years applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Proven experience as a Data Engineer (at least 3 years), with a focus on PySpark and big data technologies
- Proficiency in Python and PySpark for data processing and analysis (a minimal ETL sketch follows this list)
- Experience with big data technologies in the cloud, such as AWS or other cloud services
- Strong understanding of SQL and experience with relational databases
- Knowledge of data warehousing concepts and ETL processes
- Ability to monitor data pipelines for performance and reliability, and to troubleshoot issues as they arise
- Ability to optimize PySpark jobs for performance and scalability, including tuning Spark configurations and resource allocation (see the tuning sketch after this list)
- Experience across the whole Software Development Life Cycle
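To make the PySpark, SQL, and ETL qualifications above concrete, here is a minimal sketch of the kind of job this role involves. All paths, table names, and columns (s3://example-bucket/..., transactions, txn_date, amount) are hypothetical placeholders, not references to any real system.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: paths, columns, and table names are illustrative only.
spark = SparkSession.builder.appName("payments-etl-example").getOrCreate()

# Extract: read raw transaction data (schema inferred here for brevity;
# a production job would declare an explicit schema).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/transactions/")

# Transform: type the amount column, drop bad rows, and aggregate per day.
daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .groupBy("txn_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# The same aggregation expressed in SQL, via a temporary view.
raw.createOrReplaceTempView("transactions")
daily_sql = spark.sql("""
    SELECT txn_date,
           SUM(CAST(amount AS DOUBLE)) AS total_amount,
           COUNT(*)                    AS txn_count
    FROM transactions
    WHERE amount IS NOT NULL
    GROUP BY txn_date
""")

# Load: write the curated result to the warehouse layer, partitioned by date.
daily.write.mode("overwrite").partitionBy("txn_date").parquet(
    "s3://example-bucket/curated/daily_totals/"
)
```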
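The configuration-tuning qualification can be sketched the same way. The values below are illustrative assumptions, not a recommended configuration; real settings depend on cluster resources, data volume, and workload.

```python
from pyspark.sql import SparkSession

# Illustrative values only; tune against the actual cluster and data volume.
spark = (
    SparkSession.builder
    .appName("tuned-pyspark-job-example")
    # Resource allocation: executor sizing and dynamic scaling.
    .config("spark.executor.memory", "8g")
    .config("spark.executor.cores", "4")
    .config("spark.dynamicAllocation.enabled", "true")
    # Shuffle behavior: partition count sized to the data, plus adaptive
    # query execution to coalesce small partitions at runtime.
    .config("spark.sql.shuffle.partitions", "400")
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

df = spark.read.parquet("s3://example-bucket/curated/daily_totals/")

# Repartitioning and caching are common job-level tuning levers when a
# DataFrame is reused across multiple actions.
df = df.repartition(200, "txn_date").cache()
df.count()  # action that materializes the cached DataFrame
```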
Preferred qualifications, capabilities, and skills
- Familiarity with modern front-end technologies
- Exposure to cloud technologies
- PySpark, Databricks, SQL, AWS Cloud, and AWS Glue (a sample Glue job skeleton follows)
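For the preferred AWS Glue exposure, a minimal Glue job skeleton might look like the following, using the standard Glue PySpark boilerplate; the catalog database and table names are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job setup; names below are hypothetical placeholders.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext.getOrCreate()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog as a DynamicFrame, then convert to a
# Spark DataFrame for ordinary PySpark transformations.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="payments_db",          # hypothetical catalog database
    table_name="raw_transactions",   # hypothetical catalog table
)
df = dyf.toDF().dropDuplicates()

# Write the cleaned result back to S3 as Parquet.
df.write.mode("overwrite").parquet("s3://example-bucket/clean/transactions/")

job.commit()
```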