
Job Description

Overview

As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, and you will drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around finance. You will work closely with process owners, product owners, and business users in a hybrid environment spanning in-house, on-premise data sources as well as cloud and remote systems.

Responsibilities

- Drive and lead delivery of key projects within time and budget
- Drive and lead solution design and build to ensure scalability, performance, and reuse
- Recommend and drive consensus around preferred data integration/engineering approaches
- Anticipate data bottlenecks (latency, quality, speed) and recommend appropriate remediation strategies
- Ensure on-time and on-budget delivery that satisfies project requirements while adhering to enterprise architecture standards
- Facilitate work intake, prioritization, and release timing, balancing demand and available resources
- Ensure tactical initiatives are aligned with the strategic vision and business needs
- Ensure sustainability of live pipelines in the production environment
- Bring hands-on experience implementing and designing data engineering workloads using Spark, Databricks, or similar modern data processing technology
- Work with product owners, scrum masters, and the technical committee to define the three-month roadmap for each program increment (sprint-wise)
- Manage and scale data pipelines responsible for ingestion and data transformation
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners
- Prototype new approaches and build solutions at scale
- Research state-of-the-art methodologies
- Create documentation for learnings and knowledge transfer
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions

Qualifications

Required skill set and experience:

- Bachelor's degree in Computer Science, MIS, Business Management, or a related field
- 10+ years' experience in Information Technology
- 4+ years of Azure, AWS, and cloud technologies
- 6+ years of experience writing complex SQL queries in a DWH or data lake environment
- 5+ years of experience with a programming language (i.e., Python, Java, or Scala), preferably Python
- Experience building frameworks for processes such as data ingestion and DataOps
- Good written and verbal communication skills, along with collaboration and listening skills
- Well versed in Spark optimization techniques
- Experience dealing with multiple vendors as necessary
- Hands-on experience writing complex SQL queries
- Big Data (Hadoop, HBase, MapReduce, Hive, HDFS, etc.) and Spark/PySpark
- Sound skills and hands-on experience with Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Storage Explorer
- Proficient in creating Data Factory pipelines for on-cloud ETL processing: copy activity, custom Azure development, etc.
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines
- Experience with data profiling and data quality tools
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake

Mandatory "non-technical" skills:

- Excellent remote collaboration skills
- Experience working in a matrix organization with diverse priorities
- Enthusiasm for learning functional knowledge specific to the finance business
- Ability to work with virtual teams (remote work locations) within a team of technical resources (employees and contractors) based in multiple global locations
- Participation in technical discussions, driving clarity on complex issues and requirements to build robust solutions
- Strong communication skills to meet with delivery teams and business-facing teams, understand sometimes ambiguous needs, and translate them into clear, aligned requirements
