
Job Description

WHO YOU’LL WORK WITH


This position is part of the Retail organization. Nike Retail Engineering products are used in 37 countries, serving 540M consumers and 46,000 athletes annually. This position will focus on the Store Performance Reporting product. This is a team that's always on the offense, working on innovative solutions and building world-class experiences that let Nike store coaches view business intelligence reports and make key decisions for their stores.


WHO WE ARE LOOKING FOR


  • The candidate should have a minimum of 7 years of work experience building and maintaining data pipelines.
  • The candidate should have a strong command of backend technologies such as Databricks, Apache Spark, Python, and SQL.
  • The candidate should have relevant experience working with Databricks, AWS/Azure, and data storage technologies such as databases and distributed file systems, and be familiar with the Spark framework.
  • Hands-on problem-solving skills and the ability to troubleshoot complex data issues.
  • Experience with ETL tools and processes.
  • Proficiency in Python programming with Apache Spark, with a focus on data processing and automation (see the brief sketch after this list).
  • Strong SQL skills and experience with relational databases.
  • Familiarity with data warehousing concepts and best practices.
  • Exposure to cloud platforms such as AWS and Azure.
  • A bachelor's or master's degree in computer/information science or engineering from a reputed institution.
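
To give a flavor of the Python-with-Spark work described above, here is a minimal PySpark sketch of an extract-transform-load step. The paths, table, and column names are hypothetical illustrations, not actual Nike datasets.

    from pyspark.sql import SparkSession, functions as F

    # Illustrative sketch only: the bucket, dataset, and columns below
    # are hypothetical examples, not real Nike systems.
    spark = SparkSession.builder.appName("store-performance-etl").getOrCreate()

    # Extract: read raw store transactions from a distributed file system.
    raw = spark.read.parquet("s3://example-bucket/raw/store_transactions/")

    # Transform: drop invalid records and aggregate daily sales per store.
    daily_sales = (
        raw.filter(F.col("amount") > 0)
           .withColumn("sale_date", F.to_date("transaction_ts"))
           .groupBy("store_id", "sale_date")
           .agg(
               F.sum("amount").alias("total_sales"),
               F.countDistinct("order_id").alias("order_count"),
           )
    )

    # Load: write the aggregate back out, partitioned for reporting queries.
    daily_sales.write.mode("overwrite").partitionBy("sale_date").parquet(
        "s3://example-bucket/curated/daily_store_sales/"
    )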

WHAT YOU’LL WORK ON


You will be part of the Store Performance Reporting team in the Athlete Tools and Consumer Order department at Nike. You will collaborate with core data teams across Nike to understand data flows and data ingestion at the enterprise level.


Your role and responsibilities will include:


  • You will design, develop, enhance, and maintain scalable ETL pipelines to process large volumes of data from various sources.
  • You will implement and manage data integration solutions using tools like Databricks, Snowflake, and other relevant technologies.
  • You will develop and optimize data models and schemas to support analytics and reporting needs.
  • You will write efficient and maintainable code in Python for data processing and transformations.
  • You will utilize Apache Spark for distributed data processing and large-scale data analytics.
  • You will convert business requirements into technical solutions, ensuring data quality and integrity through unit testing (see the testing sketch after this list).
  • You will collaborate with cross-functional teams to understand data structures and data pipelines.
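
As a minimal sketch of the unit-testing responsibility above, here is how a small Spark transformation might be tested with pytest. The function, fixture rows, and column names are hypothetical, chosen only to illustrate the pattern.

    from pyspark.sql import SparkSession, functions as F

    def drop_invalid_sales(df):
        """Keep only rows with a positive amount and a non-null store_id."""
        return df.filter((F.col("amount") > 0) & F.col("store_id").isNotNull())

    def test_drop_invalid_sales():
        # Hypothetical fixture data for illustration only.
        spark = SparkSession.builder.master("local[1]").getOrCreate()
        rows = [("s1", 10.0), ("s2", -5.0), (None, 3.0)]
        df = spark.createDataFrame(rows, ["store_id", "amount"])

        result = drop_invalid_sales(df).collect()

        # Only the valid row should survive the filter.
        assert len(result) == 1
        assert result[0]["store_id"] == "s1"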