Over the past 20 years, Amazon has earned the trust of over 300 million customers worldwide by providing unprecedented convenience, selection, and value on Amazon.com. By deploying Amazon Pay’s products and services, merchants make it easy for these millions of customers to purchase safely on their third-party sites using the information already stored in their Amazon accounts.
In this role, you will lead Data Engineering efforts to drive automation for the Amazon Pay organization.
You will be part of the data engineering team that will envision, build, and deliver high-performance, fault-tolerant data pipelines. As a Data Engineer, you will work with cross-functional partners across Science, Product, software development (SDEs), Operations, and leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions.
Key job responsibilities
· Design, implement, and support a platform providing ad-hoc access to large data sets
· Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
· Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, Redshift, and OLAP technologies
· Model data and metadata for ad-hoc and pre-built reporting
· Interface with business customers, gathering requirements and delivering complete reporting solutions
· Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (see the illustrative sketch after this list)
· Build and deliver high-quality data sets to support business analysts, data scientists, and customer reporting needs
· Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
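To give a concrete, if simplified, sense of the ETL work described above, the following is a minimal PySpark sketch of an extract-transform-load job. The S3 paths, column names, and daily merchant aggregation are hypothetical placeholders for illustration only, not an Amazon Pay schema or pipeline.

# Minimal, illustrative ETL sketch (hypothetical paths and schema; not a production pipeline).
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl-sketch")
    .getOrCreate()
)

# Extract: read raw order events from a hypothetical S3 location.
raw_orders = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: basic filtering, typing, and a daily aggregate per merchant.
daily_merchant_sales = (
    raw_orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("merchant_id", "order_date")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_amount").alias("gross_sales"),
    )
)

# Load: write a partitioned, columnar data set for downstream reporting.
(
    daily_merchant_sales
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/daily_merchant_sales/")
)

spark.stop()

In practice, a job like this would typically run on EMR or AWS Glue on a schedule, with the curated output loaded into Redshift or exposed to analysts for reporting, in line with the AWS technologies listed in the qualifications below.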
About the team
The Amazon Pay Data Engineering and Analytics team’s mission is to transform raw data into actionable insights. We do this by providing a single source of truth, standardized metrics, and reporting with deep-dive capabilities, and by producing ML models and actionable insights that identify growth opportunities and drive the Amazon Pay flywheel.
Basic qualifications
- 2+ years of data engineering experience
- Experience with SQL
- Experience with one or more scripting languages (e.g., Python, KornShell)
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Kinesis Data Firehose, Lambda, and IAM roles and permissions