
Job Description

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
Background
Gracenote is the world’s leading entertainment data and technology company. We power the top music services, consumer electronics companies, automakers, media companies and cable and satellite operators on the planet. At its core, Gracenote helps people find, discover and connect with the entertainment they love.
Gracenote does this by:

  • Global Databases - Creating, collecting and organizing detailed information about TV shows, sports, movies and music.
  • Best-in-Class Experience - Building best-in-class services and technologies to make TV, sports, movies and music more accessible to and discoverable by fans.
  • Highly Scalable Platform - Delivering all of this on a highly scalable platform that serves billions every month.

The data Gracenote creates and the tech we invent are essential to 80% of Forbes’ Most Valuable Brands in Automotive, Media and Tech.
Job Purpose
We are currently looking for a Data Engineer with 3+ years of experience for our Business Intelligence & Analytics (BIA) team, which is part of Gracenote’s Global Quality Office. The BIA team is responsible for building standardized, scalable measurement and reporting solutions for our clients and internal teams. Our guiding principles include client-centricity, data-driven decision-making, transparency, proactiveness, and a focus on user experience and engagement. This role supports the organization’s journey to leverage its vast metadata content.

Responsibilities


  • Build and maintain big data pipelines in the cloud for BI solutions.
  • Leverage distributed computing frameworks to process batch and streaming workloads.
  • Author and orchestrate jobs and event-driven compute services, with a focus on serverless.
  • Implement and maintain data-warehouse, data-lake and lakehouse architectures for BI/analytics use cases.
  • Develop scalable, cost-effective data solutions using clean coding principles.
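
As a flavor of the batch-pipeline work described above, here is a minimal sketch of a typical transform stage — filter raw playback events, then aggregate counts per title. It is written in plain Python for brevity; in this role the same filter → group → count logic would typically run as a PySpark job on EMR or Glue. All names here (`counts_by_title`, `events`, the 30-second threshold) are hypothetical, illustrative choices, not part of the job requirements.

```python
from collections import Counter

def counts_by_title(events):
    """Aggregate raw playback events into per-title view counts.

    `events` is an iterable of dicts such as {"title": ..., "ms_played": ...}.
    Events with less than 30 seconds of play time are filtered out before
    counting, mirroring a filter -> groupBy -> count batch stage.
    """
    valid = (e for e in events if e.get("ms_played", 0) >= 30_000)
    return Counter(e["title"] for e in valid)

# Hypothetical batch of events
events = [
    {"title": "Show A", "ms_played": 45_000},
    {"title": "Show A", "ms_played": 10_000},  # too short, filtered out
    {"title": "Show B", "ms_played": 60_000},
]
result = counts_by_title(events)
```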

Skills


Conceptual, hands-on and strong implementation experience in:

  • Languages - Python, SQL, PySpark
  • Cloud (AWS) - S3, Lambda, Glue, EventBridge, Step Functions, EC2, EMR, SSM, IAM, VPC, CloudFront, CloudWatch, API Gateway
  • Modern columnar and open table formats - Parquet, Delta Lake, Apache Iceberg
  • Relational Databases - PostgreSQL, MySQL, SQL Server, Amazon RDS
  • Streaming - Kafka, Spark Streaming, Flink, Druid
  • Cloud Data-warehouses - Snowflake, AWS Redshift, Synapse Analytics
  • Data Catalogs - AWS Glue Data Catalog, Hive Metastore, Apache Iceberg
  • Query Engines - Athena, Presto, Trino, Dremio

Familiarity and working knowledge of:

  • Unified Platforms - Databricks, Microsoft Fabric, Azure Synapse
  • Modern data stack
  • API - REST

Soft skills:

  • Problem solving
  • Strong communication
  • Collaboration and teamwork
  • Learning agility for new technologies, languages, systems etc.
  • Attention to detail, critical thinking and focus on quality