
Job Description

At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future. 
Background
Gracenote is the world’s leading entertainment data and technology company. We power the top music services, consumer electronics companies, automakers, media companies and cable and satellite operators on the planet. At its core, Gracenote helps people find, discover and connect with the entertainment they love.
Gracenote does this by:
  • Global Databases - Creating, collecting and organizing detailed information about TV shows, sports, movies and music.
  • Best-in-Class Experience - Building best-in-class services and technologies to make TV, sports, movies and music more accessible to and discoverable by fans.
  • Highly Scalable Platform - All on a highly scalable platform that serves billions every month. The data Gracenote creates and the tech we invent is essential to 80% of Forbes’ Most Valuable Brands in Automotive, Media and Tech.
Job Purpose
We are currently looking for a Data Engineer with 3+ years of experience for our Business Intelligence & Analytics (BIA) team, which is part of Gracenote’s Global Quality Office. The BIA team is responsible for building standardized, scalable measurement and reporting solutions for our clients and internal teams. Our guiding principles include client-centricity, data-driven decision making, transparency, proactiveness, better user experience and engagement. This role will support the organization’s journey to leverage our vast metadata content.

Responsibilities


  • Build and maintain big data pipelines on the cloud for BI solutions.
  • Leverage distributed computing frameworks for processing batch and streaming workloads.
  • Author and orchestrate jobs and event-driven compute services, with a focus on serverless.
  • Implement and maintain data-warehouse, data-lake and lakehouse architectures for BI/analytics use-cases.
  • Develop scalable, cost-effective data solutions using clean coding principles.

Skills


Conceptual, hands-on and strong implementation experience in:
  • Languages - Python, SQL, PySpark
  • Cloud (AWS) - S3, Lambda, Glue, EventBridge, Step Functions, EC2, EMR, SSM, IAM, VPC, CloudFront, CloudWatch, API Gateway
  • Modern columnar and open table formats - Parquet, Delta Lake, Apache Iceberg
  • Relational Databases - PostgreSQL, MySQL, SQL Server, Amazon RDS
  • Streaming - Kafka, Spark Streaming, Flink, Druid
  • Cloud Data Warehouses - Snowflake, AWS Redshift, Synapse Analytics
  • Data Catalogs - AWS Glue Data Catalog, Hive Metastore, Apache Iceberg
  • Query Engines - Athena, Presto, Trino, Dremio

Familiarity and working knowledge of:
  • Unified Platforms - Databricks, Microsoft Fabric, Azure Synapse
  • Modern data stack
  • APIs - REST

Soft skills:
  • Problem solving
  • Strong communication
  • Collaboration and teamwork
  • Learning agility for new technologies, languages, systems etc.
  • Attention to detail, critical thinking and focus on quality

Job Details

Job Location: India
Company Industry: Other Business Support Services
Company Type: Unspecified
Employment Type: Unspecified
Monthly Salary Range: Unspecified
Number of Vacancies: Unspecified
