
Job Description

At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future. 
Background
Gracenote is the world’s leading entertainment data and technology company. We power the top music services, consumer electronics companies, automakers, media companies and cable and satellite operators on the planet. At its core, Gracenote helps people find, discover and connect with the entertainment they love.
Gracenote does this by:
  • Global Databases - Creating, collecting and organizing detailed information about TV shows, sports, movies and music.
  • Best-in-Class Experience - Building best-in-class services and technologies to make TV, sports, movies and music more accessible to and discoverable by fans.
  • Highly Scalable Platform - All on a highly scalable platform that serves billions every month.
The data Gracenote creates and the tech we invent are essential to 80% of Forbes’ Most Valuable Brands in Automotive, Media and Tech.
Job Purpose
We are currently looking for a Data Engineer with 3+ years of experience for our Business Intelligence & Analytics (BIA) team, which is part of Gracenote’s Global Quality Office. The BIA team is responsible for building standardized, scalable measurement and reporting solutions for our clients and internal teams. Our guiding principles include client-centricity, data-driven decision-making, transparency, proactiveness, and better user experience and engagement. This role supports the organization’s journey to leverage our vast metadata content.

Responsibilities


  • Build and maintain big data pipelines on the cloud for BI solutions (a minimal PySpark sketch follows this list).
  • Leverage distributed computing frameworks to process batch and streaming workloads.
  • Author and orchestrate jobs and event-driven compute services, with a focus on serverless.
  • Implement and maintain data-warehouse, data-lake and lakehouse architectures for BI/analytics use cases.
  • Develop scalable, cost-effective data solutions using clean coding principles.
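
As a concrete reference for the pipeline and lakehouse items above, here is a minimal PySpark batch-job sketch of the kind this role would build. It is illustrative only: the S3 bucket names, paths and column names are hypothetical, and it assumes a Spark cluster (for example EMR) that is already configured for S3 access.

# Minimal PySpark batch pipeline sketch: read raw Parquet from S3,
# aggregate, and write a partitioned curated table for BI consumption.
# All bucket names, paths and column names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bia-daily-aggregation").getOrCreate()

# Read raw events (assumes S3 access is configured on the cluster,
# e.g. via an EMR instance profile).
events = spark.read.parquet("s3://example-raw-bucket/events/")

# A simple daily roll-up per client -- the kind of standardized,
# scalable measurement the BIA team produces for reporting.
daily = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("client_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Partitioned Parquet output so downstream query engines such as
# Athena or Trino can prune partitions cheaply.
(
    daily.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/daily_client_events/")
)

spark.stop()

The same job could target a Delta Lake or Apache Iceberg table instead of plain Parquet, which is where the lakehouse architectures mentioned above come in.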

Skills


Conceptual, hands-on and strong implementation experience in:
  • Languages - Python, SQL, PySpark
  • Cloud (AWS) - S3, Lambda, Glue, EventBridge, Step Functions, EC2, EMR, SSM, IAM, VPC, CloudFront, CloudWatch, API Gateway (a serverless sketch follows this section)
  • Modern columnar and open table formats - Parquet, Delta Lake, Apache Iceberg
  • Relational Databases - PostgreSQL, MySQL, SQL Server, Amazon RDS
  • Streaming - Kafka, Spark Streaming, Flink, Druid
  • Cloud Data Warehouses - Snowflake, AWS Redshift, Synapse Analytics
  • Data Catalogs - AWS Data Catalog, Hive Metastore, Apache Iceberg
  • Query Engines - Athena, Presto, Trino, Dremio

Familiarity and working knowledge of:
  • Unified Platforms - Databricks, Microsoft Fabric, Azure Synapse
  • Modern data stack
  • API - REST

Soft skills:
  • Problem solving
  • Strong communication
  • Collaboration and teamwork
  • Learning agility for new technologies, languages, systems etc.
  • Attention to detail, critical thinking and focus on quality
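
To make the serverless, event-driven side of this stack concrete, below is a hedged sketch of an AWS Lambda handler that starts a Glue job when EventBridge delivers an S3 "object created" event. The Glue job name, environment variable and event shape are illustrative assumptions, not part of the posting.

# Sketch of an event-driven, serverless orchestration step:
# a Lambda handler that launches a Glue job for a newly arrived S3 object
# delivered through EventBridge. Job name and event fields are assumed.
import json
import os

import boto3

glue = boto3.client("glue")

# Hypothetical Glue job name, supplied via the Lambda environment.
GLUE_JOB_NAME = os.environ.get("GLUE_JOB_NAME", "bia-curation-job")


def handler(event, context):
    # EventBridge S3 notifications carry the bucket and object under "detail".
    detail = event.get("detail", {})
    bucket = detail.get("bucket", {}).get("name", "")
    key = detail.get("object", {}).get("key", "")

    # Hand the new object to the Glue job as job arguments.
    response = glue.start_job_run(
        JobName=GLUE_JOB_NAME,
        Arguments={"--source_bucket": bucket, "--source_key": key},
    )

    return {
        "statusCode": 200,
        "body": json.dumps({"jobRunId": response["JobRunId"]}),
    }

A Step Functions state machine could orchestrate the same flow when several jobs need to run in sequence.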

Job Details

Job Location: India
Company Industry: Other Business Support Services
Company Type: Unspecified
Employment Type: Unspecified
Monthly Salary: Unspecified
Number of Vacancies: Unspecified
