At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

We are looking for an experienced Data Engineer to join our team, which provides a spectrum of services and expertise to all business verticals within Gracenote. This person will collaborate with other Data Engineers, DBAs, SQL/ETL Developers, DevOps Engineers, Security professionals, and Data Science team members to architect, build, and deploy the platform solutions on which our entertainment metadata pipelines thrive. Our team views diversity as a strength, and we are looking for people who will help support an inclusive culture of belonging where everyone feels empowered to bring their full, authentic selves to work.

Purpose
As a Data Engineer, your role is to own the data pipeline and the data governance of our Data Strategy. Our Data Strategy underpins our suite of Client-facing Applications, Data Science activities, Operational Tools and Business Analytics.
Responsibilities:
Architect and build scalable, resilient and cost-effective software to support complex data pipelines.
The architecture has two facets: Storage and Compute. The Data Engineer is responsible for designing and maintaining the different tiers of data storage, including (but not limited to) archival, long-term persistent storage, and transactional and reporting storage.
The Data Engineer is responsible for designing, implementing and maintaining various data pipelines such as self-service ingestion tools, exports to application-specific warehouses and indexing activities.
The Data Engineer is responsible for data modeling, as well as designing, implementing and maintaining various data catalogs, to support data transformation and product requirements.
Collaborate with Data Science to understand, translate, and integrate methodologies into engineering build pipelines.
Partner with product owners to translate complex business requirements into technical solutions, imparting design and architecture guidance.
Provide expert mentorship to project teams on technology strategy, cultivating advanced skill sets in software engineering and modern SDLC.
Stay informed about the latest technologies and methodologies by participating in industry forums, having an active peer network, and engaging actively with customers.
Cultivate a team environment focused on continuous learning, where innovative technologies are developed and refined through teamwork.
Qualifications
A degree in Computer Science or related technical field.
Strong Computer Science fundamentals
3+ years of professional Database Development with languages such as ANSI SQL, T-SQL, and PL/SQL, plus database design, normalization, server tuning, and query plan optimization
3+ years of Software Engineering experience with programming languages such as Java, Scala, Python, and Unix shell
3+ years of professional DBA experience with large datastores, including HA and DR planning and support
Understanding of File Systems
Demonstrated understanding and experience with big data tools such as Kafka, Spark and Trino/Presto
Experience configuring database replication (physical and/or logical)
ETL experience (3rd party and proprietary)
Experience with orchestration tools such as Airflow
Comfortable with version control systems such as git
A thirst for learning new technologies and keeping up with industry advances.
Excellent communication and knowledge-sharing skills.
Comfortable working with technical and non-technical teams.
Strong debugging skills.
Comfortable providing and receiving code review feedback.
A positive attitude, adaptability, enthusiasm, and a growth mindset.
Nice to have:
A personal technical blog
A personal (Git) repository of side projects
Participation in an open-source community
Preferred skills:
Comfortable using Docker and Kubernetes for container management.
DevOps experience deploying and tuning the applications you’ve built.
Experience with monitoring tools such as Datadog, Prometheus, Grafana, and CloudWatch.