Job Description

At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.
Introduction
Join the Nielsen One Application (N1 Apps) team as we develop the next-generation software suite that unifies all of Nielsen’s offerings into one seamless experience. Leveraging cutting-edge technologies, we’re on a quest for smart, innovative engineers ready to tackle complex integration tasks and introduce new technologies. 
At N1 Apps, collaboration is key. We thrive on growth, initiative, and innovation, nurturing an open culture that prizes learning and experimentation. Enhance your skills in our guild meetings, influence our roadmap with your architectural ideas, and collaborate cross-functionally to deliver unparalleled user experiences. 
We’re expanding across multiple teams and are eager to connect with candidates who are ready to make an impact. If you’re interested and believe you fit this dynamic role, we’d love to hear from you!
About the role
You’ll be working within an international group of teams spanning India, Europe, and the US. As a Senior Scala/Spark Engineer, you will work alongside and guide a diverse team of engineers, including DevOps, Data, Backend, and Front-End engineers.
You should be able to work independently, guide junior engineers, and possess a passion and drive for learning, suggesting, and adapting to new technologies.
Responsibilities:
Continuously discuss the cost of change (i.e., code quality) with your team members.
Write unit tests, integration tests, and API tests.
Support the application 24/7 as part of the team’s on-call rotation.
Write clean code with a focus on coupling, separation of concerns, and best practices.
Spend 90% of your time writing code, emphasizing Test-driven development (TDD).
Dedicate 10% of your time to learning and improving the existing application architecture.
Stay open to learning and adapting to new technology architectures and patterns.
Possess knowledge of distributed architectures, particularly with Akka, Akka Cluster, and Akka Persistence, alongside experience using Spark with Scala.
Have some hands-on experience with building and creating CI/CD pipelines.
Conduct code reviews and participate in design discussions.
Analyze the impact of changes on data and implement event sourcing and CQRS patterns (a brief sketch follows this list).
Have a strong understanding of functional, reactive and parallel programming.
Troubleshoot and solve complex problems in production.
Collaborate and coordinate with different stakeholders, including product, data science, and account managers.
Diagnose AWS infrastructure issues related to the application.
Implement best practices for 24/7 application monitoring, orchestration, and performance optimization.
Follow Agile principles, participate in grooming and planning sessions, and effectively translate business requirements to Agile stories.
Practice DevOps and SecOps for continuous, incremental delivery of quality products, with guidance from senior engineers.
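
For a sense of what the event sourcing work above involves, here is a minimal sketch of an event-sourced entity using Akka Persistence (typed). The aggregate, command, and event names are hypothetical and chosen only for illustration; they do not reflect the actual N1 Apps domain model.

  import akka.actor.typed.Behavior
  import akka.persistence.typed.PersistenceId
  import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

  // Hypothetical aggregate, used only to illustrate event sourcing with Akka Persistence.
  object ViewCounter {
    sealed trait Command
    final case class Record(views: Int) extends Command

    sealed trait Event
    final case class Recorded(views: Int) extends Event

    final case class State(total: Int)

    def apply(entityId: String): Behavior[Command] =
      EventSourcedBehavior[Command, Event, State](
        persistenceId = PersistenceId.ofUniqueId(entityId),
        emptyState = State(0),
        // Write side: commands are validated and persisted as events.
        commandHandler = (_, cmd) => cmd match {
          case Record(views) => Effect.persist(Recorded(views))
        },
        // Recovery: state is rebuilt by replaying events from the journal.
        eventHandler = (state, evt) => evt match {
          case Recorded(views) => state.copy(total = state.total + views)
        }
      )
  }

In a CQRS arrangement, the persisted events would typically also feed Akka Projection to build query-side read models.
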
Key Skills Required
Bachelor’s or Master’s degree in Computer Science or a related discipline, or equivalent work experience.
4-8 years of experience with Scala; experience in upgrading, maintaining, and performance-tuning large Scala applications is required.
4+ years of advanced experience with Scala frameworks such as Akka/Pekko and Akka Cluster; a deep understanding of Akka Persistence, Akka Projection, and Akka Serialization is essential.
4+ years of advanced experience with Java and relational databases is essential.
2+ years of experience with AWS services (RDS, S3) is required.
2+ years of experience with Apache Spark. Familiarity with Spark SQL and a basic understanding of performance tuning for large Spark applications would be beneficial (see the sketch after this list).
2+ years of experience with monitoring and alert orchestration tools such as Prometheus, Grafana, and OpsGenie/PagerDuty is essential.
2+ years of experience building CI/CD pipelines in GitLab for applications running on Kubernetes (EKS) using Docker is required.
2+ years of experience in developing microservices applications and familiarity with protocols such as HTTP and gRPC is essential.
Proficient in debugging and performance-tuning large-scale Java and Big Data applications, using tools such as VisualVM, JProfiler, and remote debugging techniques.
Fluent in spoken and written English (C1 level), with a broad vocabulary.
Understand and apply core object-oriented and functional programming principles. Follow good coding practices with thorough unit and integration testing, emphasizing TDD.
Commitment to following best practices for security, scalability, and performance.
Excellent problem-solving skills and the ability to troubleshoot complex technical issues in production environments.
Strong communication skills for effective collaboration with cross-functional teams, stakeholders, and third-party vendors.
Continuous improvement mindset to identify opportunities for automation, optimization, and efficiency gains in infrastructure and deployment processes.
Ability to document processes, procedures, and technical architectures for knowledge sharing and future reference.
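
To give a rough, concrete picture of the Spark-with-Scala side of the role, the sketch below is a small batch aggregation job. The application name, input path, column names, and output location are hypothetical placeholders rather than actual Nielsen datasets.

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.functions.{col, count, countDistinct}

  // Hypothetical daily roll-up of viewing events; paths and columns are placeholders.
  object DailyViewershipRollup {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("daily-viewership-rollup")
        .getOrCreate()

      val events = spark.read.parquet("s3://example-bucket/viewing-events/")

      val daily = events
        .groupBy(col("channel"), col("view_date"))
        .agg(
          count("*").as("views"),
          countDistinct("household_id").as("households")
        )

      daily.write
        .mode("overwrite")
        .partitionBy("view_date")
        .parquet("s3://example-bucket/daily-rollup/")

      spark.stop()
    }
  }

Performance tuning for a job like this usually comes down to partitioning, shuffle behavior, and inspecting the Spark SQL query plan, which is where the Spark SQL familiarity mentioned above applies.
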
Preferred
Leadership qualities and the ability to inspire and motivate a team, mentoring junior engineers and fostering a collaborative team environment.
At least 1 year of experience with Test-driven development (yes, test-first!); a small test-first sketch follows this list.
Familiarity with CQRS, event sourcing, and Domain-Driven Design (DDD).
Familiarity with large-scale enterprise Java/Big Data systems built using Agile, TDD, and DevOps methodologies.
Proven track record of delivering enterprise software solutions using Agile principles with either Scrum or Kanban.
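
To make the test-first expectation concrete, here is a minimal ScalaTest sketch: the test is written first against a small, hypothetical helper whose name and behavior are illustrative assumptions only.

  import org.scalatest.funsuite.AnyFunSuite

  // Hypothetical helper under test; in a test-first flow this test exists (and fails) before the implementation.
  object LatestPerKey {
    def apply[K, V](records: Seq[(K, Long, V)]): Map[K, V] =
      records.groupBy(_._1).map { case (key, rows) => key -> rows.maxBy(_._2)._3 }
  }

  class LatestPerKeySpec extends AnyFunSuite {
    test("keeps only the most recent value for each key") {
      val input = Seq(("a", 1L, "old"), ("a", 2L, "new"), ("b", 5L, "only"))
      assert(LatestPerKey(input) == Map("a" -> "new", "b" -> "only"))
    }
  }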
