Overview:
We are seeking a highly skilled Senior Data Engineer with a strong focus on building robust data streaming pipelines and effective payload design for both batch and streaming use cases. The ideal candidate will have over 7 years of experience in data engineering, with proficiency in Java, Apache Beam or Flink, Kafka, and SQL. If you are passionate about data engineering and thrive in a collaborative Agile environment, we invite you to apply!
Roles and Responsibilities:
Data Streaming Pipeline Development: Design, build, and maintain robust data streaming pipelines for both batch and streaming use cases using Java, Apache Beam or Flink, and Kafka.
Data Analysis and Design: Analyze data requirements and design effective payloads for batch and streaming workloads.
Documentation: Create and maintain thorough documentation of data pipeline architectures, design decisions, and operational processes.
Observability: Ensure proper observability is embedded into the design of data pipelines, allowing for seamless monitoring and troubleshooting.
DevSecOps Mindset: Embrace a "you build it, you run it" approach, taking ownership of deployed pipelines and managing their performance in production.
Agile Participation: Actively participate in Agile ceremonies, including user story grooming, estimation, and sprint planning, to ensure alignment with project goals.
Testing: Conduct unit, functional, integration, and non-functional testing to validate data pipeline functionality and performance.
Must-Have Skills:
Programming Language: Proficient in Java programming language.
Streaming Technologies: Experience with Apache Beam or Flink for building data processing pipelines.
Source Code Management: Familiarity with source code management using GitHub.
API Consumption: Experience in consuming API endpoints for data integration and processing.
Database Management: Proficient with at least one enterprise-grade relational database (e.g., Oracle, SQL Server, PostgreSQL) and at least one NoSQL database.
Messaging Systems: Knowledge of Kafka for real-time data processing and streaming.
SQL Skills: Intermediate level SQL skills for querying and managing data.
Build Tools: Experience with Maven for project management and build automation.
Preferred Skills:
Java Version: Familiarity with Java JDK 17.
AWS Services: Experience with AWS Managed Service for Flink, AWS EKS, AWS Aurora PostgreSQL, and AWS ElastiCache.
CI/CD Tools: Knowledge of GitHub Actions and Docker for managing deployment pipelines.
Monitoring and Observability: Proficiency with OpenTelemetry for observability and diagnosing issues within data pipelines.
Data Solutions: Experience with Debezium for change data capture and streaming of database changes.
Certification: An Associate-level AWS certification is a plus.
Qualifications:
SYNECHRON’S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, 'Same Difference', is committed to fostering an inclusive culture that promotes equality, diversity, and an environment respectful to all. As a global company, we strongly believe that a diverse workforce helps build stronger, more successful businesses. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disability or veteran status, or any other characteristic protected by law.
Candidate Application Notice