Job Description
At Juniper, we believe the network is the single greatest vehicle for knowledge, understanding, and human advancement the world has ever known.
To achieve real outcomes, we know that experience is the most important requirement for networking teams and the people they serve. Delivering an experience-first, AI-Native Network pivots on the creativity and commitment of our people. It requires a consistent and committed practice, something we call the Juniper Way.
Future of X:
The Future of X (FoX) team is dedicated to transforming and creating a digital-first experience for support and services at Juniper. We are investing in a state-of-the-art digital technology stack, leveraging AI and automation to simplify customer journeys, enhance support experiences, and accelerate processes—often by enabling self-service capabilities. Key components of our solution include omnichannel platforms, portals, and a digital suite of automation tools, providing customers with seamless, low-effort engagement options and enabling faster issue resolution, often without the need to open a case.
To build these complex solutions, our team consists of domain experts, architects, data scientists, data engineers, and MLOps professionals, managing the entire solution lifecycle—from concept to deployment. On the GenAI/AI front, we develop solutions tailored to Juniper’s business objectives.
Finally, every solution we bring to market is fully integrated into Juniper’s customer support and services technology stack, ensuring a seamless and impactful digital experience.
Job Title: Senior Data Engineer
Location: Bangalore, India
Primary Tech skills:
- Advanced web-crawling and scraping methods and tools
- Building end-to-end data engineering pipelines for semi-structured and unstructured data (text, simple and complex table structures, images, video, and audio)
- Python, PySpark, SQL, RDBMS
- Data transformation (ETL/ELT)
- SQL data warehouse (e.g. Snowflake) working experience; administration preferred
Secondary Tech skills:
- Databricks
- Familiarity with AWS services: S3, Glue, EMR, EC2, RDS, monitoring, and IAM
- Kafka, Spark Streaming, and Kafka Streams
- Workflow automation (e.g. using GitHub Actions)
- Performing root-cause analysis (RCA)
Responsibilities:
Develop, maintain, and optimize data pipelines, workflows, and a feature store to deliver seamless, scalable data ingestion and transformation.
- Architect, design, and implement data engineering pipelines with performance and scalability in mind, including data storage and processing.
- Implement advanced data transformations and quality checks to ensure the accuracy, completeness, security, and consistency of data.
- Integrate data from diverse sources for ingestion, transformation, and storage, leveraging AWS S3 and, where applicable, Snowflake as a SQL data warehouse.
- Create and implement advanced data models and schemas, and apply data governance and data management best practices.
Qualification and Desired Experiences:
- 7+ years of data analysis and engineering experience
- Bachelor’s degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Working knowledge of API- or stream-based data extraction processes (e.g. the Salesforce API and Bulk API), with hands-on experience in web crawling.
Personal Skills:
- Ability to collaborate cross-functionally and build sound working relationships at all levels of the organization
- Ability to handle sensitive information with keen attention to detail and accuracy. Passion for data handling ethics.
- Effective time management skills and ability to solve complex technical problems with creative solutions while anticipating stakeholder needs and helping meet or exceed expectations
- Comfortable with ambiguity and uncertainty of change when assessing needs for stakeholders
- Self-motivated and innovative; confident when working independently, but an excellent team player with a growth-oriented personality