Description
It’s an exciting time to be at Infoblox. Named a Top 25 Cyber Security Company by The Software Report and one of Inc. magazine’s Best Workplaces for 2020, Infoblox is the leader in cloud-first networking and security services. Our solutions empower organizations to take full advantage of the cloud to deliver network experiences that are inherently simple, scalable, and reliable for everyone. Infoblox customers are among the largest enterprises in the world and include 70% of the Fortune 500, and our success depends on bright, energetic, talented people who share a passion for building the next generation of networking technologies, and having fun along the way.
We are looking for a Senior Data Engineer to join our Cloud Engineering team in Pune, India, reporting to the Director of Software Engineering. In this role, you will develop platforms and products for Infoblox’s SaaS product line, delivering next-level networking for our customers. This is an opportunity to work closely with data scientists and product teams to curate and refine the data powering our latest cloud products. Come join our growing Cloud Engineering team and help us build world-class solutions. You are the ideal candidate if you are passionate about the nexus of data and computer science and driven to figure out how best to represent and summarize data in a way that informs good decisions and drives new products.
What you’ll do:
Curate large-scale data from a multitude of sources into appropriate sets for research and development for the data scientists, threat analysts, and developers across the company
Design, test, and implement storage solutions for various consumers of the data, especially data warehouses such as ClickHouse and OpenSearch
Design and implement mechanisms to monitor data sources over time for changes using summarization, monitoring, and statistical methods
Design, develop, and maintain APIs that enable seamless data integration and retrieval processes for internal and external applications, and ensure these APIs are scalable, secure, and efficient to support high-volume data interactions
Leverage computer science algorithms and constructs, including probabilistic data structures, to distill large data into sources of insight and enable future analytics
Convert prototypes into production data engineering solutions through disciplined software engineering practices, Spark optimizations, and modern deployment pipelines
Collaborate on design, implementation, and deployment of applications with the rest of software engineering
Support data scientists and product teams in building, debugging, and deploying Spark applications that best leverage data
Build and maintain tools for automation, deployment, monitoring, and operations
Create test plans, test cases, and run tests with automated tools
What you’ll bring:
7+ years of experience in software development with programming languages such as Golang, C, C++, C#, or Java
Expertise in Big Data technologies: MapReduce, Spark Streaming, Kafka, Pub/Sub, and in-memory databases
Experience with NoSQL databases such as OpenSearch and ClickHouse
Good exposure to application performance tuning, memory management, and scalability
Ability to design highly scalable distributed systems using different open-source technologies
Experience in microservices development and container-based software using Docker/Kubernetes and other container technologies is a plus
Experience with AWS, GCP, or Azure is a plus
Experience building high-performance algorithms
Bachelor’s degree in Computer Science, Computer Engineering, or Electrical Engineering is required, master’s degree preferred
What success looks like: After six months, you will…
Complete onboarding by demonstrating knowledge of the Data Lake and associated technologies and CI/CD processes by deploying ETL pipelines for curation and warehousing to production
Complete rotations on support/duty where you will gain experience with the different systems, tools, and processes in the Data Lake by resolving reported issues
Contribute to the team's velocity by participating in Scrum and driving stories to completion
After about a year, you will…
Be an expert on the Data Lake target state architecture and drive engineering design and grooming sessions for new feature development
Apply coding best practices and provide in-depth reviews on the team’s pull requests
Be a thought leader in one or more domains of the Data Lake, driving development and mentoring teammates in this domain
We’ve got you covered: Our holistic benefits package includes coverage of your health, wealth, and wellness, as well as a great work environment, employee programs, and company culture. We offer a competitive salary and benefits package, including a Provident Fund with company match and generous paid time off to help you balance your life. We have a strong culture and live our