Our Big Data Engineers are experienced technologists with technical depth and breadth, along with strong interpersonal skills. In this role, you will work directly with customers and our team to enable innovation through continuous, hands-on deployment across technology stacks. You will build data pipelines, develop data engineering code, and write complex data queries and algorithms. If you get a thrill working with cutting-edge technology and love to help solve customers’ problems, we’d love to hear from you. It’s time to rethink the possible. Are you ready?
What You’ll Be Doing:
Build complex ETL code
Build complex queries against databases such as Oracle, SQL Server, MariaDB, MySQL, and MongoDB
Work on Data and Analytics Tools in the Cloud
Develop code in Python, Scala, and R
Work with technologies such as Spark, Databricks, Hadoop, and Kafka
Apply AWS, GCP, and/or Azure public cloud skills and experience
Build complex Data Engineering workflows
Create complex data solutions and build data pipelines
Establish credibility and build impactful relationships with our customers to enable them to be cloud advocates
Capture and share industry best practices amongst the community
Attend and present valuable information at industry events
Qualifications & Experience:
3+ years of design and implementation experience with distributed applications
2+ years of experience in database architectures and data pipeline development
Demonstrated knowledge of software development tools and methodologies
Presentation skills with a high degree of comfort speaking with executives, IT management, and developers
Excellent communication skills with the ability to tailor conversations to the audience
Technical degree required; Computer Science or Math background desired
Demonstrated ability to adapt to new technologies and learn quickly
AWS, GCP, and/or Azure public cloud skills and experience