In this role, you will be responsible for planning, designing, and implementing robust data pipelines using a range of technologies. You will work with both relational and non-relational databases, ingest many types of data from multiple sources, and build machine learning and deep learning models. Your expertise in data collection, data manipulation, data analysis, and machine learning techniques will be crucial in understanding customer requirements and building data solutions to match.
Responsibilities include (but are not limited to) the following:
- Design, build, and maintain ELK Stack solutions for both non-production and production deployments.
- Develop scalable data pipelines for diverse data sources, ensuring quality and integrity.
- Utilize advanced analytics for insights and predictions, including statistical analysis and machine learning.
- Create interactive dashboards and reports for stakeholders.
- Implement data governance frameworks for integrity, privacy, and security.
- Stay current with the latest data science and engineering technologies.
- Provide expertise for ELK Stack solution design and deployment, ensuring compliance.
- Configure and maintain Linux-based systems and Elastic Cloud Enterprise solutions.
- Test data flows, troubleshoot issues, and monitor performance.
- Actively participate in meetings to align development with project needs.
Education and Experience:
Three years of experience and a Bachelor's degree in a related field preferred.
Physical Requirements:
Prolonged periods sitting at a desk and working on a computer.
Required Skills: