Job Description
Bachelor’s degree in Computer Science, Engineering, or a related field.
Proven experience as a Data Engineer or in a similar role, working with large-scale data processing and ETL pipelines.
Strong programming skills in languages such as Python, Java, or Scala, with experience in data manipulation and processing frameworks like Apache Spark.
Experience with SQL and database technologies (e.g., relational databases, SQL queries, data modelling).
Proficiency with big data technologies such as Hadoop and Hive, and knowledge of distributed systems and cloud computing platforms (e.g., AWS, Azure, GCP).
Familiarity with data integration and workflow management tools such as Apache Airflow.
Knowledge of data warehousing concepts and experience with data warehousing solutions is highly desirable.
Strong analytical and problem-solving skills, with the ability to analyse complex data-related issues and propose effective solutions.