
Job Description

At FreePL, you will be an integral part of a small team on a mission to build something big from the ground up. You will need to be comfortable with ambiguity and a fast pace, bring an entrepreneurial mindset and a can-do attitude, take ownership of your work, and be comfortable making decisions and solving problems independently.


As part of a quickly growing team, you will have the opportunity to shape the future of FreePL and make a real impact. You will work closely with the founding team to develop and implement strategies for growth and success. If you are self-motivated, results-driven, and have a passion for delivering excellence, this is your chance to not only join us in our launch phase but to grow into a leader at FreePL.
What We Look For


We are looking for a talented Data Engineer to join our team and contribute to building efficient, scalable, and reliable data infrastructure to drive our data-driven decision-making processes.


Responsibilities


  • Design, build, and maintain efficient and scalable data pipelines using Python and other modern tools.
  • Collaborate with cross-functional teams to understand data requirements and deliver robust solutions.
  • Develop and optimize ETL/ELT processes to extract, transform, and load data from multiple sources (a minimal sketch of this kind of step follows this list).
  • Manage, monitor, and improve database systems to ensure high performance and availability.
  • Implement and maintain data models and architecture to support analytics and machine learning workflows.
  • Ensure data security, privacy, and compliance with relevant regulations.
  • Perform root cause analysis for data-related issues and propose solutions.
  • Stay up to date with the latest technologies, frameworks, and industry trends.
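
A minimal sketch of the kind of batch ETL step described above, assuming Pandas and SQLAlchemy against PostgreSQL; the table names, schema, and connection string are hypothetical and only illustrate the shape of the work:

```python
# Illustrative only: a minimal extract-transform-load step with Pandas and
# SQLAlchemy. Table names and the connection string are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical PostgreSQL DSN for the operational database.
engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/warehouse")


def run_daily_shipments_etl() -> None:
    # Extract: pull raw shipment events from an operational table (hypothetical schema).
    raw = pd.read_sql(
        "SELECT shipment_id, status, updated_at FROM raw_shipment_events", engine
    )

    # Transform: keep the latest status per shipment and normalise the timestamp to UTC.
    latest = (
        raw.sort_values("updated_at")
        .drop_duplicates("shipment_id", keep="last")
        .assign(updated_at=lambda df: pd.to_datetime(df["updated_at"], utc=True))
    )

    # Load: write the cleaned snapshot to an analytics table, replacing the previous run.
    latest.to_sql("shipment_status_snapshot", engine, if_exists="replace", index=False)


if __name__ == "__main__":
    run_daily_shipments_etl()
```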

Qualifications & Work Experience


  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 3+ years of experience as a Data Engineer or in a similar role.
  • Proficiency in Python with a strong understanding of libraries like Pandas, NumPy, and PySpark.
  • Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Strong knowledge of data integration tools and frameworks (e.g., Apache Airflow, Apache Kafka); a minimal scheduling sketch follows this list.
  • Experience with the Azure cloud platform and managing cloud-based data pipelines.
  • Familiarity with big data technologies such as Hadoop, Spark, or Snowflake is a plus.
  • Solid understanding of data warehousing concepts and tools (e.g., Redshift, BigQuery).
  • Knowledge of containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes) is an advantage.
  • Excellent problem-solving and communication skills.
  • Experience in the logistics or supply chain domain.
  • Familiarity with real-time data streaming and processing.
  • Knowledge of data governance, quality frameworks, and best practices.
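
A minimal sketch of how a pipeline like the one above might be orchestrated with Apache Airflow, assuming Airflow 2.x; the DAG id, schedule, and callable are hypothetical and shown only to illustrate the orchestration pattern:

```python
# Illustrative only: a minimal Apache Airflow 2.x DAG that schedules a nightly
# ETL task. The DAG id, schedule, and callable are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transform_load() -> None:
    # Placeholder for the ETL logic (e.g., the Pandas/SQLAlchemy step sketched earlier).
    pass


with DAG(
    dag_id="daily_shipments_etl",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # run once per day
    catchup=False,                  # do not backfill missed runs
) as dag:
    PythonOperator(
        task_id="run_etl",
        python_callable=extract_transform_load,
    )
```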

