
Job Description

Introduction
At IBM, work is more than a job – it’s a calling: To build. To design. To code. To consult. To think along with clients and sell. To make markets. To invent. To collaborate. Not just to do something better, but to attempt things you’ve never thought possible. Are you ready to lead in this new era of technology and solve some of the world’s most challenging problems? If so, let’s talk.

IBM Consulting is IBM’s consulting and global professional services business, with market-leading capabilities in business and technology transformation. With deep expertise in many industries, we offer strategy, experience, technology, and operations services to many of the most innovative and valuable companies in the world. Our people are focused on accelerating our clients’ businesses through the power of collaboration. We believe in the power of technology responsibly used to help people, partners and the planet.


IBM Cognitive Asset Engineering Services is looking for a motivated and seasoned Senior Data Scientist to join our global team, deliver outstanding results for both internal and external clients, and build a robust and repeatable product offering. You will be working on one of the top 10 assets in the group, with highly collaborative teams in a dynamic and agile environment.


The IBM Consulting Modern Data Accelerators asset provides the data and analytics foundation that enables business outcomes through the creation of cloud and hybrid-cloud environments. It manages Curation, Ingestion, Metadata, Operational Controls, Persist & Publish, and Data Security.
You will collaborate directly with the asset's CTO, architects, development leaders, system administrators, and testers, who both enhance the product and establish and govern the architectural, development, and testing processes used across the organization.

Your Role and Responsibilities


  • Develop and maintain data pipelines for data ingestion, transformation, and loading (see the illustrative sketch after this list).
  • Optimize data pipelines for performance and scalability.
  • Monitor data quality and troubleshoot issues.
  • Collaborate with the Data Architect and DevOps on infrastructure management.
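
The posting does not prescribe a particular stack for these responsibilities; as an illustration only, the minimal sketch below shows the shape of an ingest-transform-load pipeline in Python with DuckDB (both named in the requirements further down). The file paths, table names, and column names are hypothetical.

    # Illustrative only: paths, table names, and columns are assumptions.
    import duckdb

    RAW_CSV = "raw/events.csv"          # assumed raw input location
    WAREHOUSE_DB = "warehouse.duckdb"   # assumed local warehouse file

    def run_pipeline() -> None:
        con = duckdb.connect(WAREHOUSE_DB)

        # Ingestion: load the raw CSV into a staging table (schema inferred).
        con.execute(
            f"CREATE OR REPLACE TABLE staging_events AS "
            f"SELECT * FROM read_csv_auto('{RAW_CSV}')"
        )

        # Transformation: deduplicate and drop rows missing a key field.
        con.execute(
            "CREATE OR REPLACE TABLE events_clean AS "
            "SELECT DISTINCT * FROM staging_events WHERE event_id IS NOT NULL"
        )

        # Simple data-quality check: report how many rows were filtered out.
        dropped = con.execute(
            "SELECT count(*) FROM staging_events WHERE event_id IS NULL"
        ).fetchone()[0]
        if dropped:
            print(f"warning: dropped {dropped} rows with missing event_id")

        con.close()

    if __name__ == "__main__":
        run_pipeline()

In practice the transformation layer would more likely live in DBT models and the quality checks in a monitoring framework; the sketch only conveys the ingest-transform-load flow and the kind of troubleshooting involved.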


Required Technical and Professional Expertise


  • Strong programming skills in Python (or similar language).
  • Experience building data pipelines and data transformation processes with DBT, DuckDB, Spark, and other ETL/ELT tools.
  • Familiarity with data ingestion tools like Kafka, if applicable (a brief sketch follows this list).
  • Experience with cloud platforms for data storage and processing.
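
Since the posting only names Kafka in passing, the snippet below is a hedged sketch of what streaming ingestion might look like using the kafka-python client; the topic name, broker address, and output file are hypothetical.

    # Illustrative only: broker address, topic, and output path are assumptions.
    import json
    from kafka import KafkaConsumer  # from the kafka-python package

    def consume_to_staging(out_path: str = "events_staging.jsonl") -> None:
        consumer = KafkaConsumer(
            "events",                                # hypothetical topic
            bootstrap_servers="localhost:9092",      # hypothetical broker
            auto_offset_reset="earliest",
            value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        )
        # Append each message to a local staging file.
        with open(out_path, "a", encoding="utf-8") as sink:
            for message in consumer:
                sink.write(json.dumps(message.value) + "\n")

    if __name__ == "__main__":
        consume_to_staging()

A production consumer would typically land records in cloud object storage and handle offsets, retries, and schema validation; this only illustrates the ingestion entry point.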


Preferred Technical and Professional Expertise


  • Knowledge of ModelOps practices on cloud and containerization.
  • Familiarity with design-led development methodologies for complex data platforms and analytics.
  • Ability to work with data from the data platform and communicate insights effectively.
