
Job Description

Position Summary:
We are seeking an Apache Spark Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Spark-based data processing systems. The role requires hands-on experience with Spark architecture and its core functionalities, with a focus on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Spark applications and solve complex data challenges in real-time processing, big data analytics, and streaming.
If you’re passionate about working in fast-paced, dynamic environments and want to be part of the cutting edge of data solutions, this role is for you.

We’re looking for someone who can:


  • Design & Optimization: Design and optimize distributed Spark-based applications, ensuring low-latency, high-throughput performance for big data workloads.
  • Troubleshooting: Provide expert-level troubleshooting for any data or performance issues related to Spark jobs and clusters.
  • Data Processing Expertise: Work extensively with large-scale data pipelines using Spark's core components (Spark SQL, DataFrames, RDDs, Datasets, and Structured Streaming).
  • Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of Spark jobs to reduce processing time and resource consumption.
  • Cluster Management: Collaborate with DevOps and infrastructure teams to manage Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.).
  • Real-time Data: Design and implement real-time data processing solutions using Apache Spark Streaming or Structured Streaming (a minimal illustrative sketch follows this list).
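
To give a concrete sense of the real-time work described above, here is a minimal PySpark Structured Streaming sketch. It is illustrative only: the Kafka broker address, topic name, and windowing choices are assumptions made for the example, not details taken from this posting.

  # Minimal Structured Streaming sketch (illustrative; broker and topic are hypothetical).
  from pyspark.sql import SparkSession
  from pyspark.sql.functions import col, window, count

  spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

  # Read a stream of events from a hypothetical Kafka topic.
  events = (
      spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
  )

  # Count messages per key in 1-minute windows, tolerating 5 minutes of late data.
  counts = (
      events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window(col("timestamp"), "1 minute"), col("key"))
      .agg(count("*").alias("events"))
  )

  # Write the running counts to the console; a production job would target a
  # durable sink (Kafka, Delta Lake, object storage) with checkpointing enabled.
  query = (
      counts.writeStream
      .outputMode("update")
      .format("console")
      .option("truncate", "false")
      .start()
  )
  query.awaitTermination()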

What makes you the right fit for this position:


  • Expert in Apache Spark: In-depth knowledge of Spark architecture, execution models, and its components (Spark Core, Spark SQL, Spark Streaming, etc.).
  • Data Engineering Practices: Solid understanding of ETL pipelines, data partitioning, shuffling, and serialization techniques to optimize Spark jobs.
  • Big Data Ecosystem: Knowledge of related big data technologies such as Hadoop, Hive, Kafka, HDFS, and YARN.
  • Performance Tuning and Debugging: Demonstrated ability to tune Spark jobs, optimize query execution, and troubleshoot performance bottlenecks (see the tuning sketch after this list).
  • Experience with Cloud Platforms: Hands-on experience in running Spark clusters on cloud platforms such as AWS, Azure, or GCP.
  • Containerization & Orchestration: Experience with containerized Spark environments using Docker and Kubernetes is a plus.
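
As a companion to the data engineering and tuning expectations above, the following sketch shows the kind of knobs involved: shuffle partition sizing, Kryo serialization, adaptive query execution, and key-based repartitioning. The partition counts, dataset paths, and column names are hypothetical and would normally be chosen by profiling a real workload.

  # Illustrative tuning sketch; all values and paths below are placeholders.
  from pyspark.sql import SparkSession

  spark = (
      SparkSession.builder
      .appName("tuning-demo")
      # Size shuffle parallelism to the cluster instead of the default of 200.
      .config("spark.sql.shuffle.partitions", "400")
      # Kryo is usually faster and more compact than Java serialization.
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      # Let adaptive query execution coalesce small shuffle partitions at runtime.
      .config("spark.sql.adaptive.enabled", "true")
      .getOrCreate()
  )

  orders = spark.read.parquet("s3://example-bucket/orders")  # hypothetical input

  # Repartition on the aggregation key to spread the shuffle evenly, and cache
  # because the DataFrame is reused by more than one action below.
  orders = orders.repartition(400, "customer_id").cache()

  orders.count()  # materializes the cache
  daily = orders.groupBy("order_date").sum("amount")
  daily.write.mode("overwrite").parquet("s3://example-bucket/daily_totals")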

Good to have:


  • Certification in Apache Spark or related big data technologies.
  • Experience working with Acceldata's data observability platform or similar tools for monitoring Spark jobs.
  • Demonstrated experience with scripting languages like Bash, PowerShell, and Python.
  • Familiarity with concepts related to application, server, and network security management.
  • Certifications from leading cloud providers (AWS, Azure, GCP) and expertise in Kubernetes would be significant advantages.
