
Job Description

As NVIDIA makes inroads into the Datacenter business, our team plays a central role in getting the most out of our exponentially growing datacenter deployments, as well as in establishing a data-driven approach to hardware design and system software development. We collaborate with a broad cross-section of teams at NVIDIA, ranging from DL research teams to CUDA kernel and DL framework development teams to silicon architecture teams. As our team grows, and as we seek to identify and take advantage of long-term opportunities, our skillset needs are expanding as well.


Do you want to influence the development of high-performance Datacenters designed for the future of AI? Do you have an interest in system architecture and performance? In this role you will learn how CPU, GPU, networking, and IO relate to deep learning (DL) architectures for Natural Language Processing, Computer Vision, Autonomous Driving, and other technologies. Come join our team and bring your interests to help us optimize our next-generation systems and Deep Learning Software Stack.


What you'll be doing:


  • Help develop software infrastructure to characterize and analyze a broad range of Deep Learning applications.
  • Evolve cost-efficient datacenter architectures tailored to meet the needs of Large Language Models (LLMs).
  • Work with experts to help develop analysis and profiling tools in Python, Bash, and C++ to measure key performance metrics of DL workloads running on NVIDIA systems.
  • Analyze system and software characteristics of DL applications.
  • Develop analysis tools and methodologies to measure key performance metrics and to estimate potential for efficiency improvement.


What we need to see:


  • A Bachelor’s degree in Electrical Engineering or Computer Science with 3+ years of relevant experience (Master’s or PhD preferred).
  • Experience in at least one of the following:
    • System Software: Operating Systems (Linux), Compilers, GPU kernels (CUDA), DL Frameworks (PyTorch, TensorFlow).
    • Silicon Architecture and Performance Modeling/Analysis: CPU, GPU, Memory or Network Architecture
  • Experience programming in C/C++ and Python. Exposure to containerization platforms (Docker) and datacenter workload managers (Slurm) is a plus.
  • Demonstrated ability to work in virtual team environments and a strong drive to own tasks from beginning to end; prior experience in such environments will help you stand out.

Ways to stand out from the crowd:


  • Background in system software, operating system internals, GPU kernels (CUDA), or DL frameworks (PyTorch, TensorFlow).
  • Experience with silicon performance monitoring or profiling tools (e.g., perf, gprof, nvidia-smi, dcgm).
  • In-depth performance modeling experience in any one of CPU, GPU, memory, or network architecture.
  • Exposure to containerization platforms (Docker) and datacenter workload managers (Slurm).
  • Prior experience working with multi-site or cross-functional teams.


NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you!


#LI-Hybrid

