As NVIDIA makes inroads into the datacenter business, our team plays a central role in getting the most out of our rapidly growing datacenter deployments and in establishing a data-driven approach to hardware design and system software development. We collaborate with a broad cross-section of teams at NVIDIA, ranging from DL research teams to CUDA kernel and DL framework development teams to silicon architecture teams. As our team grows, and as we seek to identify and pursue long-term opportunities, our skill-set needs are expanding as well.
Do you want to influence the development of high-performance datacenters designed for the future of AI? Are you interested in system architecture and performance? In this role you will explore how CPU, GPU, networking, and I/O interact with deep learning (DL) architectures for natural language processing, computer vision, autonomous driving, and other applications. Come join our team, and bring your interests to help us optimize our next-generation systems and deep learning software stack.
What you'll be doing:
Develop analysis tools and methodologies to measure key performance metrics and to estimate the potential for efficiency improvements.
What we need to see:
Ways to stand out from the crowd:
Background in system software, operating system internals, GPU kernels (CUDA), or DL frameworks (PyTorch, TensorFlow).
Experience with silicon performance monitoring or profiling tools (e.g., perf, gprof, nvidia-smi, DCGM).
In-depth performance modeling experience in at least one of CPU, GPU, memory, or network architecture.
Exposure to containerization platforms (e.g., Docker) and datacenter workload managers (e.g., Slurm).
Prior experience working with multi-site or cross-functional teams.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you!
#LI-Hybrid