Principal / Senior GPU Software Performance Engineer - Post-Training

Advanced Micro Devices, Inc.
$226,400.00/Yr. - $339,600.00/Yr.
United States, California, San Jose
2100 Logic Drive
Dec 16, 2025


WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

Drive the performance of post-training workloads on AMD Instinct GPUs. You'll work across kernels, distributed training, and framework integrations to deliver fast, stable, and reproducible training pipelines on ROCm.
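
To give a concrete flavor of the work, here is a minimal sketch (hypothetical model, sizes, and step count; not AMD's pipeline) of profiling a fine-tuning step with torch.profiler, typically the first move in locating the bottlenecks this role targets:

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm exposes AMD GPUs through the "cuda" device API
    model = nn.Linear(4096, 4096).to(device)                 # placeholder model
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(32, 4096, device=device)
    y = torch.randn(32, 4096, device=device)

    activities = [torch.profiler.ProfilerActivity.CPU]
    if device == "cuda":
        activities.append(torch.profiler.ProfilerActivity.CUDA)

    with torch.profiler.profile(activities=activities) as prof:
        for _ in range(10):                                   # a few steps; real runs add warmup and a schedule
            opt.zero_grad(set_to_none=True)
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()

    # Rank ops by accumulated time to locate hot spots
    # (sort by "cuda_time_total" on GPU runs).
    print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))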

THE PERSON:

The ideal candidate is passionate about software engineering and the craft of training performance. You lead sophisticated cross-stack issues, spanning data loaders, kernels, distributed training, and compilers, to clear resolution. You communicate crisply and collaborate effectively with framework, compiler, kernel, and model teams across AMD, driving measurable improvements with rigor, ownership, and reproducibility.

KEY RESPONSIBILITIES:

  • Lead performance for fine-tuning and RL training solutions on AMD GPUs.
  • Improve throughput, memory efficiency, and stability across data, model, and optimizer steps.
  • Optimize multi-GPU/multi-node training and communication patterns (a minimal sketch follows this list).
  • Contribute efficient kernels/ops and targeted graph-level optimizations.
  • Profile, diagnose, and resolve bottlenecks using standard tooling; prevent regressions in CI.
  • Ship reproducible pipelines and documentation adopted by internal teams and external developers.
  • Collaborate with framework, compiler, and model teams to land durable improvements.
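
As a minimal sketch of the multi-GPU pattern referenced above (assuming a single-node launch via torchrun --nproc_per_node=N with a toy model; not AMD's internal stack), DistributedDataParallel replicates the model per rank and synchronizes gradients with an all-reduce each step:

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group(backend="nccl")     # ROCm builds route the "nccl" backend through RCCL
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(1024, 1024).to(local_rank), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(16, 1024, device=local_rank)
    for _ in range(5):
        opt.zero_grad(set_to_none=True)
        model(x).sum().backward()               # backward triggers the gradient all-reduce
        opt.step()

    dist.destroy_process_group()

Overlapping that all-reduce with backward compute, and tuning bucket sizes and communication topology, is exactly the kind of work this role covers.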

PREFERRED EXPERIENCE:

  • Proven GPU performance engineering for deep learning (ROCm/HIP, Triton, or similar).
  • Hands-on with SFT, LoRA, and RL-based training at scale.
  • Strong PyTorch experience (torch.distributed, FSDP/ZeRO, or equivalent; see the sketch after this list).
  • Proficient in Python and C++; comfortable reading/writing kernels when needed.
  • Experience with distributed systems and collective communication libraries.
  • Track record of turning profiles into fixes, upstreaming changes, and documenting results.
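
As a minimal sketch of the FSDP item above (toy model and sizes are placeholders; launched with torchrun as in the previous example), FSDP shards parameters, gradients, and optimizer state across ranks instead of replicating them, trading extra communication for memory headroom:

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(nn.Linear(2048, 2048), nn.GELU(), nn.Linear(2048, 2048))
    model = FSDP(model, device_id=local_rank)             # shards params/grads/optimizer state
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)  # construct after wrapping

    x = torch.randn(8, 2048, device=local_rank)
    opt.zero_grad(set_to_none=True)
    model(x).sum().backward()
    opt.step()

    dist.destroy_process_group()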

ACADEMIC CREDENTIALS:

  • B.S./M.S./Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, or equivalent

LOCATION:

San Jose, CA preferred. Other U.S.-based locations may be considered.

#LI-MV1

#LI-HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
