"Alerted.org

Job Title, Industry, Employer
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Advanced Search

Advanced Search

Cancel
Remove
+ Add search criteria
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Related to

  • Senior On-Device Model Inference Optimization Engineer

    NVIDIA (Santa Clara, CA)

    NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

     

    We are seeking a highly skilled Senior On-Device Model Inference Optimization Engineer to join our team and lead efforts to improve the performance and efficiency of the AI models enabling the next generation of autonomous vehicle technology at NVIDIA!

    What you'll be doing:

    + Develop and implement strategies to optimize AI model inference for on-device deployment.

    + Employ techniques like pruning, quantization, and knowledge distillation to minimize model size and computational demands (a minimal quantization sketch follows this list).

    + Optimize performance-critical components using CUDA and C++.

    + Collaborate with multi-functional teams to align optimization efforts with hardware capabilities and deployment needs.

    + Benchmark inference performance, identify bottlenecks, and implement solutions.

    + Research and apply innovative methods for inference optimization.

    + Adapt models for diverse hardware platforms and operating systems with varying capabilities.

    + Create tools to validate the accuracy and latency of deployed models at scale with minimal friction.

    + Recommend and implement model architecture changes to improve the accuracy-latency balance.
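
    To make the quantization item above concrete, here is a minimal, hedged sketch of post-training dynamic quantization in PyTorch (one of the frameworks named later in the posting). The toy model, layer sizes, and int8 choice are illustrative assumptions rather than details from the role.

        # Illustrative sketch only: post-training dynamic quantization
        # of a toy PyTorch model. Real on-device work would also weigh
        # static quantization, QAT, pruning, and distillation.
        import torch
        import torch.nn as nn

        model = nn.Sequential(      # stand-in for a production model
            nn.Linear(512, 2048),
            nn.ReLU(),
            nn.Linear(2048, 512),
        ).eval()

        # Convert Linear weights to int8; activations are quantized on
        # the fly at inference time, shrinking the model and typically
        # reducing CPU inference latency.
        quantized = torch.ao.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8
        )

        with torch.no_grad():
            out = quantized(torch.randn(1, 512))
        print(out.shape)  # torch.Size([1, 512])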

    What we need to see:

    + MSc or PhD in Computer Science, Engineering, or a related field, or equivalent experience.

    + 5+ years of proven experience specializing in model inference and optimization.

    + 10+ years of overall work experience in a relevant area.

    + Expertise in modern machine learning frameworks, particularly PyTorch, ONNX, and TensorRT (see the export sketch after this list).

    + Proven experience in optimizing inference for transformer and convolutional architectures.

    + Strong programming proficiency in CUDA, Python, and C++.

    + In-depth knowledge of optimization techniques, including quantization, pruning, distillation, and hardware-aware neural architecture search.

    + Skilled in building and deploying scalable, cloud-based inference systems.

    + Passionate about developing efficient, production-ready solutions with a strong focus on code quality and performance.

    + Meticulous attention to detail, ensuring precision and reliability in safety-critical systems.

    + Strong collaboration and communication skills for working effectively across multidisciplinary teams.

    + A proactive, diligent mindset and a drive to tackle complex optimization challenges.
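
    As a concrete, purely illustrative companion to the PyTorch/ONNX/TensorRT expectation above, the sketch below exports a toy model to ONNX, a common first step before building a TensorRT engine. The file name, input shape, and opset version are assumptions, and the trtexec command in the closing comment is just one typical way to compile the result.

        # Illustrative sketch only: export a toy PyTorch model to ONNX
        # as a typical precursor to building a TensorRT engine.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        ).eval()

        dummy = torch.randn(1, 3, 224, 224)  # assumed input shape
        torch.onnx.export(
            model,
            dummy,
            "model.onnx",
            input_names=["input"],
            output_names=["output"],
            dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
            opset_version=17,
        )
        # One common next step, using TensorRT's bundled CLI:
        #   trtexec --onnx=model.onnx --fp16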

    Ways to stand out from the crowd:

    + Publications or industry experience in optimizing and deploying model inference at scale.

    + Hands-on expertise in hardware-aware optimizations and accelerators such as GPUs, TPUs, or custom ASICs.

    + Active contributions to open-source projects focused on inference optimization or machine learning frameworks.

    + Experience in designing and deploying inference pipelines for real-time or autonomous systems.
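
    Benchmarking inference and validating latency come up repeatedly in this role, and real-time pipelines like the ones mentioned above live or die by those numbers. The sketch below is a minimal, assumption-laden way to measure average GPU latency with CUDA events after a warmup; a production benchmark would also control clocks and batching, and report percentiles.

        # Illustrative sketch only: average inference latency with CUDA
        # events and a warmup pass; falls back to wall-clock timing on CPU.
        import time
        import torch
        import torch.nn as nn

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU()).to(device).eval()
        x = torch.randn(8, 3, 224, 224, device=device)  # assumed batch of 8

        with torch.no_grad():
            for _ in range(10):           # warmup: allocator, autotuning, etc.
                model(x)
            if device == "cuda":
                torch.cuda.synchronize()
                start = torch.cuda.Event(enable_timing=True)
                end = torch.cuda.Event(enable_timing=True)
                start.record()
                for _ in range(100):
                    model(x)
                end.record()
                torch.cuda.synchronize()
                print(f"avg latency: {start.elapsed_time(end) / 100:.3f} ms")
            else:
                t0 = time.perf_counter()
                for _ in range(100):
                    model(x)
                print(f"avg latency: {(time.perf_counter() - t0) * 10:.3f} ms")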

     

    The base salary range is 184,000 USD - 356,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

     

    You will also be eligible for equity and benefits (https://www.nvidia.com/en-us/benefits/). NVIDIA accepts applications on an ongoing basis.

     

    NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

     


© 2025 Alerted.org