AI Software Engineer, LLM Inference Performance…
NVIDIA (Santa Clara, CA)
    NVIDIA is at the forefront of the generative AI revolution. We are looking for a Software Engineer in Performance Analysis and Optimization for LLM Inference to join our performance engineering team. In this role, you will focus on improving the efficiency and scalability of large language model (LLM) inference on NVIDIA Computing Platforms through compiler- and kernel-level analysis and optimization. You will work on key components that span IR-based compiler optimization, graph-level transformations, and precompiled kernel performance tuning to deliver gains in inference speed and efficiency.

     

    As a core contributor, you will collaborate with groups passionate about compiler, kernel, hardware, and framework development. You will analyze performance bottlenecks, develop new optimization passes, and validate gains through profiling and projection tools. Your work will directly influence the runtime behavior and hardware utilization of next-generation LLMs deployed across NVIDIA’s data center and embedded platforms.

    What you'll be doing:

    + Analyze the performance of LLMs running on NVIDIA Compute Platforms using profiling, benchmarking, and performance analysis tools (a minimal timing sketch follows this list).

    + Understand compiler optimization pipelines and identify optimization opportunities, including IR-based compiler middle-end optimizations and kernel-level transformations.

    + Design and develop new compiler passes and optimization techniques to deliver best-in-class, robust, and maintainable compiler infrastructure and tools.

    + Collaborate with hardware architecture, compiler, and kernel teams to understand how hardware and software co-design enables efficient LLM inference.

    + Work with globally distributed teams across compiler, kernel, hardware, and framework domains to investigate performance issues and contribute to solutions.
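
    The timing sketch referenced in the first bullet above is a minimal illustration, assuming PyTorch and a CUDA-capable GPU: it times a single transformer layer with CUDA events. The layer sizes, batch/sequence shapes, and iteration counts are arbitrary placeholders, and a real analysis would profile full models with dedicated tools (e.g., Nsight Systems) rather than this toy harness.

        # Toy latency benchmark: times a single transformer layer with CUDA events.
        # All sizes and iteration counts below are illustrative placeholders.
        import torch

        def benchmark(fn, warmup=10, iters=50):
            """Return mean latency in milliseconds per call, measured with CUDA events."""
            for _ in range(warmup):
                fn()
            torch.cuda.synchronize()
            start = torch.cuda.Event(enable_timing=True)
            end = torch.cuda.Event(enable_timing=True)
            start.record()
            for _ in range(iters):
                fn()
            end.record()
            torch.cuda.synchronize()
            return start.elapsed_time(end) / iters

        if __name__ == "__main__":
            assert torch.cuda.is_available(), "this sketch requires a CUDA device"
            layer = torch.nn.TransformerEncoderLayer(
                d_model=1024, nhead=16, dim_feedforward=4096, batch_first=True
            ).half().cuda().eval()
            x = torch.randn(8, 512, 1024, dtype=torch.half, device="cuda")  # (batch, seq, hidden)
            with torch.inference_mode():
                ms = benchmark(lambda: layer(x))
            print(f"mean latency: {ms:.3f} ms per forward pass")

    Swapping the lambda for different dtypes, batch sizes, or an unfused baseline turns the same harness into a simple A/B comparison.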

    What we need to see:

    + Master’s or PhD in Computer Science, Computer Engineering, or a related field, or equivalent experience.

    + Strong hands-on programming expertise in C++ and Python, with solid software engineering fundamentals.

    + Foundational understanding of modern deep learning models (including transformers and LLMs) and interest in inference performance and optimization.

    + Exposure to compiler concepts such as intermediate representations (IR), graph transformations, scheduling, or code generation through coursework, research, internships, or projects (a toy graph-rewrite sketch follows this list).

    + Familiarity with at least one deep learning framework or compiler/runtime ecosystem (e.g., TensorRT-LLM, PyTorch, JAX/XLA, Triton, vLLM, or similar).

    + Ability to analyze performance bottlenecks and reason about optimization opportunities across model execution, kernels, and runtime systems.

    + Experience working on class projects, internships, research, or open-source contributions involving performance-critical systems, compilers, or ML infrastructure.

    + Strong communication skills and the ability to collaborate effectively in a fast-paced, team-oriented environment.
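
    The toy graph-rewrite sketch referenced above is a deliberately simplified illustration of a graph-level transformation: a pass that fuses chains of single-consumer elementwise nodes in a small list-of-nodes IR. The Node class, the ELEMENTWISE set, and the example graph are invented for illustration only and are not modeled on TensorRT-LLM, MLIR, or any NVIDIA compiler.

        # Toy graph-rewrite pass: fuse chains of single-consumer elementwise ops
        # into one node. The IR is a topologically ordered list of Node records.
        from dataclasses import dataclass, field

        ELEMENTWISE = {"relu", "gelu", "add_scalar", "mul_scalar"}

        @dataclass
        class Node:
            name: str
            op: str
            inputs: list = field(default_factory=list)  # names of producer nodes

        def fuse_elementwise(graph):
            """Greedily merge each elementwise node with its single elementwise consumer."""
            by_name = {n.name: n for n in graph}
            consumers = {}
            for n in graph:
                for src in n.inputs:
                    consumers.setdefault(src, []).append(n.name)
            fused, out = set(), []
            for n in graph:
                if n.name in fused:
                    continue  # already absorbed into an earlier fused node
                ops, cur = [n.op], n
                while (cur.op in ELEMENTWISE
                       and len(consumers.get(cur.name, [])) == 1
                       and by_name[consumers[cur.name][0]].op in ELEMENTWISE):
                    cur = by_name[consumers[cur.name][0]]
                    ops.append(cur.op)
                    fused.add(cur.name)
                if len(ops) > 1:
                    # Keep the tail node's name so downstream consumers remain valid.
                    out.append(Node(cur.name, "fused(" + "+".join(ops) + ")", n.inputs))
                else:
                    out.append(n)
            return out

        if __name__ == "__main__":
            g = [
                Node("x", "input"),
                Node("a", "matmul", ["x"]),
                Node("b", "add_scalar", ["a"]),
                Node("c", "gelu", ["b"]),
                Node("d", "matmul", ["c"]),
            ]
            for node in fuse_elementwise(g):
                print(node.name, node.op, node.inputs)

    A production pass would operate on a real IR with types, shapes, and a cost model, and would be validated against profiled kernels; the sketch only shows the basic shape of a rewrite: match a pattern, replace it, and keep producer/consumer edges consistent.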

    Ways to stand out from the crowd:

    + Proficiency in CUDA programming and familiarity with GPU-accelerated deep learning frameworks and performance tuning techniques.

    + Demonstrated, innovative use of agentic AI tools to enhance productivity and workflow automation.

    + Active engagement with the open-source LLVM or MLIR community to ensure tighter integration and alignment with upstream efforts.

     

    NVIDIA is recognized as one of the world’s most desirable engineering environments, built by teams who value technical depth, innovation, and impact. We work alongside some of the best minds in GPU computing, systems software, and AI. If you’re driven by performance, enjoy solving sophisticated problems, and thrive in an environment that rewards initiative and technical perfection, we’d love to hear from you!

     

    Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 124,000 USD - 195,500 USD for Level 2, and 152,000 USD - 218,500 USD for Level 3.

     

    You will also be eligible for equity and benefits (https://www.nvidia.com/en-us/benefits/).

     

    Applications for this job will be accepted at least until January 18, 2026.

     

    This posting is for an existing vacancy.

     

    NVIDIA uses AI tools in its recruiting processes.

     

    NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

     

