"Alerted.org


  • Senior Researcher - LLM Systems

    Microsoft Corporation (Redmond, WA)




    Generative AI is transforming how people create, collaborate, and communicate, redefining productivity across Microsoft 365 and for our customers globally. At Microsoft, we run the world's largest platform for collaboration and productivity, serving hundreds of millions of consumer and enterprise users. Tackling AI efficiency challenges is crucial for delivering these experiences at scale.

     

    Within our Microsoft-wide Systems Innovation initiative, we are working to advance efficiency across AI systems, exploring novel designs and optimizations across the AI stack: models, AI frameworks, cloud infrastructure, and hardware. We are an Applied Research team driving mid- and long-term product innovations. We collaborate closely with research teams and product groups across the globe that bring deep technical knowledge in cloud systems, machine learning, and software engineering. We communicate our research both internally and externally through academic publications, open-source releases, blog posts, patents, and industry conferences. We also collaborate with academic and industry partners to advance the state of the art and target material product impact that will reach hundreds of millions of customers.

     

    We are looking for a **Senior Researcher – LLM Systems** to invent, analyze, and productionize the next generation of serving architectures for transformer-based models across cloud and edge. The candidate will focus on algorithmic and systems innovations (batching, routing, scheduling, caching, deployment safety, and endpoint configuration) that materially improve latency, throughput, cost, and reliability under real-world SLAs for Microsoft Copilots.

     

    The qualified candidate brings a solid background in distributed systems, operating systems, and/or large-scale ML serving, plus the ambition to translate research into impact in production environments. This role blends rigorous research (theory + measurement) with hands-on engineering, and includes publishing papers, filing patents, and collaborating across research and product teams to advance the state of the art.

     

    For more background, see Efficient AI - Microsoft Research (https://www.microsoft.com/en-us/research/group/efficient-ai/).

     

    Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

    Responsibilities

    + Invent and evaluate algorithms for dynamic batching, routing, and scheduling for transformer inference under multi-tenant Service Level Objectives (SLOs) and variable sequence lengths (see the illustrative sketch after this list).

    + Design and implement caching layers (e.g., KV cache paging/offload, prompt/result caching) and memory pressure controls to maximize GPU/accelerator utilization.

    + Develop endpoint configuration policies (e.g., tensor/pipe parallelism, quantization/precision profiles, speculative decoding, chunked/streaming generation) and safe rollout mechanisms.

    + Profile and optimize end-to-end serving pipelines: token-level latency, end-to-end p95/p99, throughput-per-$, cold-start behavior, warm pool strategy, and capacity planning.

    + Collaborate with model, kernel, and hardware teams to align serving algorithms with attention/KV innovations and accelerator features.

    + Publish research, file patents, and, where appropriate, contribute to open-source serving frameworks.

    + Document designs, benchmarks, and operational playbooks; mentor researchers/engineers on the team.
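
    The first responsibility above is, at its core, a scheduling problem: decide when to cut a batch so accelerators stay busy without blowing per-request latency budgets. As a purely illustrative sketch (not any Microsoft system; all names and thresholds are hypothetical), the Python fragment below dispatches a batch either when it is full or when the most urgent request's SLO deadline is about to expire:

    ```python
    import time
    import heapq
    from dataclasses import dataclass, field

    # Toy SLO-aware dynamic batcher. Illustrative only; a real serving stack also
    # weighs KV-cache occupancy, sequence lengths, and multi-tenant fairness.

    @dataclass(order=True)
    class Request:
        deadline: float                                   # absolute monotonic time the SLO allows
        arrival: float = field(compare=False, default=0.0)
        prompt_tokens: int = field(compare=False, default=0)

    class DynamicBatcher:
        def __init__(self, max_batch_size: int = 32, dispatch_margin_s: float = 0.05):
            self.max_batch_size = max_batch_size          # hard cap on requests per batch
            self.dispatch_margin_s = dispatch_margin_s    # cut a batch when a deadline is this close
            self._queue: list[Request] = []               # min-heap keyed by deadline

        def submit(self, req: Request) -> None:
            heapq.heappush(self._queue, req)

        def maybe_dispatch(self) -> list[Request] | None:
            """Return a batch if it is full or the tightest deadline is nearly due, else None."""
            if not self._queue:
                return None
            now = time.monotonic()
            full = len(self._queue) >= self.max_batch_size
            urgent = self._queue[0].deadline - now <= self.dispatch_margin_s
            if full or urgent:
                take = min(self.max_batch_size, len(self._queue))
                return [heapq.heappop(self._queue) for _ in range(take)]
            return None
    ```

    A caller would poll `maybe_dispatch()` from the serving loop; measuring the resulting token-level and end-to-end p95/p99 latencies against the SLO is what the profiling responsibility above refers to.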

    Qualifications

    Required Qualifications:

    + Doctorate in a relevant field OR equivalent experience.

    + 2+ years of experience in queuing/scheduling theory and practical request orchestration under SLO constraints.

    + 2+ years of experience in C++ and Python for high-performance systems; reliable code quality and profiling/debugging skills.

    + Demonstrated research impact (publications and/or patents) and shipping systems that run at scale.

    Other Requirements:

    Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

     

    + **Microsoft Cloud Background Check**: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

    Preferred Qualifications:

    + Deep understanding of transformer inference efficiency techniques (attention, paged Key-Value (KV) caching, speculative decoding, Low-Rank Adaptation (LoRA), sequence packing/continuous batching, quantization); a simplified paged-cache sketch follows this list.

    + Background in cost/performance modeling, autoscaling, and multi-region disaster recovery (DR).

    + Hands-on experience with inference serving frameworks (e.g., vLLM, Triton Inference Server, TensorRT-LLM, ONNX Runtime/ORT, Ray Serve, DeepSpeed-MII).

    + Familiarity with GPU/accelerator memory management concepts to co-design cache/throughput policies.
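
    Of the techniques named in the first preferred qualification, paged KV caching is perhaps the most systems-flavored: the attention key/value cache is stored in fixed-size blocks mapped through a per-sequence block table, much like virtual-memory paging, so memory is allocated on demand and reclaimed when a sequence finishes. The sketch below is a deliberately simplified, hypothetical allocator meant only to show the bookkeeping; production frameworks such as vLLM add copy-on-write, prefix sharing, preemption, and offload on top of this idea.

    ```python
    # Toy paged KV-cache allocator. Illustrative only: it tracks which physical
    # block holds each token position of each sequence, nothing more.

    class PagedKVCache:
        def __init__(self, num_blocks: int, block_size: int = 16):
            self.block_size = block_size                  # tokens per physical block
            self.free_blocks = list(range(num_blocks))    # unused physical block ids
            self.block_tables: dict[int, list[int]] = {}  # sequence id -> physical block ids

        def slot_for(self, seq_id: int, position: int) -> tuple[int, int]:
            """Return (physical_block, offset) for a token position, allocating a block on demand."""
            table = self.block_tables.setdefault(seq_id, [])
            logical_block = position // self.block_size
            if logical_block >= len(table):               # current block is full (or sequence is new)
                if not self.free_blocks:
                    raise MemoryError("cache exhausted; a real system would preempt or offload")
                table.append(self.free_blocks.pop())
            return table[logical_block], position % self.block_size

        def release(self, seq_id: int) -> None:
            """Return a finished sequence's blocks to the free pool."""
            self.free_blocks.extend(self.block_tables.pop(seq_id, []))
    ```

    Because blocks are fixed-size and non-contiguous, fragmentation stays bounded and freed blocks are immediately reusable by other sequences, which is what lets continuous batching keep accelerator memory utilization high.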

     

    Research Sciences IC4 - The typical base pay range for this role across the U.S. is USD $119,800 - $234,700 per year. A different range applies in specific work locations within the San Francisco Bay Area and the New York City metropolitan area; the base pay range for this role in those locations is USD $158,400 - $258,000 per year.

     

    Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here: https://careers.microsoft.com/us/en/us-corporate-pay

     

    Microsoft will accept applications and process offers for these roles on an ongoing basis.

     

    #M365Core #M365Research #Research

     

    Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations (https://careers.microsoft.com/v2/global/en/accessibility.html) .

     





© 2025 Alerted.org