"Alerted.org

Job Title, Industry, Employer
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Advanced Search

Advanced Search

Cancel
Remove
+ Add search criteria
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Related to

    Software Engineer II - AI Infrastructure (Scheduler) - CoreAI

    Microsoft Corporation (Redmond, WA)




    Overview

     

    The AI Platform organization builds the end-to-end Azure AI stack, from the infrastructure layer to the PaaS and user experience offerings for AI application builders, researchers, and major partner groups across Microsoft. The platform is core to Azure’s innovation, differentiation and operational efficiency, as well as the AI-related capabilities of all of Microsoft’s flagship products, from M365 and Teams to GitHub Copilot and Bing Copilot. We are the team building the Azure OpenAI service, AI Foundry, Azure ML Studio, Cognitive Services, and the global Azure infrastructure for managing the GPU and NPU capacity running the largest AI workloads on the planet.

     

    One of the major, mature offerings of AI Platform is Azure ML Services. It provides data scientists and developers a rich experience for defining, training, fine-tuning, deploying, monitoring, and consuming machine learning models. We provide the infrastructure and workload management capabilities powering Azure ML Services, and we engage directly with some of the major internal research and applied ML groups using these services, including Microsoft Research and the Bing WebXT team.

     

    As part of AI Platform, the AI Infra team is looking for a Software Engineer II - AI Infrastructure (Scheduler) - CoreAI, with an initial focus on the Scheduler subsystem. The scheduler is the “brains” of the AI Infra control plane. It governs access to the platform's GPU and NPU capacity according to a complex system of workload preference rules, placement constraints, optimization objectives, and dynamically interacting policies aimed at maximizing hardware utilization and fulfilling the greatly varying needs of users and the AI Platform partner services in terms of workload types, prioritization, and capacity-targeting flexibility. The scheduler’s set of capabilities is broad and ambitious. It manages quota, capacity reservations, SLA tiers, preemption, auto-scaling, and a wide range of configurable policies. Global scheduling is a major distinguishing feature that overcomes the regional segmentation of the Azure compute fleet by treating GPU capacity as a single global virtual pool, which greatly increases capacity availability and utilization for major classes of ML workloads. We have achieved this capability without introducing a global single point of failure: regional instances of the scheduler service interact via peer-to-peer protocols to share capacity inventory and coordinate the handoff of jobs for scheduling. Our system also manages a significant amount of GPU capacity outside Azure datacenters, through a unified model and operational process and highly generalized, flexible workload scheduling capabilities.
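
    To make the scheduling concepts above concrete, here is a deliberately simplified, hypothetical sketch of priority-tiered placement with preemption across regional GPU pools treated as one virtual pool. It is written in Python for brevity (the actual control plane services are in C#, per the responsibilities below), and every name in it (Job, RegionPool, schedule, the region names and GPU counts) is illustrative rather than part of any Azure ML API.

```python
# Hypothetical toy scheduler: priority tiers, preemption, and a "global virtual pool"
# built from regional GPU pools. All names and numbers are illustrative only.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Job:
    name: str
    gpus: int
    priority: int                     # higher value = higher SLA tier
    preferred_regions: list[str] = field(default_factory=list)


@dataclass
class RegionPool:
    region: str
    free_gpus: int
    running: list[Job] = field(default_factory=list)

    def try_place(self, job: Job) -> bool:
        """Place the job if capacity allows, preempting strictly lower-priority work if needed."""
        if self.free_gpus >= job.gpus:
            self._start(job)
            return True
        victims, reclaimable = [], 0
        # Walk running jobs from lowest priority (and smallest size) upward.
        for running_job in sorted(self.running, key=lambda j: (j.priority, j.gpus)):
            if running_job.priority >= job.priority:
                break                 # never preempt equal- or higher-priority work
            victims.append(running_job)
            reclaimable += running_job.gpus
            if self.free_gpus + reclaimable >= job.gpus:
                for victim in victims:  # preempted jobs would be requeued in a real system
                    self.running.remove(victim)
                    self.free_gpus += victim.gpus
                self._start(job)
                return True
        return False

    def _start(self, job: Job) -> None:
        self.free_gpus -= job.gpus
        self.running.append(job)


def schedule(job: Job, pools: dict[str, RegionPool]) -> str | None:
    """Try the job's preferred regions first, then hand off to any peer region with capacity."""
    candidates = [pools[r] for r in job.preferred_regions if r in pools]
    candidates += [p for p in pools.values() if p not in candidates]
    for pool in candidates:
        if pool.try_place(job):
            return pool.region
    return None


if __name__ == "__main__":
    pools = {r: RegionPool(r, free_gpus=8) for r in ("westus2", "eastus", "swedencentral")}
    print(schedule(Job("pretrain", gpus=8, priority=3, preferred_regions=["westus2"]), pools))
    print(schedule(Job("eval", gpus=4, priority=1, preferred_regions=["westus2"]), pools))
    print(schedule(Job("finetune", gpus=8, priority=5, preferred_regions=["eastus"]), pools))
```

    In the real service, regional scheduler instances would share capacity inventory and hand off jobs over peer-to-peer protocols rather than through an in-memory dictionary; the sketch only illustrates the preference-then-handoff ordering and the preemption rules the paragraph above describes.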

     

    To manage the inherent complexity of the Scheduler subsystem and meet stringent expectations for service reliability, availability, and throughput, we emphasize rigorous engineering, utmost precision and quality, and ownership from feature design to livesite. A quality mindset, attention to detail, development process rigor, and data-driven design and problem-solving skills are key to success in our mission-critical control plane space.

    Responsibilities

    + Work on the design and development of the core AI Infrastructure distributed and in-cluster services that support large-scale AI training and inferencing.

    + Develop, test, and maintain control plane services written in C#, hosted on Service Fabric or Kubernetes (AKS) clusters.

    + Enhance systems and applications to ensure high stability, efficiency, and maintainability; low latency; and tight cloud security.

    + Provide operational support and DRI (on-call) responsibilities for the service.

    + Develop and foster a deep understanding of machine learning concepts, use cases, and the relevant services used by our customers.

    + Collaborate closely with service engineers, product managers, and internal applied research and data science teams within Microsoft to build better solutions together.

    + Investigate the use of tools and cloud services, and prototype solutions for problems in our control plane space.

    + Embody our culture (https://careers.microsoft.com/v2/global/en/culture) and values (https://www.microsoft.com/en-us/about/corporate-values).

    Qualifications

    Required Qualifications

    + Bachelor's Degree in Computer Science or a related technical field AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C++, C#, Java, Scala, Rust, Go, or TypeScript

    + OR equivalent experience.

    Other Requirements

    + Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:

    + Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

    Preferred Qualifications

    + OOP proficiency and practical familiarity with common code design patterns

    + 2+ years of experience with service development in a distributed environment, in a dev-ops role, including concurrency management and stateful resource management

    + Master's degree in Computer Science or a related technical field

    + Hands-on experience with public cloud services at the IaaS level

    + Advanced knowledge of C# and .NET

    + Proficiency with use of complex data structures and algorithms, preferably in the setting of a resource allocator/scheduler, workflow/execution orchestration engine, database engine, or similar

    + Significant experience with unit testing and writing testable code

    + Technical communication skills: verbal and written

    + First-hand experience with building large-scale, multi-tenant global services with high availability

    + Experience with building and operating “stateful” and critical control plane services; handling challenges with data size and data partitioning; related use of a NoSQL cloud database

    + Experience with mapping complex object models to relational and non-relational datastores

    + Dev-ops experience with microservices architecture in a complex infrastructure and operational environment

    + Service reliability and fundamentals engineering; instrumentation for KPIs or performance analysis; demonstrated service and code quality mindset

    + Performance engineering: scalability and profiling work; CPU, memory, and I/O optimization techniques

    + Applied knowledge of Kubernetes: service model, workload packaging and deployment, programmatic extensibility (CRDs, operators); or equivalent knowledge of Service Fabric

    + Server-side Windows programming and performance engineering

    + Data analytics skills, in particular with Kusto

    + Experience working in a geo-distributed team

    #AIPLATFORM

    #AICORE

    Software Engineering IC3 - The typical base pay range for this role across the U.S. is USD $100,600 - $199,000 per year. A different range applies in specific work locations within the San Francisco Bay Area and the New York City metropolitan area, where the base pay range for this role is USD $131,400 - $215,400 per year.

     

    Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:

     

    https://careers.microsoft.com/us/en/us-corporate-pay

     

    This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

     

    Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations. (https://careers.microsoft.com/v2/global/en/accessibility.html)

     

