"Alerted.org

Job Title, Industry, Employer
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Advanced Search

Advanced Search

Cancel
Remove
+ Add search criteria
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Related to

  • Principal AI Security and Safety Researcher

    Microsoft Corporation (Redmond, WA)



    Apply Now

    Overview

     

    Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end to end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

     

    Are you a red teamer who is looking to break into the AI field? Do you want to find AI failures in Microsoft’s largest AI systems impacting millions of users? Join Microsoft’s AI Red Team where you'll emulate work alongside security and AI hacking experts to proactively test for failures in Microsoft’s big AI systems. We are looking for a **Principal AI Security and Safety Researcher** who can be a red teamer dedicated to making AI security better and helping our customers expand with our AI systems. In AI red teaming, you'll apply the newest AI security, frontier harms, and safety research to emulate adversarial hacking on Microsoft’s Ai models, systems, products, and features, advising product teams on how to mitigate risks before technology reaches our customers.

     

    This role will also serve as our technical lead in AI frontier harms, such as autonomy, and loss of control of AI systems, uplift in chemistry or biology. Not only will you set cross-harm strategy and advise on implementation of Frontier Mondel Forum and industry best practices within the team, you’ll serve as a coach to the operators leading the individual harm strategies in each area on day-to-day red teaming. In addition to frontier experience, we want AI-obsessed hacker-mindsets to come join our team with leadership and comfort with ambiguity. The Team & Work: Our team is an interdisciplinary group of red teamers, adversarial Machine Learning (ML) researchers, Safety & Responsible AI experts, AI researchers, and software developers with the mission of proactively finding failures across all of Microsoft’s AI portfolio. In this role, you will red team AI models, such as our Phi series and MAI models, and applications, including Bing Copilot, Security Copilot, Github Copilot, Office Copilot and Windows Copilot.

     

    This work is sprint based, working with AI Safety, Security, and Product Development teams, to run operations that aim to find safety and security risks before they happen. Our reporting and findings directly inform internal key business decision leadership. This role will also focus on our team’s approach to frontier AI model harms, requiring parallel tracking between Ai red teaming operations and driving strategy and informing industry-level discussions on autonomy, CBRN, harmful manipulation, cyber, and more novel harms. This a fast-moving team with multiple roles and responsibilities within the AI Security and Safety space; people who love to provide agile, practical insights and who enjoy jumping in to solve ambiguous problems excel in this role. More about our approach to AI Red Teaming: https://www.microsoft.com/en-us/security/blog/2023/08/07/microsoft-ai-red-team-building-future-of-safer-ai/

     

    _Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond._

    Responsibilities

    Responsibilities:

    + Lead cross-domain frontier harms strategy, represent as industry frontier forums, and coach individual operator leads on specific harm areas.

    + Discover and exploit GenAI vulnerabilities end-to-end in order to assess the safety of systems.

    + Manage product group stakeholders as priority recipients and collaborators for operational sprints.

    + Drive clarity on communication and reporting for red teaming peers when working with product groups.

    + Work alongside traditional offensive security engineers, adversarial ML experts, developers to land responsible AI operations while creating a culture of positive, inclusive problem solving.

    Qualifications

    Minimum Qualifications:

    + Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 3+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.

    + OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 4+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.

    + OR Bachelor's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 6+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.

    + OR equivalent experience.

    Other Requirements:

    Ability to meet Microsoft, customer and/or government security screening requirements are required for this role. These requirements include, but are not limited to the following specialized security screenings:

    Microsoft Cloud Background Check:

    + This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.

    Preferred Qualifications:

    + Doctorate in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 5+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.

    + OR Master's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 8+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.

    + OR Bachelor's Degree in Statistics, Mathematics, Computer Science, Computer Security, or related field AND 12+ years experience in software development lifecycle, large-scale computing, threat analysis or modeling, cybersecurity, vulnerability research, and/or anomaly detection.

    + OR equivalent experience.

    + Related Fields include: AI Security, AI Safety, Biology AND an applied background, Chemistry AND an applied background, Cybersecurity, Nuclear Physics, Machine Learning, and more.

     

    \#MSFTSecurity #MSECAI #AI #RAI #Safety #Security #MSECAI #AEGIS #AIRedTeam

     

    Security Research IC5 - The typical base pay range for this role across the U.S. is USD $139,900 - $274,800 per year. There is a different range applicable to specific work locations, within the San Francisco Bay area and New York City metropolitan area, and the base pay range for this role in those locations is USD $188,000 - $304,200 per year.

     

    Certain roles may be eligible for benefits and other compensation. Find additional benefits and pay information here:

     

    https://careers.microsoft.com/us/en/us-corporate-pay

     

    This position will be open for a minimum of 5 days, with applications accepted on an ongoing basis until the position is filled.

     

    Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance with religious accommodations and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations. (https://careers.microsoft.com/v2/global/en/accessibility.html)

     


    Apply Now



Recent Searches

[X] Clear History

Recent Jobs

  • Principal AI Security and Safety Researcher
    Microsoft Corporation (Redmond, WA)
[X] Clear History

Account Login

Cancel
 
Forgot your password?

Not a member? Sign up

Sign Up

Cancel
 

Already have an account? Log in
Forgot your password?

Forgot your password?

Cancel
 
Enter the email associated with your account.

Already have an account? Sign in
Not a member? Sign up

© 2025 Alerted.org