Alerted.org


  • Project Lead, Safety Testing - 12 Month Fixed Term Contract

    Google (New York, NY)




    Snapshot


    As the Project Lead, Safety Testing in the Responsible Development and Innovation (ReDI) team, you’ll be integral to the delivery and scaling of our external safety testing program on Google DeepMind’s (GDM’s) most groundbreaking models.


    You will work with teams across GDM, including Product Management, Research, Legal, Engineering, Public Policy, and Frontier Safety and Governance, to lead external safety evaluations, which are a key part of our responsibility and safety best practices, helping Google DeepMind progress towards its mission.


    The role is a 12 month fixed-term contract.

    About us

    Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.


    As the Project Lead, Safety Testing working in ReDI, you’ll be part of a team that partners with external, expert groups to conduct safety evaluations across various domains and modalities on our frontier models. In this role, you’ll work in collaboration with other members of this critical program, upholding our safety and responsibility commitments whilst responding to the evolving needs of the business.


    The role

    Key responsibilities

    Overarching:

    + Lead the design and oversee the implementation of GDM’s external safety testing program, ensuring it meets our safety and responsibility requirements and external commitments

    + Lead GDM’s input into external safety testing requirements from regulators and government bodies

    + Input into public policy work to help shape potential future regulatory requirements and government policies related to AI safety

    + Lead implementation of external safety testing requirements from regulators and government bodies, working with multidisciplinary teams across Legal, Business and Corporate Development, and Engineering teams

    + Oversee efforts to optimise and scale the program to support the growing needs of the business

    + Identify and plan the program’s strategic resource requirements to execute the external safety testing program successfully, and to deliver against its priorities

    + Carry out cross-industry ‘horizon scanning’ to identify and maintain visibility of current and future external testing requirements from regulators, government bodies, and wider industry standards

    + Matrix manage a cross-functional team, aligning resources against business priorities and leading the escalation of risks and issues to wider stakeholder groups, including the Head of Evaluations, and Responsibility leadership

    Testing scope:

    + Scope GDM’s external testing program, including the domains of frontier models to be tested

    + Engage with various stakeholders across Responsibility, modeling and SME teams to identify high-priority focus areas to build into testing plans and inform partnership approaches

    Partnerships:

    + Own and manage relationships with various external testing partners across the partnership lifecycle

    + Oversee the identification of new partners with relevant skillsets to undertake external safety testing, working with relevant SMEs to ensure alignment with high-priority focus areas

    Findings:

    + Oversee the collation, assessment, and distribution of external safety testing findings, ensuring internal alignment on severity and escalation of high-severity findings

    Stakeholder engagement and communication:

    + Build and lead a high-performing and collaborative multidisciplinary team to deliver the program

    + Oversee communication about the program to wider teams across GDM to increase visibility and buy-in

    + Oversee communication to relevant external stakeholders to influence industry standards and policy positions

    + Represent the external safety testing program in relevant internal and external forums

    Budget:

    + Own a significant program budget, ensuring work is delivered within budget and working with the program manager on spend forecasting and reconciliation


    About you

    In order to set you up for success as a Project Lead, Safety Testing in the ReDI team, we look for the following skills and experience:

    + Ability to shape, lead, and deliver programs in a highly complex, live environment where decisions must be made in a timely fashion

    + Ability to build and lead high-performing teams

    + Previous experience working in a fast-paced environment, either in a start-up, tech company, or consulting organisation

    + Familiarity with safety considerations of generative AI, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness)

    + Strong communication skills and demonstrated ability to work in cross-functional teams, foster collaboration, and influence outcomes

    + Strong project management skills to work with the program manager to optimise existing processes and create new processes

    + Significant experience presenting and communicating complex concepts succinctly and clearly to different audiences

    In addition, the following would be an advantage:

    + Experience of working with sensitive data and access controls

    + Prior experience working in product development or similar agile settings

    + Subject matter expertise in generative AI safety considerations, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness)

    + Experience designing and implementing audits or evaluations of cutting-edge AI systems




© 2025 Alerted.org