"Alerted.org

Job Title, Industry, Employer
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Advanced Search

Advanced Search

Cancel
Remove
+ Add search criteria
City & State or Zip Code
20 mi
  • 0 mi
  • 5 mi
  • 10 mi
  • 20 mi
  • 50 mi
  • 100 mi
Related to

Research Scientist, Multimodal LLMs
Google (Mountain View, CA)

Snapshot

The VIVID team at Google DeepMind focuses on cutting-edge research that advances the capabilities of foundation models, enabling personalized, multimodal, and agentic experiences.

Our work spans new modeling approaches, problem definitions, and data, with a strong emphasis on the bridge between perceptual (audio, image, video) and semantic (language, code) modalities. In addition to producing highly cited research published at top academic venues, we land our innovations in flagship models like Gemini and in Google products used by people every day.

About us

Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

The role

We are seeking a highly motivated and talented Research Scientist to join our team at the forefront of multimodal large language model (LLM) research. You will work alongside a world-class team of researchers and engineers to develop and advance the next generation of AI models that can seamlessly integrate and reason across different modalities such as text, images, audio, and video.

Key responsibilities

+ Develop and implement next-generation agentic reasoning frameworks for multimodal understanding: This core responsibility involves moving beyond single-step inference to design models that can formulate complex plans, critique their own thought processes, and iteratively refine their conclusions. You will explore reinforcement learning (RL) to reward detailed reasoning chains, develop robust self-critique mechanisms for error correction, and integrate tool use, enabling models to execute code for interactive video analysis and manipulation (a rough illustrative sketch of such a critique-and-refine loop follows this list). Your work will directly address the project's goal of teaching models to solve problems that require deep, contextual reasoning over extended periods.

+ Pioneer the extension of visual reasoning from 2D into the 3D domain, enabling a more physically grounded form of intelligence: Your research will focus on empowering models like Gemini not just to perceive 2D images and videos, but to infer and construct coherent 3D representations of the world from them. You will then leverage this deep spatial understanding to develop novel generative capabilities, such as synthesizing photorealistic views of a scene from new perspectives or allowing for the interactive modification of 3D objects and layouts. This frontier research is crucial for unlocking future applications in robotics and mixed reality, and for creating agents that can truly reason about and interact with their physical environment.

+ Spearhead the creation of novel, challenging benchmarks to rigorously measure progress and define the future of visual reasoning: As models become more capable, existing benchmarks become insufficient. You will be responsible for identifying these gaps and leading the development of the next wave of evaluation datasets that test the limits of multimodal intelligence.
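
Below is a rough Python sketch, included only to illustrate the kind of critique-and-refine loop with tool use described in the first responsibility above. The generate and run_tool callables, the prompts, and the stopping rule are hypothetical stand-ins invented for illustration; they are not a Google DeepMind or Gemini API.

    # Rough sketch only: `generate` stands in for any text-in/text-out call to a
    # multimodal LLM, and `run_tool` for a sandboxed code-execution hook (e.g. one
    # that extracts and inspects video frames). Neither is a real DeepMind API.
    from typing import Callable

    def refine_answer(generate: Callable[[str], str],
                      run_tool: Callable[[str], str],
                      task: str,
                      max_rounds: int = 3) -> str:
        """Draft an answer, self-critique it, optionally call a tool, then revise."""
        answer = generate(f"Task: {task}\nAnswer step by step.")
        for _ in range(max_rounds):
            critique = generate(
                f"Task: {task}\nProposed answer:\n{answer}\n"
                "List concrete errors in the reasoning, or reply OK if there are none."
            )
            if critique.strip() == "OK":
                break  # the model accepts its own reasoning chain
            # Tool use: let the model ask for code to be executed and feed the
            # result back into the next revision.
            tool_request = generate(
                "If running code would help fix these errors, write that code; "
                f"otherwise reply NONE.\nErrors:\n{critique}"
            )
            tool_output = "" if tool_request.strip() == "NONE" else run_tool(tool_request)
            answer = generate(
                f"Task: {task}\nPrevious answer:\n{answer}\nCritique:\n{critique}\n"
                f"Tool output:\n{tool_output}\nWrite a corrected answer."
            )
        return answer

An RL treatment of the same idea would score the full critique-and-revision trace rather than only the final answer, using that score as the reward for detailed reasoning chains.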

     

About you

In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

+ PhD in Computer Science, Statistics, or a related field.

+ Strong publication record in top machine learning conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, ECCV).

+ Expertise in one or more of the following areas: computer vision, natural language processing, machine learning.

In addition, the following would be an advantage:

+ Experience with training, evaluating, or interpreting large language models.

+ Proven ability to design and execute independent research projects.

+ Excellent communication and collaboration skills.

+ Extensive experience with deep learning frameworks (e.g., PyTorch, JAX) and large-scale model training.

The US base salary range for this full-time position is 141,000 USD - 202,000 USD + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunities regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.