Software Engineer III, AI/ML, Cloud AI
Google (Sunnyvale, CA)
Minimum qualifications:
+ Bachelor’s degree or equivalent practical experience.
+ 2 years of experience with software development in one or more programming languages, or 1 year of experience with an advanced degree.
+ 1 year of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging).
+ 1 year of experience developing large-scale infrastructure or distributed systems.
+ Experience with LLMs (large language models).
Preferred qualifications:
+ Experience with Python or C++.
+ Experience and strong interest in performance and resource optimization.
+ Experience with inference frameworks (vLLM, TensorRT-LLM, DeepSpeed, Hugging Face TGI, etc.).
+ Experience in building large-scale, high performance serving systems.
+ Understanding of LLM architectures such as Gemini, Gemma, Llama, or DeepSeek.
+ Knowledge of the latest LLM inference optimization techniques and research.
Google Cloud's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google Cloud's needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. You will anticipate our customer needs and be empowered to act like an owner, take action and innovate. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full stack as we continue to push technology forward.
The team builds and maintains Alphabet's Machine Learning (ML) inference infrastructure and the service that powers customers across product areas with strict latency requirements. As one of Alphabet's largest ML inference services, we've seen rapid growth serving LLMs and are addressing challenges such as rapidly evolving ML use cases and software/hardware platforms.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world.
We prioritize security, efficiency, and reliability across everything we do, from developing our latest TPUs to running a global network, as we shape the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.
The US base salary range for this full-time position is $141,000-$202,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.
Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google (https://careers.google.com/benefits/).
Responsibilities:
+ Collaborate with peers and stakeholders through design and code reviews to ensure best practices across available technologies (e.g., style guidelines, checking code in, accuracy, testability, and efficiency).
+ Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback.
+ Implement solutions in one or more specialized ML areas, utilize ML infrastructure, and contribute to model optimization and data processing.
+ Optimize Gemini models for Google Distributed Cloud on new generations of Graphics Processing Unit (GPU) machines. Optimize inference performance and resource efficiency of Gemini models to meet the GPU workload requirements of first-party customers (Bard/Vertex AI/Search/YouTube).
+ Implement cutting-edge inference features for Gemini on GPU.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also https://careers.google.com/eeo/ and https://careers.google.com/jobs/dist/legal/OFCCP_EEO_Post.pdf. If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form: https://goo.gl/forms/aBt6Pu71i1kzpLHe2.