- Red Hat (Raleigh, NC)
- …models, and deliver innovative apps. The OpenShift AI team seeks a Software Engineer with Kubernetes and Model Inference Runtimes experience to join our … packaging, such as PyPI libraries + Solid understanding of the fundamentals of model inference architectures + Experience with Jenkins, Git, shell scripting, and …
- Amazon (Seattle, WA)
- …The Annapurna Labs team at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and GenAI workloads on Amazon's … with popular ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement and Acceleration team …
- Amazon (Cupertino, CA)
- … lifecycles, along with work experience on optimizations for improving model execution. - Software development experience in C++, Python (experience in … at Amazon Web Services (AWS) builds AWS Neuron, the software development kit used to accelerate deep learning and … ML frameworks like PyTorch and JAX, enabling unparalleled ML inference and training performance. The Inference Enablement …
- NVIDIA (Santa Clara, CA)
- …open-sourced inference frameworks. Seeking a Senior Deep Learning Algorithms Engineer to improve innovative generative AI models like LLMs, VLMs, multimodal and … In this role, you will design, implement, and productionize model optimization algorithms for inference and deployment … you'll be doing: + Design and build modular, scalable model optimization software platforms that deliver exceptional …
- NVIDIA (CA)
- …can make a lasting impact on the world. We are now looking for a Senior System Software Engineer to work on user-facing tools for Dynamo Inference Server! … NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team, and we offer a remote-friendly work environment. Academic and commercial …
- MongoDB (Palo Alto, CA)
- … Engineer, you'll focus on building core systems and services that power model inference at scale. You'll own key components of the infrastructure, work … **About the Role** We're looking for a Senior Engineer to help build the next-generation inference … multi-tenant service design + Familiar with concepts in ML model serving and inference runtimes, even if …
- NVIDIA (Santa Clara, CA)
NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and … high-performance open-source frameworks, which are at the forefront of efficient large-scale model serving and inference. You will play a central role …
- NVIDIA (Santa Clara, CA)
NVIDIA seeks a Senior Software Engineer specializing in Deep Learning Inference for our growing team. As a key contributor, you will help design, build, and … vLLM, which are at the forefront of efficient large-scale model serving and inference. You will play … inference libraries, vLLM and SGLang, FlashInfer, and LLM software solutions. + Work with cross-collaborative teams across frameworks, …
- Amazon (Cupertino, CA)
- …and efficiently on AWS silicon. We are seeking a Software Development Engineer to lead and architect our next-generation model serving infrastructure, with a … Description AWS Neuron is the software stack powering AWS Inferentia and Trainium machine … resilient AI infrastructure at AWS. We focus on developing model-agnostic inference innovations, including disaggregated serving, distributed …
- Argonne National Laboratory (Lemont, IL)
- …supercomputing resources and computational science expertise. The ALCF has an opening for a Software Engineer working in the space of enabling AI for science, … and AI. In this position, the candidate can expect to explore and engineer solutions for AI inference integrated within scientific workflows, via programmatic …
Recent Jobs
- TransUnion (Chicago, IL)
- Machine Learning Platform Engineer - Chicago Hybrid