- Bank of America (Addison, TX)
- Senior Engineer - AI Inference, Addison, Texas (https://ghr.wd1.myworkdayjobs.com/Lateral-US/job/Addison/Senior-Engineer-AI-Inference_25029879) **Job Description:** …
- NVIDIA (Santa Clara, CA)
- …software ecosystem to power AI at scale. We are looking for a Senior Technical Marketing Engineer to join our growing accelerated computing product team. ... ensure a consistent, high-impact go-to-market strategy. This role will focus on AI inference at scale, ensuring that customers and partners understand how to…
- Red Hat (Raleigh, NC)
- …bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat Inference team accelerates AI for the enterprise and brings operational simplicity to ... At Red Hat we believe the future of AI is open and we are on a...optimize, and scale LLM deployments. As a Machine Learning Engineer focused on vLLM, you will be at the…
- NVIDIA (Santa Clara, CA)
- …enthusiastic about building the next generation of scalable AI systems. As a Senior Applied AI Software Engineer on the Dynamo project, you will ... GPU resource management, and intelligent request handling, Dynamo achieves high-performance AI inference for demanding applications. Our team is addressing…
- Amazon (Seattle, WA)
- …will benefit all Amazon businesses and customers. Key job responsibilities As a Senior Software Development Engineer, you will be responsible for designing, ... Description Are you interested in advancing Amazon's Generative AI capabilities? Come work with a talented team...developing, testing, and deploying high-performance inference capabilities, including but not limited to multi-modality, SOTA…
- NVIDIA (WA)
- We are now looking for a Senior Research Engineer passionate about Generative AI inference. Are you excited to change the way people infuse AI into ... products and services? NVIDIA is at the forefront of generative AI models, from language to images. NVIDIA provides building blocks to democratize AI and make…
- Amazon (Cupertino, CA)
- …learning accelerators and servers that use them. This role is for a software engineer in the Machine Learning Inference Model Enablement team for AWS Neuron ... beyond, as well as Stable Diffusion, vision transformers and many more. The Inference Model Enablement team works side by side with compiler engineers and runtime…
- Amazon (Seattle, WA)
- …cloud-scale machine learning accelerators. This role is for a senior software engineer in the Machine Learning Inference Applications team. This role is ... for development and performance optimization of core building blocks of LLM Inference - Attention, MLP, Quantization, Speculative Decoding, Mixture of Experts, etc.…
- Amazon (Cupertino, CA)
- …and the Trn1 and Inf1 servers that use them. This role is for a software engineer in the Machine Learning Applications (ML Apps) team for AWS Neuron. This role is ... compiler engineers and runtime engineers to create, build and tune distributed inference solutions with Trn1. Experience optimizing inference performance for…
- NVIDIA (Santa Clara, CA)
- We are now looking for a Senior Software Engineer for our TensorRT Inference team! At NVIDIA, we're at the forefront of innovation, driving advancements in ... our TensorRT team in developing the industry-leading deep learning inference software for NVIDIA AI accelerators. What you'll be doing: As a Senior Software Engineer in the TensorRT team,…