- Amazon (Cupertino, CA)
- …collaborate across compiler, runtime, framework, and hardware teams to optimize machine learning workloads for our global customer base. Working at the ... used to accelerate deep learning and GenAI workloads on Amazon's custom machine learning accelerators, Inferentia and Trainium. The AWS Neuron SDK, developed…
- NVIDIA (Santa Clara, CA)
- …Flash Attention) + Expertise in inference engines like vLLM and SGLang + Expertise in machine learning compilers (e.g., Apache TVM, MLIR) + Strong experience in GPU ... We are now looking for a Senior Deep Learning Software Engineer, FlashInfer. ...out from the crowd: + Background in domain-specific compiler and library solutions for LLM inference and training…
- Amazon (Cupertino, CA)
- Description: The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation. The Inferentia chip delivers best-in-class ML inference ... and JAX. Your role will involve working closely with our custom-built Machine Learning accelerators, including Inferentia and Trainium, which represent the…
- Draper (Boston, MA)
- …a Senior Cyber Software Engineer to support current and future cybersecurity, machine learning, and cyber tool development projects across a variety of ... reverse engineering automation tools and analysis frameworks. + Experience in leveraging machine learning (where appropriate) to automate cyber software tool…
- NVIDIA (Santa Clara, CA)
- … learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard ... will have the chance to work on custom and compiler RAM layouts with cutting-edge process technology... + SRAM digital custom block design experience + SRAM compiler experience With competitive salaries and a generous benefits…
- Amazon (Cupertino, CA)
- …Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators and servers that use them. This role is for ... a software engineer in the Machine Learning Inference Model Enablement team for...Inference Model Enablement team works side by side with compiler engineers and runtime engineers to create, build and…
- Amazon (Cupertino, CA)
- …Neuron is the complete software stack for the AWS Inferentia and Trainium cloud-scale machine learning accelerators. As a part of the Neuron Frameworks team ... for ML model developers. A successful candidate will have experience developing Machine Learning infrastructure and/or ML Frameworks, a demonstrated ability to…
- NVIDIA (Santa Clara, CA)
- …inference pipeline. + Collaborate across the company to guide the direction of machine learning inferencing, working with software, research and product teams + ... are using GPUs to power a revolution in deep learning-powered AI, enabling breakthroughs in areas like LLMs, ChatGPT... Prior experience with an LLM framework or a DL compiler in inference, deployment, algorithms, or implementation + Prior…
- NVIDIA (Santa Clara, CA)
- … learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard ... Stand Out from the Crowd: + Custom SRAM mask design experience + SRAM compiler experience With competitive salaries and a generous benefits package, NVIDIA is widely…
- Amazon (New York, NY)
- …of talent, we have been able to improve AWS cloud infrastructure in high-performance machine learning with AWS Neuron, Inferentia and Trainium ML chips, in ... is the software for Trainium and Inferentia, the AWS Machine Learning chips. Inferentia delivers best-in-class ML...AWS Neuron. Neuron is a software stack that includes an ML compiler and native integration into popular ML frameworks. Our…
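Several of the Amazon listings above center on the AWS Neuron SDK and its native integration into popular ML frameworks. As a rough sketch of what that integration looks like in practice (assuming torch and torch-neuronx are installed on an AWS Inf2/Trn1 instance; the model, shapes, and file name below are illustrative placeholders, not taken from the postings):

```python
# Sketch only: compiling a small PyTorch model with the AWS Neuron SDK's
# torch-neuronx integration. Assumes torch + torch-neuronx are installed and
# this runs on an Inf2/Trn1 instance; the model and shapes are placeholders.
import torch
import torch_neuronx

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

example_input = torch.rand(1, 128)

# torch_neuronx.trace() invokes the Neuron compiler and returns a module
# that executes on the NeuronCore accelerators.
neuron_model = torch_neuronx.trace(model, example_input)

# The compiled module behaves like a TorchScript artifact: save, reload, call.
torch.jit.save(neuron_model, "model_neuron.pt")
print(neuron_model(example_input).shape)
```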