Machine Learning Post Processing Engineer
Ralliant (Beaverton, OR)
We are seeking a Machine Learning Post Processing Engineer to design, implement, and optimize systems that process, analyze, and enhance machine learning model context from Test and Measurement Acquisition Systems.
In this role, you'll work at the intersection of high-performance computing, embedded systems, and AI infrastructure to build robust post-processing pipelines that transform raw ML predictions into actionable insights and refined outputs.
You'll collaborate closely with ML researchers, hardware engineers, and platform teams to create scalable, efficient systems that handle large-scale model outputs while maintaining performance, reliability, and integration with broader system architectures.
What You'll Do
Core ML Post-Processing Responsibilities
+ Design and implement post-processing pipelines for various ML model outputs (computer vision, NLP, time series, etc.)
+ Develop efficient algorithms for model output refinement, filtering, and enhancement
+ Build real-time and batch processing systems for ML inference results
+ Optimize post-processing workflows for GPU-accelerated environments
+ Create robust error handling and validation systems for ML pipeline outputs
+ Implement model ensemble techniques and output aggregation strategies
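For a flavor of the ensemble and output-aggregation work described above, here is a minimal, hypothetical Python sketch that averages softmax outputs from several models and keeps only high-confidence predictions. The shapes, threshold, and dummy model outputs are illustrative assumptions, not an existing Ralliant pipeline.
```python
# Minimal sketch: aggregate per-model softmax outputs and drop low-confidence
# predictions. Shapes, threshold, and dummy outputs are hypothetical examples.
import torch

def aggregate_predictions(logits_per_model, confidence_threshold=0.6):
    """Average softmax probabilities across an ensemble and keep only
    predictions whose top-class probability clears the threshold."""
    # logits_per_model: list of [batch, num_classes] tensors, one per model
    probs = torch.stack([torch.softmax(l, dim=-1) for l in logits_per_model])
    mean_probs = probs.mean(dim=0)                      # [batch, num_classes]
    confidence, predicted_class = mean_probs.max(dim=-1)
    keep = confidence >= confidence_threshold           # boolean mask
    return predicted_class[keep], confidence[keep], keep

# Hypothetical usage with two dummy "model outputs"
batch, num_classes = 4, 3
outputs = [torch.randn(batch, num_classes) for _ in range(2)]
classes, scores, mask = aggregate_predictions(outputs)
print(classes, scores)
```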
Embedded Systems Development
+ Design, implement, and maintain software for embedded systems, focusing on performance, reliability, and integration with broader system architectures
+ Optimize GPU-accelerated embedded applications for resource-constrained environments
+ Develop low-level drivers and firmware interfaces for GPU-enabled embedded platforms
+ Collaborate with hardware teams to define embedded system specifications and constraints
Tooling and Operational Efficiency
+ Build and support internal tools that improve developer productivity, streamline engineering workflows, and enhance operational processes across teams
+ Develop GPU profiling and debugging tools for embedded and distributed systems
+ Create automated testing frameworks for ML post-processing applications
+ Design monitoring and observability solutions for GPU resource utilization and performance (see the monitoring sketch after this list)
+ Implement CI/CD pipelines optimized for ML post-processing development workflows
+ Build developer experience tools that simplify ML pipeline development and deployment
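As one sketch of the GPU observability tooling named above, the snippet below polls device utilization and memory with the NVIDIA Management Library Python bindings (pynvml); assuming that library and at least one NVIDIA GPU is an illustrative choice, not a statement about the team's actual stack.
```python
# Minimal sketch: log GPU utilization and memory use at a fixed interval.
# Assumes the pynvml bindings (pip install nvidia-ml-py) and an NVIDIA GPU.
import logging
import time

import pynvml

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("gpu-monitor")

def log_gpu_utilization(interval_s=5.0, samples=3):
    pynvml.nvmlInit()
    try:
        device_count = pynvml.nvmlDeviceGetCount()
        for _ in range(samples):
            for i in range(device_count):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                util = pynvml.nvmlDeviceGetUtilizationRates(handle)
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
                log.info("gpu=%d sm_util=%d%% mem_used=%.1fMiB",
                         i, util.gpu, mem.used / 2**20)
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    log_gpu_utilization()
```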
AI Protocol Integration (MCP)
+ Collaborate on the development and integration of AI protocols, including Model Context Protocol (MCP), to enable intelligent system behavior and scalable machine learning infrastructure
+ Implement GPU-optimized MCP servers for high-performance AI model serving and post-processing (a minimal server sketch follows this list)
+ Design efficient data pipelines between GPU compute resources and MCP protocol endpoints
+ Optimize memory management and data transfer patterns for MCP-enabled ML post-processing workloads
+ Ensure seamless integration between GPU computing clusters and distributed AI protocol systems
+ Develop secure communication protocols for GPU-accelerated AI model inference and post-processing
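As a rough sketch of how a post-processing capability might be exposed over the Model Context Protocol, the snippet below registers a single tool with the FastMCP helper from the official Python MCP SDK. The tool name, its moving-average body, and the choice of SDK are assumptions made for illustration only.
```python
# Minimal sketch: expose a post-processing step as an MCP tool.
# Assumes the official Python MCP SDK (pip install mcp); the tool body is a
# placeholder moving average, not a real Ralliant post-processing routine.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("postprocessing-demo")

@mcp.tool()
def smooth_measurements(values: list[float], window: int = 3) -> list[float]:
    """Apply a simple moving average to a measurement series."""
    if window < 1 or len(values) < window:
        return values
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

if __name__ == "__main__":
    # Runs the server over stdio so an MCP-capable client can call the tool.
    mcp.run()
```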
Core Engineering Tasks
+ Perform routine software development activities such as debugging, code reviews, documentation, and maintenance of non-AI components
+ Conduct thorough code reviews with focus on GPU programming best practices and performance optimization
+ Debug complex GPU kernel issues, memory management problems, and distributed system failures
+ Maintain comprehensive documentation for ML post-processing development standards and procedures
+ Implement robust error handling and logging for GPU-accelerated ML applications (see the sketch after this list)
+ Collaborate with cross-functional teams on non-AI system components and integrations
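The error-handling bullet above is the kind of thing this hypothetical sketch illustrates: a guarded inference call that logs CUDA out-of-memory failures and retries once on CPU. The model, batch, and fallback policy are placeholder assumptions.
```python
# Minimal sketch: robust error handling and logging around a GPU inference
# call, with a CPU fallback on out-of-memory. Model and inputs are dummies.
import logging

import torch

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference")

def run_inference(model, batch, device="cuda"):
    """Run inference on the requested device; on CUDA out-of-memory,
    log the failure, free the cache, and retry once on CPU."""
    try:
        with torch.no_grad():
            return model.to(device)(batch.to(device)).cpu()
    except RuntimeError as err:
        if "out of memory" not in str(err).lower():
            raise  # only handle OOM here; re-raise everything else
        log.warning("CUDA OOM during inference; retrying on CPU: %s", err)
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        with torch.no_grad():
            return model.to("cpu")(batch.to("cpu"))

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    print(run_inference(torch.nn.Linear(8, 2), torch.randn(4, 8), device))
```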
Knowledge Retrieval and Augmentation Systems (RAG)
+ Contribute to the design and implementation of retrieval-augmented generation (RAG) and related systems to improve contextual understanding and information access within intelligent applications
+ Optimize GPU-accelerated vector databases and similarity search engines for RAG systems
+ Implement efficient embedding generation and storage systems using GPU compute resources
+ Design scalable knowledge retrieval architectures that leverage distributed GPU infrastructure
+ Develop caching and indexing strategies for large-scale knowledge bases in GPU memory
+ Collaborate on hybrid CPU-GPU architectures for balanced RAG system performance
+ Integrate RAG capabilities with existing GPU-accelerated machine learning pipelines
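As a sketch of the embedding-and-retrieval flow the RAG bullets above describe, the snippet below embeds a small document set and answers a query with cosine-similarity search in plain PyTorch. A toy hash-based "embedding" stands in for a trained encoder and vector database; every name and value is illustrative.
```python
# Minimal sketch: embed documents, then retrieve the top-k most similar ones
# for a query via cosine similarity. A toy "embedding" stands in for a real
# encoder; a production system would use a trained model and a vector DB.
import torch

def toy_embed(texts, dim=64):
    """Toy stand-in for an embedding model (illustrative only)."""
    gens = [torch.Generator().manual_seed(abs(hash(t)) % (2**31)) for t in texts]
    vecs = torch.stack([torch.randn(dim, generator=g) for g in gens])
    return torch.nn.functional.normalize(vecs, dim=-1)

def retrieve(query, documents, k=2, device="cpu"):
    doc_vecs = toy_embed(documents).to(device)         # [num_docs, dim]
    query_vec = toy_embed([query]).to(device)          # [1, dim]
    scores = query_vec @ doc_vecs.T                    # cosine similarity
    top = torch.topk(scores.squeeze(0), k=min(k, len(documents)))
    return [(documents[int(i)], float(s)) for s, i in zip(top.values, top.indices)]

docs = ["oscilloscope calibration notes",
        "signal integrity measurement guide",
        "cafeteria menu"]
print(retrieve("how do I calibrate an oscilloscope?", docs))
```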
What We're Looking For
Required Qualifications
+ Master's degree or PhD in Computer Science, Electrical Engineering, Machine Learning, or related field
+ 3-7 years of experience in software development with focus on ML systems or high-performance computing, or comparable work experience
+ Strong programming skills in Python, C++, and CUDA
+ Experience with ML frameworks (PyTorch, TensorFlow, JAX) and GPU programming
+ Understanding of federated learning or distributed computing architectures
+ Knowledge of computer vision, NLP, or other ML domains requiring post-processing
+ Experience with data pipeline development and real-time processing systems
Technical Skills
Machine Learning & AI
+ Deep understanding of ML model architectures and inference optimization
+ Experience with model quantization, pruning, and optimization techniques
+ Knowledge of ensemble methods and model output fusion strategies
+ Familiarity with MLOps practices and model deployment pipelines
+ Understanding of AI protocol standards and emerging technologies like MCP
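One concrete instance of the quantization and optimization experience listed above is post-training dynamic quantization; the sketch below applies PyTorch's dynamic quantization to a small placeholder model. The model and layer choices are assumptions made only to keep the example self-contained.
```python
# Minimal sketch: post-training dynamic quantization of a small PyTorch model.
# The model is a placeholder; real post-processing models would be larger.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)

# Quantize Linear layer weights to int8; activations are quantized
# dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 128))
print(out.shape)
```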
High-Performance Computing
+ Proficiency in CUDA programming and GPU optimization techniques
+ Experience with parallel algorithms and distributed computing frameworks
+ Knowledge of memory optimization and efficient data structures
+ Understanding of performance profiling and bottleneck identification
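The profiling and bottleneck-identification skill above can start as simply as timing a kernel with CUDA events; this hypothetical sketch measures one matrix multiply with torch.cuda.Event and falls back to wall-clock timing when no GPU is present. The matrix size is arbitrary.
```python
# Minimal sketch: time a GPU operation with CUDA events (or wall clock on CPU).
import time

import torch

def time_matmul(n=2048):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        a @ b
        end.record()
        torch.cuda.synchronize()        # wait for the kernel to finish
        return start.elapsed_time(end)  # milliseconds
    t0 = time.perf_counter()
    a @ b
    return (time.perf_counter() - t0) * 1000.0

print(f"matmul took {time_matmul():.2f} ms")
```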
Embedded & Hardware Integration
+ Experience with embedded GPU platforms (NVIDIA Jetson, AMD Embedded, etc.)
+ Understanding of hardware constraints and power optimization for embedded GPU systems
Developer Productivity & Operations
+ Proficiency in building developer tools and automation frameworks
+ Experience with GPU cluster management and orchestration platforms
+ Knowledge of performance monitoring and profiling tools for GPU workloads
+ Understanding of DevOps practices specific to GPU development environments
Enterprise Knowledge Systems
+ Experience with vector databases and similarity search technologies
+ Knowledge of information retrieval systems and search algorithms
Preferred Qualifications
+ Experience with federated learning and distributed AI computation patterns
+ Knowledge of edge computing and IoT device integration
+ Background in signal processing or computer graphics
+ Experience with containerization (Docker, Kubernetes) for ML workloads
+ Familiarity with cloud platforms (AWS, GCP, Azure) and their ML services
+ Understanding of security and privacy considerations in ML systems
+ Experience with model serving architectures and API design patterns
+ Knowledge of specialized hardware accelerators (TPUs, FPGAs, neuromorphic chips)
Soft Skills
+ Strong problem-solving abilities and analytical thinking
+ Excellent communication skills for cross-functional collaboration
+ Ability to work in fast-paced, research-oriented environments
+ Strong attention to detail and commitment to code quality
+ Collaborative mindset with ability to mentor junior engineers
What We Offer
+ Opportunity to work on cutting-edge ML infrastructure and post-processing systems
+ Access to state-of-the-art GPU clusters and computing resources
+ Collaborative environment with world-class ML researchers and engineers
+ Professional development opportunities and conference attendance
+ Competitive compensation and comprehensive benefits package
Impact You'll Make
As a Machine Learning Post Processing Engineer, you'll directly contribute to advancing the state of ML system performance and reliability. Your work will enable more efficient, accurate, and scalable AI applications across various domains, from embedded systems to large-scale cloud deployments. You'll be instrumental in building the infrastructure that transforms raw ML predictions into valuable, actionable insights that drive impact in the Test and Measurement market segment.
Ralliant Corporation Overview
Ralliant, originally part of Fortive, now stands as a bold, independent public company driving innovation at the forefront of precision technology. With a global footprint and a legacy of excellence, we empower engineers to bring next-generation breakthroughs to life — faster, smarter, and more reliably. Our high-performance instruments, sensors, and subsystems fuel mission-critical advancements across industries, enabling real-world impact where it matters most. At Ralliant we’re building the future, together with those driven to push boundaries, solve complex problems, and leave a lasting mark on the world.
We Are an Equal Opportunity Employer
Ralliant Corporation and all Ralliant Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Ralliant and all Ralliant Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process, please contact us at [email protected].
About Tektronix
Tektronix, a wholly owned subsidiary of Ralliant Corporation, is a place where people are challenged to explore the boundaries of what’s possible, bringing the digital future one step closer every day. Through precision-engineered measurement solutions, we work with our customers to eliminate the barriers between inspiration and realization of world-changing technologies. We believe that cultivating a deeper sense of loyalty and belonging is key to how we attract and retain our best people. This reality inspires our Inclusion & Diversity vision, We Are More Together, and guides our approach as we all work toward creating great places where our teams work and thrive. Realize your true potential at Tektronix – join us in revolutionizing a better tomorrow!
Bonus or Equity
This position is also eligible for bonus as part of the total compensation package.
Pay Range
The salary range for this position (in local currency) is 79,300.00 - 147,300.00