We’re a team of high-output generalists where ML and systems engineering converge to push autonomy performance forward. As a Perception ML Data Engineer, you’ll bridge machine learning innovation and autonomy infrastructure to ensure our models learn from the most relevant, diverse, and high-quality data. Your work will directly impact how autonomous systems understand rare scenarios, adapt to global geographies, and scale safely.
Design and advance systems that:
- Leverage VLMs to curate geographically diverse datasets matching real-world driving distributions
- Develop high-fidelity synthetic data frameworks across sensor modalities
- Optimize ML-powered validation of data quality and model readiness
Tailor Your Impact:
- High-Output Generalist: Work across autonomy, infrastructure, databases, simulation, and ML development, gaining domain knowledge in robotics and ML.
- Robotics Expert: Build state-of-the-art solutions for data discovery, auto-labeling, and synthetic generation/reconstruction in close collaboration with Infrastructure and Autonomy.
You’ll solve autonomy’s hardest data challenges through applied ML and systems rigor:
- Architect hybrid systems combining deep learning and classical algorithms for scalable data curation and annotation.
- Design frameworks to quantify synthetic data’s real-world fidelity and improve synthetic data rendering quality.
- Build tools that automatically surface data gaps impacting perception model performance.
- Collaborate with autonomy engineers to turn raw sensor streams into targeted training priorities, addressing critical gaps that limit perception and autonomy performance.
Qualifications:
- BS in Computer Science, Robotics, Statistics, Physics, Math, or another quantitative field.
- 4+ years of industry software engineering experience, with Python fluency and C/C++ familiarity.
- Proven ability to lead cross-functional technical projects from design to completion.
- Practical experience implementing ML solutions and integrating them into real-world systems, with a focus on deploying impactful, integrated solutions rather than purely theoretical ML experimentation.
- Familiarity with synthetic or autonomous driving data.
- Experience building ML systems for robotic applications.
At Nuro, your base pay is one part of your total compensation package. For this position, the reasonably expected pay range is between $193,930.00 and $291,150.00/year for the level at which this job has been scoped. Your base pay will depend on several factors, including your experience, qualifications, education, location, and skills. In the event that you are considered for a different level, a higher or lower pay range would apply. This position is also eligible for an annual performance bonus, equity, and a competitive benefits package.
At Nuro, we celebrate differences and are committed to a diverse workplace that fosters inclusion and psychological safety for all employees. Nuro is proud to be an equal opportunity employer and expressly prohibits any form of workplace discrimination based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, veteran status, or any other legally protected characteristics. #LI-DNP