
Bedrock Robotics

Machine Learning Engineer: Perception

Posted 7 Days Ago
Hybrid
San Francisco, CA, USA
Mid level

Join the team bringing advanced autonomy to the built world

At Bedrock, we’re moving AI out of the lab and into the real world. Our team is composed of industry veterans who helped launch Waymo, scaled Segment to a $3.2B acquisition, and grew Uber Freight to $5B in revenue. Today, we’re deploying autonomous systems on heavy construction machinery across the country, accelerating the schedules of billion-dollar infrastructure projects and improving safety on job sites. Backed by $350M in funding, we’re working quickly to close the gap between America's surging demand for housing, data centers, and manufacturing hubs and the construction industry's growing labor shortage.

This is where algorithms meet steel-toed boots. You’ll collaborate with construction veterans and world-class engineers to solve physical-world problems that simulations can’t touch. If you're ready to apply cutting-edge technology to solve meaningful problems alongside a talented team—we'd love to have you join us.

Machine Learning Engineer: Perception

Bedrock is bringing autonomy to the construction industry! We’re a group of autonomous vehicle industry veterans who are passionate about delivering the benefits of automation to areas of construction currently underserved by the market.

We are looking for engineers with expertise in shipping production 3D perception systems at scale. Successful candidates have architected such systems, trained models from scratch, and understand the full stack: clustering, detection, classification, and tracking. We use both camera- and lidar-based approaches, so knowledge of either or both is key. Models are only part of the system: you understand data and have good intuition for why models fail. You know how to evaluate corner cases, build or manage data pipelines, decide when (and when not) to use autolabels, and have a strong understanding of the statistical properties of these systems.

What You’ll Do:

  • Design Early Fusion Architectures: Develop and train state-of-the-art models (e.g., BEV-based transformers) that fuse raw lidar and camera data for object detection and semantic segmentation.

  • Tackle "Messy" Physics: Build perception systems robust enough to handle dynamic occlusion (seeing the robot’s own arm/bucket), particulates (dust, snow, rain), and high-vibration conditions.

  • Deploy to the Edge: Optimize models for inference on embedded hardware. You will debug system-level issues, such as sensor calibration drift and latency bottlenecks.

  • Collaborate Across Teams: Work with other teams to create state-of-the-art representations for downstream use cases.

What We're Looking For:

  • Production ML Experience: 3+ years of experience taking deep learning models from research to real-world production using PyTorch, TensorFlow, or JAX.

  • 3D Geometry & Calibration: You have a deep understanding of SE(3) transformations, homogeneous coordinates, and intrinsic/extrinsic sensor calibration. You understand the math required to project a 3D Lidar point onto a 2D image pixel accurately.

  • Early Fusion Expertise: Practical experience with architectures that fuse modalities at the feature level (e.g., BEVFusion, TransFuser, PointPainting) rather than just fusing final bounding boxes.

  • SOTA Object Detection: Experience with modern transformer-based architectures (e.g., DETR, PETR) and their temporal variants (e.g., PETRv2, StreamPETR).

  • Systems Fluency: You are an expert in Python, but you are also comfortable reading and writing systems code in C++ or Rust. You understand memory management and real-time constraints.

  • Data Intuition: You understand that in robotics, better data alignment often beats a bigger model. You are willing to dig into the data infrastructure to ensure ground truth quality.
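The calibration bullet above mentions knowing the math to project a 3D lidar point onto a 2D image pixel. As a minimal sketch of that math (the extrinsic transform `T_cam_lidar` and intrinsic matrix `K` below are made-up illustration values, not any real sensor's calibration):

```python
import numpy as np

# Hypothetical SE(3) extrinsic: lidar frame -> camera frame.
# Identity rotation plus a small translation, purely for illustration.
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, 3] = [0.1, -0.05, 0.2]  # lidar origin offset in the camera frame (m)

# Hypothetical pinhole intrinsics K: focal lengths (fx, fy), principal point (cx, cy).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_point(p_lidar):
    """Project a 3D lidar point (x, y, z) to pixel coordinates (u, v), or None."""
    # Homogeneous coordinates: appending 1 lets one 4x4 matrix apply R and t together.
    p_h = np.append(p_lidar, 1.0)
    p_cam = (T_cam_lidar @ p_h)[:3]   # point expressed in the camera frame
    if p_cam[2] <= 0:
        return None                   # behind the image plane: not visible
    uvw = K @ p_cam                   # pinhole projection to homogeneous pixel coords
    return uvw[:2] / uvw[2]           # perspective divide by depth -> (u, v)

pixel = project_lidar_point(np.array([2.0, 0.0, 10.0]))
```

The perspective divide by depth is why small extrinsic or intrinsic errors show up as pixel-level misalignment between lidar returns and image features, i.e., why calibration drift matters for fusion.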

Ways to stand out:

  • Voxel/Occupancy Experience: Experience working with occupancy grids, NeRFs, or voxel-based representations for terrain mapping.

  • Top-Tier Research: Published work in conferences such as ICRA, IROS, CVPR, ECCV, ICCV, CoRL, or RSS.

Our roles are often flexible. If you don't fit all the criteria, or are in another location (especially one where we have an office, like SF or NY), please apply anyway! We'd love to consider you.



