
Agtonomy

Senior/Staff Machine Learning Engineer, Perception

Hybrid
South San Francisco, CA
$180K-$250K Annually
Senior level
About Us

At Agtonomy, we’re not just building tech—we’re transforming how vital industries get work done. Our Physical AI and fleet services turn heavy machinery into intelligent, autonomous systems that tackle the toughest challenges in agriculture, turf, and beyond. Partnering with industry-leading equipment manufacturers, we’re creating a future where labor shortages, environmental strain, and inefficiencies are relics of the past. Our team is a tight-knit group of bold thinkers—engineers, innovators, and industry experts—who thrive on turning audacious ideas into reality. If you want to shape the future of industries that matter, this is your shot.

About the Role

We’re looking for a skilled software engineer to build and refine perception algorithms that give our autonomous tractors human-like awareness in rugged environments. You’ll develop computer vision and machine learning systems to process noisy data from cameras, LiDAR, and radar, enabling tractors to navigate through whatever dirty mess they may find themselves in. This role is hands-on: you’ll write production-grade software, optimize models for embedded hardware, and test your work on real tractors at operating farms all over the world. Working closely with team members across the autonomy stack, you’ll own critical pieces of our perception stack, driving innovations that make our systems generalized, safe, and reliable.

What you'll do

  • Develop computer vision and machine learning models for real-time perception systems, enabling tractors to identify crops, obstacles, and terrain in varied, unpredictable conditions.
  • Build sensor fusion algorithms to combine camera, LiDAR, and radar data, creating robust 3D scene understanding that handles challenges like crop occlusions or GNSS drift (see the sketch after this list).
  • Optimize models for low-latency inference on resource-constrained hardware, balancing accuracy and performance.
  • Design and test data pipelines to curate and label large sensor datasets, ensuring high-quality inputs for training and validation, with tools to visualize and debug failures.
  • Analyze performance metrics and iterate on algorithms to improve accuracy and efficiency of various perception subsystems.
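
To give a concrete flavor of the camera/LiDAR fusion work described above, here is a minimal, generic sketch of projecting LiDAR points into an image plane. The calibration inputs (intrinsics K and extrinsic T_cam_from_lidar) are illustrative placeholders, not a description of Agtonomy's actual pipeline.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_from_lidar, K):
    """Project LiDAR points (N, 3) into pixel coordinates.

    T_cam_from_lidar: assumed 4x4 rigid-body extrinsic (LiDAR -> camera frame).
    K: assumed 3x3 pinhole camera intrinsics.
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]

    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]  # pixel coordinates and depths for downstream fusion
```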

What you’ll bring

  • An MS or PhD in Computer Science, AI, or a related field, or 5+ years of industry experience building vision-based perception systems.
  • Deep expertise in developing and deploying machine learning models, particularly for perception tasks such as object detection, segmentation, mono/stereo depth estimation, sensor fusion, and scene understanding.
  • Strong understanding of integrating data from multiple sensors like cameras, LiDAR, and radar.
  • Experience handling large datasets efficiently and organizing them for labeling, training and evaluation.
  • Fluency in Python and experience with ML/CV frameworks like TensorFlow, PyTorch, or OpenCV, with the ability to write efficient, production-ready code for real-time applications.
  • Proven ability to design experiments, analyze performance metrics (e.g., mAP, IoU, latency), and optimize algorithms to meet stringent performance requirements in dynamic settings (see the IoU sketch after this list).
  • An eagerness to get your hands dirty and the agility to thrive in a fast-moving, collaborative, small-team environment with lots of ownership.
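
As a point of reference for the metrics named above, IoU is the overlap measure underneath detection scores such as mAP. The snippet below is a generic implementation for axis-aligned boxes; the matching threshold and mAP averaging scheme vary by benchmark and are not specified in this posting.

```python
def iou(box_a, box_b):
    """Intersection-over-union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is commonly counted as a true positive at IoU >= 0.5;
# mAP then averages precision over recall (and often over IoU thresholds) per class.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```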

What makes you a strong fit

  • Experience architecting multi-sensor ML systems from scratch.
  • Experience with foundation models for robotics or Vision-Language-Action (VLA) models.
  • Experience with compute-constrained pipelines, including optimizing models to balance the accuracy vs. performance tradeoff by leveraging TensorRT, model quantization, etc. (see the quantization sketch after this list).
  • Experience implementing custom operations in CUDA.
  • Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).
  • Passion for sustainable agriculture and securing our food supply chain.
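
As one generic illustration of the accuracy-versus-latency tradeoff mentioned above, the sketch below applies PyTorch's post-training dynamic quantization to a stand-in network. An embedded deployment would more likely export to ONNX and build a TensorRT engine with calibration data; that path is assumed here, not shown.

```python
import torch
import torch.nn as nn

# Stand-in model; the real perception network and its export path are assumptions here.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

# Post-training dynamic quantization: Linear weights stored as int8, activations in fp32.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Compare outputs to gauge the accuracy drift introduced by quantization (CPU only).
x = torch.randn(1, 256)
with torch.inference_mode():
    drift = (model(x) - quantized(x)).abs().max()
print(f"max output drift after int8 quantization: {drift.item():.6f}")
```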

Benefits

  • 100% covered medical, dental, and vision for the employee (coverage for a partner, children, or family is additional)
  • Commuter Benefits
  • Flexible Spending Account (FSA)
  • Life Insurance
  • Short- and Long-Term Disability
  • 401(k) Plan
  • Stock Options
  • A collaborative work environment alongside a passionate, mission-driven team!

Our interview process is generally conducted in the following phases:
1. Phone Screen with Hiring Manager (30 minutes)
2. Technical Evaluation in Domain (1 hour)
3. Software Engineering Evaluation (1 hour)
4. Panel Interview (Video interviews scheduled with key stakeholders, each interview will be 30 to 60 minutes)

Top Skills

LiDAR
OpenCV
Python
PyTorch
Radar
TensorFlow
HQ

Agtonomy San Francisco Office

San Francisco, CA, United States


