Zendar
Machine Learning Research Intern (for PhDs) - World Modeling / Scene Understanding
Zendar is looking for a research-oriented Machine Learning Intern (World Modeling and Scene Understanding) to join our Berkeley office. Zendar develops one of the best 360-degree radar-based vehicular perception systems for the automotive industry. We are expanding our autonomy stack toward scene-level world modeling, developing representations that capture occupancy, motion, and temporal evolution of the environment. This work builds on multi-sensor perception (camera, radar, and beyond) and is designed to scale across both the automotive and robotics industries. We are not bogged down by legacy systems, and by joining us you’ll have the opportunity to define and own key components of a next-generation world model that supports reliable, long-horizon autonomy.
About Zendar:
Zendar is building perception for physical AI—giving engineers a strong foundation for creating world-class robotics applications. At Zendar, you’ll work on perception foundation models that enable robots to understand and interact with their environments across a wide range of industries.
Zendar pioneered RF perception that delivers a vision-like, semantically segmented understanding of the environment—running on embedded automotive systems using only radar data. This RF perception forms the backbone of Zendar’s next-generation foundation models, which are built around early fusion of RF and vision data.
This architecture inverts the traditional perception stack. Instead of treating RF signals as secondary, Zendar’s models combine vision’s high angular resolution with RF’s strong temporal and spatial understanding at the earliest stages of perception. The result is a system that sees farther, remains robust to occlusion and adverse weather, and operates far more efficiently than vision-only or lidar-based approaches.
See a demo of Zendar’s foundational RF perception
At Zendar, you’ll work at the cutting edge of autonomous mobility and robotics—advancing foundation models that will power the next generation of physical AI systems. You’ll work with large-scale, real-world, multi-modal datasets composed of synchronized and calibrated radar, camera, and lidar data collected across multiple continents.
Our team brings together deep expertise across hardware, signal processing, machine learning, and software engineering, with decades of experience in sensing and perception. We are a global team with offices in Berkeley, Lindau (Germany), and Paris (France). Zendar is well-funded by leading Tier-1 venture capital firms and has established strong industry partnerships.
Although AI is central to what we build, our hiring process is intentionally human: every résumé is reviewed by a real person.
Zendar’s Semantic Spectrum perception technology extracts a rich scene understanding from radar sensing. Our next goal is to develop scene-level world models that represent and forecast the occupancy and dynamics of the environment, enabling robust downstream autonomy across automotive and robotics applications.
As an ML Research Intern, you will work on learning-based world models that estimate full-scene occupancy and motion (occupancy-flow) and predict how the scene evolves over time. These models leverage inputs from radar, camera, and other sensors to provide a unified, reusable representation for downstream tasks such as planning and collision avoidance. You will work closely with experienced engineers and researchers to prototype, train, and evaluate these models on large-scale real-world datasets, and gain hands-on exposure to deploying perception models in real-time systems.
This role is ideal for a PhD student who enjoys designing spatio-temporal models and working with real-world sensor data for autonomy.
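To make the occupancy-flow idea concrete, here is a minimal, hypothetical PyTorch sketch of a prediction head that maps a fused bird's-eye-view (BEV) feature map to per-cell occupancy logits and 2D flow vectors over a short forecast horizon. All names, channel sizes, and the BEV-input assumption are illustrative and are not Zendar's actual architecture.

```python
import torch
import torch.nn as nn


class OccupancyFlowHead(nn.Module):
    """Toy occupancy-flow head: given a BEV feature map (e.g., fused
    radar + camera features), predict occupancy logits and a 2D flow
    field for each grid cell at each future time step."""

    def __init__(self, in_channels: int = 64, horizon: int = 4):
        super().__init__()
        self.horizon = horizon  # number of future steps to forecast
        self.trunk = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One occupancy logit and one 2D flow vector per cell per step.
        self.occ = nn.Conv2d(64, horizon, 1)
        self.flow = nn.Conv2d(64, horizon * 2, 1)

    def forward(self, bev: torch.Tensor):
        b, _, h, w = bev.shape
        z = self.trunk(bev)
        occ = self.occ(z)                                 # (B, T, H, W)
        flow = self.flow(z).view(b, self.horizon, 2, h, w)  # (B, T, 2, H, W)
        return occ, flow


# Fused sensor features go in; occupancy and flow forecasts come out.
head = OccupancyFlowHead()
occ, flow = head(torch.randn(1, 64, 128, 128))
print(occ.shape, flow.shape)
# torch.Size([1, 4, 128, 128]) torch.Size([1, 4, 2, 128, 128])
```

In practice such a head would sit on top of a spatio-temporal backbone and be trained against occupancy and flow supervision; the sketch only shows the input/output contract of the representation described above.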
What You’ll Do:
- Contribute to the design and implementation of scene-level world models for autonomy
- Develop and experiment with occupancy, free-space, and dynamic occupancy / flow representations
- Train and evaluate spatiotemporal deep learning models
- Work with real-world sensor data from radar, camera, and lidar
- Help define evaluation metrics and analyze model behavior over time and across edge cases
What We’re Looking For:
- Currently pursuing a PhD in Computer Science, Robotics, Electrical Engineering, or a related field
- Strong interest or prior experience in scene-level world modeling (occupancy, free space, motion, dynamics) and unsupervised or semi-supervised learning techniques
- Experience (projects, publications, or thesis work) with spatio-temporal modeling
- Experience working with any real-world sensor data (camera, lidar, radar, or multi-sensor)
- Proficiency with Python and a major deep learning framework (e.g., PyTorch, TensorFlow)
What We Offer:
- Opportunity to make an impact at a young, venture-backed company in an emerging market
- Competitive salary of $55 / hour
- Daily catered lunch and a stocked fridge in the Berkeley office
Zendar is committed to creating a diverse environment where talented people come to do their best work. We are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.
Zendar participates in E-Verify


