Help us build generative models of the 3D world. World models power numerous domains, such as media generation, visual reasoning, simulation, planning for embodied agents, and real-time interactive experiences. Work with us to build better versions of Gemini, Genie, and Veo, while also exploring new spatial modalities beyond images and videos.
The Role

Key responsibilities: Conduct research to build generative multimodal models of the 3D world. Solve essential problems to train world models at massive scale: build and train large-scale systems for data annotation, curate and annotate training datasets, build and maintain large-model training infrastructure, develop scaling ladders and training recipes, develop metrics for spatial intelligence, enable real-time interactive experiences, study the integration of spatial modalities with multimodal language models, and, of course, actually train massive-scale models.
Areas of focus:
- 3D computer vision, spatial annotation systems
- Spatial representations
- Training large-scale transformers
- Generative pixel and latent models
- Infrastructure for large-scale data pipelines and annotation
- Quantitative evals for spatial accuracy and intelligence
- Model scaling, efficiency, distillation, training infrastructure
We seek individuals who are passionate about large-scale generative models and believe that spatial understanding and generation are on the path to intelligence. We strive for simple methods that scale, and look for candidates excited to improve models through infrastructure, data, evals, and compute.
In order to set you up for success as a Research Scientist/Engineer at Google DeepMind, we look for the following skills and experience:
- MSc or PhD in computer science or machine learning, or equivalent industry experience.
- Experience with large-scale transformer models and/or large-scale data pipelines.
- Track record of releases, publications, and/or open source projects relating to video generation, world models, multimodal language models, or transformer architectures.
- Exceptional engineering skills in Python and deep learning frameworks (e.g., Jax, TensorFlow, PyTorch), with a track record of building high-quality research prototypes and systems.
- Demonstrated experience in large-scale training of multimodal generative models.
In addition, the following would be an advantage:
- Experience building training codebases for large-scale video or multimodal transformers.
- Expertise optimizing efficiency of distributed training systems and/or inference systems.
- Strong background in 3D representations or 3D computer vision.
- Strong publication record at top-tier machine learning, computer vision, and graphics conferences (e.g., NeurIPS, ICLR, ICML, SIGGRAPH, CVPR, ICCV).
- A keen eye for visual aesthetics and detail, coupled with a passion for creating high-quality, visually compelling generative content.
DeepMind Mountain View, California, USA Office
Amphitheatre Pkwy, Mountain View, CA, United States, 94043


