We’re looking for a Software Engineer to redefine efficient training of frontier LLMs at massive scale. This role offers an opportunity to influence the design of frontier LLMs and to drive efforts ensuring efficient training and inference.
Key responsibilities:
Owning pre-training efficiency and optimizing the performance of the latest models on Google’s fleet of hardware accelerators throughout the entire LLM research, training and deployment lifecycle.
Guiding model design to ensure inference efficiency.
Greatly improving the performance of LLMs on hardware accelerators by optimizing at all levels, including developing custom kernels when necessary.
Collaborating with the compiler, framework, and platform teams to ensure efficient training at industry-largest scale.
Profiling models to identify performance bottlenecks and opportunities for optimization.
Developing low-level custom kernels for maximum performance of the most critical operators.
Collaborating with research teams to enable new critical operators in advance of their availability in frameworks and compilers.
You're an engineer looking to redefine efficient training of frontier LLMs at massive scale and have:
A proven track record of critical contributions to the distributed training of LLMs at 1e25 FLOPs scale on modern GPU/TPU clusters
Experience programming hardware accelerators (GPUs/TPUs) via ML frameworks (e.g. JAX, PyTorch) and low-level programming models (e.g. CUDA, OpenCL)
Experience in leveraging custom kernels and compiler infrastructure to improve performance on hardware
Experience with Python and neural network training (publications, open-source projects, relevant work experience, etc.)
The US base salary range for this full-time position is between $235,000 and $350,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
Application deadline: July 31st, 2025
Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
DeepMind Mountain View, California, USA Office
Amphitheatre Pkwy, Mountain View, CA, United States, 94034


