The Lead Infrastructure and Reliability Engineer will own GPU fleet reliability, define scaling strategy for training and inference, and build out the infrastructure engineering organization in a high-demand AI setting.
About Luma AI
A new class of intelligence is emerging: systems that understand and generate the world across video, images, audio, and language.
Building multimodal AGI is not just a modeling challenge. It is an infrastructure challenge at the edge of what hardware, software, and organizations can support.
At Luma, we operate rapidly scaling 10k+ GPU fleets, pushing utilization, throughput, and reliability hard enough that yesterday’s solutions break regularly. Researchers depend on this infrastructure to move the frontier forward. Customers depend on it to power real creative work.
Many companies run accelerators. Very few sit directly next to the teams inventing the models that redefine what those accelerators must do.
At Luma, improvements to scheduling, efficiency, and reliability immediately translate into faster research iteration and entirely new product capabilities.
We are still early. The playbook is still being written. A single exceptional engineer can reshape how the company operates.
Where You Come In
Our Infrastructure Engineering team is a systems engineering group with company-level responsibility. At Luma, reliability engineers work directly with the researchers and products pushing the limits of multimodal intelligence.
We operate close to the metal:
- Kernels
- Containers
- Schedulers
- Networking
- Storage
- GPU behavior
But we are also responsible for something bigger:
Turning deep systems knowledge into repeatable, scalable reliability for the entire company. We are hiring a leader who will define that direction. You will be a technical authority, an organizational force multiplier, and a magnet for other great engineers.
What You’ll Own
Reliability of the Frontier
- Architect and operate large, heterogeneous GPU environments under extreme demand
- Improve utilization and performance where small gains materially change company outcomes
- Resolve failures that span hardware, OS, runtimes, and orchestration
- Eliminate entire classes of instability
- Build mechanisms that make heroics unnecessary
Scaling Training & Inference
- Define how infrastructure and workloads evolve as cluster size and concurrency grow
- Design scheduling, placement, and resource management approaches for increasingly complex jobs
- Work directly with research to build the systems required for new model capabilities
- Ensure inference platforms scale rapidly without sacrificing reliability or latency
- Anticipate where today’s abstractions will fail and redesign ahead of them
Building the Organization
- Hire and develop exceptional systems and reliability engineers
- Set the bar for technical depth, judgment, and production ownership
- Shape architecture early through strong partnerships with research and product
- Translate reliability constraints into long-term platform strategy
Who You Are
Required:
- Deep expertise in Linux and distributed systems
- Experience operating GPU / accelerator clusters in real production environments
- Strong fluency in Kubernetes and modern open-source infrastructure
- Comfortable debugging across hardware → kernel → runtime → orchestration
- You understand how systems behave under contention and at scale
- You write code and build automation
- You think in bottlenecks, failure modes, and tradeoffs
- Engineers trust your judgment, especially when things break
Important: This role requires comfort operating close to upstream and close to the metal. If most of your experience has been inside highly abstracted internal platforms where others owned the underlying machinery, this is unlikely to be a match.
Leadership Expectations
- You raise reliability standards across the company
- You influence product and research architecture early
- You build strong partnerships, not ticket queues
- You attract and level up exceptional engineers
- You are curious about how models use infrastructure, because improving systems expands what becomes possible
Why This Role Is Special
Most infrastructure roles optimize mature systems. This one helps define how reliability works for a new generation of AI infrastructure.
The decisions you make here will influence:
- How research progresses
- How products scale
- How customers trust us
- How the engineering organization grows
If you want to build the reliability foundations of a company operating at the technological frontier, we should talk.
Compensation
The base pay range for this role is $230,000 – $360,000 per year.
Top Skills
Containers
Distributed Systems
GPU
Kubernetes
Linux
Networking
Orchestration
Storage
Luma AI Office: San Francisco, California, USA


