
Thinking Machines Lab

Research Engineer, Infrastructure, RL Systems

Reposted 13 Days Ago
In-Office
San Francisco, CA, USA
$350K-$475K Annually
Mid level

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai, open-weights models like Mistral, as well as popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We’re looking for an infrastructure research engineer to design and build the core systems that enable scalable, efficient training of large models through reinforcement learning.

This role sits at the intersection of research and large-scale systems engineering: we're looking for a builder who understands both the algorithms behind RL and the realities of distributed training and inference at scale. You'll wear many hats, from optimizing rollout and reward pipelines to improving reliability, observability, and orchestration, and you'll collaborate closely with researchers and infra teams to make reinforcement learning stable, fast, and production-ready.

Note: This is an "evergreen role" that we keep open on an ongoing basis to collect expressions of interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You are welcome to reapply as you gain more experience, but please avoid applying more than once every six months. We may also post individual roles for specific project or team needs; in those cases, you're welcome to apply to them directly in addition to an evergreen role.

What You’ll Do
  • Design, build, and optimize the infrastructure that powers large-scale reinforcement learning and post-training workloads.
  • Improve the reliability and scalability of RL training pipelines, distributed RL workloads, and training throughput.
  • Develop shared monitoring and observability tools to ensure high uptime, debuggability, and reproducibility for RL systems.
  • Collaborate with researchers to translate algorithmic ideas into production-grade training pipelines.
  • Build evaluation and benchmarking infrastructure that measures model progress on helpfulness, safety, and factuality.
  • Publish and share learnings through internal documentation, open-source libraries, or technical reports that advance the field of scalable AI infrastructure.
Skills and Qualifications

Minimum qualifications:

  • Bachelor’s degree or equivalent experience in computer science, electrical engineering, statistics, machine learning, physics, robotics, or similar.
  • Strong engineering skills: the ability to contribute performant, maintainable code and to debug complex codebases.
  • Understanding of deep learning frameworks (e.g., PyTorch, JAX) and their underlying system architectures.
  • Thrive in a highly collaborative environment involving many different cross-functional partners and subject matter experts.
  • A bias for action: you take the initiative to work across different stacks and teams where you spot an opportunity, and you make sure things ship.

Preferred qualifications — we encourage you to apply if you meet some but not all of these:

  • Experience training or supporting large-scale language models with tens of billions of parameters or more.
  • Experience working with reinforcement learning workloads (e.g., PPO, DPO, RLHF, or reward modeling).
  • Background in high-performance or reliability engineering, including distributed training frameworks and cluster orchestration (Kubernetes, Slurm).
  • Familiarity with monitoring and observability tools (Prometheus, Grafana, OpenTelemetry).
  • Contributions to large-scale ML research or infrastructure, open-source frameworks, or internal performance optimization efforts.
Logistics
  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.

Top Skills

Grafana
JAX
Kubernetes
OpenTelemetry
Prometheus
PyTorch
Slurm


