As a Machine Learning Systems Administrator - HPC Infrastructure, you will be responsible for maintaining and developing the core infrastructure behind our machine learning research and production efforts. You’ll work closely with various training and inference teams to ensure the smooth operation of our systems while laying the groundwork for scalable, secure, and efficient workflows.
You’ll work across:
Administration and automation of our Linux-based cluster environments
Managing user onboarding/offboarding, security auditing, and access control
Monitoring system resources and job scheduling
Supporting and improving developer workflows (e.g., VSCode compatibility, Docker)
Enabling and supporting AI/ML workloads, including large-scale training jobs
You are comfortable operating across a wide range of infrastructure concerns and excited to own and improve critical systems.
You’ll have a significant impact on both developer productivity and training and inference performance.
Strong experience with Linux system administration, user and access management, and automation
Demonstrated expertise in scripting languages for system tooling and automation (bash, Python, etc.)
Familiarity with containerized environments (e.g., Docker) and job scheduling systems like Slurm
Experience building tooling for cluster validation and reliability (GPU, networking, storage health checks)
Experience setting up and managing developer tools and third-party services (e.g., cloud storage providers, Docker Hub, Slack, Gmail, Telegraf, experiment trackers, etc.)
Excellent debugging and troubleshooting skills across compute, storage, and networking
Strong communication skills and ability to collaborate across technical and non-technical teams
Experience with infrastructure as code (e.g., Ansible, Terraform)
Prior work supporting ML/AI infrastructure, including GPU management and workload optimization
Exposure to backend development for ML model serving (e.g., vLLM, Ray, SGLang)
Experience working with cloud platforms such as AWS, Azure, or GCP
Familiarity with containers (Docker, Apptainer) and their integration with scheduling systems (Slurm, Kubernetes)
Our research methodology is grounded in methodical, step-by-step approaches to ambitious goals. Deep research and engineering excellence are valued equally
We strongly value new and unconventional ideas and are willing to bet big on them
We move as quickly as we can; we aim to lower the bar to impact as much as possible
We all enjoy what we do and love discussing AI
Comprehensive medical, dental, vision, and FSA plans
Competitive compensation and 401(k)
Relocation and immigration support on a case-by-case basis
On-site meals prepared by a dedicated culinary team; Thursday Happy Hours
In-person team in San Francisco, California, with a collaborative, high-energy environment
If you are excited to bring reliability best practices to the frontier of AI infrastructure, this job is for you. Apply Today!
Zyphra Office: Palo Alto, California, United States, 94306
