
Sciforium

Distributed Training Engineer

Posted 5 Days Ago
In-Office
San Francisco, CA
$190K-$250K Annually
Senior level

Sciforium is an AI infrastructure company developing next-generation multimodal AI models and a proprietary, high-efficiency serving platform. Backed by multi-million-dollar funding and direct sponsorship from AMD, with hands-on support from AMD engineers, the team is scaling rapidly to build the full stack powering frontier AI models and real-time applications.

About the role

Sciforium is seeking a highly skilled Distributed Training Engineer to build, optimize, and maintain the critical software stack that powers our large-scale AI training workloads. In this role, you will work across the entire machine learning infrastructure, from low-level CUDA/ROCm runtimes to high-level frameworks like JAX and PyTorch, to ensure our distributed training systems are fast, scalable, stable, and efficient.

This position is ideal for someone who loves deep systems engineering, debugging complex hardware–software interactions, and optimizing performance at every layer of the ML stack. You will play a pivotal role in enabling the training and deployment of next-generation LLMs and generative AI models.

What you'll do
  • Software Stack Maintenance: Maintain, update, and optimize critical ML libraries and frameworks, including JAX, PyTorch, CUDA, and ROCm, across multiple environments and hardware configurations.

  • End-to-End Stack Ownership: Build, maintain, and continuously improve the entire ML software stack, from ROCm/CUDA drivers to high-level JAX/PyTorch tooling.

  • Distributed Training Optimization: Ensure all model implementations are efficiently sharded, partitioned, and configured for large-scale distributed training (a minimal JAX sharding sketch follows this list).

  • System Integration: Continuously integrate and validate modules for runtime correctness, memory efficiency, and scalability across multi-node GPU/accelerator clusters.

  • Profiling & Performance Analysis: Conduct detailed profiling of compilation graphs, training workloads, and runtime execution to optimize performance and eliminate bottlenecks.

  • Debugging & Reliability: Troubleshoot complex hardware–software interaction issues, including vLLM compilation failures on ROCm, CUDA memory leaks, distributed runtime failures, and kernel-level inconsistencies.

  • Cross-Team Collaboration: Collaborate with research, infrastructure, and kernel engineering teams to improve system throughput, stability, and developer experience.
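
For a flavor of the sharding and partitioning work described above, here is a minimal GSPMD-style sketch in JAX. The mesh shape, axis names, and tensor sizes are illustrative assumptions, not Sciforium's actual configuration, and it presumes a host with eight visible accelerators.

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Hypothetical 8-device host: 2-way data parallelism x 4-way tensor
# parallelism. Axis names and sizes are illustrative only.
devices = mesh_utils.create_device_mesh((2, 4))
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard activations along the batch axis and weights along the hidden axis.
x = jax.device_put(jnp.ones((32, 1024)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((1024, 4096)), NamedSharding(mesh, P(None, "model")))

@jax.jit
def layer(x, w):
    # XLA's GSPMD partitioner propagates the input shardings through the
    # matmul and inserts whatever collectives are needed.
    return jnp.dot(x, w)

y = layer(x, w)
print(y.sharding)  # inspect the sharding XLA inferred for the output
```

The same pattern scales to multi-node jobs: after jax.distributed.initialize(), the mesh can span every process in the cluster while the per-tensor annotations stay unchanged.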

Ideal candidate profile
  • 5+ years of industry experience in ML systems, distributed training, or related fields.

  • Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related technical fields.

  • Strong programming experience in Python and C++, plus familiarity with ML tooling and distributed systems.

  • Deep understanding of profiling tools (e.g., Nsight, ROCm Profiler, XLA profiler, TPU tools); see the profiler sketch after this list.

  • Deep expertise in configuring model partitioning in modern ML frameworks such as PyTorch and JAX.

  • Experience with multi-node distributed training systems and parallelism frameworks (DTensor, GSPMD, etc.).

  • Hands-on experience maintaining or building ML training stacks involving CUDA, ROCm, NCCL, XLA, or similar technologies.
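
As a concrete example of the profiling workflow called out above, the sketch below uses torch.profiler to attribute time and memory to individual operators and kernels. The model is a stand-in, not a real workload; on a ROCm build of PyTorch the same torch.cuda and ProfilerActivity.CUDA paths cover AMD GPUs.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Placeholder model and batch; real training workloads would be far larger.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
)
x = torch.randn(64, 1024)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():  # true on both CUDA and ROCm builds
    model, x = model.cuda(), x.cuda()
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, profile_memory=True) as prof:
    loss = model(x).sum()
    loss.backward()

# Rank operators by self time to find bottlenecks, then export a trace
# that chrome://tracing or Perfetto can open.
print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
prof.export_chrome_trace("trace.json")
```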

Nice-to-have
  • Extensive experience with the XLA/JAX stack, including compilation internals and custom lowering paths (see the inspection sketch after this list).

  • Familiarity with distributed serving or large-scale inference frameworks (e.g., vLLM, TensorRT, FasterTransformer).

  • Background in GPU kernel optimization or accelerator-aware model partitioning.

  • Strong understanding of low-level C++ building blocks used in ML frameworks (e.g., XLA, CUDA kernels, custom ops).
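
For the XLA compilation internals mentioned above, a common first step is simply inspecting what jax.jit hands to the compiler and what XLA produces after optimization. A minimal sketch, with a placeholder function standing in for a real model step:

```python
import jax
import jax.numpy as jnp

def step(x):
    # Stand-in for a real model step.
    return jnp.tanh(x @ x.T).sum()

x = jnp.ones((128, 128))

lowered = jax.jit(step).lower(x)
print(lowered.as_text())         # StableHLO as emitted by JAX's lowering

compiled = lowered.compile()
print(compiled.as_text())        # optimized HLO after XLA's passes
print(compiled.cost_analysis())  # rough FLOP/byte estimates from XLA
```

Diffing the pre- and post-optimization text is often the fastest way to see which fusions or layout changes XLA applied.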

Benefits include
  • Medical, dental, and vision insurance

  • 401(k) plan

  • Daily lunch, snacks, and beverages

  • Flexible time off

  • Competitive salary and equity

Equal opportunity

Sciforium is an equal opportunity employer. All applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status.

Top Skills

C++
CUDA
JAX
Python
PyTorch
ROCm
HQ

Sciforium San Francisco, California, USA Office

San Francisco, CA, United States

Sciforium Los Altos, California, USA Office

4401 El Camino Real, Los Altos, California, United States, 94022
