
Cohere AI

Senior ML Systems Engineer, Frameworks & Tooling

Reposted 11 Days Ago
In-Office or Remote
Hiring Remotely in San Francisco, CA, USA
Senior level
The Senior ML Systems Engineer will build and maintain the training framework for large-scale language models, focusing on distributed training and performance optimization.

Who are we?

Our mission is to scale intelligence to serve humanity. We’re training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what’s best for our customers.

Cohere is a team of researchers, engineers, designers, and more, who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

We’re looking for a senior engineer to help build, maintain and evolve the training framework that powers our frontier-scale language models. This role sits at the intersection of large-scale training, distributed systems, and HPC infrastructure. You will design and maintain the core components that enable fast, reliable, and scalable model training — and build the tooling that connects research ideas to thousands of GPUs.

If you enjoy working across the full stack of ML systems, this role gives you the opportunity and autonomy to have massive impact.

What You’ll Work On
  • Build and own the training framework responsible for large-scale LLM training.

  • Design distributed training abstractions (data/tensor/pipeline parallelism, FSDP/ZeRO strategies, memory management, checkpointing).

  • Improve training throughput and stability on multi-node clusters (e.g., NVIDIA H100/H200 and GB200/GB300, and AMD accelerators).

  • Develop and maintain tooling for monitoring, logging, debugging, and developer ergonomics.

  • Collaborate closely with infra teams to ensure our cluster, container environments, and hardware configurations support high-performance training.

  • Investigate and resolve performance bottlenecks across the ML systems stack.

  • Build robust systems that ensure reproducible, debuggable, large-scale runs.
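To give a flavor of the distributed training abstractions mentioned above, here is a minimal, purely illustrative sketch of ZeRO-1-style optimizer state sharding, where each data-parallel rank keeps momentum state only for its own slice of the parameters. All names here are hypothetical and stdlib-only; this is not Cohere's internal API, and a real implementation would all-gather the updated slices across ranks.

```python
# Illustrative sketch of ZeRO-1-style optimizer state sharding:
# each data-parallel rank stores momentum only for its own
# contiguous slice of the flattened parameter vector.

def shard_bounds(n_params: int, rank: int, world_size: int) -> tuple[int, int]:
    """Return the [start, end) slice of parameters owned by `rank`,
    distributing any remainder across the lowest-numbered ranks."""
    base, rem = divmod(n_params, world_size)
    start = rank * base + min(rank, rem)
    end = start + base + (1 if rank < rem else 0)
    return start, end

class ShardedSGDMomentum:
    """SGD with momentum where each rank holds only its shard of state,
    so optimizer memory per rank shrinks by ~1/world_size."""

    def __init__(self, n_params, rank, world_size, lr=0.1, beta=0.9):
        self.start, self.end = shard_bounds(n_params, rank, world_size)
        self.velocity = [0.0] * (self.end - self.start)  # local state only
        self.lr, self.beta = lr, beta

    def step(self, params, grads):
        """Update only the locally-owned parameter slice; a real system
        would then all-gather the updated slices across ranks."""
        for i in range(self.start, self.end):
            j = i - self.start
            self.velocity[j] = self.beta * self.velocity[j] + grads[i]
            params[i] -= self.lr * self.velocity[j]
        return params
```

The point of the sketch is the invariant the real abstraction must preserve: the shards partition the parameter space exactly, with no overlap and no gap, regardless of whether `world_size` divides the parameter count evenly.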

You Might Be a Good Fit If You Have
  • Strong engineering experience in large-scale distributed training or HPC systems.

  • Deep familiarity with JAX internals, distributed training libraries, or custom kernels/fused ops.

  • Experience with multi-node cluster orchestration (Slurm, Ray, Kubernetes, or similar).

  • Comfort debugging performance issues across CUDA/NCCL, networking, IO, and data pipelines.

  • Experience working with containerized environments (Docker, Singularity/Apptainer).

  • A track record of building tools that increase developer velocity for ML teams.

  • Excellent judgment around trade-offs: performance vs complexity, research velocity vs maintainability.

  • Strong collaboration skills — you’ll work closely with infra, research, and deployment teams.

Nice to Have
  • Experience with training LLMs or other large transformer architectures.

  • Contributions to ML frameworks (PyTorch, JAX, DeepSpeed, Megatron, xFormers, etc.).

  • Familiarity with evaluation and serving frameworks (vLLM, TensorRT-LLM, custom KV caches).

  • Experience with data pipeline optimization, sharded datasets, or caching strategies.

  • Background in performance engineering, profiling, or low-level systems.

Bonus: papers at top-tier venues (such as NeurIPS, ICML, ICLR, AISTATS, MLSys, JMLR, AAAI, Nature, COLING, ACL, EMNLP).

Why Join Us
  • You’ll work on some of the most challenging and consequential ML systems problems today.

  • You’ll collaborate with a world-class team working fast and at scale.

  • You’ll have end-to-end ownership over critical components of the training stack.

  • You’ll shape the next generation of infrastructure for frontier-scale models.

  • You’ll build tools and systems that directly accelerate research and model quality.

Sample Projects:

  • Build a high-performance data loading and caching pipeline.

  • Implement performance profiling across the ML systems stack.

  • Develop internal metrics and monitoring for training runs.

  • Build reproducibility and regression testing infrastructure.

  • Develop a performant fault-tolerant distributed checkpointing system.
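As an example of the kind of building block the checkpointing project involves, a fault-tolerant design typically starts with atomic saves, so a run killed mid-write never corrupts its latest checkpoint. A hedged, stdlib-only sketch (the file layout and function names are illustrative, not a prescribed design):

```python
import json
import os
import tempfile

def save_checkpoint_atomic(state: dict, path: str) -> None:
    """Write `state` to `path` atomically: serialize to a temp file in the
    same directory, fsync it, then rename over the target. A crash at any
    point leaves the previous checkpoint at `path` intact."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # make the data durable before the rename
        os.replace(tmp, path)     # atomic rename on POSIX filesystems
    except BaseException:
        os.unlink(tmp)            # clean up the partial temp file
        raise

def load_checkpoint(path: str) -> dict:
    with open(path) as f:
        return json.load(f)
```

The temp file must live in the same directory as the target so the `os.replace` is a same-filesystem rename; a real large-scale system would layer sharded tensor state, asynchronous writes, and validation-on-load on top of this primitive.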

If some of the above doesn’t line up perfectly with your experience, we still encourage you to apply!

We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form, and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:

🤝 An open and inclusive culture and work environment 

🧑‍💻 Work closely with a team on the cutting edge of AI research 

🍽 Weekly lunch stipend, in-office lunches & snacks

🦷 Full health and dental benefits, including a separate budget to take care of your mental health 

🐣 100% Parental Leave top-up for up to 6 months

🎨 Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement

🏙 Remote-flexible, offices in Toronto, New York, San Francisco, London and Paris, as well as a co-working stipend

✈️ 6 weeks of vacation (30 working days!)

Top Skills

CUDA
DeepSpeed
Docker
JAX
Kubernetes
Megatron
NCCL
PyTorch
Ray
Singularity
Slurm
TensorRT-LLM
vLLM
xFormers

Cohere AI San Francisco, California, USA Office



