adaption

Distributed Systems Engineer, Data & Inference Platform

Posted 2 Hours Ago
Remote or Hybrid
6 Locations
Senior level
About Us

Most AI is frozen in place: it doesn't adapt to the world. We think that's backwards. Our mandate is to build efficient intelligence that evolves in real time. Our vision is AI systems that are flexible, personalized, and accessible to everyone. Efficiency is what makes this possible: it's how we expand access and ensure innovation benefits the many, not the few. We believe in talent density, bringing together the best and most driven individuals to push the boundaries of continual adaptation. We're looking for builders and creative thinkers ready to shape the next era of intelligence.

The Role

You'll build and operate the systems that turn raw compute into useful intelligence — the inference services that serve LLMs at scale and the data pipelines that feed them. One week you're hunting a tail-latency regression in a production inference service handling millions of requests; the next you're redesigning a Ray Data pipeline so it stops melting down at petabyte scale. The work spans architecture, implementation, and the on-call pager that keeps you honest about both. Researchers and ML engineers will hand you workloads that barely run; you'll hand them back systems that run reliably, efficiently, and cheaply enough to matter.
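That tail-latency hunt has a simple quantitative core: stragglers show up in high percentiles long before they move the mean. A toy sketch using nearest-rank percentiles; the latency numbers are made up purely for illustration:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the ceil(p/100 * n)-th smallest sample."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))
    return s[k - 1]

# 100 request latencies in ms: a healthy service vs. one with stragglers.
baseline = [10.0] * 98 + [12.0] * 2
regressed = [10.0] * 98 + [500.0] * 2   # 2% of requests hit a slow path

mean_reg = sum(regressed) / len(regressed)   # barely moves
p99_reg = percentile(regressed, 99)          # tells the real story
```

The mean rises from about 10 ms to about 20 ms, while p99 jumps from 12 ms to 500 ms, which is why latency SLOs are usually stated on percentiles rather than averages.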

Responsibilities
  • Serve Models at Scale: Design and operate distributed inference systems for LLMs, optimizing throughput, latency, and cost across heterogeneous GPU fleets. Batching, scheduling, KV cache management, autoscaling — you own the levers that make inference economical.

  • Move the Data: Build large-scale data pipelines (Ray Data, Spark, or equivalents) that ingest, transform, and curate the datasets behind training and evaluation. The bottleneck is rarely where people think it is, and you find it.

  • Debug the Undebuggable: Chase down the failure modes that only emerge under real production traffic — stragglers, head-of-line blocking, silent data corruption, GPU memory fragmentation — and write the postmortems that prevent the next ten. Define SLOs, build the observability to measure them, and own the on-call rotation that defends them.

  • Partner Across the Stack: Work directly with researchers and ML engineers to take experimental workloads from "runs on one node" to "runs in production." You're a systems partner, not a ticket queue.
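To make the batching and scheduling levers concrete, here is a toy continuous-batching loop: a deliberately simplified sketch of the idea behind modern LLM serving engines, not any real scheduler, with all names and sizes chosen for illustration:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Request:
    rid: int
    remaining: int  # decode steps still needed

def continuous_batching(requests, max_batch):
    """Toy continuous batching: finished requests leave the batch and
    waiting ones join immediately, so slots never sit idle while other
    sequences finish (vs. static batching, which drains whole batches)."""
    waiting = deque(requests)
    running, done, steps = [], [], 0
    while waiting or running:
        while waiting and len(running) < max_batch:   # fill free slots
            running.append(waiting.popleft())
        for r in running:                             # one decode step each
            r.remaining -= 1
        steps += 1
        for r in [r for r in running if r.remaining == 0]:
            running.remove(r)                         # retire, free the slot
            done.append((r.rid, steps))
    return steps, done

reqs = [Request(0, 5), Request(1, 2), Request(2, 4), Request(3, 1), Request(4, 3)]
steps, done = continuous_batching(reqs, max_batch=2)
```

With output lengths (5, 2, 4, 1, 3) and two batch slots, this finishes in 9 decode steps; static batching on the same arrival order would take 12 (5 + 4 + 3, paying the maximum length of each batch). That gap is the throughput lever the bullet above refers to.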

Qualifications
  • 5+ years building and operating distributed systems in production.

  • Deep experience with at least one large-scale data or compute framework (Ray, Spark, Flink, Beam, Dask).

  • Strong fluency in Python and at least one systems language (Go, Rust, C++).

  • Working knowledge of the GPU/accelerator stack: CUDA fundamentals, NCCL, mixed precision, memory layout. You don't need to write kernels, but you should know why a workload is bound by what it's bound by.

  • Experience operating Kubernetes-based infrastructure, including custom operators or schedulers.

  • A track record of owning hard production incidents end-to-end — diagnosis, mitigation, and the durable fix.

  • Bonus: hands-on experience with LLM inference engines (vLLM, SGLang, TensorRT-LLM, TGI), modern lakehouse formats (Iceberg, Delta, Hudi), or open-source contributions to relevant projects.
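The "bound by what it's bound by" question often reduces to a roofline back-of-envelope: compare a kernel's arithmetic intensity (FLOPs per byte of memory traffic) against the machine's FLOPs-to-bandwidth ratio. A sketch with illustrative hardware numbers, not any specific GPU's spec sheet:

```python
def arithmetic_intensity_matmul(m, n, k, bytes_per_elem=2):
    """FLOPs per byte for a dense m x k @ k x n matmul with fp16 operands."""
    flops = 2 * m * n * k                                    # multiply + add per MAC
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)   # read A, B; write C
    return flops / bytes_moved

def bound_by(intensity, peak_flops, peak_bw):
    """Roofline rule of thumb: below the machine's FLOP/byte ridge point
    the kernel is bandwidth bound; above it, compute bound."""
    ridge = peak_flops / peak_bw
    return "memory-bound" if intensity < ridge else "compute-bound"

# Illustrative accelerator: 300 TFLOP/s fp16, 2 TB/s HBM -> ridge = 150 FLOP/byte.
PEAK_FLOPS, PEAK_BW = 300e12, 2e12

# Batch-1 decode GEMV: intensity near 1 FLOP/byte, firmly bandwidth bound.
gemv = arithmetic_intensity_matmul(1, 4096, 4096)
# Large prefill GEMM: intensity in the thousands, compute bound.
gemm = arithmetic_intensity_matmul(4096, 4096, 4096)
```

This is exactly the decode-versus-prefill split in LLM serving: batch-1 decode GEMVs sit far below the ridge point and are bandwidth bound, which is why batching requests together raises intensity and throughput.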

Above all, we're looking for great teammates who make work feel lighter and aren't afraid to go out on a limb with bold ideas. You don't need to be perfect, but you do need to be adaptable. We encourage you to apply, even if you don't check every box.

 
Benefits
  • Flexible work: In-person collaboration in the Bay Area, a distributed global-first team, and team offsites.

  • Adaption Passport: Annual travel stipend to explore a country you've never visited. We're building intelligence that evolves alongside you, so we encourage you to keep expanding your horizons.

  • Lunch Stipend: Weekly meal allowance for take-out or grocery delivery.

  • Well-Being: Comprehensive medical benefits and generous paid time off.
