
CoreWeave

Director of Engineering, Inference Services

In-Office
2 Locations
$206K-$303K Annually
Expert/Leader
The Director of Engineering will oversee the development of CoreWeave's Inference Platform, focusing on high-performance GPU inference services, leading engineering teams, and collaborating cross-functionally to enhance model-serving capabilities.
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com.
About this Role:

CoreWeave is looking for a Director of Engineering to own and scale our next-generation Inference Platform. In this highly technical, strategic role, you will lead a world-class engineering organization to design, build, and operate the fastest, most cost-efficient, and most reliable GPU inference services in the industry. Your charter spans everything from model-serving runtimes (e.g., Triton, vLLM, TensorRT-LLM) and autoscaling micro-batch schedulers to developer-friendly SDKs and airtight, multi-tenant security, all delivered on CoreWeave's unique accelerated-compute infrastructure.

What You'll Do:
  • Vision & Roadmap - Define and continuously refine the end-to-end Inference Platform roadmap, prioritizing low-latency, high-throughput model serving and world-class developer UX. Set technical standards for runtime selection, GPU/CPU heterogeneity, quantization, and model-optimization techniques.
  • Platform Architecture - Design and implement a global, Kubernetes-native inference control plane that delivers <50 ms P99 latencies at scale. Build adaptive micro-batching, request-routing, and autoscaling mechanisms that maximize GPU utilization while meeting strict SLAs. Integrate model-optimization pipelines (TensorRT, ONNX Runtime, BetterTransformer, AWQ, etc.) for frictionless deployment.
  • Runtime Optimization - Implement state-of-the-art runtime optimizations, including speculative decoding, KV-cache reuse across batches, early-exit heuristics, and tensor-parallel streaming, to squeeze every microsecond out of LLM inference while retaining accuracy.
  • Operational Excellence - Establish SLO/SLA dashboards, real-time observability, and self-healing mechanisms for thousands of models across multiple regions. Drive cost-performance trade-off tooling that makes it trivial for customers to choose the best hardware tier for each workload.
  • Leadership - Hire, mentor, and grow a diverse team of engineers and managers passionate about large-scale AI inference. Foster a customer-obsessed, metrics-driven engineering culture with crisp design reviews and blameless post-mortems.
  • Collaboration - Partner closely with Product, Orchestration, Networking, and Security teams to deliver a unified CoreWeave experience. Engage directly with flagship customers (internal and external) to gather feedback and shape the roadmap.
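The adaptive micro-batching mentioned above can be illustrated with a minimal sketch: collect incoming requests until either the batch fills or a latency budget expires, whichever comes first. The class name, batch size, and wait threshold here are hypothetical illustrations, not CoreWeave's implementation.

```python
import queue
import time


class MicroBatcher:
    """Illustrative micro-batcher: drain up to max_batch requests,
    but never hold the first request longer than max_wait_ms."""

    def __init__(self, max_batch: int = 8, max_wait_ms: float = 5.0):
        self.max_batch = max_batch
        self.max_wait_s = max_wait_ms / 1000.0
        self.requests: queue.Queue = queue.Queue()

    def submit(self, request) -> None:
        self.requests.put(request)

    def next_batch(self) -> list:
        # Block for the first request, then opportunistically drain
        # more requests within the remaining latency budget.
        batch = [self.requests.get()]
        deadline = time.monotonic() + self.max_wait_s
        while len(batch) < self.max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(self.requests.get(timeout=remaining))
            except queue.Empty:
                break
        return batch
```

The two knobs encode the core trade-off: a larger `max_wait_ms` improves GPU utilization by forming fuller batches, while a smaller one protects tail latency for sparse traffic.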
Who You Are: 
  • 10+ years building large-scale distributed systems or cloud services, with 5+ years leading multiple engineering teams.
  • Proven success delivering mission-critical model-serving or real-time data-plane services (e.g., Triton, TorchServe, vLLM, Ray Serve, SageMaker Inference, GCP Vertex Prediction).
  • Deep understanding of GPU/CPU resource isolation, NUMA-aware scheduling, micro-batching, and low-latency networking (gRPC, QUIC, RDMA).
  • Track record of optimizing cost-per-token / cost-per-request and hitting sub-100 ms global P99 latencies.
  • Expertise in Kubernetes, service meshes, and CI/CD for ML workloads; familiarity with Slurm, Kueue, or other schedulers a plus.
  • Hands-on experience with LLM optimization (quantization, compilation, tensor parallelism, speculative decoding) and hardware-aware model compression.
  • Excellent communicator who can translate deep technical concepts into clear business value for C-suite and engineering audiences.
  • Bachelor’s or Master’s in CS, EE, or related field (or equivalent practical experience).
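The cost-per-token optimization called out above reduces to straightforward arithmetic relating GPU pricing to sustained decode throughput. The function and all figures below are hypothetical illustrations of that relationship, not CoreWeave pricing.

```python
def cost_per_million_tokens(gpu_hour_usd: float,
                            tokens_per_second: float,
                            num_gpus: int = 1) -> float:
    """Dollars per one million generated tokens for a serving replica.

    gpu_hour_usd:      hourly price of one GPU (hypothetical figure)
    tokens_per_second: aggregate sustained decode throughput of the replica
    num_gpus:          GPUs in the replica (e.g., a tensor-parallel group)
    """
    tokens_per_hour = tokens_per_second * 3600
    return (gpu_hour_usd * num_gpus) / tokens_per_hour * 1_000_000

# A hypothetical $2.50/hr GPU sustaining 1,000 tok/s:
# 2.5 / 3,600,000 * 1e6 ≈ $0.69 per million tokens
```

This is why throughput optimizations such as quantization and speculative decoding translate directly into cost: doubling tokens-per-second on the same hardware halves cost-per-token.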
Nice-to-have:
  • Experience operating multi-region inference fleets at a cloud provider or hyperscaler.
  • Contributions to open-source inference or MLOps projects.
  • Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) for AI workloads.
  • Background in edge inference, streaming inference, or real-time personalization systems.

The base salary range for this role is $206,000 to $303,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility). 

What We Offer

The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.

In addition to a competitive salary, we offer a variety of benefits to support your needs, including:

  • Medical, dental, and vision insurance - 100% paid for by CoreWeave
  • Company-paid Life Insurance 
  • Voluntary supplemental life insurance 
  • Short and long-term disability insurance 
  • Flexible Spending Account
  • Health Savings Account
  • Tuition Reimbursement 
  • Ability to Participate in Employee Stock Purchase Program (ESPP)
  • Mental Wellness Benefits through Spring Health 
  • Family-Forming support provided by Carrot
  • Paid Parental Leave 
  • Flexible, full-service childcare support with Kinside
  • 401(k) with a generous employer match
  • Flexible PTO
  • Catered lunch each day in our office and data center locations
  • A casual work environment
  • A work culture focused on innovative disruption

Our Workplace

While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.

California Consumer Privacy Act - California applicants only

CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.

As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: [email protected].


Export Control Compliance

This position requires access to export controlled information.  To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency.  CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.

Top Skills

AWQ
BetterTransformer
CI/CD
gRPC
Kubernetes
ONNX Runtime
QUIC
RDMA
TensorRT-LLM
Triton

CoreWeave Sunnyvale, California, USA Office



