
Hippocratic AI

LLM Inference Engineer

In-Office
Palo Alto, CA
Mid level
About Us

Hippocratic AI has developed a safety-focused Large Language Model (LLM) for healthcare. The company believes that a safe LLM can dramatically improve healthcare accessibility and health outcomes worldwide by bringing deep healthcare expertise to every human. No other technology has the potential for this level of global impact on health.

Why Join Our Team
  • Innovative Mission: We are developing a safe, healthcare-focused large language model (LLM) designed to revolutionize health outcomes on a global scale.

  • Visionary Leadership: Hippocratic AI was co-founded by CEO Munjal Shah, alongside a group of physicians, hospital administrators, healthcare professionals, and artificial intelligence researchers from leading institutions, including El Camino Health, Johns Hopkins, Stanford, Microsoft, Google, and NVIDIA.

  • Strategic Investors: We have raised a total of $400+ million in funding, backed by top investors such as Andreessen Horowitz, General Catalyst, Kleiner Perkins, NVIDIA’s NVentures, Premji Invest, SV Angel, and six health systems.

  • World-Class Team: Our team is composed of leading experts in healthcare and artificial intelligence, ensuring our technology is safe, effective, and capable of delivering meaningful improvements to healthcare delivery and outcomes.

For more information, visit www.HippocraticAI.com.

We value in-person teamwork and believe the best ideas happen together. Our team is expected to be in the office five days a week in Palo Alto, CA unless explicitly noted otherwise in the job description.

About the Role

We're seeking an experienced LLM Inference Engineer to optimize our large language model (LLM) serving infrastructure. The ideal candidate has:

  • Extensive hands-on experience with state-of-the-art inference optimization techniques

  • A track record of deploying efficient, scalable LLM systems in production environments

Key Responsibilities
  • Design and implement multi-node serving architectures for distributed LLM inference

  • Optimize multi-LoRA serving systems

  • Apply advanced quantization techniques (FP4/FP6) to reduce model footprint while preserving quality (see the toy sketch after this list)

  • Implement speculative decoding and other latency optimization strategies

  • Develop disaggregated serving solutions with optimized caching strategies for prefill and decoding phases

  • Continuously benchmark and improve system performance across various deployment scenarios and GPU types
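
As a concrete illustration of the quantization bullet above, here is a minimal PyTorch sketch of block-wise symmetric weight quantization. It is only a toy: FP4/FP6 are hardware datatypes served through fused kernels in frameworks such as TensorRT-LLM or vLLM, whereas this sketch stores 4-bit values in int8 purely to show the footprint/quality tradeoff; the bit width and block size are illustrative assumptions.

```python
# Toy block-wise symmetric quantization of a weight matrix.
# NOT the FP4/FP6 production path; a sketch of the underlying idea only.
import torch

def quantize_blockwise(w: torch.Tensor, bits: int = 4, block: int = 64):
    """Quantize each contiguous block of `block` weights to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit signed
    flat = w.reshape(-1, block)
    scale = flat.abs().amax(dim=1, keepdim=True) / qmax
    scale = torch.clamp(scale, min=1e-8)             # guard against all-zero blocks
    q = torch.clamp(torch.round(flat / scale), -qmax - 1, qmax).to(torch.int8)
    return q, scale

def dequantize_blockwise(q: torch.Tensor, scale: torch.Tensor, shape) -> torch.Tensor:
    return (q.float() * scale).reshape(shape)

if __name__ == "__main__":
    w = torch.randn(4096, 4096)
    q, scale = quantize_blockwise(w)
    w_hat = dequantize_blockwise(q, scale, w.shape)
    rel_err = (w - w_hat).abs().mean() / w.abs().mean()
    # The 4-bit payload is roughly 1/8 the bytes of fp32, plus one scale per block.
    print(f"mean relative error after round-trip: {rel_err:.4f}")
```

Production low-bit deployments follow the same pattern of per-block scales, but keep the weights in the hardware's native format and fuse dequantization into the matmul kernels.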

Required Qualifications
  • Experience optimizing LLM inference systems at scale

  • Proven expertise with distributed serving architectures for large language models

  • Hands-on experience implementing quantization techniques for transformer models

  • Strong understanding of modern inference optimization methods, including:

    • Speculative decoding techniques with draft models (a minimal sketch follows this list)

    • EAGLE-style speculative decoding approaches

  • Proficiency in Python and C++

  • Experience with CUDA programming and GPU optimization
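
To make the speculative decoding requirement concrete, below is a minimal greedy draft-and-verify sketch using Hugging Face transformers. The model names are placeholders, the accept rule is the simple greedy-match variant (production systems use probabilistic acceptance to preserve the target's sampling distribution), and KV caching is omitted for brevity; serving stacks such as vLLM, SGLang, and TensorRT-LLM implement this with batched verification kernels.

```python
# Greedy speculative decoding sketch: a small draft model proposes k tokens,
# the target model verifies them in a single forward pass, and the longest
# agreeing prefix is accepted. Placeholder models; no KV cache for brevity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DRAFT_NAME = "gpt2"          # placeholder draft model
TARGET_NAME = "gpt2-medium"  # placeholder target model (shared tokenizer)

tok = AutoTokenizer.from_pretrained(TARGET_NAME)
draft = AutoModelForCausalLM.from_pretrained(DRAFT_NAME).eval()
target = AutoModelForCausalLM.from_pretrained(TARGET_NAME).eval()

@torch.no_grad()
def speculative_generate(prompt: str, max_new_tokens: int = 64, k: int = 4) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    start = ids.shape[1]
    while ids.shape[1] - start < max_new_tokens:
        # 1) Draft k tokens greedily with the small model.
        draft_ids = ids
        for _ in range(k):
            next_tok = draft(draft_ids).logits[:, -1, :].argmax(-1, keepdim=True)
            draft_ids = torch.cat([draft_ids, next_tok], dim=1)
        proposed = draft_ids[:, ids.shape[1]:]                          # (1, k)

        # 2) Verify all k proposals with one target forward pass.
        tgt_logits = target(draft_ids).logits                           # (1, L+k, V)
        tgt_pred = tgt_logits[:, ids.shape[1] - 1 : -1, :].argmax(-1)   # (1, k)

        # 3) Accept the longest prefix where draft and target agree.
        agree = (proposed == tgt_pred)[0].long()
        n_accept = int(agree.cumprod(0).sum())

        # 4) Append the target's own next token after the accepted prefix, so the
        #    output matches plain greedy decoding with the target model.
        bonus = tgt_logits[:, ids.shape[1] - 1 + n_accept, :].argmax(-1, keepdim=True)
        ids = torch.cat([ids, proposed[:, :n_accept], bonus], dim=1)
    return tok.decode(ids[0, start:start + max_new_tokens], skip_special_tokens=True)

print(speculative_generate("Speculative decoding speeds up inference by"))
```

The speedup scales with the draft acceptance rate; EAGLE-style approaches improve that rate by drafting from the target model's own hidden states rather than a separate small model.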

Preferred Qualifications
  • Contributions to open-source inference frameworks such as vLLM, SGLang, or TensorRT-LLM

  • Experience with custom CUDA kernels

  • Track record of deploying inference systems in production environments

  • Deep understanding of systems-level performance optimization

Show us what you've built: Tell us about an LLM inference or training project that makes you proud! Whether you've optimized inference pipelines to achieve breakthrough performance, designed innovative training techniques, or built systems that scale to billions of parameters, we want to hear your story.


Open source contributor? Even better! If you've contributed to projects like vLLM, SGLang, LMDeploy, or similar LLM optimization frameworks, we'd love to see your PRs. Your contributions to these communities demonstrate exactly the kind of collaborative innovation we value.

Join a team where your expertise won't just be appreciated—it will be celebrated and amplified. Help us shape the future of AI deployment at scale!

References


1. Polaris: A Safety-focused LLM Constellation Architecture for Healthcare, https://arxiv.org/abs/2403.13313
2. Polaris 2: https://www.hippocraticai.com/polaris2
3. Personalized Interactions: https://www.hippocraticai.com/personalized-interactions
4. Human Touch in AI: https://www.hippocraticai.com/the-human-touch-in-ai
5. Empathetic Intelligence: https://www.hippocraticai.com/empathetic-intelligence
6. Polaris 1: https://www.hippocraticai.com/research/polaris
7. Research and clinical blogs: https://www.hippocraticai.com/research

Be aware of recruitment scams impersonating Hippocratic AI. All recruiting communication will come from @hippocraticai.com email addresses. We will never request payment or sensitive personal information during the hiring process. If anything appears suspicious, stop engaging immediately and report the incident.

Top Skills

C++
CUDA
FP4
FP6
Multi-LoRA
Python
SGLang
TensorRT-LLM
vLLM
HQ

Hippocratic AI Palo Alto, California, USA Office

167 Hamilton Ave, 3rd Floor, Palo Alto, California, United States, 94301


