
Databricks

Staff Software Engineer - GenAI inference

Reposted 8 Days Ago
In-Office
San Francisco, CA
191K-233K Annually
Senior level

P-1285

About This Role

As a staff software engineer for GenAI inference, you will lead the architecture, development, and optimization of the inference engine that powers the Databricks Foundation Model API. You’ll bridge research advances and production demands, ensuring high throughput, low latency, and robust scaling. Your work will encompass the full GenAI inference stack: kernels, runtimes, orchestration, memory management, and integration with serving frameworks.

What You Will Do
  • Own and drive the architecture, design, and implementation of the inference engine, and collaborate on the model-serving stack optimized for large-scale LLM inference
  • Partner closely with researchers to bring new model architectures or features (sparsity, activation compression, mixture-of-experts) into the engine
  • Lead the end-to-end optimization for latency, throughput, memory efficiency, and hardware utilization across GPUs and accelerators
  • Define standards for building and maintaining instrumentation, profiling, and tracing tooling that uncovers bottlenecks and guides optimization
  • Architect scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads (see the illustrative batching sketch after this list)
  • Ensure reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollback, and model versioning
  • Collaborate cross-functionally on integrating with federated, distributed inference infrastructure: orchestrating across nodes, balancing load, and handling communication overhead
  • Drive cross-team collaboration with platform engineering, cloud infrastructure, and security/compliance teams
  • Represent the team externally through benchmarks, whitepapers, and open-source contributions
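
To make the batching and scheduling responsibilities above concrete, here is a minimal, illustrative sketch of a continuous-batching decode loop of the kind used in LLM inference engines. All names (Request, DummyModel, serve, max_batch_size) are hypothetical and exist only for this sketch; it is not the Databricks implementation.

# Minimal, illustrative continuous-batching loop for LLM inference.
# Hypothetical names throughout; a real engine would call into GPU
# kernels here instead of the DummyModel stand-in.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Request:
    prompt: str
    max_new_tokens: int
    generated: list = field(default_factory=list)

class DummyModel:
    """Stand-in for a real decoder: emits one token per request per step."""
    def decode_step(self, batch):
        return [f"tok{len(r.generated)}" for r in batch]

def serve(requests, model, max_batch_size=8):
    waiting = deque(requests)
    running, finished = [], []
    while waiting or running:
        # Admit new requests whenever slots are free (continuous batching),
        # rather than waiting for the current batch to drain.
        while waiting and len(running) < max_batch_size:
            running.append(waiting.popleft())
        # One decode step advances every in-flight request by one token.
        for req, tok in zip(running, model.decode_step(running)):
            req.generated.append(tok)
        # Retire completed requests so their slots free up immediately.
        finished += [r for r in running if len(r.generated) >= r.max_new_tokens]
        running = [r for r in running if len(r.generated) < r.max_new_tokens]
    return finished

if __name__ == "__main__":
    reqs = [Request(f"prompt {i}", max_new_tokens=3 + i % 3) for i in range(10)]
    print(f"completed {len(serve(reqs, DummyModel()))} requests")

The key design point is that completed requests release their slots immediately, so new requests are admitted at every decode step instead of waiting for the whole batch to finish.
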
What We Look For
  • BS/MS/PhD in Computer Science or a related field
  • Strong software engineering background (6+ years or equivalent) in performance-critical systems
  • Proven track record of owning complex system components and driving architectural decisions end-to-end
  • Deep understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc. (a minimal quantization sketch follows this list)
  • Hands-on experience with CUDA, GPU programming, and key libraries (cuBLAS, cuDNN, NCCL, etc.)
  • Strong background in distributed systems design, including RPC frameworks, queuing, batching, sharding, and memory partitioning
  • Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)
  • Experience building instrumentation, tracing, and profiling tools for ML models
  • Ability to lead through influence: work closely with ML researchers and translate novel model ideas into production systems
  • Excellent communication and leadership skills, with a proactive and ownership-driven mindset
  • Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving
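
As one small illustration of the inference internals listed above, here is a generic NumPy sketch of symmetric per-tensor int8 weight quantization. It is an assumption-level example unrelated to the Databricks stack; production engines typically quantize per channel and fuse dequantization into the matmul kernels.

# Generic sketch of symmetric per-tensor int8 weight quantization.
# Not tied to any particular serving stack.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map the largest absolute weight to the int8 limit (127)."""
    scale = float(np.max(np.abs(w))) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 8)).astype(np.float32)
    q, scale = quantize_int8(w)
    max_err = float(np.max(np.abs(w - dequantize(q, scale))))
    print(f"max abs reconstruction error: {max_err:.4f}")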


Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed below. For more information regarding which range your location is in, visit our page here.


Local Pay Range
$190,900–$232,800 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. 

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.

Top Skills

cuBLAS
CUDA
cuDNN
GPU programming
NCCL

Databricks San Francisco, California, USA Office

160 Spear Street, San Francisco, CA, United States, 94105


What you need to know about the San Francisco Tech Scene

San Francisco and the surrounding Bay Area attracts more startup funding than any other region in the world. Home to Stanford University and UC Berkeley, leading VC firms and several of the world’s most valuable companies, the Bay Area is the place to go for anyone looking to make it big in the tech industry. That said, San Francisco has a lot to offer beyond technology thanks to a thriving art and music scene, excellent food and a short drive to several of the country’s most beautiful recreational areas.

Key Facts About San Francisco Tech

  • Number of Tech Workers: 365,500; 13.9% of overall workforce (2024 CompTIA survey)
  • Major Tech Employers: Google, Apple, Salesforce, Meta
  • Key Industries: Artificial intelligence, cloud computing, fintech, consumer technology, software
  • Funding Landscape: $50.5 billion in venture capital funding in 2024 (Pitchbook)
  • Notable Investors: Sequoia Capital, Andreessen Horowitz, Bessemer Venture Partners, Greylock Partners, Khosla Ventures, Kleiner Perkins
  • Research Centers and Universities: Stanford University; University of California, Berkeley; University of San Francisco; Santa Clara University; Ames Research Center; Center for AI Safety; California Institute for Regenerative Medicine
