Etched

Head of Inference Kernels

In-Office
San Jose, CA
200K-300K Annually
Senior level

About Etched

Etched is building the world’s first AI inference system purpose-built for transformers, delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents. Backed by hundreds of millions from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest-growing industry in history.

Job Summary
As a core member of the team, you will lead a high-performing group of engineers building a suite of optimized kernels and highly optimized inference stacks for a variety of state-of-the-art transformer models (e.g., Llama-3, Llama-4, DeepSeek-R1, Qwen-3, Stable Diffusion 3). You will be responsible for managing and scaling this team to pioneer novel model-mapping strategies while co-designing inference-time algorithms (e.g., speculative and parallel decoding, prefill-decode disaggregation).

Key responsibilities

  • Architect Best-in-Class Inference Performance on Sohu: Deliver continuous-batching throughput that exceeds a B200 by ≥10x on priority workloads

  • Build Best-in-Class Inference Mega Kernels: Develop complex, fused kernels (from basics like reordering and fusion to more advanced work such as overlapping the computation and transmission of intermediate values across sequential matmuls) that increase chip utilization and reduce inference latency, and validate these optimizations through benchmarking and regression testing in production pipelines (see the fused-kernel sketch after this list).

  • Architect Model Mapping Strategies: Develop system-level optimizations using a mix of techniques such as tensor parallelism and expert parallelism for optimal performance (a toy tensor-parallel example also follows this list).

  • Hardware-Software Co-design of Inference-Time Algorithmic Innovation: Develop and deploy production-ready inference-time algorithmic improvements (e.g., speculative decoding, prefill-decode disaggregation, KV cache offloading); a toy speculative-decoding sketch follows this list as well.

  • Build Scalable Team and Roadmap: Grow and retain a team of high-performing inference optimization engineers.

  • Cross-Functional Performance Alignment: Ensure the inference stack and performance goals are aligned with software infrastructure teams (e.g., runtime and scheduling support), GTM (e.g., latency SLAs, workload targets), and hardware teams (e.g., instruction design, memory bandwidth) for future generations of our hardware.
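The kernel-fusion work in the mega-kernel bullet above can be illustrated with a minimal Triton sketch: a single kernel computes a matmul tile and applies the bias and ReLU epilogue in registers, so the intermediate result never round-trips through off-chip memory. Tile sizes, shapes, and the CUDA target are illustrative assumptions; this is not Sohu or Etched production code.

```
# Minimal sketch of kernel fusion in Triton: C = relu(A @ B + bias) in one kernel,
# so the matmul output is consumed in registers instead of being written out and
# re-read by separate bias/activation kernels. Illustrative only; tile sizes and
# the CUDA target are assumptions, not Sohu kernel code.
import torch
import triton
import triton.language as tl


@triton.jit
def fused_matmul_bias_relu(a_ptr, b_ptr, bias_ptr, c_ptr, M, N, K,
                           BLOCK_M: tl.constexpr, BLOCK_N: tl.constexpr,
                           BLOCK_K: tl.constexpr):
    pid_m = tl.program_id(0)
    pid_n = tl.program_id(1)
    offs_m = pid_m * BLOCK_M + tl.arange(0, BLOCK_M)
    offs_n = pid_n * BLOCK_N + tl.arange(0, BLOCK_N)
    acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=tl.float32)
    for k in range(0, K, BLOCK_K):
        offs_k = k + tl.arange(0, BLOCK_K)
        a = tl.load(a_ptr + offs_m[:, None] * K + offs_k[None, :])
        b = tl.load(b_ptr + offs_k[:, None] * N + offs_n[None, :])
        acc += tl.dot(a, b)
    bias = tl.load(bias_ptr + offs_n)
    acc = tl.maximum(acc + bias[None, :], 0.0)  # fused bias + ReLU epilogue
    tl.store(c_ptr + offs_m[:, None] * N + offs_n[None, :], acc)


def run():
    M, N, K = 256, 256, 256
    a = torch.randn(M, K, device="cuda")
    b = torch.randn(K, N, device="cuda")
    bias = torch.randn(N, device="cuda")
    c = torch.empty(M, N, device="cuda")
    grid = (triton.cdiv(M, 64), triton.cdiv(N, 64))
    fused_matmul_bias_relu[grid](a, b, bias, c, M, N, K,
                                 BLOCK_M=64, BLOCK_N=64, BLOCK_K=32)
    torch.testing.assert_close(c, torch.relu(a @ b + bias), atol=1e-2, rtol=1e-2)


if __name__ == "__main__":
    run()
```

The "mega kernel" idea in the bullet is this pattern taken further: the more of the forward pass that stays resident on-chip between operations, the less memory bandwidth and launch overhead is paid per token.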
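For the model-mapping bullet, here is a toy NumPy illustration of Megatron-style tensor parallelism for a two-layer MLP: the first weight matrix is split by columns and the second by rows, so each "device" computes a partial result and a single sum (standing in for the all-reduce) recovers the exact output. The two-way split and the shapes are illustrative assumptions.

```
# Toy tensor-parallelism sketch (NumPy): column-split the up-projection,
# row-split the down-projection, and recover the exact MLP output with a single
# sum per layer pair. Shapes and the 2-way split are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_dev = 64, 256, 2

x = rng.standard_normal((8, d_model))        # (tokens, d_model)
w1 = rng.standard_normal((d_model, d_ff))    # up projection
w2 = rng.standard_normal((d_ff, d_model))    # down projection

w1_shards = np.split(w1, n_dev, axis=1)      # each device holds a column slice of w1
w2_shards = np.split(w2, n_dev, axis=0)      # ...and the matching row slice of w2

# Each device computes a partial output; the sum plays the role of the all-reduce.
partials = [np.maximum(x @ w1_s, 0.0) @ w2_s
            for w1_s, w2_s in zip(w1_shards, w2_shards)]
y_parallel = sum(partials)

y_reference = np.maximum(x @ w1, 0.0) @ w2   # unsharded MLP
assert np.allclose(y_parallel, y_reference)
```

Expert parallelism follows the same spirit, except that whole experts (entire MLPs) rather than slices of one weight matrix are placed on different devices and tokens are routed to them.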
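And for the inference-time algorithms bullet, a toy sketch of the speculative-decoding accept/reject rule: a cheap draft distribution q proposes up to k tokens, the target distribution p verifies them, and the first rejected position is resampled from the normalized residual max(p - q, 0), which preserves the target distribution. The fixed random "distributions" below are hypothetical stand-ins for real draft and target model outputs, not an inference stack.

```
# Toy speculative-decoding sketch: propose from a draft distribution q, accept
# each proposal with probability min(1, p/q) against the target distribution p,
# and resample a rejected position from the residual max(p - q, 0).
# The fixed categorical "models" are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 32

def toy_dist():
    z = rng.random(VOCAB) + 1e-6
    return z / z.sum()

def speculative_step(p, q, k=4):
    """Propose up to k draft tokens and return the verified ones."""
    accepted = []
    for _ in range(k):
        tok = rng.choice(VOCAB, p=q)
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(int(tok))            # accepted draft token
        else:
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(VOCAB, p=residual)))
            break                                # stop at the first rejection
    return accepted

p_target, q_draft = toy_dist(), toy_dist()
print(speculative_step(p_target, q_draft))
```

In a real stack the target model scores all k draft tokens in one batched forward pass, which is where the latency win comes from; prefill-decode disaggregation and KV cache offloading are separate levers not shown here.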

Representative projects

  • Develop optimized kernels for multi-head latent attention on Sohu (see the latent-attention sketch after this list)

  • Develop strategies to optimally overlap compute and communication in mixture-of-experts layers (a toy overlap example also follows this list)

  • Organize the team to deliver production-ready forward-pass implementations of new state-of-the-art models within 2 weeks of their release, and build infrastructure that brings this to under 1 week in the future.
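As a rough illustration of the first project, here is a NumPy sketch of the latent-compression idea behind multi-head latent attention: keys and values are jointly down-projected into a small per-token latent, that latent is what would be cached, and per-head keys and values are re-expanded at attention time. Dimensions and projection names are illustrative assumptions, and details of the published formulation (e.g., the decoupled RoPE path) are omitted.

```
# NumPy sketch of multi-head latent attention's core idea: cache a small
# per-token KV latent instead of full per-head K/V, and up-project per head at
# attention time. Shapes and weight names are illustrative; RoPE handling and
# other details of the published formulation are omitted.
import numpy as np

rng = np.random.default_rng(0)
T, d_model, d_latent, n_heads, d_head = 16, 64, 8, 4, 16

x = rng.standard_normal((T, d_model))
w_dkv = rng.standard_normal((d_model, d_latent)) * 0.1          # shared KV down-projection
w_uk = rng.standard_normal((n_heads, d_latent, d_head)) * 0.1   # per-head K up-projection
w_uv = rng.standard_normal((n_heads, d_latent, d_head)) * 0.1   # per-head V up-projection
w_q = rng.standard_normal((n_heads, d_model, d_head)) * 0.1

c_kv = x @ w_dkv                              # (T, d_latent): this is the KV cache
q = np.einsum("td,hde->hte", x, w_q)
k = np.einsum("tl,hle->hte", c_kv, w_uk)
v = np.einsum("tl,hle->hte", c_kv, w_uv)

scores = np.einsum("hte,hse->hts", q, k) / np.sqrt(d_head)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = np.einsum("hts,hse->hte", weights, v)   # (n_heads, T, d_head)

# Cache footprint per token: d_latent floats vs. 2 * n_heads * d_head for plain MHA.
print(c_kv.shape, out.shape)
```

The kernel-level interest here is the changed data-movement pattern: the cache that must be streamed per decoded token is the small latent rather than full per-head keys and values.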
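And for the compute/communication-overlap project, a toy PyTorch sketch of the pattern using two CUDA streams: host-to-device copies (standing in here for an MoE token dispatch) are issued on one stream while expert matmuls run on another, with per-chunk events so a chunk's compute waits only on its own transfer. It assumes a CUDA device and is a generic GPU illustration, not Sohu-specific.

```
# Toy overlap sketch: copies on one CUDA stream, expert matmuls on another,
# gated per chunk by events so each chunk's compute waits only on its own copy.
# The H2D copies stand in for an MoE token dispatch; shapes are illustrative.
import torch

assert torch.cuda.is_available(), "this sketch assumes a CUDA device"
device = torch.device("cuda")

copy_stream = torch.cuda.Stream()
compute_stream = torch.cuda.Stream()

n_chunks, tokens, d_model = 8, 1024, 1024
expert_w = torch.randn(d_model, d_model, device=device)
host_chunks = [torch.randn(tokens, d_model, pin_memory=True) for _ in range(n_chunks)]

device_chunks = [None] * n_chunks
copy_done = [torch.cuda.Event() for _ in range(n_chunks)]
outputs = []

for i in range(n_chunks):
    # Enqueue chunk i's transfer on the copy stream and mark it with an event.
    with torch.cuda.stream(copy_stream):
        device_chunks[i] = host_chunks[i].to(device, non_blocking=True)
        copy_done[i].record(copy_stream)
    # Enqueue chunk i's expert matmul on the compute stream; it waits only on
    # chunk i's copy, so later transfers keep overlapping with this compute.
    with torch.cuda.stream(compute_stream):
        compute_stream.wait_event(copy_done[i])
        outputs.append(device_chunks[i] @ expert_w)

torch.cuda.synchronize()
print(len(outputs), outputs[0].shape)
```

On an accelerator with explicit DMA engines the same idea applies, but the scheduling is typically expressed in the kernel or compiler rather than with stream objects.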

You may be a good fit if you have

  • Experience designing and optimizing GPU kernels for deep learning using CUDA and assembly (ASM), including low-level programming to maximize performance for AI operations and leveraging tools like Compute Kernel (CK), CUTLASS, and Triton for multi-GPU and multi-platform performance.

  • Deep fluency with transformer inference architecture, optimization levers, and full-stack systems (e.g., vLLM, custom runtimes), plus a history of delivering tangible performance wins on GPU hardware or custom AI accelerators.

  • A solid understanding of roofline models of compute throughput, memory bandwidth, and interconnect performance (a worked example follows this list).

  • Experience running large-scale workloads on heterogeneous compute clusters, optimizing for the efficiency and scalability of AI workloads.

  • The ability to scope projects crisply, set aggressive but realistic milestones, and drive technical decision-making across the team, anticipating blockers and shifting resources proactively.
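For the roofline bullet, here is a small worked example of the reasoning: arithmetic intensity (FLOPs per byte moved) determines whether a kernel is limited by peak compute or by memory bandwidth, and the ridge point is where the two limits cross. The peak numbers are round illustrative figures, not Sohu or B200 specifications.

```
# Worked roofline example: attainable throughput = min(peak compute,
# bandwidth * arithmetic intensity). Hardware numbers are illustrative.
PEAK_FLOPS = 1.0e15                   # assumed peak compute: 1 PFLOP/s
PEAK_BW = 4.0e12                      # assumed memory bandwidth: 4 TB/s
ridge_point = PEAK_FLOPS / PEAK_BW    # 250 FLOPs/byte: where the limits cross

def attainable(intensity_flops_per_byte):
    """Roofline model of attainable throughput for a given arithmetic intensity."""
    return min(PEAK_FLOPS, PEAK_BW * intensity_flops_per_byte)

# Batch-1 decode is GEMV-like: ~2 FLOPs per weight, 2 bytes per fp16 weight,
# so roughly 1 FLOP/byte -- far below the ridge point, i.e. bandwidth-bound.
# A large batched matmul can reach hundreds of FLOPs/byte, i.e. compute-bound.
for name, intensity in [("decode GEMV", 1.0), ("batched matmul", 300.0)]:
    bound = "bandwidth" if intensity < ridge_point else "compute"
    print(f"{name}: {attainable(intensity):.2e} FLOP/s ({bound}-bound)")
```

The same arithmetic extends to interconnects by treating bytes moved over the link, rather than bytes moved from memory, as the denominator.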

Strong candidates may also have

  • Experience implementing state-of-the-art reasoning and chain-of-thought models at production scale

  • Experience implementing newer AI compute operations on hardware (e.g., flash attention, long-context attention variants and alternatives); a sketch of the online-softmax idea behind flash attention follows this list

  • Experience analyzing and implementing strategies such as KV-cache offloading for efficient compute-resource management

  • Familiarity with linear algebra (e.g., matrix decomposition, alternative bases for vector spaces, matrix rank and its implications)

  • Experience managing lean, high-performing engineering teams and driving execution on timelines with high-quality outcomes
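To make the attention-variants bullet concrete, here is a NumPy sketch of the online-softmax blocking that flash attention is built on: key/value blocks are processed one at a time with a running row maximum and running normalizer, so the full T x T score matrix is never materialized. Block size and shapes are illustrative; masking, multi-head handling, and the backward pass are omitted.

```
# NumPy sketch of online-softmax blocked attention (the idea behind flash
# attention): process K/V in blocks with a running max and running sum so the
# full score matrix never exists in memory. Illustrative shapes; no masking,
# no multi-head handling, forward pass only.
import numpy as np

rng = np.random.default_rng(0)
T, d = 128, 32
q, k, v = (rng.standard_normal((T, d)) for _ in range(3))

def blockwise_attention(q, k, v, block=32):
    scale = 1.0 / np.sqrt(q.shape[1])
    out = np.zeros_like(q)
    row_max = np.full(q.shape[0], -np.inf)   # running max of scores per query row
    row_sum = np.zeros(q.shape[0])           # running softmax normalizer per row
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = (q @ kb.T) * scale                       # (T, block) score tile
        new_max = np.maximum(row_max, s.max(axis=1))
        rescale = np.exp(row_max - new_max)          # correct earlier partial sums
        p = np.exp(s - new_max[:, None])
        row_sum = row_sum * rescale + p.sum(axis=1)
        out = out * rescale[:, None] + p @ vb
        row_max = new_max
    return out / row_sum[:, None]

# Reference: ordinary softmax attention over the full score matrix.
scores = (q @ k.T) / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)
assert np.allclose(blockwise_attention(q, k, v), weights @ v)
```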

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to San Jose (Santana Row)

Compensation

  • $200,000 - $300,000 + significant equity package

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in San Jose (Santana Row), and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

Top Skills

ASM
Compute Kernel (CK)
CUDA
CUTLASS
Triton
HQ

Etched office: Cupertino, CA, United States
