Who we are:
Persimmons is building the infrastructure that will power the next decade of AI. Founded in 2023 by veteran technologists from the worlds of semiconductors, AI systems, and software innovation, we're on a mission to enable smarter devices, more sustainable data centers, and entirely new applications the world hasn't imagined yet.
Why join us:
We're growing fast and looking for bold thinkers, builders, and curious problem-solvers who want to push the limits of AI hardware and software. If you're ready to join a world-class team and play a critical role in making a global impact, we want to talk to you.
Summary of Role:
This role focuses on transforming higher-level MLIR-based large language models by applying sophisticated mid- and backend compiler techniques to target Persimmons.ai's custom accelerator hardware. You will help design and optimize the Persimmons Compiler mid- and backend, integrate it with custom operations and kernels, and implement compiler passes that convert higher-level intermediate representations into runtime-oriented code and libraries. This position offers the opportunity to directly shape Persimmons.ai's innovative AI hardware and software stack through close collaboration with teams across hardware, systems, and software.
What you’ll do:
- Develop and enhance MLIR-based compiler pipelines targeting Persimmons' custom spatial accelerator hardware.
- Design and optimize the Persimmons Compiler mid- and backend techniques for efficient lowering, graph-to-resources mapping, and code generation.
- Implement transformations to convert Python, PyTorch, and similar kernel representations to LLVM IR and runtime-ready libraries.
- Architect and implement efficient support for SPMD-based, distributed collective operations and lower them through specialized MLIR compiler dialects (e.g., MESH, SHARDY).
- Drive advanced loop optimizations leveraging polyhedral analysis: loop tiling, fusion, interchange, skewing, and related techniques.
- Apply and optimize techniques such as bufferization, padding, inlining, and integration of custom operations and kernels within the compilation workflow.
- Work on register allocation and instruction scheduling for Persimmons’ spatial hardware, ensuring high resource utilization, throughput, and low latency.
- Contribute to graph and tensor partitioning logic for optimal hardware-targeted execution.
- Collaborate across teams to deliver performant compilation flows from high-level ML representations to low-level executable artifacts.
Requirements
What You Bring To The Table:
We do not expect candidates to meet all of the requirements listed below; strong candidates will demonstrate expertise in several key areas.
- Solid understanding and experience with underlying principles and methods of the MLIR framework (SSA representation, interfaces, rewriting, dialect hierarchy, etc.).
- Hands-on experience with developing MLIR-based compiler infrastructure, algorithms, and techniques for non-GPU/custom spatial hardware architectures.
- Working experience with lowering SIMD operations from PyTorch, Triton, xDSL, pyDSL, or similar Python-based frontends to LLVM IR and, further, to SIMD kernel libraries.
- Extensive experience and understanding of loop optimization based on polyhedral principles.
- Experience and understanding of SPMD-based, distributed collective operations, specialized MLIR compiler dialects (e.g., MESH, SHARDY), and collective operation lowering in compilers for spatial hardware.
- Experience with techniques such as padding, bufferization, inlining, and other lowering techniques.
- Knowledge of register allocation and instruction scheduling in spatial architectures.
- Experience in lowering and integration of custom operations and kernels at the compiler mid- and backend.
- Familiarity with graph and tensor partitioning and mapping optimization algorithms and their integration in the compiler workflow.
- Deep understanding of and 5+ years of experience with C++, along with an appreciation for writing clean, maintainable code. Good knowledge of Python is a big plus.
Benefits
- Competitive salary and benefits package.
- Flexible PTO
- 401k
Please note: Our organization does not accept unsolicited candidate submissions from external recruiters or agencies. Any such submissions, regardless of form (including but not limited to email, direct messaging, or social media), shall be deemed voluntary and shall not create any express or implied obligation on the part of the organization to pay any fees, commissions, or other compensation. Direct contact of employees, officers, or board members regarding employment opportunities is strictly prohibited and will not receive a response.
Location
Persimmons, Inc., San Jose, California, United States, 95054