Prior Labs is building foundation models that understand tabular data, the backbone of science and business. Foundation models have transformed text and images, but structured data has remained largely untouched. We’re tackling this $600B opportunity to fundamentally change how organizations work with scientific, medical, financial, and business data.
Momentum: We’re the world-leading organization in structured data ML. Our TabPFN v2 model was published in Nature and set a new state-of-the-art for tabular machine learning. Since its release, we’ve scaled model capabilities more than 20x, reached 2.5M+ downloads, 5,500+ GitHub stars, and are seeing accelerating adoption across research and industry. We’re now building the next generation of tabular foundation models and actively commercializing them with global enterprises across Europe and the US.
Our team: We’re a small, highly selective team of 20+ engineers and researchers, chosen from over 5,000 applicants, with backgrounds spanning Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN. We’re led by the creators of TabPFN and advised by world-leading AI researchers such as Bernhard Schölkopf and Turing Award winner Yann LeCun. Meet the team here.
What’s Next: Backed by top-tier investors and leaders from Hugging Face, DeepMind, and Silo AI, we’re scaling fast. This is the moment to join: help us shape the future of structured data AI. Read our manifesto.
About the role
You’ll take on challenging engineering tasks crucial to the development of tabular foundation models. You’ll build and maintain best-in-class training infrastructure, along with our developer productivity tooling and open-source projects. You’ll work closely with researchers to ensure we can iterate quickly and scale our models.
This is a rare opportunity to:
Contribute to high-impact AI systems that are changing an industry
Have significant impact by owning big projects from the start
Join a world-class team at the perfect time: significant funding secured, strong early traction, and rapid scaling
Training & research compute infrastructure: Own our cloud GPU cluster (operations, reliability, and cost/performance) currently based on Slurm. Design and implement future versions as our compute needs scale and we expand across multiple cloud/HPC providers.
Training & inference performance: Work closely with researchers to identify and resolve performance bottlenecks in distributed training and inference. Support high hardware utilization and efficient memory usage through systems-level debugging, profiling, and infrastructure improvements.
Developer productivity: Manage our internal repositories on GitHub and keep their CI and other pipelines fast. Ensure our experiment tracking, model registry, and data processing pipelines run smoothly.
Try out your own ideas! We operate an open environment. If you’ve got the next SOTA tabular architecture up your sleeve, go ahead and train it.
What we use today: Slurm, GCP, Docker, wandb, GitHub Actions, uv, Torch, Triton
Qualifications
Exceptional software engineering fundamentals and expert-level Python proficiency, with 5+ years of hands-on industry experience building and operating production systems.
Proven track record of designing and building complex, scalable software, preferably for data processing or distributed systems.
Deep, practical knowledge of the modern ML ecosystem (PyTorch, scikit-learn, etc.) and a genuine interest in applying systems thinking to solve hard problems in AI.
Core MLOps concepts: Strong understanding of the entire machine learning lifecycle, from data ingestion and preparation to model deployment, monitoring, and retraining. Familiarity with MLOps principles and best practices (e.g., reproducibility, versioning, automation, continuous integration/delivery for ML).
Offices in Freiburg, Berlin, San Francisco and NYC with flexibility to work across our locations
Competitive compensation package with meaningful equity
30 days of paid vacation + public holidays
Comprehensive benefits including healthcare, transportation, and fitness
Work with state-of-the-art ML architectures, substantial compute resources, and a world-class team
We believe the best products and teams come from a wide range of perspectives, experiences, and backgrounds. That’s why we welcome applications from people of all identities and walks of life, especially anyone who’s ever felt discouraged by "not checking every box."
We’re committed to creating a safe, inclusive environment and providing equal opportunities regardless of gender, sexual orientation, origin, disabilities, or any other traits that make you who you are.