
Variance

Research Engineer, Evals

Reposted 19 Days Ago
In-Office
San Francisco, CA, USA
$250K–$400K Annually
Mid level
Role

At Variance, we are teaching machines to make the hardest judgment calls at scale. That means building AI agents for the high-stakes gray area of risk investigations, fraud, and identity reviews.

We’re a small, talent-dense team in San Francisco working on a problem at the edge of what AI systems can reliably do: making good decisions in messy, adversarial, real-world environments. We focus on hard, high-consequence systems problems where the edge cases matter most.

We’re looking for a Research Engineer to help define how we measure and improve model quality. You’ll build the benchmarks, datasets, tooling, and evaluation loops that tell us whether our systems are actually getting better on the tasks that matter. This role sits at the center of research, product, and engineering. It is about creating rigorous, domain-specific evaluations that reflect real customer workflows, expose meaningful failure modes, and drive the next generation of model and agent improvements.

You’re a fit if you:
  • Care deeply about craftsmanship and have strong opinions about model quality, measurement, and experimental rigor

  • Want to work on core model and agent behavior, not just surface-level product metrics

  • Are excited by the challenge of defining what “good” looks like in messy, high-stakes environments

  • Think in tight loops: hypothesis, benchmark design, evaluation, failure analysis, iteration

  • Have strong engineering fundamentals and like building robust systems around ambiguous research problems

  • Thrive in environments where success criteria are initially underspecified and need to be sharpened through work

  • Are willing to do the work in the trenches: reviewing outputs, grading edge cases, curating datasets, and refining tasks until the evaluation actually measures what matters

  • Care deeply about building systems that protect people from fraud, scams, and abuse

What you’ll do
  • Build proprietary benchmarks and datasets to evaluate models and model systems on fraud, identity, and risk workflows

  • Design and run offline and online evals that measure model performance on real customer tasks, not just abstract benchmarks

  • Define quality metrics for judgment systems, including precision, calibration, consistency, abstention, and failure handling

  • Study where models and agents break, and turn those failures into better evals, better datasets, and better training loops

  • Build reusable evaluation tools and quality building blocks that can be used across different product surfaces and workflows

  • Partner closely with research, engineering, product, and design to improve system quality through rigorous experimentation

  • Help create a strong culture of scientific experimentation, clear measurement, and continuous iteration

  • Push the boundary of how AI systems are evaluated in regulated, adversarial, and high-consequence environments
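To make the quality dimensions above concrete, here is a minimal sketch of how precision, abstention rate, and a simple expected calibration error might be computed over a batch of judgments. The record schema (`pred`, `conf`, `label`) is purely illustrative, not Variance's actual data model:

```python
# Illustrative quality metrics for a judgment system: precision over
# flagged cases, abstention rate, and a 10-bin expected calibration error.
# The record fields here are assumptions for the sketch, not a real schema.

def score(records):
    """records: list of dicts with keys 'pred' (bool, or None to abstain),
    'conf' (float in [0, 1]), and 'label' (bool ground truth)."""
    decided = [r for r in records if r["pred"] is not None]
    abstained = len(records) - len(decided)

    # Precision: of the cases the system flagged, how many were truly positive.
    flagged = [r for r in decided if r["pred"]]
    precision = (
        sum(r["label"] for r in flagged) / len(flagged) if flagged else None
    )

    # Expected calibration error: per confidence bin, the gap between
    # stated confidence and observed accuracy, weighted by bin size.
    bins = [[] for _ in range(10)]
    for r in decided:
        bins[min(int(r["conf"] * 10), 9)].append(r)
    ece = sum(
        len(b) / len(decided)
        * abs(sum(x["pred"] == x["label"] for x in b) / len(b)
              - sum(x["conf"] for x in b) / len(b))
        for b in bins if b
    )
    return {
        "precision": precision,
        "abstention_rate": abstained / len(records),
        "ece": ece,
    }
```

In practice these numbers only matter relative to a curated benchmark of real customer tasks; the point of the sketch is that abstention and calibration are first-class metrics alongside precision, not afterthoughts.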

What success looks like
  • We have a clear, trusted view of how our systems perform across the workflows that matter most

  • Our evals predict real-world quality better than generic benchmarks

  • We identify meaningful failure modes earlier and improve system behavior faster

  • We develop differentiated datasets, benchmarks, and quality loops that compound over time

  • Research and engineering teams use your work to make better decisions about what to train, ship, and improve next

  • Variance becomes known for rigorous, domain-specific evaluation of judgment systems

Preferred background
  • Experience training, evaluating, or improving modern ML systems

  • Strong programming skills and comfort working in research-heavy codebases

  • Experience building benchmarks, datasets, evaluation pipelines, or quality systems

  • Familiarity with LLMs, agent systems, retrieval, post-training, or adjacent areas

  • Ability to design clean experiments and draw reliable conclusions from noisy results

  • Strong engineering judgment and a bias toward building

  • Interest in fraud, risk, trust and safety, compliance, or other regulated and adversarial domains

Our culture

We believe in ownership, urgency, and craft. We enjoy spirited debate, wild ideas, and building things we’re proud of. We’re fully in-person in San Francisco.

What we offer
  • Competitive salary and meaningful equity

  • Platinum-level medical, dental, and vision insurance

  • Unlimited PTO, sick leave, and parental leave

  • Up to $100 per month in reimbursement for personal health and wellness expenses

  • 401(k) plan

HQ

Variance San Francisco, California, USA Office

163 2nd St, #300, San Francisco, California, United States, 94105


