We're tackling one of healthcare's most critical challenges in medical imaging and diagnostics. Our company operates at the intersection of cutting-edge AI and clinical practice, building technology that directly impacts patient outcomes. We've assembled one of the industry's most comprehensive and diverse medical imaging datasets and have a proven product-market fit with a substantial customer pipeline already in place.
Role Overview
We're seeking a research engineer to bridge the gap between research and production, building ML infrastructure and data systems for medical imaging at scale. You'll own critical data pipelines that unify live production traffic with offline datasets, design storage solutions for multimodal medical data, and build training and inference infrastructure that enables our research team to iterate rapidly. This role requires someone who can move fluidly between model training, data engineering, ML systems, and production deployment.
Key Responsibilities
Build and optimize distributed ML infrastructure for training foundation models on large-scale medical imaging datasets.
Design and implement robust data pipelines to collect, process, and store large-scale multimodal medical imaging data from both production traffic and offline sources.
Build centralized data storage solutions with standardized formats (e.g., protobufs) that enable efficient retrieval and training across the organization.
Create model inference pipelines and evaluation frameworks that work seamlessly across research experimentation and production deployment.
Collaborate with researchers to rapidly prototype new ideas and translate them into production-ready code.
Own end-to-end delivery of ML systems from experimentation through deployment and monitoring.
Qualifications
5+ years building ML infrastructure, data pipelines, or ML systems in production
Strong Python skills and expertise in PyTorch or JAX
Hands-on experience with data pipeline technologies (e.g., Spark, Airflow, BigQuery, Snowflake, Databricks, Chalk) and schema design
Experience with distributed systems, cloud infrastructure (AWS/GCP), and containerization (Docker/Kubernetes)
Track record of building scalable data systems and shipping production ML infrastructure
Ability to move quickly and handle competing priorities in a fast-paced environment
Nice to Have
Experience with reinforcement learning training pipelines (e.g., RLHF, reward modeling, or online learning systems)
Experience supporting A/B testing and experimentation workflows for model rollouts, including monitoring statistical significance and managing canary deployments
Familiarity with vision-language models (VLMs) or multimodal architectures
Experience with medical imaging formats (DICOM) and healthcare data standards
Background in distributed training frameworks (PyTorch Lightning, DeepSpeed, Accelerate)
Familiarity with MLOps practices and model deployment pipelines
Experience with privacy-preserving data systems and HIPAA compliance