
CVS Health

Staff Data Engineer

Posted 2 Days Ago
In-Office or Remote
15 Locations
107K-284K Annually
Senior level
The Staff Data Engineer will architect and build scalable data pipelines, develop internal tools and APIs, and modernize data operations, focusing on data quality and self-service platforms.
The summary above was generated by AI

We’re building a world of health around every individual — shaping a more connected, convenient and compassionate health experience. At CVS Health®, you’ll be surrounded by passionate colleagues who care deeply, innovate with purpose, hold themselves accountable and prioritize safety and quality in everything they do. Join us and be part of something bigger — helping to simplify health care one person, one family and one community at a time.

Position Summary

If you’re eager to make a real impact in the healthcare industry through your own meaningful contributions, join us as we pave the way for technical innovation. At CVS Health, we possess an extensive repository of healthcare data spanning over 150 million individuals, providing an unparalleled foundation for ambitious engineers.

In this high-impact, high-autonomy role, you will be a technical innovator and visionary, leading the evolution of our data infrastructure. You will take a lead role in the end-to-end development of critical data self-service platforms designed to modernize how petabyte-scale data is ingested, accessed, and managed. Your work will be instrumental in shifting from traditional, ticket-driven data handling toward a Data Mesh approach, empowering data owners to take full accountability for their data quality through the robust internal tools you build.

As a Staff Data Engineer, you will:
  • Architect Petabyte Pipelines: Engineer scalable, reliable, and performant data pipelines to assemble large and intricate datasets using SQL, DBT, and Snowflake, ensuring high data availability and integrity.

  • Build Data Platforms: Independently design and maintain internal React (TypeScript) interfaces and Python backend services that automate data ingestion and discovery, reducing lead times for application teams from weeks to minutes.

  • Develop Data APIs: Build and maintain production-grade REST and gRPC APIs that serve as the high-performance interface between our Snowflake data layer and downstream consumer touchpoints.

  • Modernize Data Operations: Implement a GitOps model for data using GitHub Actions and Argo/Kargo, integrating standardized logging, alerting, and automated observability into the heart of all data products.

  • Innovate with AI: Leverage Cursor AI, Model Context Protocol (MCP) servers, and other AI tooling to accelerate the data engineering SDLC, from optimizing complex SQL queries to automating schema migrations.

  • Collaborate and Lead: Communicate with business leaders to translate complex data requirements into functional specifications while mentoring other engineers in modern data architecture and software best practices.
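To make the pipeline work above concrete, here is a minimal, hypothetical sketch of the kind of data-quality gate a large-scale ingestion pipeline might run before loading rows into a warehouse such as Snowflake. All field names, table names, and functions below are invented for illustration and are not CVS Health code.

```python
from dataclasses import dataclass, field

# Invented example schema: the required fields an incoming event must carry.
REQUIRED_FIELDS = {"member_id", "event_type", "event_ts"}

@dataclass
class ValidationResult:
    valid: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

def validate_rows(rows):
    """Split incoming records into valid and rejected sets.

    Rejects rows that are missing required fields, and deduplicates on
    (member_id, event_ts) so reprocessed batches stay idempotent.
    """
    seen = set()
    result = ValidationResult()
    for row in rows:
        if not REQUIRED_FIELDS <= row.keys():
            result.rejected.append(row)   # missing required fields
            continue
        key = (row["member_id"], row["event_ts"])
        if key in seen:
            result.rejected.append(row)   # duplicate event in this batch
            continue
        seen.add(key)
        result.valid.append(row)
    return result
```

In a Data Mesh setup like the one described above, rejected rows would typically be routed to a quarantine table owned by the producing team, so data owners — not a central ticket queue — see and fix their own quality issues.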

Key Responsibilities
  • Data Architecture: Design and optimize high-volume ETL/ELT pipelines using SQL, DBT, and Snowflake, ensuring data is modeled for both analytical and operational use cases.

  • Internal Tooling (Full Stack): Develop and maintain internal-facing web applications using React that allow data owners to interact with, monitor, and configure their data pipelines.

  • API Development: Architect and implement REST and gRPC APIs in Python that serve as the interface between our Snowflake data layer and downstream consumer applications.

  • CI/CD & GitOps: Own the deployment lifecycle of data services and tools using GitHub Actions for CI and Argo/Kargo for continuous delivery and lifecycle management.

  • Self-Service Platforms: Build "Data-as-a-Service" features, such as automated UI-driven ingestion workflows, reducing the reliance on manual data engineering tickets.

  • AI Integration: Utilize modern AI development tools (e.g., Claude AI) to accelerate the development of both data pipelines and management interfaces.
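As a rough illustration of the "Data-as-a-Service" idea in the responsibilities above: instead of filing a ticket, a data owner submits a small request and the platform generates the ingestion configuration automatically. The sketch below is hypothetical — every name, check, and field is an invented example, not an actual CVS Health interface.

```python
import json
import re

def register_dataset(name: str, owner: str, source_uri: str,
                     schedule: str = "daily") -> str:
    """Validate a self-service ingestion request and emit a pipeline config.

    Hypothetical example of turning a UI-driven request into a declarative
    config that a GitOps pipeline could then deploy.
    """
    # Enforce a snake_case naming convention so generated table names are safe.
    if not re.fullmatch(r"[a-z][a-z0-9_]*", name):
        raise ValueError("dataset name must be snake_case")
    config = {
        "dataset": name,
        "owner": owner,                 # the data owner stays accountable for quality
        "source": source_uri,
        "schedule": schedule,
        "target_table": f"raw.{name}",  # invented landing-zone convention
        "quality_checks": ["not_null:primary_key", "freshness:24h"],
    }
    return json.dumps(config, indent=2)
```

The emitted JSON would then be committed to a Git repository, where a CI/CD workflow (e.g., GitHub Actions plus Argo) picks it up and materializes the pipeline — which is what makes the overall flow "GitOps for data."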

Required Qualifications
  • 7+ years of experience in Data Engineering with a heavy focus on Python as the primary scripting and backend language.

  • 7+ years of experience with SQL and cloud data warehouses (e.g., Snowflake, AWS, GCP).

  • 7+ years of experience building high-volume ETL/ELT pipelines and data modeling.

Preferred Qualifications
  • 5+ years of experience with DBT (Data Build Tool).

  • 5+ years of experience building frontend applications with React and designing RESTful APIs.

  • 5+ years of experience with GitHub Actions and GitOps-based deployment tools (e.g., Argo or Kargo).

  • Big Data Architecture: High-level understanding of big data design patterns, including Data Lake, Data Mesh, and Iceberg, along with data normalization strategies.

  • GitOps & Deployment: Demonstrated experience with Argo/Kargo for Kubernetes-based deployments and advanced GitHub Actions for workflow automation.

  • Messaging & Streaming: Experience with message queuing technologies such as Kafka, SNS, or RabbitMQ to support real-time data movement.

  • AI-Enhanced Development: Proficiency in working with Cursor AI, GitHub CoPilot, or similar AI-driven environments to accelerate engineering cycles.

  • Observability: Strong experience with metrics, logging, monitoring, and alerting tools to ensure production system reliability.

  • Software Fundamentals: Strong grasp of data structures, algorithms, async programming patterns, and parallel programming.

  • Healthcare Domain: High-level understanding of HL7 v2.x or FHIR-based interface messages.

Education
  • Bachelor’s Degree in Computer Science, Data Engineering, or a related technical field.

Anticipated Weekly Hours

40

Time Type

Full time

Pay Range

The typical pay range for this role is:

$106,605.00 - $284,280.00

This pay range represents the base hourly rate or base annual full-time salary for all positions in the job grade within which this position falls.  The actual base salary offer will depend on a variety of factors including experience, education, geography and other relevant factors.  This position is eligible for a CVS Health bonus, commission or short-term incentive program in addition to the base pay range listed above. 
 

Our people fuel our future. Our teams reflect the customers, patients, members and communities we serve and we are committed to fostering a workplace where every colleague feels valued and that they belong.

Great benefits for great people

We take pride in offering a comprehensive and competitive mix of pay and benefits that reflects our commitment to our colleagues and their families.

This full‑time position is eligible for a comprehensive benefits package designed to support the physical, emotional, and financial well‑being of colleagues and their families. The benefits for this position include medical, dental, and vision coverage, paid time off, retirement savings options, wellness programs, and other resources, based on eligibility.


Additional details about available benefits are provided during the application process and on Benefits Moments.

We anticipate the application window for this opening will close on: 05/08/2026

Qualified applicants with arrest or conviction records will be considered for employment in accordance with all federal, state and local laws.


