Senior Data Engineer
Sprig has pioneered a new approach to user research. Companies like Dropbox, Square, Opendoor, Loom and Shift all use Sprig to capture research insights from their users in-context, so they can make customer-informed decisions and build frictionless products and experiences that matter.
Our user research platform enables teams to evaluate their product's user experience in-the-moment, test new product concepts, and recruit research participants from their existing user base for longer studies.
We're growing fast and ready for new team members who are passionate about surprising and delighting our own customers.
Sprig is based in San Francisco, CA with offices around the U.S. The company has raised $60M led by Andreessen Horowitz, Accel, and First Round Capital, and has recently been featured in articles by TechCrunch, Crunchbase, and Business Insider.
More about our mission, values, and why it's a great time to join us here.
Our Commitment to Diversity and Inclusion
We prioritize diversity within our team and value different perspectives, educational backgrounds, and life experiences. We encourage people from underrepresented backgrounds to apply.
About this Role
This is your chance to join a startup in one of its most exciting phases, where you can become an original, founding member of the team and play a vital part in our growth. We’re quickly growing our engineering team and looking for an experienced Sr. Data Engineer to help us scale. This position is based in CA, TX, OR, or WA.
Your Impact
- Work closely with various members of the team to shape and update our data engineering roadmap
- Lead critical data engineering projects to ensure our pipelines are reliable, efficient, testable, and maintainable
- Evangelize high-quality software engineering practices for building data infrastructure and pipelines at scale
- Arbitrate critical decisions, weighing data best practices, system realities, and feedback from numerous stakeholders
Your Strengths
- 5+ years of relevant experience
- Bachelor's or Master's degree in Computer Science
- Experience designing, building, and operating robust distributed systems
- Experience designing and deploying high-performance systems with reliable monitoring and logging practices
- Past experience building ETL processes
- Experience with data integration tools such as Informatica, Python, and Spark; data warehouses and databases like Redshift, AWS Glue, Snowflake, Postgres, DynamoDB, and Cassandra; analytics tools like Tableau, Mode, and Looker; and scheduling tools like Airflow