Software Engineer, Data Infrastructure

Remote | Hybrid

Our mission is to bring community and belonging to everyone in the world. Reddit is a community of communities where people can dive into anything through experiences built around their interests, hobbies, and passions. With more than 50 million people visiting 100,000+ communities daily, it is home to the most open and authentic conversations on the internet. From pets to parenting, skincare to stocks, there's a community for everybody on Reddit. For more information, visit redditinc.com.

"The front page of the internet," Reddit brings over 430 million people together each month through their common interests, inviting them to share, vote, comment, and create across thousands of communities. This community of users generates 65B analytics events per day, each of which is ingested by the Data Platform team into a data warehouse that sees 55,000+ daily queries.

As a data infrastructure engineer, you will build and maintain the data infrastructure tools used by the entire company to generate, ingest, and access petabytes of raw data. A focus on performance and optimization will enable you to write scalable, fault-tolerant code while collaborating with a team of top engineers, all while learning about and contributing to one of the most powerful streaming event pipelines in the world.

Not only will your work directly impact hundreds of millions of users around the world, but your output will also shape the data culture across all of Reddit!

How you will contribute:

  • Refine and maintain our data infrastructure technologies to support real-time analysis of hundreds of millions of users.
  • Consistently evolve data model & data schema based on business and engineering requirements.
  • Own the data pipeline that surfaces 65B+ daily events to all teams, and the tools we use to improve data quality.
  • Support warehousing and analytics customers that rely on our data pipeline for analysis, modeling, and reporting.
  • Build data pipelines with distributed streaming tools such as Kafka, Kinesis, Flink, or Spark.
  • Ship quality code to enable scalable, fault-tolerant, and resilient services in a multi-cloud architecture.

Qualifications:

  • 2+ years of coding experience in a production setting writing clean, maintainable, and well-tested code.
  • Experience with object-oriented programming languages such as Scala, Python, Go, or Java.
  • Degree in Computer Science or equivalent technical field. 
  • Experience working with Terraform, Helm, Prometheus, Docker, Kubernetes, and CI/CD.
  • Excellent communication skills to collaborate with stakeholders in engineering, data science, machine learning, and product.

Reddit is committed to providing reasonable accommodations for qualified individuals with disabilities and disabled veterans in our job application procedures. If you need assistance or an accommodation due to a disability, please contact us at [email protected].


Location

1455 Market St., San Francisco, CA 94103
