Data Infrastructure Engineer


We are a San Francisco-based team building self-driving semi trucks. We have raised $117MM in total and are backed by Tiger Global Management ($70MM Series C) and Sequoia Capital ($30MM Series B). We move freight daily between LA and Phoenix using our purpose-built transfer hubs. This is an incredibly exciting time for autonomous driving, and our team is looking to grow.

The infrastructure team is responsible for building the systems that support all of our engineering and operations. This includes data ingestion and processing, real-time vehicle communication systems, machine learning pipelines, simulation infrastructure, and much more.

Day-to-day Responsibilities:

  • Build, deploy, and maintain the infrastructure responsible for ingesting data from our vehicles at centers across the country, including the hardware and operational processes as well as the software that powers the system
  • Develop telemetry systems that allow our vehicles to stream video and data on demand over LTE
  • Maintain the on-vehicle code responsible for data collection, monitoring, and real-time communication
  • Build scalable data pipelines that operate over petabytes of autonomous vehicle data to extract useful features, enable advanced queries, and power machine learning pipelines and simulation environments
  • Maintain the software execution environment for our vehicles, including the host operating system, containerized environments, and their deployment procedures

Your Experience Might Include:

  • Experience with big data and/or infrastructure
  • BS, MS or PhD in Computer Science, Engineering, or equivalent real-world experience
  • Significant experience with Python, C++, Go, or similar
  • Experience working with classical relational and NoSQL databases
  • Experience with Kafka, Hadoop, Spark, or other data processing tools
  • Experience building scalable data pipelines
  • Significant experience working with AWS and/or GCP
  • Experience with Docker and Kubernetes or other container orchestration frameworks
  • Proven ability to independently execute projects from concept to implementation
  • Attention to detail and a passion for building scalable and reliable systems

When you apply, address the application to Jacqueline and let her know why you want to join our team.

A few company highlights:

  • Embark Blog - Series C and Transfer Hubs
  • Forbes - 70 Million Dollar Series C
  • Video - Day in the life of a self-driving truck
  • Embark Blog - Disengagement Report
  • 30 Million Dollar Series B led by Sequoia


Location

424 Townsend St, San Francisco, CA 94107
