The Lead Data Infrastructure Engineer will scale and improve the reliability of Gridmatic's data infrastructure, build data processing pipelines, and design large-scale data storage for machine learning models, particularly in energy forecasting.
The Company:
Gridmatic is a startup working to help decarbonize the grid by using deep learning to forecast energy prices. We believe better forecasting can have a real-world impact on energy and climate. As extreme weather events become more common, energy prices grow increasingly volatile — in the Texas energy market, for instance, prices can spike to 50 times their usual levels in extreme scenarios. When this happens, the ability to forecast these price spikes becomes all the more important.
We use our machine learning (ML) forecasting and optimization to trade in energy markets, make large-scale battery storage systems more efficient, and sell energy to businesses to protect them from extreme price volatility. Our deep learning models have proven very successful in energy-market trading; we're operating multiple large batteries (50MW+); and we now sell energy to hundreds of businesses.
We have a very strong team with significant expertise in ML, energy, and optimization. If you’re interested in working on complex real-world problems, large-scale data challenges, and applying ML to climate and energy, we’d love to talk to you.
The Role:
We’re looking for an engineer to help lead the scaling and reliability of our data infrastructure, which is core to the ML work we do at Gridmatic.
Forecasting energy prices is challenging. We have very effective price forecasting models, but we'd like to go much further — scaling the data our ML models can use by 10-100x by incorporating petabyte-scale weather data, increasing the spatial granularity of our price forecasting, and more.
We'd also like someone who can tackle the challenge of scaling and improving the reliability of our data platform. We deal with a lot of real-world problems when ingesting data from external sources — downtime, late-arriving data, changing schemas. Improving the reliability of our data pipelines will be critical to our ability to make an impact on the grid.
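To make "late-arriving data" concrete, here is a minimal sketch of one common mitigation: idempotent upserts keyed on the data's own timestamps, so a delayed or re-delivered record can be replayed safely at any time. The table, columns, and driver choice are hypothetical illustrations, not Gridmatic's actual pipeline (though Postgres is in the stack below).

```python
from datetime import datetime, timezone

import psycopg2  # illustrative driver choice; Postgres is in the stack below

# Hypothetical table with a UNIQUE constraint on (settlement_point, interval_start).
# Re-deliveries and late corrections simply overwrite the stale row.
UPSERT = """
INSERT INTO raw_prices (settlement_point, interval_start, price, ingested_at)
VALUES (%s, %s, %s, %s)
ON CONFLICT (settlement_point, interval_start)
DO UPDATE SET price = EXCLUDED.price, ingested_at = EXCLUDED.ingested_at
WHERE EXCLUDED.ingested_at > raw_prices.ingested_at;
"""

def ingest(conn, records):
    """Idempotent ingest: replaying the same batch twice is a no-op."""
    now = datetime.now(timezone.utc)
    with conn, conn.cursor() as cur:  # one transaction per batch
        for settlement_point, interval_start, price in records:
            cur.execute(UPSERT, (settlement_point, interval_start, price, now))
```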
What we’re looking for:
- Experience building the infrastructure for large-scale data processing pipelines (both batch and streaming) using tools like Spark, Kafka, Apache Flink, and Apache Beam.
- Experience designing and implementing large-scale data storage systems (feature stores, timeseries DBs) for ML use cases. Strong familiarity with relational databases, data warehouses, object storage, and timeseries data; adept at DB schema design.
- Experience building data pipelines for external data sources that are observable, debuggable, and verifiably correct. You've dealt with challenges like data versioning, point-in-time correctness, and evolving schemas (see the sketch after this list).
- Strong distributed systems and infrastructure skills. Comfortable scaling and debugging Kubernetes services, writing Terraform, and working with orchestration tools like Flyte, Airflow, or Temporal.
- Strong software engineering skills: you write easy-to-extend, well-tested code.
Our stack includes: Python, GCP, Kubernetes, Terraform, Flyte, React/NextJS, Postgres, BigQuery
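On point-in-time correctness: a training set must only join in feature values that were actually available at prediction time, or the model trains on information it won't have in production. A minimal sketch of such a join using pandas (frame and column names are hypothetical):

```python
import pandas as pd

def point_in_time_join(labels: pd.DataFrame, features: pd.DataFrame) -> pd.DataFrame:
    """Attach to each label row the latest feature value whose `available_at`
    is at or before `predict_at` -- never a value from the future."""
    labels = labels.sort_values("predict_at")        # merge_asof requires sorted keys
    features = features.sort_values("available_at")
    return pd.merge_asof(
        labels,
        features,
        left_on="predict_at",
        right_on="available_at",
        by="node",             # match rows within the same grid node
        direction="backward",  # only look backward in time
    )
```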
What you might work on:
- Owning and scaling our data infrastructure by several orders of magnitude. This includes our data pipelines, distributed data processing, and data storage.
- Building a unified feature store for all our ML models.
- Efficiently storing and loading hundreds of terabytes of weather data for use in AI-based weather models (see the sketch after this list).
- Processing and storing predictions and evaluation metrics for large-scale forecasting models.
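On the weather-data point above: gridded weather data is a multi-dimensional array (time × latitude × longitude × variable), and at this scale the chunk layout largely determines read performance. A minimal sketch, assuming a hypothetical dataset and bucket, using xarray and Zarr on object storage (GCP is in the stack above; requires dask and gcsfs):

```python
import xarray as xr

SRC = "weather_raw.nc"                    # hypothetical source file
DST = "gs://example-bucket/weather.zarr"  # hypothetical GCS bucket

ds = xr.open_dataset(SRC)

# Chunk so a typical training read (a short time window over a spatial patch)
# touches few objects: small along time, larger along space.
ds = ds.chunk({"time": 24, "latitude": 256, "longitude": 256})

# Consolidated metadata lets readers open the store with a single request.
ds.to_zarr(DST, mode="w", consolidated=True)

# Later, selecting a window reads only the chunks that overlap it.
window = xr.open_zarr(DST).sel(time=slice("2024-01-01", "2024-01-07"))
```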
You might be great for this role if:
- You have 4+ years of experience building data infrastructure or data platforms
- You have experience with ML infrastructure and have worked at companies that use ML for core business functions
- You’re comfortable with ambiguity and a fast-moving environment, and have a bias for action
- You learn and pick up new skills quickly
- You're motivated by making a real-world impact on climate and energy
FAQ
What’s your policy on remote work?
As an early-stage startup, we value the ability to work and collaborate in person, so Gridmatic has a hybrid policy that asks you to work from our Cupertino office 3 days a week.
What is your interview process?
You’ll usually have a chat with the hiring manager or someone on the team about your background and experience. After that, depending on the role, you’ll either have a technical phone screen with an engineer, or work on a take-home project. If that goes well, we’ll have you on site in Cupertino for an interview panel with the team, which usually takes about 4 hours.
Top Skills
Apache Beam
Apache Flink
BigQuery
Flyte
GCP
Kafka
Kubernetes
Next.js
Postgres
Python
React
Spark
Terraform
Gridmatic Cupertino, California, USA Office
20450 Stevens Creek Blvd, Suite 100, Cupertino, CA, United States, 95014