Automated Tests Alone Don’t Cut It for This Hardware Engineer

Zipline needs a foolproof testing process for its high-stakes work. Here’s how the team does it.

Written by Eva Roethler
Published on Aug. 05, 2021

The benefits of automated testing are obvious to engineering teams: It allows for fast feedback, problem prevention and quick deployments for a customer-ready product. 

However, automated testing can be hard to pull off. In fact, automated testing fails to deliver on expectations 64 percent of the time, causing companies to drop it in favor of manual testing, according to data from Cigniti, a quality assurance software company. 

Autonomous medical supply delivery company Zipline has developed a test automation process that delivers the best of both worlds: complementary manual and automated testing. 

Zipline’s autonomous fleet covers massive distances shipping medical products to countries such as Rwanda, Ghana and the United States. With so much at stake at global scale, the team prioritizes safety and must rigorously test every piece of software on their planes. 

“We even need to test our test software,” said Electrical Test Lead Juan Albanell. “These tests identify issues in how software and hardware communicate, as well as how software runs.”

The company is hardware-centric, so the testing process has to match the product needs. “Our goal is to automate testing for all hardware production in a way that doesn’t replace humans, but complements them,” said Albanell. “And we are doing this testing at manufacturing production levels.”

Built In SF touched base with Albanell for more insight into Zipline’s test automation best practices and unique processes. 

Juan Albanell
Electrical Test Lead • Zipline

 

Tell us more about your test automation best practices.

First, we establish clear criteria. One of the most important things we do is proper test planning: working closely with users up front to understand failure modes. This way, you don’t build an automated test for a gap that doesn’t exist or isn’t a priority. 
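One lightweight way to make that kind of planning concrete (a generic sketch, not Zipline’s process; every name and failure mode below is illustrative) is to enumerate failure modes with priorities and map each to a planned test, so high-priority gaps stand out before any automation is written:

```python
# Hypothetical sketch: make test-planning gaps visible before any automation is written.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailureMode:
    name: str
    priority: int               # 1 = highest priority
    covered_by: Optional[str]   # name of the planned test, or None if uncovered

failure_modes = [
    FailureMode("power rail out of tolerance", 1, "test_power_rails"),
    FailureMode("CAN bus dropout under vibration", 1, None),
    FailureMode("cosmetic LED color mismatch", 3, None),
]

# Only high-priority, uncovered failure modes are worth building new automation for.
for fm in failure_modes:
    if fm.priority == 1 and fm.covered_by is None:
        print(f"No test planned for high-priority failure mode: {fm.name}")
```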

Having a remote control system for performing tests is especially important as the company branches out its test systems to different offices and locations around the world. We need the ability to remotely access the testers and debug systems from afar. 
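As a rough illustration of that remote-control idea (the endpoint, payload and tester address below are hypothetical, not Zipline’s system), each tester can expose a small HTTP API so an engineer can start a test and poll for its result from anywhere:

```python
# Hypothetical sketch: trigger a test on a remote tester and poll for the result.
import json
import time
import urllib.request

TESTER_URL = "http://tester-01.example.internal:8080"  # placeholder address

def start_test(test_name: str) -> str:
    """Ask the remote tester to start a test; returns a run ID."""
    req = urllib.request.Request(
        f"{TESTER_URL}/runs",
        data=json.dumps({"test": test_name}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["run_id"]

def wait_for_result(run_id: str, poll_s: float = 5.0) -> dict:
    """Poll the tester until the run finishes, then return its result record."""
    while True:
        with urllib.request.urlopen(f"{TESTER_URL}/runs/{run_id}") as resp:
            run = json.load(resp)
        if run["status"] in ("passed", "failed"):
            return run
        time.sleep(poll_s)

# Example usage (assumes the tester service above actually exists):
# result = wait_for_result(start_test("board_bringup"))
```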

It’s easy to feel removed from operations when you are building things and testing them on a line. If you have a talented human doing tests, they might catch things that you didn’t think about and update the process in real time. But with automation, you don’t have that agility, so your feedback loop becomes invaluable in case you missed something. That’s why we keep a very close loop with field usage and operations. 

Finally, hardware deteriorates over time. If you’re not tracking it and performing active data analysis, you will run into issues. We also build forms of data automation, like backend analysis, into our dashboards.
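A generic sketch of that kind of backend analysis (illustrative thresholds and data, not Zipline’s dashboards) is to trend a measured parameter across test runs and warn when it drifts toward its limit:

```python
# Hypothetical sketch: flag a measurement that is drifting toward its spec limit.
from statistics import mean

SPEC_LIMIT_V = 5.25   # upper spec limit for a measured rail, in volts
WARN_LIMIT_V = 5.20   # warn before the hard limit is reached

# Measurements from successive test runs, oldest to newest (illustrative data).
rail_voltage_history = [5.01, 5.02, 5.05, 5.08, 5.12, 5.17, 5.21]

latest = rail_voltage_history[-1]
drifting_up = mean(rail_voltage_history[-3:]) > mean(rail_voltage_history[:3])

if latest > SPEC_LIMIT_V:
    print("FAIL: measurement out of spec")
elif drifting_up and latest > WARN_LIMIT_V:
    print("WARN: drifting toward the spec limit; inspect fixture and hardware wear")
```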

 

Albanell’s 4 Best Practices for Automated Testing

  1. Establish clear criteria.
  2. Create a remote control system for tests.
  3. Close the loop with field usage and operations.
  4. Automate data dashboards.

 

What kind of tests does your team automate, and why?

We go through what is generally seen as unit testing and then some emulation testing for our flight software. That testing is, in and of itself, automated, both through our hardware-in-the-loop system, where we emulate a physical flight, and through our flight simulation testing, which is purely software-based.
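As a generic illustration of keeping those layers separate (the marker, toy function and test names below are assumptions, not Zipline’s flight-software suite), pytest markers can split purely software-based simulation tests from the ones that need a hardware-in-the-loop rig:

```python
# Hypothetical sketch: split software-only tests from hardware-in-the-loop (HIL) tests.
# Register the "hil" marker in pytest.ini, then run:
#   pytest -m "not hil"   -> pure-software unit/simulation tests
#   pytest -m hil         -> tests that need the hardware-in-the-loop rig
import pytest

def climb_rate(thrust_n: float, mass_kg: float) -> float:
    """Toy stand-in for a flight-software function under test."""
    return max(0.0, thrust_n / mass_kg - 9.81)

def test_climb_rate_unit():
    # Pure software: runs anywhere, no emulation needed.
    assert climb_rate(thrust_n=30.0, mass_kg=2.0) == pytest.approx(5.19)

@pytest.mark.hil
def test_emulated_flight_on_rig():
    # Placeholder for a test that would drive the hardware-in-the-loop rig.
    pytest.skip("requires the hardware-in-the-loop rig")
```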

 

Tell us about your team's favorite test automation tools. How do they match up with your existing tech stack?

Bamboo can work well for software system testing. On the hardware testing side, Amazon Web Services can be used for both remotely accessing things and running analysis from afar. But a lot of our tools at Zipline are custom built. 

What I’ve found is that there are tools that are good for specific things, but when it comes to hardware testing, a lot of customization is needed. We often build our own frameworks in Python. 
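The shape such a framework often takes (a minimal sketch under broad assumptions; the decorator, device API and limits below are made up, not Zipline’s code) is a registry of test functions that run against a device handle and report structured pass/fail results:

```python
# Hypothetical sketch: skeleton of a small custom hardware test framework in Python.
from typing import Callable, Dict

TESTS: Dict[str, Callable] = {}

def hardware_test(fn: Callable) -> Callable:
    """Decorator that registers a test function by name."""
    TESTS[fn.__name__] = fn
    return fn

@hardware_test
def check_rail_voltage(device) -> bool:
    reading = device.read_voltage("3V3")   # device API is assumed for illustration
    return 3.2 <= reading <= 3.4

def run_all(device) -> dict:
    """Run every registered test and return a name -> pass/fail map."""
    return {name: bool(test(device)) for name, test in TESTS.items()}
```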

One example of a custom-built automation test is the tool that gives us the ability to replay tests. When we run a test with the hardware, there may be a destructive test we can only run once or a long test that has a critical flaw when it’s launched, and we need a way to redo the analysis. We collect all the data that’s necessary while we’re testing, so if we want to rerun the test with different algorithms, we can do so without starting from scratch.
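A generic way to get that record-and-replay behavior (sketched with made-up file names and analysis; Zipline’s actual tool is custom) is to log every raw sample during the live run, then push the saved log back through any analysis function without touching the hardware again:

```python
# Hypothetical sketch: record raw data during a live hardware test, replay analysis later.
import json

def record_run(samples, path: str) -> None:
    """Save every raw sample captured during the (possibly one-shot) hardware run."""
    with open(path, "w") as f:
        json.dump(list(samples), f)

def replay(path: str, analyze) -> dict:
    """Re-run any analysis function over the recorded data; no hardware needed."""
    with open(path) as f:
        samples = json.load(f)
    return analyze(samples)

# Example: one destructive run is recorded once...
record_run([12.1, 12.3, 11.9, 12.0], "burn_in_run_0042.json")

# ...and can be re-analyzed later with a different algorithm.
print(replay("burn_in_run_0042.json", lambda s: {"max": max(s), "mean": sum(s) / len(s)}))
```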

Responses have been edited for length and clarity. Images via listed company and Shutterstock.
