Quality assurance has always been an important part of the software development process, but there are many ways of integrating QA into the development lifecycle. Some companies have centralized teams of QA engineers that handle testing for all software. Other companies embed QA testers within development teams. Still others have no QA testers at all.
In short: Figuring out how to best structure QA depends on your organization’s needs and the products you’re building.
Over the years, as software companies have embraced more iterative models of development, testing has also evolved from something that happens mostly at the end of a project to a procedure tightly integrated into every step of the process. The majority of testing happens earlier in the software lifecycle, and developers are increasingly responsible for writing automated unit and integration tests, in a trend known as “shifting left.”
“Development teams are shifting to have more and more responsibility over the quality of what they produce,” said Jeff Kelly, owner of AssetLab Marketing, who has experience with a variety of QA team structures. “It allows software to be delivered more quickly and more cheaply. The closer to the developer you find the bug, the cheaper it is to fix. If it was your end customer who found the bug, after you shipped the software, that is the most expensive problem to fix — versus somebody who found a bug in a unit test.”
Shifting Left Relies on Automation and Metrics
At Atlassian, which makes the popular issue tracking software Jira, developers are expected to “own” the code that they write, which includes being on call to fix any bugs that appear in the production environment.
“We consider the responsibilities for quality and reliability to be a full-lifecycle responsibility,” said Stephen Deasy, head of cloud engineering at Atlassian. “Ultimately, the engineers building and writing the code are the ones who are accountable for the quality as it ships and runs in production.”
The idea is that developers who are responsible for fixing bugs in their own code will be motivated to test that code well. Because developers’ main responsibility is still writing code, this shift means testing relies more heavily on automation — not only automated unit and integration tests, but also testing built into the DevOps pipeline and tooling for tracking errors.
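As a minimal illustration of the kind of automated unit test a developer might write alongside their own code, consider the sketch below. The function and test names are hypothetical, not drawn from any company’s codebase; the point is simply that checks like these run automatically before the code reaches production.

```python
def normalize_issue_key(key: str) -> str:
    """Strip surrounding whitespace and uppercase a Jira-style issue key."""
    return key.strip().upper()


def test_normalize_issue_key():
    # A unit test exercises one small piece of code in isolation.
    # In practice this would run in CI on every commit.
    assert normalize_issue_key("  proj-42 ") == "PROJ-42"
    assert normalize_issue_key("PROJ-42") == "PROJ-42"


if __name__ == "__main__":
    test_normalize_issue_key()
    print("all tests passed")
```

A test runner such as pytest would discover and run `test_normalize_issue_key` automatically; the `__main__` guard is just so the sketch runs standalone.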
“That could be things like making error metrics that can be caught and measured, and then monitored for threshold violations,” Deasy said.
Error metrics don’t just help alert developers to bugs that need to be fixed — the data can also be used to improve the automation process itself. Metrics can help determine where more testing resources should be deployed to be most effective.
“We measure very carefully what percentage of those are caught by automation, caught by staff, caught by customers,” Deasy said. “We’re trying to increase errors caught by automation, ideally to 100 percent. Whether it’s our own employees or our customers, we don’t want people calling us and telling us we have a problem within the software. That’s ultimately a failing — something escaped that shouldn’t have.”
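The threshold monitoring Deasy describes can be sketched in a few lines: compute an error rate from counts and flag any violation of a configured ceiling. The function names and the 1 percent threshold here are illustrative assumptions, not Atlassian’s actual metrics.

```python
def error_rate(errors: int, requests: int) -> float:
    """Fraction of requests that resulted in an error."""
    return errors / requests if requests else 0.0


def violates_threshold(errors: int, requests: int, max_rate: float = 0.01) -> bool:
    """Return True when the observed error rate exceeds the allowed ceiling."""
    return error_rate(errors, requests) > max_rate


# 50 errors in 1,000 requests is a 5% rate — over a 1% ceiling.
print(violates_threshold(50, 1000))  # True
# 5 errors in 1,000 requests is 0.5% — within the ceiling.
print(violates_threshold(5, 1000))   # False
```

In a real system, a check like this would run continuously against streamed metrics and page the owning team when it trips.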
Feature Flags and Canary Releases Automate User Testing
The trend toward automation and metrics is a way to ensure quality while keeping costs down. As a code base grows more complex, the tests needed to cover it can quickly become unwieldy.
“We have hundreds of services running in production,” Deasy said. “It’s impossible to deterministically build every combination of experiences that a customer could have. And therefore, we have to rely on automation — we have to rely on reducing the blast radius, rapid roll-backs in the event of a problem — to reduce that surface area of impact to customers.”
To avoid writing every test needed to deterministically cover all combinations of interactions, Atlassian has made feature flags and canary releases part of its deployment strategy. Developers use feature flags to turn on new features for select users, and canary releases to direct a subset of traffic toward new code, letting them catch errors earlier.
Both techniques allow developers to release new features to a subset of users, relying on metrics to catch any issues before releasing the feature to all users. As a result, the company has been able to grow its number of developers without growing its number of QA engineers.
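One common way to implement this kind of percentage-based rollout is to hash each user into a stable bucket and enable the flag only for buckets below the rollout percentage. The sketch below is a generic illustration under that assumption — the flag name and bucketing scheme are hypothetical, not Atlassian’s implementation.

```python
import hashlib


def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Enable `flag` for roughly `percent`% of users, deterministically.

    Hashing the flag name together with the user ID means each flag
    gets an independent, stable assignment per user.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percent


# The same user always lands in the same bucket, so their experience
# doesn't flicker between old and new behavior across requests.
assert in_rollout("user-123", "new-editor", 100) is True
assert in_rollout("user-123", "new-editor", 0) is False
```

Because the assignment is deterministic, a team can ramp a flag from 1 percent to 100 percent while metrics like the error rates described above decide whether to keep ramping or roll back.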
“We have not scaled the QA function in the same way that we have scaled engineers,” Deasy said.
Central or Distributed QA Teams?
While Atlassian leans heavily on automation for testing, QA testers are still an important part of the process in areas such as mobile development.
“We do have manual testers — people who will load up a version of the code,” Deasy said. “This happens heavily in places like mobile, where either test automation is not as progressed or the user experience depends heavily on human determination of whether something is acceptable — that could be performance, it could be functionality.”
These testers are embedded within Atlassian’s development teams. There is also a central QA team, whose function is to share knowledge among QA engineers and develop tools for testing.
“We still have a central team that thinks about tooling and thinks about best practice and helps provide advice,” Deasy said. “[Atlassian has a] single central test coverage tool that’s owned, operated and administered by a central team within our platform organization.”
Having both a central QA hub and embedded testers within development teams may be a good combination. Kelly, with AssetLab, said there are both upsides and downsides to having QA testers either grouped together or spread out across development teams.
“One benefit to having a centralized quality team is you have people who are constantly just thinking about quality,” he said. “One drawback of having a centralized quality team is that you’re pushing off responsibility for quality further away from where the code was written.”
Structure Comes Down to Product Needs
Kelly said that the best way for companies to organize their QA teams is to consider the specific needs of the product.
“There’s a key question as a software development organization that you have to ask that will guide much of the strategy you use for quality,” he said. “And that is, is it more important to be functionally complete, or is it more important to hit a delivery date?”
For instance, if you’re writing code for a rocket launch, Kelly said, it’s more important to ensure that all the code is properly tested before deploying. On the other hand, adding a new feature to a website can be an iterative process because it’s easy to push updates. Products that cannot easily push patches should invest in rigorous QA teams, while other types of products can use other strategies.
Kelly said that the more complex the software is, the more important it is to manage teams based on data. Different teams could even be organized differently based on what they need.
“They’re all given the ability to iterate and to change and test different organizational structures to find things that work better for them,” Kelly said. “Work better means happiness of the people, ability to deliver code on time and functionality on time that is at the quality bar.”