In December, I reported on a frank discussion I’d had with Yahoo’s chief architect, Amotz Maimon, and the company’s senior vice president of science and technology, Jay Rossiter, on their decision to eliminate the quality assurance team. The idea, they said, was to force engineers to develop tools to better check their own code, and to think about their jobs differently, as part of a larger effort to shift the company’s software development approach from batch releases to continuous delivery. Maimon told me the approach was “100 percent working,” and Rossiter said it had reduced, rather than increased, the number of problems that went live.
That post triggered a lengthy and sometimes heated discussion—in the comments on the post itself, as well as on Slashdot and on Hacker News—about the role of quality assurance in software development today. The commenters had much to say about their own experiences, about the pros and cons of quality assurance, and about Yahoo’s products. A few examples:
“They didn’t STOP testing, they just automated it. Our company did the same years ago. Literally a one-button, no-monitoring process to: build across multiple architectures, test for several hours across each of them, package up the releases and (with a second button press) release to the web site. This is not hard, it just requires commitment to keep it maintained and to acknowledge it does not come for free (you can’t just fire your QA team and expect the engineers to develop it in their free time).”
“The point is not to ‘remove QA’, but quite on the contrary, to remove the BARRIER between engineering and QA, to shorten the feedback and accountability loop. More, better QA, with less overhead.”
“This is the most stupid thing ever.... Of course there will be fewer bugs found if there are no testers!!! Doesn't mean to say that they aren't in the software!!!!”
“I have been using Yahoo and wondered how come I started facing issues in using the emails. Now I got the answer.”
The commenters also had some key questions. I went back to Yahoo’s Maimon for answers.
Q: Given that developers are now doing their own testing, were their project loads changed to allow time for this?
Maimon: We asked developers to invest in test automation, not manual testing. There was an initial effort we executed without any major schedule changes. This stemmed from the work of our (former) QA team, who developed the test automation process. When compared with our manual testing efforts, our automated testing process increased overall speed and quality of results, which enabled us to avoid any significant impact. By eliminating slow manual testing from the pipeline, we were able to increase our overall speed and productivity. Moving to continuous delivery also lowered the “unit of change,” or size of changes pushed to production. We pushed multiple changes a day, but each change was smaller and simpler, which reduced complexity and risk in the release process while improving quality.
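Maimon doesn’t describe Yahoo’s internal tooling, but the pattern he’s pointing to—developer-owned test automation gating every small change—can be sketched generically. In a continuous-delivery pipeline, checks like the following run automatically on each push, and a failure blocks the release. This is a minimal illustration in Python; the function and test names are hypothetical, not Yahoo’s code:

```python
import unittest

# Hypothetical function under test: stands in for any small,
# independently deployable change in a continuous-delivery pipeline.
def normalize_email(address: str) -> str:
    """Trim whitespace and lower-case an email address before storing it."""
    return address.strip().lower()

class NormalizeEmailTest(unittest.TestCase):
    # Automated checks like these replace a manual QA pass: they run
    # on every push, and a red test keeps the change out of production.
    def test_trims_whitespace(self):
        self.assertEqual(normalize_email("  user@yahoo.com "), "user@yahoo.com")

    def test_lowercases(self):
        self.assertEqual(normalize_email("User@Yahoo.COM"), "user@yahoo.com")

if __name__ == "__main__":
    unittest.main()
```

Because each pushed change is small, the accompanying tests stay small too, which is what keeps the feedback loop fast.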
Q: Did Yahoo need to add developers?
Maimon: A certain portion of the QA people converted to developers, but we did not need to grow the organization further as a result of the change. Since productivity went up, we were able to get more done with the same number of people.
Q: Do you have any data/numbers to back up claims that the change made for fewer errors and a faster development cycle?
Maimon: We measure all of these, but cannot release the actual numbers. The number of software updates pushed to production systems went up by four to five times; the overall number of incidents went down, as did the number of change-related incidents—that is, failures caused by a software change pushed to production. Overall, the relative number of software change-related failures went down by about 80 percent.
Q: Finally, do you have any evidence that the change made the development job “more fun”?
Maimon: Developers like speed, fast exposure of new development, and fast real-user feedback. As such, they liked the change once the initial effort was done.