
Insights into Google's Approach to Testing Software

Note: this is a reprint of an article originally posted on May 24, 2012.

By Yvette Francino

Technologies continue to evolve, but does the way we test software change? Leaders at Google, a company known for its innovation, have broken the mold and their unique test strategies are revealed in the new book, How Google Tests Software. Authors James Whittaker, Jason Arbon and Jeff Carollo discuss the book and what readers can learn from Google's approach to testing software.

SSQ: You mention in the introduction that Microsoft also has a book about how they test software, but "the approaches to testing couldn't be more different." Can you highlight some of the major differences?

James Whittaker/Jason Arbon/Jeff Carollo: At Google, testing is owned by everyone on the development team. Developers have to take responsibility for their unit tests and testers take responsibility for making developers productive testers. It's not a dev-test model; everyone is a tester at Google.
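To make the developer-owned unit test concrete, here is a minimal sketch of the kind of test a developer would write alongside a feature. The function and test names are illustrative examples, not code from the book or from Google:

```python
import unittest

def normalize_query(query: str) -> str:
    """Illustrative production function: collapse whitespace and lowercase a search query."""
    return " ".join(query.split()).lower()

class NormalizeQueryTest(unittest.TestCase):
    """Unit test written and maintained by the same developer who owns the feature."""

    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_query("  Hello   World "), "hello world")

    def test_empty_query_is_unchanged(self):
        self.assertEqual(normalize_query(""), "")
```

The point is ownership, not the framework: the test lives next to the code it checks, so the developer, not a separate test team, is the first line of defense.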

SSQ: Are many of the differences in test processes that you describe only for Web-based test efforts or do you recommend them universally for all types of software testing efforts?

Whittaker/Arbon/Carollo: At the end of the day, all software does only four things: accept input, produce output, store data and perform computation. Many of the ideas in the book speak to this general problem and so apply universally. Google may be known as a Web company, but Chrome is a client app, Android is an OS, App Engine is a platform...Google test processes have to span the gamut. That said, Web testing is a large part of the Google mission, and in many ways the Web makes testing easier: We can deploy fixes instantaneously, we can test in production, and we can use crowdsourced testing to great effect. The Web is a special case, but the same basic testing problems, and the same lessons, apply universally.

SSQ: You recommend that testers learn to code, that developers learn to test, and that every engineer performs both development and test functions. This works if you hire people with both skills, but if a team is transitioning, how do you deal with testers who don't have the aptitude or interest in coding?

Whittaker/Arbon/Carollo: Their role is limited. A single manual tester acting like a user is likely to be much less effective than a tester who can also apply their skill using automation. Coding isn't that hard to learn and testers should actively want to expand their impact. That's what careers are made of!

SSQ: You say, "The first piece of advice I give people when they ask for the keys to our success: Don't hire too many testers." However, shouldn't that be, "Don't hire too many dedicated testers"? It seems the real key is to hire people who can perform both development and test functions. Would you agree? If so, how do you assess this in your hiring process?

Whittaker/Arbon/Carollo: In the old days, Chrome had a dedicated team of manual testers. The cost to the company was astronomical. They found lots of bugs pretending to be users, and soon enough the dev team got used to relying on them. In the end, it just made developers lazier and helped them write more bugs. When developers were given the tools to perform self-service testing (described in the book), they wrote fewer bugs and we needed fewer testers. When internal dogfood users were given better bug reporting tools (also described in the book), they found more and better bugs than the manual testers and were much cheaper. A combination of automation and crowdsourced testers keeps the cost of testing software down.

SSQ: Do all development teams use the same methodology or framework? If so, is it based on Scrum or XP? Does it continue to evolve?

Whittaker/Arbon/Carollo: It evolves. Chrome and Google Docs, for example, share some common practices but also differ in important ways. The processes described in the book are the general case; many teams at Google specialize in certain practices. That's one reason we included the interviews in the book: to highlight where teams do things differently.

SSQ: How does Google handle specialty test efforts such as performance and security? Are developers expected to include performance and security testing as part of their test efforts?

Whittaker/Arbon/Carollo: Mostly we centralize them. There are horizontals like security that apply everywhere, require specialized knowledge and need dedicated attention.
