Examining how the “big boys” handle their test and quality assurance operations can help teams and organisations become more serious about (or improve) their software testing efforts. It stands to reason that organisations such as Google, Microsoft, and Amazon would not be as successful if they did not place a high value on the quality of the products they release to the public.
However, a deeper examination of these software behemoths reveals that there is no one recipe for success. Here’s how five of the world’s most well-known software companies manage their quality assurance (QA) and what you can learn from them.
Look to Google for best practices.
How does Google, the world’s most popular search engine, handle its testing efforts? It depends on the team and the product. For example, the team in charge of the Google search engine maintains a massive and rigorous testing infrastructure. Because search is Google’s primary business, the group aims to guarantee that it retains the highest possible quality and that nothing undermines it.
Google has a multistage testing strategy for search engine updates that includes:

Internal testing by a team of specialised testers (Google employees)
Further testing on a crowdtesting platform
“Dogfooding”: having Google personnel use the product in their day-to-day work
The teams in charge of Google products that are peripheral to the company’s primary business, on the other hand, adopt a far less rigorous QA approach. In some instances, the developer is the lone tester in charge of a specific product, with no dedicated testers to provide a safety net.
In any case, Google takes testing seriously. Having roughly equal numbers of testers and developers is rare in the industry.
Facebook relies on developer-driven testing.
Like Google, Facebook uses dogfooding to ensure that its software holds up in real use. It is also well known for publicly shaming engineers who make mistakes (for example, breaking a build or accidentally bringing the site down) by posting a photo of the offender wearing a clown nose in an internal Facebook group.
Facebook admits that its testing approach has severe flaws.
Nonetheless, rather than going to great lengths to correct the defects, it just accepts them since, as it claims, “social media is nonessential.” Furthermore, focusing less on testing means that more resources are available to spend on other essential duties.
Instead of adequately testing its software, Facebook uses “canary” releases and an incremental rollout strategy to test fixes, upgrades, and new features in production. For example, it may first make a new feature available to only a tiny percentage of all users.
Based on usage data and feedback, the firm then decides whether to expand the rollout, improve the feature, or discontinue it.
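A percentage-based canary rollout like the one described above can be sketched with a deterministic hash bucket. This is a generic illustration, not Facebook’s actual system; the function name, feature key, and user IDs are hypothetical:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing (feature, user) gives each user a stable bucket in [0, 100),
    so the same users stay in the canary group as the percentage grows.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = (int(digest, 16) % 10000) / 100.0  # stable value in [0, 100)
    return bucket < percent

# Ship to 1% of users first; widen the percentage only if metrics look healthy.
canary_users = [u for u in ("alice", "bob", "carol")
                if in_rollout(u, "new_feed", 1.0)]
```

Because the bucketing is deterministic, raising the percentage from 1 to 10 keeps the original 1% of users enabled, which matches the “expand the rollout” step described above.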
Amazon puts deployment first.
Amazon, like Facebook, does not have a robust quality assurance system in place. It is also claimed that Amazon does not value the QA profession (at least in the past). Amazon’s test-engineer-to-developer ratio of about one to seven hardly suggests that testing is treated as a vital activity.
The corporation, on the other hand, holds a different viewpoint. The ratio of testers to developers, according to Amazon, is an output variable, not an input variable. In other words, Amazon increases its testing efforts as soon as it notices that revenue is decreasing or customers are leaving owing to website anomalies.
Amazon believes its development and deployment practices are so sophisticated (the company claims to release software every 11.6 seconds!) that elaborate and costly testing efforts are unnecessary. It all boils down to making software simple to deploy and, more importantly, simple to roll back in the event of a disaster.