We mentioned earlier today that Google claimed its cloud service is the most reliable cloud service on the market. Setting aside the recent 5+ hour S3 outage, let’s take a look at what happened today and why it appears the cat fight is on.

During the keynote at the Google Cloud Next 2017 conference, Diane Greene took a not-so-subtle jab at cloud leader Amazon Web Services, claiming “Google was recognized as having the highest reliability of any cloud over the course of 2016.”  Not sure how you take this, but essentially Google went elementary school on Amazon Web Services and claimed that its dad is bigger than Amazon Web Services’ dad, relatively speaking of course.

As mentioned, many of us can still remember Amazon Web Services’ outage last week, which basically killed a third of the Internet. First we should ask: where did her claim come from?  It’s easy to come out and say it, but is it true?  Who knows, because everyone has different numbers and everyone’s clouds aren’t easily comparable. Diane Greene didn’t actually cite a source, but a Google source confirmed afterward that new numbers from CloudHarmony, a unit of the Gartner research firm, showed a chart indicating that Google Cloud had 74 minutes of “total time lost” in 2016, compared to 270 minutes for Microsoft Azure and 108 for Amazon Web Services.

Now, here’s where things get exciting.  Microsoft has come out and said, “Comparing downtime alone doesn’t take into account the larger number of regions operated by Azure, which Microsoft says would provide a more accurate picture of cloud reliability.” Hubba what?  Aren’t all clouds the same?  Although each company defines its regions differently, Microsoft says it has 34 regions, AWS says it has 16 regions, and Google Cloud Platform says it has six.  Microsoft went further saying, “When looking at average uptime across regions, rather than total downtime across a disproportionate amount of regions for each provider, Azure reliability is in line with that of the other cloud providers measured and in fact has consistently had global uptime upwards of 99.9979% for Compute in the past 12 months alone.”
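As a back-of-the-envelope check, here’s how the two sets of numbers relate. The minute figures are CloudHarmony’s and the 99.9979% figure is Microsoft’s; the conversion itself is just arithmetic over a 365-day year:

```python
# Rough conversion between annual downtime minutes and uptime percentage.
# Downtime figures are the CloudHarmony numbers cited above.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def uptime_pct(downtime_minutes: float) -> float:
    """Annual uptime percentage implied by total minutes of downtime."""
    return (1 - downtime_minutes / MINUTES_PER_YEAR) * 100

def downtime_minutes(uptime_percentage: float) -> float:
    """Annual downtime minutes implied by an uptime percentage."""
    return (1 - uptime_percentage / 100) * MINUTES_PER_YEAR

for provider, minutes in [("Google Cloud", 74), ("AWS", 108), ("Azure", 270)]:
    print(f"{provider}: {minutes} min lost -> {uptime_pct(minutes):.4f}% uptime")

# Microsoft's claimed 99.9979% uptime for Compute works out to roughly
# 11 minutes of downtime over a year.
print(f"99.9979% uptime -> {downtime_minutes(99.9979):.1f} min/year")
```

In other words, every provider in the chart is above 99.94% for the year; the argument is over the remaining fractions of a percent.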

To be clear, no cloud will maintain 100% uptime forever, and having more locations may increase outage possibilities because you have more places where things can go wrong.  So, who’s right? Jason Read, the Gartner Research Vice President who founded CloudHarmony, agreed that basic comparisons of total downtime can be misleading: a single extended outage can skew the comparisons, some outages have a bigger impact than others, and more regions and data centers create more potential for outages.
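Read’s last point is simple probability: if each region independently has some chance of an outage in a given period, the chance that at least one region fails somewhere grows with the region count. A toy sketch, where the 1% per-region probability is entirely made up for illustration:

```python
# Toy model: assume each region independently has probability p of an
# outage in some period. The chance that at least one region has an
# outage is then 1 - (1 - p)**n, which grows with the region count n.
def any_region_outage(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.01  # hypothetical 1% per-region outage probability
for n in (6, 16, 34):  # region counts cited above for GCP, AWS, and Azure
    print(f"{n} regions -> {any_region_outage(p, n):.1%} chance of >=1 outage")
```

Under this toy model, a provider with 34 regions records “an outage” far more often than one with six, even if every individual region is equally reliable, which is exactly why Microsoft objects to raw total-downtime comparisons.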

Another consideration is the reliability and availability of different services.  How does compute compare to storage?  How does CDN compare to DNS? And so on. CloudHarmony does provide a bit more insight into that, stating “Google Cloud Storage has historically been very reliable based on our availability checks with only a few small outages in regional buckets in 2015, but Google Compute Engine on the other hand has had a fair number of outages and in fact it was the least available compute platform in both 2015 and 2016, and was the only vendor to experience a global outage [in April 2016] due to a bad network configuration rollout. This type of outage is in the absolute worst case scenario because even multi-region fault tolerance would not have mitigated it.”

So, who wins this round?  Well, it probably depends mostly on whose camp you fall into.  If you’re an Azure fan you probably go with Microsoft, whereas if you’re a Google fan you’re leaning towards supporting Google’s version. I wonder how this would go if you compared Office 365 to G Suite? More to come about Google Cloud Next tomorrow.

Check out our previous Google Cloud Next ’17 blogs:
Google Cloud Video Intelligence Recognizes Video Objects
Google Cloud Container Build, Artificial Intelligence, and Machine Learning
Is Google Cloud Going to Kill AWS?