Analysis: From I/O speeds to pricing structure and storage types.
In the public cloud arena there are three dominant players: Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP).
Each offers slightly different services from the others, but they also overlap in many areas and compete directly on price.
The strengths and weaknesses of each can be key factors for a business looking to go down the cloud path, but because of the sheer breadth of their catalogues it can be difficult to compare which services are better suited to your requirements. All three can be tried out on a small scale, through a proof of concept, so that you can compare the services and see which suits your needs best.
While I won’t go into depth on price comparisons of individual services, it is worth comparing pricing models, billing granularity, global networks, instance types and I/O speed, and looking at some of the major strengths and weaknesses.
The AWS compute service, Elastic Compute Cloud (EC2), splits its instances into nine families comprising 38 instance types. Some of these families are burstable, meaning they hold spare processing power in reserve that can be drawn on when demand for computing capacity spikes.
Azure, meanwhile, provides six families with 25 instance types. These start with a basic tier, which offers baseline performance for single instances but no scalability, and a standard tier covering general-purpose, CPU-optimised, RAM-optimised and network-optimised instances, much like the AWS offering.
The Google Cloud Platform has a more focused offering than its two competitors: 18 instance types split into four families. Like AWS it is burstable, but its instance types are simply standard, high-memory and high-CPU.
On the storage front, AWS offers Elastic Block Store (EBS), which supports three types of persistent disk: magnetic, SSD and SSD with provisioned IOPS. The maximum size for magnetic disks is 1TB, while SSD volumes can be much larger, at up to 16TB.
Azure calls its block storage "Page Blobs", which come in magnetic and SSD variants. Magnetic is considered the standard storage tier while SSD is premium, and both offer up to 1TB per volume.
GCP’s offering differs in that its two types of persistent block storage, magnetic and SSD, support volumes as large as 10TB. The company recommends sizing a volume according to the performance required, since throughput and IOPS scale with volume size.
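Because GCP ties persistent-disk performance to volume size, capacity planning becomes a simple calculation: size the disk for the IOPS you need, not just the data you hold. A minimal sketch in Python, assuming a hypothetical per-GB IOPS rate (the real published rates vary by disk type and have changed over time):

```python
# Illustrative only: on GCP-style persistent disks, provisioned performance
# scales with volume size, so you size the disk for the IOPS you need.
# The 0.75 IOPS-per-GB figure below is a hypothetical placeholder, not a
# published rate.
IOPS_PER_GB = 0.75

def min_volume_size_gb(required_iops: float) -> float:
    """Smallest volume (in GB) that meets the required IOPS."""
    return required_iops / IOPS_PER_GB

# A workload needing 3,000 IOPS forces a 4TB volume at this rate,
# even if the data itself is much smaller.
print(min_volume_size_gb(3000))  # -> 4000.0
```

The design consequence is that on GCP a volume may be provisioned far larger than the data it stores, purely to buy performance.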
The other popular category is object storage: AWS calls its offering Simple Storage Service (S3), Azure’s is called "Block Blobs", and Google simply calls its offering Google Storage.
I/O (input/output), the communication between an information processing system and the outside world, is another factor that should be considered when choosing a service.
Numerous factors can affect I/O speed, such as the number and generation of CPU cores, the number of instances, network speed, caching and storage type.
For some workloads speed matters more than for others; it depends on the type of work you are undertaking.
Tests conducted by InfoWorld in 2014 show that Google Compute Engine is the fastest, followed by AWS and then Azure.
With one CPU, GCP came in at under 300 milliseconds on the DaCapo benchmark suite (lower is faster); AWS EC2 was at around 400 milliseconds, as was Azure.
With two CPUs, GCP came in at just under 500 milliseconds, AWS at 600 milliseconds and Azure at around 650 milliseconds. Stepping up to eight CPUs, GCP is just under 600 milliseconds, while AWS is slightly over 700 milliseconds and Azure comes in at over 800 milliseconds.
Based on this factor alone you can see that GCP has the fastest I/O speeds with AWS second and Azure third.
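To put those figures in proportion, the relative slowdowns can be computed directly. A quick sketch using the approximate millisecond timings quoted above (read off InfoWorld's published charts, so treat the exact values as rough readings, not precise results):

```python
# Approximate DaCapo timings (ms, lower is faster) from the 2014 InfoWorld
# tests described above; figures are rough readings from the charts.
timings_ms = {
    "GCP":   {1: 300, 2: 500, 8: 600},
    "AWS":   {1: 400, 2: 600, 8: 700},
    "Azure": {1: 400, 2: 650, 8: 800},
}

# For each CPU count, express AWS and Azure as a multiple of GCP's time.
for cpus in (1, 2, 8):
    baseline = timings_ms["GCP"][cpus]
    for rival in ("AWS", "Azure"):
        ratio = timings_ms[rival][cpus] / baseline
        print(f"{cpus} CPU(s): {rival} took {ratio:.2f}x as long as GCP")
```

Even on these rough numbers the gap holds at every CPU count: the rivals consistently take 1.2x to 1.35x as long as GCP.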
The real differences can be seen in how billing is implemented: EC2 instances are billed by the hour, while Azure and GCP bill by the minute.
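The cost impact of that granularity difference is easy to work through for short-lived workloads. A minimal sketch, assuming a hypothetical rate of $0.10 per hour (real instance prices vary by type and region):

```python
import math

# Hypothetical rate, for illustration only; real prices vary by instance
# type and region.
RATE_PER_HOUR = 0.10
runtime_minutes = 70  # e.g. a job that runs for 1 hour 10 minutes

# Per-hour billing (EC2 at the time): partial hours round up to a full hour.
hourly_billed = math.ceil(runtime_minutes / 60) * RATE_PER_HOUR

# Per-minute billing (Azure and GCP): you pay only for the minutes used.
minute_billed = runtime_minutes * (RATE_PER_HOUR / 60)

print(f"billed by the hour:   ${hourly_billed:.4f}")  # 2 hours -> $0.2000
print(f"billed by the minute: ${minute_billed:.4f}")  # 70 mins -> $0.1167
```

For a 70-minute job the hourly model charges for two full hours, nearly double the per-minute cost; the gap shrinks as runtimes grow, so granularity matters most for bursty, short-lived work.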
Of course, these facts and figures are only one part of the story; the other is how the providers actually deal with customers, and how reliable they are.
All three companies have positioned themselves as the best cloud service for enterprises. Each has its strengths, as outlined above, but each has to some extent had problems with outages or disruptions to service.
This damages the perception of whether public clouds are ready for mission-critical workloads.
Clive Longbottom, founder of Quocirca, told CBR: "There should be no excuse for downtime in a cloud that has been architected to support hundreds or thousands of individual customers’ systems."
Fail-safes should be built in that make everything multi-redundant, which can be achieved through virtualisation and/or mirroring, and all events should be planned for, even if that means planning for earthquakes, flooding or fire.
Most providers offer this functionality, but while it is good to analyse the vendor, the business should be analysed as well. The challenge for businesses is to pay that little bit more for high availability so that they can be assured of uptime.
Longbottom said: "Organisations need to understand their own risk profiles better – if they cannot afford to have a system outage, then they need to pay for high availability through a provider which will guarantee this. If they don’t pay for high availability, then they will have to carry the higher risks of outages."
Public cloud still has a lot to do in order to prove itself as being a perfect fit for mission critical systems and applications, and businesses need to understand their own risk profiles better before undertaking a transition to cloud.
One of the criticisms frequently laid at the feet of cloud vendors is their failure to be more transparent in how they report and deal with outages.
Customers need to understand that what is reported is not necessarily 100% representative, and that a well-run public cloud is likely to deliver far higher availability than their own private data centre.
Microsoft offers a status page, as do Google and AWS, but in my opinion the Azure status page is a little more informative, particularly its historical incident category, which gives reasonably detailed information.
What I hope this highlights is that there are numerous factors involved in deciding what the best cloud provider is, and in the end the leading cloud provider may not necessarily be the one that is best suited to your needs.
In the end that is what it comes down to: how you want to use the cloud. While these three leading vendors compete in the same market, there are enough differentiating factors to make each better suited to certain use cases than the others.