Leveraging the Market to Save You from Runaway Cloud

The cloud is the fastest-growing phenomenon in the history of IT. It has changed IT pros’ conception of the data centre from a massive array of physical capital assets to a simple utility – a plug in the wall.

Much of the reason for this growth is that the cloud simplifies and accelerates the initial provisioning of virtual resources. Rather than waiting six weeks or six months for traditional IT to provision, for example, a virtual machine (VM), shadow IT can spin one up in the cloud in sixty seconds.

In addition, the cloud solves another frustration of traditional IT – chargeback. Because cloud resources are priced on a consumption basis, like a utility, chargeback is far simpler in the cloud than in a traditional data centre, where apportioning cost by consumption was never core to the way resources were provided.
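To make the chargeback point concrete, here is a minimal sketch of consumption-based billing. The instance names, hourly rates, and usage figures below are invented for illustration, not live cloud pricing:

```python
# Hypothetical consumption-based chargeback: each team is billed only for
# the instance-hours it actually consumed, at a published per-hour rate.
HOURLY_RATES = {"m5.large": 0.096, "m5.xlarge": 0.192}  # example rates only

def chargeback(usage):
    """usage: list of (team, instance_type, hours) -> cost per team."""
    bills = {}
    for team, instance_type, hours in usage:
        bills[team] = bills.get(team, 0.0) + HOURLY_RATES[instance_type] * hours
    return bills

usage = [
    ("web", "m5.large", 720),        # one m5.large running for a full month
    ("analytics", "m5.xlarge", 200), # a burst of analytics work
    ("web", "m5.xlarge", 50),
]
print(chargeback(usage))
```

Because every unit of consumption already carries a price, apportioning cost is a simple sum – there is no need to invent allocation formulas after the fact, which is exactly the step that makes chargeback painful in a traditional data centre.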

This speed and transparency have been huge drivers of the cloud’s growth, but the hidden reality of the cloud is that those two benefits help only on Day One of a VM or container’s existence. On Day Two you still have to manage them – are they sized properly, are they performing to user expectations, are they even in the right cloud for the specific application workload? – and you have to do so through the abstraction of the cloud itself rather than within your own data centre.

Two Days Later:

Day Two matters because Day Two is when Runaway Cloud begins. What do I mean by "Runaway Cloud"?

Well, if you have an account in AWS with a specified budget, the chances are pretty good that your final bill will be two to three times larger than that budget. That, at least, is what Gartner says its customers are telling it (and, frankly, what we hear from our own).

How can this be? Isn’t the cloud supposed to be cheaper? Recent cloud hype might lead you to believe that, but the reality is that the cloud is often more expensive, with fewer performance benefits, than your traditional virtual data centre.

Performance isn’t guaranteed in the cloud. It can vary from availability zone to availability zone, from public cloud to public cloud (some workloads perform better in AWS, others in Azure, for example), and even from day to day or hour to hour.

That’s another common misconception: that somehow the cloud has solved the application performance challenge that has bedevilled IT for decades. The reality is that if you call AWS (or any public cloud, for that matter) to complain about performance, the answer is almost always "choose a bigger template." Perhaps this improves performance in the short run, but it doesn’t address the noisy-neighbour problem inherent in any multi-tenant architecture, AWS included. It does, however, help drive AWS’ bottom line – via your bill.

In that way we’re back to the future: the over-provisioned data centres of the last decade have just shifted to the cloud. After all, the cloud is nothing more than someone else’s server.

The difference is that the cloud abstracts you further away from controlling that over-provisioning, while making you feel the pain more acutely at the end of each billing cycle. And because Day One decisions aren’t centralised but dispersed across your company, lacking control over the Day Two consequences of those decisions will keep producing unexpected, unbudgeted expenses.

So does adopting public cloud mean that we’ve exchanged speed and transparency for unending over-provisioning? Not necessarily, but the cloud has scaled to a point where it can no longer be managed by human brainpower and spreadsheets alone.

Managing The Load:

The complexity of the public cloud seems abstracted away, but the multitude of Day Two+ decisions around sizing and configuration – as well as the variables in each application and cloud that necessarily must inform those decisions – are frankly overwhelming to traditional monitoring even in a private cloud or traditional virtual data centre, let alone the public cloud.

One way to overcome the complexity around Day Two and beyond decisions is to embrace software automation that leverages market principles. The larger free market (of which the cloud is still but a tiny part) overcomes far greater complexity in allocating labour and capital resources across nations and economic unions, let alone data centres and clouds.

In its simplest sense, software can measure application demand in real time, transaction by transaction, and then match that demand to the utility supply provided by public clouds so that data centres self-organise. In other words, exactly what happens every day in the free market, where millions of human decisions weigh value against price.
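The demand-measurement half of that idea can be sketched in a few lines. The sliding-window length and the per-unit capacity figure below are invented assumptions for the example, not measurements from any real platform:

```python
import math
from collections import deque

class DemandMeter:
    """Estimate real-time demand as transactions per second
    over a sliding time window."""
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()  # timestamps of observed transactions

    def record(self, timestamp):
        self.events.append(timestamp)

    def rate(self, now):
        # Drop transactions that have aged out of the window,
        # then report the average transactions per second.
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()
        return len(self.events) / self.window

def units_needed(demand_tps, capacity_per_unit=100):
    """Match measured demand to utility supply: buy just enough
    capacity units to cover it, never fewer than one."""
    return max(1, math.ceil(demand_tps / capacity_per_unit))
```

The point of the sketch is the shape of the loop, not the numbers: demand is observed continuously, and supply is purchased against that observation rather than guessed at provisioning time.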

In the cloud, the multitude of sizing and configuration options that can make it seem so overwhelming actually makes matching that infinitely variable supply to real-time demand easier than in a private data centre.

The consumption-based, chargeback-friendly nature of the public cloud, where every template is priced and those prices are available via APIs, enables software to match public cloud supply to the real-time demand of application workloads. Workloads "earn" virtual revenue for every transaction – potentially earning even more for transactions whose latency is growing relative to the application’s SLA (e.g., a response time of 800ms or less) – and templates are then sized up or down dynamically and automatically based on that real-time demand.
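A toy version of that bidding loop might look like the following. The template catalogue, prices, capacities, revenue-per-transaction figure, and SLA threshold are all invented for illustration:

```python
# Hypothetical template catalogue: name -> (price per hour, capacity in tps).
TEMPLATES = {
    "small":  (0.05, 100),
    "medium": (0.10, 250),
    "large":  (0.20, 600),
}

def virtual_revenue(tps, p95_latency_ms, sla_ms=800, revenue_per_txn=0.0001):
    """A workload 'earns' per transaction; earnings scale up as observed
    latency approaches the SLA, signalling that more supply is worth buying."""
    pressure = 1.0 + max(0.0, p95_latency_ms / sla_ms)
    return tps * 3600 * revenue_per_txn * pressure  # hourly budget

def choose_template(tps, p95_latency_ms):
    """Pick the cheapest template whose capacity covers current demand
    and whose hourly price the workload's virtual revenue can pay."""
    budget = virtual_revenue(tps, p95_latency_ms)
    candidates = [(price, name) for name, (price, cap) in TEMPLATES.items()
                  if cap >= tps and price <= budget]
    return min(candidates)[1] if candidates else "large"  # fall back to biggest
```

Run each billing interval, this loop re-sizes a workload as its transaction rate and latency move – a quiet workload drifts down to the cheapest template that covers it, while one under latency pressure earns the budget to buy more capacity.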

Thus, application workloads can effectively choose exactly the supply they need – no more and no less – self-organising to assure performance at the best possible cost. By mimicking what happens every day in the free market, IT professionals can leverage software to control the public cloud so that it never becomes Runaway Cloud – for Day Two and beyond.

Type: White Paper
