Healthcare, telecoms, security, banking – all put in jeopardy following data centre outages.
Data centres underpin the world's digital services. If a facility goes down, those services go down with it.
Had the recent bomb threat at the Amadeus hub actually materialised, European airports and hotels could have been thrown into a travel frenzy, with bookings and reservations failing across the continent.
Patrik Sallner, CEO of MariaDB, said: "A lot of these SaaS providers have built in redundancy and disaster recovery. If there is a failure in a data centre – if a server has a physical problem, if there is a connectivity problem, or if there is an electricity supply problem – all data centres have reserve generators in them. But as we have seen in the past, when the whole data centre is threatened, redundancy and replication need to be built into the server, into the data centre and across data centres."
CBR has compiled a list of potential scenarios across different industries in the event of a data centre failure.
On March 10 this year, AVG's AntiSpam data centre went down due to unplanned maintenance, a week after the company announced it had passed the 200 million user mark.
Thousands of customers worldwide were left without access to their email accounts and security services for several minutes.
AVG said in a statement: "[The] anti-spam portion of the AVG Business CloudCare service could have been disrupted as a result, possibly affecting email security services customers in all regions.
"The exact cause is still being determined but the situation is in the process of being rectified, and normal service is expected to resume within the next 60 minutes. We will continue to monitor developments and regret any inconvenience to those customers affected."
If security fails in the data centre, a host of sensitive information could end up in the hands of hackers and cyber criminals.
There have been several reports of users unable to text or call because of data centre breakdowns. For example, in April this year, an electrical problem at one of Three's data centres in Ireland brought down services for as many as two million customers.
Even Google is not immune to data centre failures. A power outage in early spring 2010 brought down the company's App Engine services for a few hours.
A Google report explained that the underlying cause of the outage was a power failure in the company's primary data centre.
It continued: "While the Google App Engine infrastructure is designed to quickly recover from these sorts of failures, this type of rare problem, combined with internal procedural issues, extended the time required to restore the service."
With the world ever more connected, nearly everything stops when the internet goes down – and a data centre failure is a recipe for exactly that kind of online mayhem.
Thousands of Australians, customers of iiNet, the country's second-largest internet provider, were left offline on January 6 2015 after record temperatures of nearly 45°C forced systems at the company's Perth data centre to be shut down.
Christopher Taylor, an iiNet representative, said in a forum post at the time that multiple air conditioners had failed on site, causing temperatures to rise rapidly. He added: "Multiple systems have been shut down on site to prevent permanent damage".
Another heat problem brought down Microsoft's email services for 16 hours, also affecting SkyDrive. The incident in March 2013 was caused by a failed software update that pushed temperatures past breaking point.
Criminal records, police files, court cases and much more are stored digitally as justice and security departments enhance their IT strategies.
In the UK, almost 100 Crown courts faced operational restrictions in December 2014 after the XHIBIT system went down due to a failure at supplier CGI's data centre.
In June of the same year, an outage at the Marysville data centre in Canada cost the local economy $1.6 million and left several government departments in the dark, an official report said at the time.
Critical government computer systems crashed after a power outage took the data centre offline. As a result, courthouses came to a standstill: probation reports and victim impact statements could not be generated, delaying the judicial process.
Queensland’s hospitals went into meltdown on December 10 2014, when a fault in a data centre took doctors back to pen and paper.
Queensland Health, the organisation in charge of all the state's hospitals, said an issue in its data centre forced services to activate "contingency plans" to keep things running.
A public advisory by QH said: "No patients are being turned away from hospitals and surgeries are occurring as normal. Routine downtime procedures have been implemented to ensure patient safety.
"All hospital and health services have experienced some level of interruption to services but all have developed contingency plans to ensure services to patients continue."
It was later found that a software upgrade to storage controllers in the data centre at the Royal Brisbane and Women's Hospital, Herston, triggered the sequence of events.