Authored by: Derek Cowan, Director of System Engineering, APAC, Cohesity
Today we all depend on the cloud giants for apps and service delivery, but what happens when they suffer an outage?
A short history of business IT might report that the Noughties were all about the West taking baby steps in cloud applications such as Oracle, Salesforce and Google Apps. The 2010s saw clouds proliferate globally as pioneers took a ‘cloud-first’ stance and used platforms including AWS, Microsoft Azure and Google Cloud Platform to pilot, test and run services. According to IDC, global data creation and replication will experience a compound annual growth rate (CAGR) of 23% over the 2020-2025 period. The responsibility for maintaining and managing all this consumer and business data is driving the growth of cloud provider data centres.
With that in mind, it’s not unreasonable to expect that in the 2020s, businesses across the region will invest further in the cloud. A recent Accenture study found that more than 90 per cent of leaders in Asia Pacific are approaching the cloud opportunity more aggressively. And those already with data in the public cloud will attempt to harness their existing cloud investments, formalising multi-cloud and hybrid cloud strategies and clawing back control via consoles that offer visibility and manageability over increasingly disparate estates.
There are still pockets where cloud hasn’t yet permeated, but progressive companies today are asking ‘why not cloud?’ instead of ‘why cloud?’ Cloud is the default deployment mode for new IT, but does that dependence on a single deployment model incur a risk? The short answer is ‘yes’.
IT innovation has always carried risk, ever since those early days when nobody got fired for buying IBM. Businesses have typically added more silos, and IT has become more distributed. In tandem with this pace of IT change has been a rapidly evolving cyber threat landscape, which can now produce threats and undermine security on every infrastructure imaginable.
Cloud isn’t infallible
Cloud computing is mushrooming, and we are entering a new era where tactical investments are becoming strategic. There is a return to order that’s seeing more CIOs attempt to rein in what they have and introduce controls that reduce silos, bring down costs, and mitigate risks.
We are now depending on cloud services even if we don’t realise where our data is residing or travelling at any given point in time. We luxuriate in the notion that our data is somehow safe, looked after by the internet and cloud giants so we build our trust up and up. But an inconvenient question appears: what happens when it all goes down?
Cloud services aren’t immune from outages, hacking, acts of God or worse. In 2021 alone, we saw Oracle go down, shortly to be followed by Microsoft. Then there was Salesforce and more consumer platforms such as Facebook and Instagram. If these mega-forces can go down, anything can, so we need to have a plan to rapidly restore when the worst-case scenario strikes.
Not my problem: Who’s responsible for what?
Per a recent McAfee report, 69 per cent of CISOs trust their cloud providers to keep their data secure, and 12 per cent believe cloud service providers are solely responsible for securing data. The truth of the matter is that cloud security is a shared responsibility. In an effort to educate cloud customers on what's required of them, the cloud provider giants have created a cloud shared responsibility model or SRM for short.
Simply put, the SRM denotes that customers are responsible for protecting the security of their data that resides in the cloud, just as they are responsible for it on-premises. This doesn’t change with the cloud deployment type. Customers are wholly responsible for protecting the security of their data and identities, their on-premises resources, and the cloud components they control (which vary by service type).
By 2022, it’s believed that at least 95 per cent of cloud security failures will be the result of customer error: essentially, a failure to uphold the customer’s part of the SRM. So, in the context of a major cloud-based service having an outage, a customer really needs to know how much of the responsibility and heavy lifting for recovery is on them. With cloud, it’s not just about damaged undersea cables causing limitations. It’s more complex.
‘Why is backup just an insurance policy? Why can’t it do more?’
What’s required is a web-scale design that can consolidate all workloads, data, and apps, regardless of whether they are on-premises, in the cloud, or both, onto one platform for recovery. This moves companies away from being vulnerable to a single point of failure. Deduplication, indexing, and search are required too, or there is a high chance of “bill shock” when you suddenly realise that all those low-cost cloud services can add up to very large sums if not managed wisely.
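To make the deduplication point concrete, here is a minimal, hypothetical sketch (not any vendor’s actual implementation) of content-addressed chunking in Python. The `ChunkStore` class, the tiny fixed chunk size, and the file names are all illustrative assumptions; the idea is simply that identical chunks are stored once, so overlapping backups consume far less cloud capacity than their logical size suggests.

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store: identical chunks are kept only once.
    Chunk size and class name are illustrative, not a real product API."""

    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}        # digest -> unique chunk payload
        self.manifests: dict[str, list[str]] = {} # file name -> ordered digests

    def put(self, name: str, data: bytes) -> None:
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store each unique chunk once
            digests.append(digest)
        self.manifests[name] = digests

    def get(self, name: str) -> bytes:
        # Rebuild the file from its manifest of chunk digests
        return b"".join(self.chunks[d] for d in self.manifests[name])

store = ChunkStore()
store.put("a.txt", b"AAAABBBBCCCC")
store.put("b.txt", b"AAAABBBBDDDD")  # shares two chunks with a.txt
print(len(store.chunks))  # → 4 unique chunks stored for 6 logical chunks written
```

Indexing the manifests by digest is also what makes fast search and per-object restore possible without rehydrating whole backup images.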
In 2021, having a recovery backstop for if (read: ‘when’) your cloud service provider has an outage is important for business continuity and data and regulatory governance. But why is backup data only used as an insurance policy? It typically sits idle most of the time, but could be used for business benefit. Progressive organisations are finding ways to use their backup data, rather than put added strain on the production environment. Uses include threat prevention, test and dev work, analytics, verification, and reporting. The Infocomm Media Development Authority (IMDA) of Singapore has its Cloud Outage Incident Response (COIR) guidelines to assist in business continuity and disaster recovery.
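One of the simplest ways to put otherwise-idle backup data to work, as described above, is verification. The sketch below is a hedged illustration under assumed names (`make_catalog`, `verify` are hypothetical helpers, not a real product API): record a fingerprint of each backed-up object, then later re-check the copies to flag silent corruption or ransomware tampering without touching production.

```python
import hashlib

def make_catalog(files: dict[str, bytes]) -> dict[str, str]:
    """Record a SHA-256 fingerprint for each backed-up object."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify(files: dict[str, bytes], catalog: dict[str, str]) -> list[str]:
    """Return names whose current content no longer matches the catalog,
    a cheap signal of silent corruption or ransomware tampering."""
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() != catalog[name]]

backup = {"invoice.pdf": b"original bytes", "db.dump": b"rows..."}
catalog = make_catalog(backup)
backup["invoice.pdf"] = b"encrypted-by-ransomware"  # simulate tampering
print(verify(backup, catalog))  # → ['invoice.pdf']
```

The same indexed copy that powers this check can be mounted for test/dev, analytics, or reporting, which is the “more than an insurance policy” argument in practice.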
Vulnerability checking your relationship with the cloud
As we move from a world where cloud is adopted in an ad-hoc way to one where the cloud is IT, we need to rethink its surrounding support infrastructure and the responsibility model associated with it. Cloud services are expected to have a positive impact on job creation, entrepreneurship, and economic growth across the region. More jobs will be available in growth areas like cloud computing and artificial intelligence as Singapore accelerates the creation of jobs in the ICT sector and across the economy, according to Singapore’s Information and Communications Minister S Iswaran.
Within most organisations, the conversation around securing data and infrastructure has inevitably shifted as cloud services have arrived and matured. It has now moved on to how a customer manages its data on-premises, in the cloud and at the edge, and how well that data is protected now dictates the success of its IT strategy.
In the past few months, we’ve seen businesses of all sizes making changes as a result of COVID-19. Migration of data and workloads to the public cloud has been occurring at a fast rate, as enterprise IT seeks to overcome the problems presented by traditional data centres, be it physical access constraints or hardware shortages caused by vendor supply issues. Businesses cannot merely take such challenges on the chin and move forward. Downtime costs revenue and reputation.
This is where the capabilities now offered by next-gen data management support organisations in managing, protecting and leveraging their data. Designed to deliver simplicity at scale, zero trust security principles, AI-powered insights, and third-party extensibility, next-gen data management addresses the challenges of cost, complexity and risk. In practice, this means helping to eliminate data silos, protecting data from ransomware and other cyber-threats, and deriving greater value from data.
When the next major cloud outage occurs, the enterprise IT team will be responsible for maintaining IT services to its users. If you’re reading this and asking yourself what you would do if your biggest cloud provider went down, now is the time to start thinking about the answers and the capabilities that next-gen data management provides. In the event of a major cloud outage, it is the enterprise’s mission success that’s on the line, not just the cloud provider’s.