Common Myths About The Cloud – DEBUNKED!
The demand for superior speed and agility continues to drive companies toward cloud adoption. But while earlier forecasts projected that more than 16% of enterprise workloads would be in the cloud by 2019, actual adoption has lagged well behind, thus far reaching just 9% – roughly half of what was projected.
Generally, this lag in cloud adoption doesn’t derive from a lack of initiative. Many company heads have faced substantial setbacks along their journey toward the cloud or have expressed misgivings once they considered its impact on costs, security, latency, and more.
Discussions with countless CEOs and CIOs have revealed a similar set of myths that commonly trigger these roadblocks and reservations, hindering progress and adoption. Companies that have successfully countered these myths are the ones that have reaped the biggest benefits from their move to the cloud.
THE COST AND THE VALUE
Myth #1: The Primary Value of Businesses Moving to the Cloud Is a Reduction in IT Costs.
Many organizations associate cloud migration with the replacement of critical IT functions, access to on-demand infrastructure, database services, and more. While all these associations are certainly accurate, company leaders often get wind of them but then fail to take into account the larger role the cloud can play in revolutionizing the full IT operating model – and, in turn, the business itself.
Consequently, when leaders proceed to write a business case for adopting the cloud, they end up spending an inordinate amount of time analyzing on-premises costs versus cloud costs, and dedicate far less time to the primary value driver of the cloud – the benefits to the business.
The reality is that the business benefits of cloud adoption far outweigh the IT cost efficiencies. Larger companies typically spend only a fraction of their total revenues – about 0.5% – on application hosting. Even if operating in the cloud could decrease this expense by 25%, that amount would be a drop in the bucket in comparison with the deeper potential business impacts from the cloud.
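A quick back-of-the-envelope calculation, using the illustrative figures above and a hypothetical revenue number, shows just how small those IT savings are relative to the business as a whole:

```python
# Back-of-the-envelope sketch: how much does a 25% hosting-cost reduction
# save relative to total revenue? (Figures are illustrative.)

def hosting_savings_as_share_of_revenue(hosting_share=0.005, reduction=0.25):
    """Return cloud hosting savings as a fraction of total revenue.

    hosting_share: application hosting spend as a fraction of revenue (~0.5%)
    reduction:     fractional cost decrease from moving to the cloud (25%)
    """
    return hosting_share * reduction

# For a hypothetical company with $10B in annual revenue:
revenue = 10_000_000_000
savings = revenue * hosting_savings_as_share_of_revenue()
print(f"Annual hosting savings: ${savings:,.0f}")  # about $12.5M, or 0.125% of revenue
```

Against that 0.125%, even a modest revenue lift from faster time-to-market or better analytics dwarfs the hosting savings.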
Any one of a variety of cloud-supported initiatives – enhanced analytics, quicker time-to-market, and greater innovation, for example – could ultimately have a more substantial impact than reductions in IT costs. In fact, the cloud is capable of benefiting almost every facet of an organization’s products, services, and processes.
Top-notch computing power can bring about a deeper understanding of customers’ needs, for instance, while additional processing capacity can be called upon to execute more intricate analytics or to generate customized business insights.
Since both experimentation and testing new ideas are more cost-effective and less time-consuming, innovation is faster and less risky. All this advances revenue growth opportunities in a number of ways, including acceleration of lead time for new products, entry into new markets, and response to competitive threats.
Myth #2: Cloud Computing Costs More Than In-House Computing.
Cloud economics is currently one of the most controversial topics in enterprise-class IT. The reality is complex, as cost is greatly determined by a company’s starting point – and its capacity to manage and maximize cloud consumption once there.
Organizations confronting large-scale datacenter upgrades will find cloud adoption appealing as a means of avoiding colossal capital expenditures on assets they may not take full advantage of for years. On the other hand, companies that have recently footed the bill for a new datacenter might find that migrating to the cloud would duplicate some of their infrastructure costs.
Another fundamental difference is between companies with costly license agreements that are difficult to exit and companies that face limited penalties for transitioning. In addition, storage-intensive workloads are often less expensive in the cloud than those demanding a great deal of network bandwidth, as cloud service providers (CSPs) typically charge by the unit for network access.
Starting point notwithstanding, many companies moving to the cloud have enjoyed significant cost benefits thanks to the cloud’s shared-resource model and autoscaling. Rather than possessing a cluster on-premises and paying for round-the-clock access, companies pay cloud service providers for CPU on an as-needed basis.
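The difference between the two consumption models can be sketched with hypothetical prices. An application that is busy only a fraction of the day pays for idle time on an always-on cluster but not under autoscaling:

```python
# Illustrative comparison (hypothetical prices and utilization): always-on
# on-premises capacity vs. pay-as-you-go cloud compute.

HOURS_PER_MONTH = 730

def monthly_cost_always_on(instances, hourly_rate):
    """Fixed cluster: you pay for every hour, busy or idle."""
    return instances * hourly_rate * HOURS_PER_MONTH

def monthly_cost_autoscaled(instances, hourly_rate, utilization):
    """Autoscaled: you pay only for the hours capacity is actually used."""
    return instances * hourly_rate * HOURS_PER_MONTH * utilization

fixed = monthly_cost_always_on(instances=10, hourly_rate=0.50)
scaled = monthly_cost_autoscaled(instances=10, hourly_rate=0.50, utilization=0.20)
print(f"Always-on:  ${fixed:,.2f}/month")   # $3,650.00
print(f"Autoscaled: ${scaled:,.2f}/month")  # $730.00
```

Note that an application migrated without changing its resource consumption model behaves like the always-on case even after it lands in the cloud – which is exactly the governance failure described next.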
In the event that the shared-resource model does not result in total-cost-of-ownership (TCO) savings, it’s commonly because companies lack proper resource governance, or they migrate applications intended to run internally without modifying their resource consumption models.
Such applications won’t fully harness the benefits of autoscaling and are costlier to manage and maintain than applications that are native to the cloud. Thus, to keep operating costs low and to optimize benefits, companies need to analyze their applications’ architectures, remediate their portfolio as necessary, and institute new transparency and governance processes.
The central question for cloud economics is whether the reduced run-rate cost on the cloud justifies the up-front costs of remediation, provided that all configurations and governance are executed appropriately. Even in cases where a company’s starting point makes remediation prohibitively costly, the business benefits explored in Myth #1 often provide a stronger justification for migrating to the cloud and outweigh the short-term IT cost obstacles entirely.
Myth #3: Cloud Security Is Inferior in Quality to the Security We Can Configure and Control in Our Own Datacenter.
Traditionally, executives have pointed to the security – or the perceived lack thereof – of public cloud infrastructure as one of their primary concerns and a major hindrance to cloud adoption. In recent times, however, all major CSPs have invested heavily in their fundamental security capabilities.
A CSP’s business model hinges on world-class security. Collectively, the major providers have spent billions on cloud security and recruited thousands of top cyber-experts. They’ve formulated a vast collection of new tools and techniques to make the cloud more secure, in many instances requiring developers to shoulder the security responsibility instead of looking to a traditional security team to bear the burden.
This is especially significant, as public cloud breaches have almost entirely been driven by enterprise customers’ unsecured configurations. In fact, Gartner anticipates that 99% of cloud security failures through the year 2025 will be the fault of the customer and not that of the security provider.
Hence, developers must be retrained to comply with scrupulously detailed governance and policies on how to set up the correct security controls. For instance, if policy dictates that all data must be encrypted, it is the responsibility of the developers to initiate the proper application programming interface (API), signaling to the CSP that they want data in a specific storage bucket to be encrypted.
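As one illustration of what this looks like in practice – using AWS S3 and its boto3 SDK purely as an example, with a hypothetical bucket name – a developer enforces default server-side encryption by calling the provider’s API directly:

```python
# Illustrative sketch: a developer, not a central security team, tells the
# CSP to encrypt all new objects in a storage bucket by default. AWS S3 and
# boto3 are used as one example; other CSPs expose equivalent APIs.

def default_encryption_config(algorithm="aws:kms"):
    """Build the rule that tells S3 to encrypt all new objects by default."""
    return {
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": algorithm}}
        ]
    }

def enforce_bucket_encryption(bucket_name):
    """Apply the default-encryption policy to a bucket (hypothetical name)."""
    import boto3  # AWS SDK; requires credentials when actually run
    s3 = boto3.client("s3")
    s3.put_bucket_encryption(
        Bucket=bucket_name,
        ServerSideEncryptionConfiguration=default_encryption_config(),
    )

# enforce_bucket_encryption("example-customer-data-bucket")
```

The policy ("all data must be encrypted") lives with the organization; the developer’s job is to express it through the CSP’s API on every bucket they create.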
In order for these new policies to be effective, the cloud calls for companies to adopt a DevSecOps operating model, where security is a cornerstone of every software project. IT organizations must automate security services across the full development cycle and deploy them using APIs or else run the risk of vulnerable configurations.
The central question for organizations, therefore, is not whether the cloud is more secure in the first place, but what actions they themselves need to take in order to fortify their cloud security. Companies that establish appropriate policies, implement a secure DevSecOps operating model, and develop or employ the right personnel can truly achieve safer operations in their cloud environments than on-premises.
THE TECHNICAL IMPLICATIONS
Myth #4: Applications Running on Cloud Providers’ Networks Have Greater Latency Than Those Running on In-House Networks.
Some organizational leaders anticipate that when they shift to the cloud, they will experience higher latency (aka lag) on a CSP’s network than they will on their own. However, latency is usually the end result of the IT department trying to backhaul its data through in-house datacenters.
Backhauling, or routing traffic through internal networks, can lead to higher latency, unnecessary complexity, and a dismal user experience. IT departments that opt to backhaul generally either lack experience or trust (or both) with cloud security – presuming that they’ll have greater control by backhauling – or they’re trying to access critical data or apps housed in on-premises datacenters.
It’s imperative for IT departments that are backhauling for enhanced security to recognize that CSPs now offer more robust perimeter options and that there’s no need to suffer latency for security. While backhauling was the preferred model for perimeter security just a couple of years ago, companies are now employing alternative techniques – most notably clean-sheeting, or forming a “virtual perimeter” with cloud-specific controls. In fact, in a recent McKinsey IT security survey, 89% of cloud users do not anticipate that they’ll still be utilizing a backhauling approach by the end of 2021.
IT departments that are backhauling for critical data or applications should give precedence to creating a “data lake” in conjunction with their CSP and transport the majority of their data and analytics processing to the cloud, utilizing data replication only where absolutely necessary. This will permit them to unleash the power of cloud-enabled analytics while at the same time resolving any latency problems.
Once organizations cease backhauling their data, they’re less likely to encounter higher latency on the cloud, as there’s no intrinsic difference between a cloud service provider’s IP circuits, pipes, and cables and their own datacenter’s.
Indeed, companies may even experience lower latency in the cloud, due to cloud service providers’ advantages in content delivery. With their multifaceted, global footprint of datacenters and their hefty investment in content delivery network services, CSPs can deliver content at optimum speed – contingent upon location, content request, and server availability – on a level that most companies would be hard-pressed to attain in-house.
Myth #5: Moving to the Cloud Eliminates the Need for an Infrastructure Organization.
The concept of infrastructure as a service (IaaS) – that an outside provider will oversee your essential network, hardware, and resources – is a compelling proposition for many company executives. Nevertheless, the misconception occurs when leaders interpret IaaS as a total replacement for their infrastructure organization. While the cloud profoundly alters the actions, personnel, and operating model demanded in an internal infrastructure group (and beyond it), it does not completely replace the necessity for infrastructure management.
When enterprises transition to the cloud, they will discover a number of services that can be combined and configured in order to impact performance, security, resiliency, and more. This calls for an infrastructure team that can construct and manage standard templates, architectures, and services for use by the company’s development teams. Since cloud infrastructure is primarily administered through code, this infrastructure team will need a range of software skill sets so that it can function much like an application development team. Without an infrastructure team developing standardized services and platforms, many organizations will simply duplicate the fragmentation and chaos they experienced on-premises.
To make room for this shift in function, infrastructure organizations must transition to a proactive (as opposed to a reactive) operating model. Instead of addressing customized requests from development teams – which can take months and quickly become pricey – cloud infrastructure teams should proactively evaluate organizational needs and transform this into a dependable, automated platform on the cloud. As a result, the ownership rests more directly on development teams themselves, giving them more flexibility to rapidly configure the resources they require. Not only will application teams net more direct responsibility over costs, but this additional flexibility will result in improved productivity and faster speed as well.
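A minimal sketch of that self-service model (all template and team names here are hypothetical): the infrastructure team publishes a small catalog of pre-approved environment templates, and development teams provision from it on demand instead of filing custom requests:

```python
# Minimal sketch of a self-service platform: the infrastructure team curates
# a catalog of pre-approved templates; dev teams provision from it directly.
# Template names, sizes, and team names are hypothetical.

APPROVED_TEMPLATES = {
    "web-service": {"cpu": 2, "memory_gb": 4, "autoscaling": True, "encrypted": True},
    "batch-job":   {"cpu": 8, "memory_gb": 32, "autoscaling": False, "encrypted": True},
}

def provision(template_name, team):
    """Return the configuration a team gets; reject anything off-catalog."""
    if template_name not in APPROVED_TEMPLATES:
        raise ValueError(f"No approved template named {template_name!r}")
    # The requesting team is recorded as owner, so costs land on its budget.
    return dict(APPROVED_TEMPLATES[template_name], owner=team)

env = provision("web-service", team="payments")
print(env["autoscaling"], env["owner"])  # True payments
```

The catalog encodes governance (encryption on, approved sizes only), while ownership tagging is what pushes cost accountability onto the application teams themselves.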
In general, a traditional infrastructure team running the cloud would be too large and too costly, and would forfeit the benefits of having app teams bear shared responsibility for the operating costs they incur. Conversely, not having an infrastructure team at all would neutralize an organization’s ability to manage and benefit from the cloud. Instead, a leaner, more specialized infrastructure organization is needed in order to obtain the broader scope of agility, innovation, and performance benefits of the cloud.
Myth #6: The Most Efficient Method to Migrate to the Cloud Is to Either Concentrate on Applications or on Entire Datacenters.
It’s a standard misbelief that an enterprise must choose one of these two alternatives in order to effectively transition to the cloud.
In the application-by-application approach, organizations come up against undesirable scale dynamics. While they continue paying for on-premises datacenters and IT support, they’re also paying cloud service providers for hosting a subset of applications. Moving a subset of applications doesn’t benefit the business if those applications comprise only part of a business domain’s portfolio.
On the other hand, organizations that transport an entire datacenter to the cloud may be forced to contend with a sizable up-front investment, as well as the accompanying risk. Many of the myriad applications in a datacenter were probably never intended to run in the cloud. Organizations will need to invest in various forms of remediation, which can become expensive and risky when carried out all at once.
Rather, companies should look to transfer business domains to the cloud – such as customer onboarding and consumer payments. By transporting these business domains, companies will be able to enjoy the complete range of potential cloud benefits: faster time-to-market, improved agility, greater reliability, and much more. Along with the business benefits, migrating a business domain is a much smaller lift than moving an entire datacenter, meaning that cost and risk will be more manageable. Once one business domain starts to see these improvements in time-to-market, agility, and reliability, making the case for transferring the remaining domains will become much easier.
Myth #7: To Move to the Cloud, You Must Either Lift and Shift Applications As They Currently Are or Refactor Them Completely.
When companies commit to moving to the cloud, they’re often pressured to move fast, keep costs down, and maximize business benefits. Consequently, organizational leaders believe that they have to choose between a quicker and cheaper “lift and shift” transition strategy – so as to move fast and minimize costs – and a time-intensive and costly refactoring strategy – in order to harness business benefits.
While lift and shift – virtualizing the application and dropping it into the cloud “as is” – can be a quicker and more economical technique to move a great deal of applications into the cloud at once, it fails to capture the majority of the cloud’s benefits. That’s because there’s no fundamental change to the architecture of the application, which often isn’t optimized for the cloud and therefore won’t benefit from features like autoscaling, automated performance management, etc. Moreover, the non-native application will likely experience higher latency or other performance issues, and its preexisting problems will now merely reside in a CSP’s datacenter rather than the company’s.
By contrast, a total refactoring of the application and its architecture to optimize for the cloud takes considerable time, skill, and money. To be sure, it captures the advantages that lift and shift forgoes, but often so gradually and at such considerable cost that break-even is unattainable. Refactoring also makes the transition more susceptible to error during the complex process of recoding, configuration, and integration.
Many organizations find that they’re better off taking a best-of-both-worlds approach that utilizes established techniques such as automation, abstraction, and containerization. These methods are more cost-effective and less time-consuming than full refactorization but still permit companies to enjoy the business benefits of enhanced agility, faster time-to-market, and greater resiliency.
Many of today’s viewpoints about the cloud are predicated on misconceptions fueled by stories of adoptions gone badly or by general aversion to momentous change. These misguided ideas only serve to impede a deeper understanding of the productive business, operational, and economic impacts of the cloud. In order for organizations to realize the full value of the cloud, these myths must be sorted out and cast aside.
If you’d like to know more about how DataGroup Technologies can help future-proof your business by moving some or all of its assets to the cloud, give us a call today at 252.329.1382 or drop us a line here!
An earlier version of this article appears as a blog post on McKinsey Digital’s website