
28 Feb 2019 | DISCOtecher

Exploring Gartner’s multi-cloud computing point of view

How do you keep your data consistent across an entire multi-cloud architecture?

In the first of two blogs on Gartner’s Technology Insight for Multicloud Computing report, Paul Scott-Murphy, VP of Product Management for Big Data and Cloud at WANdisco, chats with DISCOtecher, Director of Product and Channel Marketing at WANdisco, about the challenges of operating a multi-cloud architecture.

DISCOtecher: Hi Paul, thanks for joining me today. Multi-cloud is a huge topic right now—but what’s stopping some businesses from transitioning to a multi-cloud architecture?

Paul: As you point out, many businesses are recognizing the benefits of a multi-cloud strategy: greater flexibility and functionality, stronger resilience and disaster recovery, and improved performance, to name but a few. According to Gartner’s report, widespread adoption of multi-cloud architectures is not just likely, but inevitable. 

Different applications have distinct needs, and different cloud products have distinct strengths and weaknesses. As a result, each department within an organization may be using a different cloud platform for their apps. Even though they may not have deliberately planned it, businesses are finding themselves working with several cloud vendors.  

However, managing a complex network of multiple cloud vendors and instances, spread across many regions, raises huge data management challenges—and dealing with these challenges is a hot, hot topic right now.


DISCOtecher: Gartner points to two different types of multi-cloud architectures. What are they and how are they different?

Paul: Gartner divides multi-cloud into two categories: redundant architectures, in which an “application is deployed, in its entirety, to multiple cloud providers”, and composite architectures, where a “single application is split across multiple cloud providers so that different application components come from different cloud providers”.

Businesses may choose a redundant architecture to reduce costs—by running a batch workload on whichever cloud platform offers the best value at that moment, for example. A more complex redundant architecture might stand up two versions of an application in different clouds simultaneously, either for load balancing or disaster recovery purposes. 
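That cost-driven choice can be sketched in a few lines. A scheduler compares the current hourly price of each candidate platform and dispatches the batch job to the cheapest one. The provider names and prices below are purely illustrative, not real quotes:

```python
def cheapest_provider(prices_per_hour):
    """Pick the provider with the lowest current hourly price."""
    return min(prices_per_hour, key=prices_per_hour.get)

# Hypothetical spot prices (USD per hour) at dispatch time:
spot_prices = {"aws": 0.12, "azure": 0.10, "gcp": 0.11}
target = cheapest_provider(spot_prices)  # "azure" in this example
```

In practice the decision would also weigh data-transfer costs and where the input data already lives, which is exactly why the data management questions below matter.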
 

DISCOtecher: What is the most challenging scenario associated with a redundant architecture?

Paul: In a redundant multi-cloud scenario, the difficulty lies in ensuring that each cloud instance is capable of taking over operations without loss of data or interruption to the business. Relying on nightly backups or periodic snapshots dramatically increases the risk of losing data in the event of an unplanned outage, as data on one of the cloud instances will always be slightly out of date. In a recovery scenario, this could leave users working from outdated information, and in some cases, losing critical data.
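The recovery gap Paul describes can be made concrete with a small sketch. Assuming snapshots are taken at a fixed interval (a simplification; the timestamps are in seconds and purely illustrative), any write that lands after the most recent snapshot is lost when the secondary takes over from that snapshot:

```python
def simulate_failover(writes, snapshot_interval, failure_time):
    """Split writes into those recovered from the last snapshot
    and those lost in the gap before the failure.

    writes: list of (timestamp, record) applied to the primary
    snapshot_interval: seconds between snapshots of the primary
    failure_time: moment the primary becomes unavailable
    """
    last_snapshot = (failure_time // snapshot_interval) * snapshot_interval
    recovered = [r for t, r in writes if t <= last_snapshot]
    lost = [r for t, r in writes if last_snapshot < t <= failure_time]
    return recovered, lost

writes = [(30, "order-1"), (90, "order-2"), (150, "order-3")]
recovered, lost = simulate_failover(writes, snapshot_interval=120,
                                    failure_time=160)
# "order-3", written after the t=120 snapshot, is lost on failover.
```

Shortening the snapshot interval narrows the gap but never closes it; only continuous replication keeps the window at zero.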


DISCOtecher: And why would a business run a composite architecture? What challenges does that create?

Paul: Although most applications use a single-cloud core, they may need components that rely on services from another cloud provider—for example, external API services, or supplementary software infrastructure services. Businesses using these kinds of applications will find themselves working in a composite multi-cloud architecture.

As the Gartner report explains, this architecture can create significant challenges around availability, data management, security and regulatory compliance. Gartner points out that “the use of multiple cloud providers increases the probability that, at any given point in time, at least one of these providers will be in the midst of an outage.” As standard backup and replication services cannot keep critical data up to date across multiple clouds continuously, any unplanned outage is likely to lead to data loss.
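Gartner's outage observation follows from basic probability. Assuming, purely for illustration, that each provider fails independently with the same probability p, the chance that at least one of n providers is down at a given moment grows with n:

```python
def p_any_outage(p, n):
    """Probability that at least one of n independent providers,
    each down with probability p, is in an outage right now."""
    return 1 - (1 - p) ** n

# One provider at 99.9% availability vs. three such providers:
single = p_any_outage(0.001, 1)  # 0.001
triple = p_any_outage(0.001, 3)  # ~0.003, roughly three times as likely
```

Real providers do not fail independently or identically, but the direction of the effect holds: more clouds means a higher chance that some part of the composite application is degraded at any instant.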


This raises several data management questions: how do you keep your data consistent across the entire architecture when applications in multiple clouds continuously access and modify that data? How can you ensure uniform security policies and regulatory compliance across those multiple platforms? And, crucially, if part of the infrastructure fails, how can you avoid data loss and disruption?
 

DISCOtecher: So how can businesses and IT leaders solve these challenges?

Paul: Most of these questions boil down to one central challenge—keeping your data consistent across multiple environments, even as it changes.

To enable a multi-cloud architecture that runs smoothly, adds value and is easy to restore in a recovery scenario, organizations need a LiveData strategy. A LiveData strategy means you have globally consistent, accessible business data that is always accurate and available across the entire cloud infrastructure, even in a mixed environment that is geographically distributed.


A LiveData strategy enables line-of-business users to take advantage of all enterprise data, because changes made to business data in one location are instantly replicated across the entire network. If one or more components of the multi-cloud infrastructure go down, operations can switch over seamlessly to an identical copy of data in another part of the infrastructure. Applications and even entire environments can be moved between different on-premises and cloud environments, with no disruption and no data loss.
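To make the "identical copy in another part of the infrastructure" idea concrete, here is a toy in-memory sketch—explicitly not the WANdisco Fusion implementation, just an illustration of the principle that every write reaches every replica before it is acknowledged, so any surviving copy is current:

```python
class ReplicatedStore:
    """Toy multi-replica key-value store: writes are applied to
    every replica before they are considered complete."""

    def __init__(self, replica_names):
        self.replicas = {name: {} for name in replica_names}

    def write(self, key, value):
        # Apply to every replica before acknowledging the client.
        for store in self.replicas.values():
            store[key] = value

    def failover_read(self, key, failed):
        # Serve the read from any replica that is still up.
        for name, store in self.replicas.items():
            if name not in failed:
                return store.get(key)
        raise RuntimeError("no surviving replica")

multi = ReplicatedStore(["aws", "azure", "gcp"])
multi.write("customer-42", {"balance": 100})
# If the AWS instance fails, another cloud serves identical data:
value = multi.failover_read("customer-42", failed={"aws"})
```

A production system must of course coordinate concurrent writers across a wide-area network—the hard problem that distributed consensus engines exist to solve—but the failover behavior it aims for is the one shown here.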
 

DISCOtecher: Could you explain to the reader how a LiveData strategy differs from traditional replication approaches?

Paul: Traditional backup and replication services are unable to deliver data consistency in a distributed and heterogeneous cloud environment. This is a massive data operations challenge. A LiveData strategy solves this problem by enabling unstructured data to be consistent and available where you need it.  

DISCOtecher: Thank you so much for your time Paul.

Paul: You’re welcome!
 

DISCOtecher: Look out for part two of this blog series, where we will discuss more insights from Gartner’s Technology Insight for Multicloud Computing report, and explore in more detail how a LiveData strategy supports multi-cloud initiatives. In the meantime, check out our technical explainer on how the WANdisco Fusion platform enables a LiveData strategy.


About the author

 

At WANdisco, we value our relationships with industry experts and partners, and the educational material they produce. This blog series presents their opinions and ideas as relevant to our followers; we support and respect their personal viewpoints.

Twitter: @WANdisco

 

As VP of Product Management at WANdisco, Paul has overall responsibility for the definition and management of WANdisco's product strategy, the delivery of product to market and its success. This includes direction of the product management team, product strategy, requirements definitions, feature management and prioritization, roadmaps, coordination of product releases with customer and partner requirements, user testing and feedback.


About WANdisco

WANdisco is the LiveData company that empowers enterprises to revolutionize their IT infrastructure with its groundbreaking distributed coordination engine (DConE) in the WANdisco Fusion platform, enabling companies to generate hyperscale economics with the same IT budget — across multiple development environments, data centers, and cloud providers. WANdisco Fusion powers hundreds of the Global 2000, including Cisco Systems, Allianz, AMD, Juniper, Morgan Stanley and more. With significant OEM relationships with IBM and Dell EMC and go-to-market partnerships with Amazon Web Services, Cisco, Microsoft Azure, Google Cloud, Oracle, Alibaba and other industry titans — WANdisco is igniting a LiveData movement worldwide.

For more information on WANdisco, visit http://www.wandisco.com
