
Cloud Backup for Hyperconverged Infrastructure: When Snapshots are Not Enough

Jul 19, 2018, 19:43 PM by Trenton Baker

The Princess Bride coined several famous sayings. One of the greatest is “I do not think that word means what you think it means.” This line might as well have been written for hyperconverged infrastructure (HCI) and its definition of backup.

Why is Backup an Issue for HCI?

HCI is designed to tightly integrate server compute, virtualization, networking, and storage into a scale-out infrastructure. Typically, HCI platforms cluster servers together via a hypervisor running a software-defined storage (SDS) solution. HCI is software-defined: a virtualization layer provides shared data services including data redundancy (mirroring within clustered nodes for high availability), networking, storage, and a distributed file system.

Gartner’s 2018 HCI analysis predicts that “by 2020, 20% of business-critical applications currently deployed on three-tier IT infrastructure will transition to hyperconverged infrastructure.”

Hyperconverged infrastructure storage is a proven high-value architecture that increases scalability while decreasing management complexity in the data center. It can cut costs, reduce administrative burden, and simplify your environment. HCI platforms can keep downtime to a minimum, or virtually eliminate it, by instituting rolling software upgrades across the nodes of a cluster. Yet legacy backup tools were not created for hyperconverged environments, and HCI isn’t quite a backup infrastructure.

That's not to say HCI doesn’t do data protection – it does. Moreover, some products have tried to bake in secondary backup alongside HA clusters, erasure coding, snapshots, and replication services. But is this a true backup solution?

The issue is that HCI is a unique architecture with unique challenges for backup. Because HCI does not use shared storage the way traditional physical and virtual infrastructures do, applications – including backup – cannot offload onto shared storage. Even with unlimited snapshots, HCI only protects data within its own environment and thus remains susceptible to a disaster that takes out the entire system or site. On-premises copies may be acceptable for granular VM restores, but for regulatory compliance or massive data loss, they are entirely inadequate.

HCI Data Protection

Data resiliency
What it is: HCI vendors do not employ RAID in the classic sense, but most have similar technologies based on replication and erasure coding. Nutanix, for example, uses what it calls "Replication Factor" (RF) to replicate data blocks across two to three clustered nodes. Erasure coding is an option on most systems, although it comes with heavy processing overhead.
When to use it: To protect production data against loss or corruption.

Snapshots
What it is: A snapshot preserves the view of a volume or file system from a point in time. The snapshot system stores the snapshot and records the blocks that change after the snapshot is taken.
When to use it: To quickly recover from corruption or accidental deletion. Note, however, that if the HCI platform itself goes down, its snapshots go down with it.

Replication
What it is: Replication tools copy data to a different storage system on-premises, at a remote site, or in the cloud.
When to use it: Replication would seem to solve the problem of off-site backup copies for HCI. The challenge is that replicated copies quickly consume storage capacity, which limits replication to critical data and must-have applications.

Backup
What it is: Backup copies data to a secondary storage environment. The minimum number of copies is two: one in a rapid-recovery cache located on-premises and one off-site. Three copies are better, with at least two stored in different remote sites. Think 3-2-1 rule.
When to use it: To back up VMs to remote sites. Strongly consider backing up to cloud environments that support snapshots, since backing up to a secondary HCI environment is an expensive proposition.
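The replication-factor idea described above can be made concrete with a small sketch. This is an illustrative model only, not Nutanix's actual placement algorithm: hash the block identifier, then pick the next RF nodes around a sorted ring so each block lands on RF distinct cluster members.

```python
import hashlib

def place_block(block_id: str, nodes: list[str], rf: int = 2) -> list[str]:
    """Hypothetical replication-factor placement for one data block.

    Hashes the block id to a starting position on a sorted ring of
    cluster nodes, then returns the next `rf` distinct nodes. Real
    HCI platforms use far more sophisticated placement, but the
    invariant is the same: every block lives on `rf` separate nodes.
    """
    assert rf <= len(nodes), "RF cannot exceed cluster size"
    ring = sorted(nodes)
    start = int(hashlib.sha256(block_id.encode()).hexdigest(), 16) % len(ring)
    return [ring[(start + i) % len(ring)] for i in range(rf)]

cluster = ["node-a", "node-b", "node-c"]
placement = place_block("vm-disk-block-0042", cluster, rf=2)
print(placement)  # two distinct nodes from the cluster
```

Losing one node still leaves rf − 1 copies, which is exactly why this class of protection handles local loss but not a disaster that takes out the whole cluster or site.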

Three challenges to using HCI for backup and recovery:

  1. Expensive primary storage: HCI architectures tend to excel with all-flash arrays or, at minimum, hybrid storage that leverages flash tiering. This creates a high-performance appliance, but the storage is costly and isn’t cost-effective for backup copies. Even as flash prices fall, overspending on capacity that merely holds backup copies is never wise.
    • Remedy: Reduce costs by choosing a modern backup and recovery solution designed to curtail both CapEx and OpEx. Bring down storage costs with backup and DR tools that reduce storage consumption, such as automation, source-side deduplication, and data migration tools that tier data based on its need and value. It is also vital to partner with a cloud provider whose billing is straightforward, without the gotcha egress pricing found in the megaclouds.

  2. Greater risk: All hardware eventually fails. Keeping both the primary data and the secondary copies and/or snapshots in one location, let alone on one system, is a recipe for disaster. Data availability is the purpose of backup in the first place.
    • Remedy: Enterprise backup and recovery provides the resiliency, hyper-availability, and scale to move, manage, and protect backup data across different storage tiers and locations. To truly protect against failure, data needs a backup target on another device that is both physically and geographically separate from the HCI platform. This might be another HCI cluster located elsewhere, or a storage resource such as a custom cloud solution provider.

  3. Longer RTO/RPO: Production storage does not excel at data protection, even with snapshots. The typical HCI deployment lacks the breadth and depth of data protection and automation tools needed for a comprehensive backup and recovery solution.
    • Remedy: Achieve efficient HCI backup by implementing a hyper-available cloud backup and recovery solution. Ideally, this solution would manage, index, back up, snapshot, and deduplicate data, working in tandem with industry-leading hyperconverged solutions such as Nutanix, Cisco, or NetApp platforms to ensure backup occurs wherever the data resides and can be restored anywhere.

Custom Backup Clouds for HCI

The trick is to safely copy HCI backups off the infrastructure to remote and cloud sites.

HCI snapshot backup runs as an application on the HCI platform, presented as a virtual machine. Backup software copies the backup VM, including data and metadata, from the hypervisor to a remote backup target: either a remote HCI cluster or a cloud.

Secondary HCI clusters are expensive, and public clouds might not be able to back up or restore data as quickly as the customer needs. The best choice for an HCI backup target is a highly customizable cloud built for backup and rapid recovery. This cloud should meet these specialized requirements for HCI backup:

  1. Purpose-built for high-value data backup and DR. Hyperscaled public cloud vendors are a common choice for backup targets. However, they rarely customize SLAs and might not be capable of sufficient recovery speeds. They also offer failover services for application processing, but the service and testing are expensive. The purpose-built cloud exists to protect data and applications and works jointly with partners and customers to customize solutions for backup customers.

  2. Partners with MSPs and backup software vendors who understand HCI’s unique backup needs. Not just any MSP or backup vendor succeeds with HCI. Their developers and support engineers need to know how to automatically discover and protect VMs, integrate with platform APIs to extract data and quiesce VMs, verify backup VMs, and simplify HCI backup management; their products need to scale well, offer economical licensing, and automate heavily.

  3. Operates multiple geographically separated data centers. The customizable data protection cloud does not depend on a single data center, nor does it lease space from the hyperscaled public clouds. These providers own and maintain geographically separate data centers with strong physical security and cybersecurity, and they automatically copy backup VMs between separated sites.

  4. Offers rapid recovery and DR options. These cloud vendors efficiently back up HCI data and rapidly restore it to the customer’s HCI environment. They also offer cost-effective failover services including testing.

KeepItSafe hits the mark for HCI customers by understanding this unique architecture and its unique data protection challenges for dedicated backup, DR, and archiving. Although HCI is still a nascent technology, its backup and disaster recovery options vary dramatically, and the need for guidance is clear.

The Hyperconverged Backup Solution

The next step for HCI has been to collapse backup software and scale-out storage into a single solution designed to function within the multi-cloud, thus creating hyperconverged backup.

A hyperconverged cloud backup solution should consolidate backup software and storage into a hyperscale architecture that encompasses all the features of a legacy backup appliance. Always-on enterprise backup software tools that integrate with your HCI environment can eliminate many disaster recovery concerns.

Veeam’s agentless design provides multiple backup options such as source-side deduplication and compression, changed block tracking, parallel processing, and automatic load balancing to reinvigorate data protection for VMware vSphere and Microsoft Hyper-V HCI environments.
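Changed block tracking, one of the options listed above, means the hypervisor records which blocks have changed since the last backup, so the next incremental pass reads only those blocks instead of scanning the whole disk. A minimal model of the idea, with hypothetical names and not any vendor's actual API:

```python
def incremental_backup(disk: list[bytes], changed: set[int],
                       previous: list[bytes]) -> list[bytes]:
    """Changed-block-tracking sketch.

    Reads only the blocks whose indices the tracker flagged as dirty
    since the last backup; every other block is carried forward from
    the previous backup image. The result equals a full backup at a
    fraction of the read and transfer cost.
    """
    return [disk[i] if i in changed else previous[i]
            for i in range(len(disk))]

full = [b"blk0", b"blk1", b"blk2", b"blk3"]   # last full backup image
disk = [b"blk0", b"NEW1", b"blk2", b"NEW3"]   # current disk state
dirty = {1, 3}                                # tracker-reported changes
backup = incremental_backup(disk, dirty, full)
print(backup == disk)  # True: reconstructed from only 2 block reads
```

Only two of the four blocks were read from production storage, which is why CBT-based incrementals shorten backup windows so dramatically on large, mostly static VM disks.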

Scale-out HCI flash platforms like NetApp® SolidFire® work seamlessly with Veeam Availability, combining high-performance storage with unified backup and replication data protection in a single cloud-ready solution.

The KeepItSafe holistic approach to backup and data availability is designed to achieve IT resiliency across your hyperconverged platform of choice.

Readers of this blog post are also interested in this webinar:

No Backup Is an Island: The Value of the Cloud in your Backup Strategy
