5 Considerations When Replicating the Cloud for Disaster Recovery

In the past, the only way to be confident that you could get back up and running in the wake of a disaster was to build a duplicate server site in a remote location. If something went wrong, you could essentially “flip a switch” and fail over to the duplicate site.

While this is certainly the ideal disaster recovery plan, most businesses are unable to sustain the cost of maintaining a duplicate backup site. Because backup sites sit idle most of the time, few small and midsize businesses can afford to invest not only in the equipment, but also in the maintenance and bandwidth expenses that come with such a plan. As a result, many companies have opted for more piecemeal replication solutions, including backup tapes, physical storage, and some off-site replication.

Thanks to the cloud, though, even the smallest companies can replicate their data and systems for more streamlined disaster recovery. However, even though the cloud offers such advanced capabilities, replicating an entire system is not simply a matter of “copy and paste.” The complexity of most corporate networks and the variety of data being stored require thinking through a few key points before making the shift.

Consideration #1: Public, Private, or Hybrid Cloud?

The first consideration for any company moving to the cloud is which type of cloud to use. Companies storing significant amounts of protected data often choose private clouds, which allow complete control not only over the configuration of the cloud, but also over security and access. Public clouds are shared with other businesses, offering similar functionality at a lower cost. The hybrid approach works well for many companies, with some functions maintained on private infrastructure while other applications run on a shared cloud.

In terms of replication, public clouds are gaining traction because certain public cloud applications, such as Google Apps, are automatically replicated across Google’s servers; because these services are designed to be accessible from anywhere, they are well suited to some disaster planning. Still, not all functions can be handled by applications on the public cloud, so you also need a plan for a private replication site.

Consideration #2: Bandwidth Issues

Sending data to the cloud for replication uses bandwidth. The question is, how much? The initial replication, when you seed the full data set, will use the most, but the daily incremental updates to the remote site consume bandwidth as well. Stay on top of your usage to keep costs in check; the rough estimate below shows the scale involved.
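As a back-of-the-envelope check, you can estimate how long the initial seed and the daily deltas will take on your link. The figures in this sketch (a 2 TB data set, a 20 GB daily change, a 100 Mbps link at 70% effective throughput) are hypothetical; substitute your own measurements.

    # Back-of-the-envelope replication timing. All numbers here are
    # hypothetical; plug in your own data sizes and measured throughput.

    def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
        """Hours to move data_gb over a link_mbps connection.

        efficiency discounts for protocol overhead and link contention.
        """
        megabits = data_gb * 8 * 1000              # GB -> megabits (decimal units)
        seconds = megabits / (link_mbps * efficiency)
        return seconds / 3600

    print(f"Initial 2 TB seed: {transfer_hours(2000, 100):.1f} hours")  # ~63.5 hours
    print(f"Daily 20 GB delta: {transfer_hours(20, 100):.1f} hours")    # well under an hour

An initial seed measured in days is common, which is why some providers let you ship the first copy on physical media and use the network only for the ongoing deltas.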

Consideration #3: Handling Connected Applications

A common mistake when replicating to the cloud is failing to confirm that every application that needs to access the data in the cloud, or that interfaces with an application being replicated, is correctly connected to the replicated system. Verify that everything is being replicated and that every dependency can actually reach the replica; otherwise you risk gaps in the backup, or access failures that undermine a seamless recovery.
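One low-tech way to catch a missed connection is to probe every dependent endpoint in the recovery environment before you rely on it. This is a minimal sketch; the hostnames and ports are hypothetical placeholders for the applications your replicated system actually serves.

    # Minimal dependency check: confirm each application endpoint in the
    # recovery environment is reachable before declaring the replica ready.
    # Hostnames and ports below are hypothetical.

    import socket

    DR_ENDPOINTS = {
        "crm-db":     ("crm-db.dr.example.com", 5432),
        "erp-api":    ("erp-api.dr.example.com", 443),
        "file-share": ("files.dr.example.com", 445),
    }

    def unreachable(endpoints: dict, timeout: float = 3.0) -> list:
        """Return the names of endpoints that could not be reached."""
        failed = []
        for name, (host, port) in endpoints.items():
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    pass  # connected and closed cleanly
            except OSError:
                failed.append(name)
        return failed

    down = unreachable(DR_ENDPOINTS)
    if down:
        print("Replica not ready; unreachable:", ", ".join(down))

Running a check like this as part of a scheduled recovery drill, rather than only during an actual disaster, is what surfaces the forgotten integration.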

Consideration #4: Synchronous or Asynchronous Replication?

Consider the typical workflow within your business. Chances are your data isn’t static, which means replicating it isn’t a “one and done” proposition. Complete recovery requires your replicated system to hold the most recent version of your data. Synchronous replication sends your data to the recovery site in real time, the moment it is created.

Asynchronous replication isn’t immediate; backups might happen every few hours, or overnight. Because synchronous replication happens in real time, the potential for data loss in a disaster is smaller, but it requires a low-latency connection to the recovery site. That comes at a cost, and the latency requirement means the recovery site cannot be located too far away. Still, for applications that need up-to-the-second accuracy, synchronous may be the way to go.
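The trade-off is easiest to see in miniature. This toy sketch (not a real replication engine) acknowledges synchronous writes only after the “replica” has them, while asynchronous writes are acknowledged immediately and shipped later, so anything still queued at the moment of disaster is lost.

    # Toy illustration of synchronous vs. asynchronous replication.
    # Not a production engine; it only models when a write reaches the replica.

    import queue

    replica = []                 # stand-in for the remote recovery site
    pending = queue.Queue()      # async writes waiting to be shipped

    def write_sync(record):
        replica.append(record)   # replica confirms before we acknowledge
        return "ack"

    def write_async(record):
        pending.put(record)      # acknowledge now, replicate later
        return "ack"

    def flush_pending():
        while not pending.empty():
            replica.append(pending.get())

    write_async("order-1001")
    write_sync("order-1002")
    # If disaster strikes here, before flush_pending() runs,
    # "order-1001" is lost but "order-1002" is safe.
    print(replica)  # ['order-1002']

In practice the gap between an asynchronous write and its flush is your recovery point objective: everything created in that window is at risk.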

Consideration #5: How Many Clouds?

Finally, businesses using the cloud for replication need to avoid putting all of their eggs in one basket. Experts recommend replicating to multiple clouds, so that unexpected downtime, Internet connection problems, or a cyberattack on one provider doesn’t leave you unable to reach your backups.
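A simple way to enforce this in a backup pipeline is to fan each artifact out to every configured destination and treat fewer-than-expected copies as a failure. The uploaders in this sketch are hypothetical stand-ins for your providers’ SDK clients.

    # Fan-out sketch: push each backup artifact to every destination, and
    # don't let one provider's failure stop the others. The uploaders here
    # just record the call; in practice they would be real SDK clients.

    from typing import Callable

    def make_uploader(provider: str) -> Callable[[str], None]:
        def upload(path: str) -> None:
            print(f"uploaded {path} to {provider}")
        return upload

    DESTINATIONS = [make_uploader("cloud-a"), make_uploader("cloud-b")]

    def replicate(path: str) -> int:
        """Send one artifact everywhere; return how many uploads succeeded."""
        ok = 0
        for upload in DESTINATIONS:
            try:
                upload(path)
                ok += 1
            except Exception as exc:   # one failure shouldn't abort the rest
                print(f"upload failed: {exc}")
        return ok

    if replicate("backup-2024-01-15.tar.gz") < len(DESTINATIONS):
        print("WARNING: fewer copies than destinations; investigate before relying on this backup")

The point of the pattern is that a partial success is surfaced loudly rather than discovered during a recovery.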

This is where, again, a hybrid model comes into play, as does prioritizing workloads. Some experts even recommend maintaining a physical component in the disaster recovery plan, a “backup to the backup,” to ensure fast and seamless recovery.

Disaster recovery has many moving parts, and replication is just one of them. Taking the time to consider all of these elements and make good decisions will help you avoid excess downtime and other issues in the event something goes wrong.
