Many of today’s computer networks are made up of legacy nodes that exist on-site and include computer hardware and network infrastructure (like data communications cabling), as well as the software that orchestrates it all.
Traditional applications and data, along with software designed to act as “virtual” hardware, have often been built around legacy hardware components to squeeze the best performance from them. As you might imagine, this can complicate disaster recovery (DR) planning: maximizing recovery after a disaster requires a thorough understanding of both the legacy hardware and the software that depends on it.
Networks sometimes evolve in weird ways.
The Cloud as Duplicate Data Center
One step that can make network restoration easier is using the cloud as a duplicate data center for the corporate network. This setup can be cost effective and can allow lost data to be recovered so that the network starts functioning again more quickly after a disaster. Naturally, any cloud vendor made part of your organization’s DR plan should have hardware and software configurations compatible with what you already run.
If you pursue the idea of the cloud as a duplicate data center and can’t find a cloud vendor that can act as a duplicate of your organization’s particular configuration, that may be a good sign that your network is specialized enough that DR is going to be complicated. It may also be a good sign that your company’s long-term plans for its IT infrastructure need to be updated.
If nothing else, software and data can be duplicated in the cloud, so that essential data is stored both on site and in the cloud and either copy remains accessible if the other fails. This type of contingency plan, of course, requires a thorough understanding of what data is stored where, a solid data backup plan, and a clear picture of how software and data are accessed by the business and, in some cases, by the business’s clients.
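As a concrete illustration, the core of such a dual-location backup is copying data to a second location and then verifying each copy. Here is a minimal, hedged sketch in Python; the directory layout and function names are hypothetical, not anything prescribed by this article:

```python
import hashlib
import shutil
from pathlib import Path


def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def mirror(source: Path, replica: Path) -> list[Path]:
    """Copy every file under `source` into `replica`, preserving the
    directory structure, then verify each copy against its original
    by checksum. Returns the files whose copies failed verification."""
    failures = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dest = replica / src.relative_to(source)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)  # copy2 also preserves timestamps
        if sha256(src) != sha256(dest):
            failures.append(src)
    return failures
```

In a real deployment the replica would be a cloud object store reached through the vendor’s SDK rather than a local path, but the verification step is the part worth keeping: an unverified backup is not really a backup.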
Total Control Is Impossible
Even if many of your company’s operations are hosted in the cloud, you still don’t have 100% control, because “the cloud” runs on tangible resources of its own. Major catastrophes like a terrorist attack or a catastrophic weather event like 2012’s Superstorm Sandy can hit the cloud’s physical resources too. The best way to hedge your bets against these eventualities is with very high bandwidth connections that allow real-time replication of data, applications, and logical services like virtual machines. That way, if communications or power are cut, data will still be available, even if it has to be moved physically on DVDs, USB drives, or other storage media.
“High speed data transfer” after a disaster may be different from what you’re used to.
Best Practice: Focus on Data and Possible Reduced Capacity
Focusing on data is important to your DR plan. Applications and other logical constructs can be re-imaged. Sure, that can be a major headache, but if the data processed and produced by the software is gone, your business can’t function unless and until it’s restored. Your organization’s software needs to be able to operate at reduced capacity in the aftermath of a major catastrophe, and the more nimble and scalable your software is, the better. This is another reason so many business apps are being shifted to the cloud. So to sum up, your DR plan has to provide access to data, and it has to account for the possibility of operating at reduced capacity after a disaster.
DR Is Not a “One Time and Done” Exercise
Technological, economic, and environmental factors are always in flux, and your DR plan has to reflect this. Therefore, your DR plan has to be tested regularly, and it has to be updated to reflect the latest hardware, software, and data your company uses and produces. Your organization needs to know that vital data is backed up (perhaps in real time) and what the newest, fastest way to restore employee access to software and data is. Testing and updating your DR plan ensures that you’re prepared should the unthinkable happen; as a beneficial side effect, it also gives IT a stronger understanding of the network and of ways it might be optimized.
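One small, automatable piece of that regular testing is checking that every expected backup actually exists and is recent. A minimal sketch follows; the file-name list and the 24-hour freshness window are illustrative assumptions, not requirements from this article:

```python
import time
from pathlib import Path


def stale_backups(backup_dir: Path, expected: list[str],
                  max_age_seconds: float = 24 * 3600) -> list[str]:
    """Return the names of expected backup files that are missing
    from `backup_dir` or older than `max_age_seconds`."""
    now = time.time()
    problems = []
    for name in expected:
        path = backup_dir / name
        # A backup is a problem if it is absent or too old.
        if not path.exists() or now - path.stat().st_mtime > max_age_seconds:
            problems.append(name)
    return problems
```

A check like this would typically run on a schedule and feed whatever monitoring or alerting system the organization already uses, so that a silently failing backup job is caught long before a disaster exposes it.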
One thing you don’t want to be worried about should disaster strike is tracking your organization’s IT assets. When your IT asset management software is run in the cloud, you’re on top of the status of every piece of hardware and software, even if some of it is drowned in a flood or fried in a power surge.