If you are looking for ways to improve your RPO, or Recovery Point Objective, you have come to the right spot.
The RPO is essentially a measure of how much data you can afford to lose. Generally, the more your business depends on electronic transactions and technology, the higher your risk from data loss. There aren't many ways to improve your RPO, and they basically come down to the following:
If zero is your goal, move your application to the cloud and run it in an active-active configuration with a couple thousand miles of separation. Basically, you will want the database running on a SQL geo-cluster and replicated in real time between the hosting sites. You might still have some data loss based on the laws of physics (the data still has to travel thousands of miles), but it should be minimal.
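To get a feel for that physics floor, here is a rough sketch of the minimum round-trip time that distance alone imposes on synchronous replication. It assumes light in fiber travels about 200,000 km/s (roughly two thirds of its vacuum speed); real paths add routing, switching, and protocol overhead on top of this.

```python
# Best-case latency floor for synchronous replication, from distance alone.
# Assumes ~200,000 km/s propagation in fiber; real-world paths are slower.

FIBER_KM_PER_SEC = 200_000

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for the given distance."""
    return distance_km / FIBER_KM_PER_SEC * 2 * 1000

# A couple thousand miles of separation (~3,200 km):
print(f"{min_round_trip_ms(3200):.0f} ms")  # prints "32 ms"
```

Every synchronous write has to wait at least that long for the remote acknowledgment, which is why "pretty minimal" data loss is the honest claim rather than "zero" at continental distances.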
The other option is to stand up your own database replication between two sites. This can be costly to do on your own but will result in very little data loss; in fact, the cost may drive you to a cloud provider that offers the same functionality for less. Most likely you will need sufficient bandwidth, WAN acceleration, and replication tools to pull this off. If you do, your RPO will drop significantly, and you can rest assured that you will lose little if any data if something catastrophic happens.
If budget is a constraint, as it is for most of us, the next best thing is a backup solution that takes regular snapshots of the data. The OGO Replicator can do this every 15 minutes on any Windows server. SQL backup scripts can run at a somewhat higher frequency but come with performance trade-offs. When evaluating data vaulting options, pay close attention to snapshot frequency. You can also look at replication software like Neverfail or Doubletake to reduce data loss and improve the RPO. The downside with many of these tools is that they replicate the bad with the good: if you get a virus or corruption, it will be replicated to the other half of the solution. They also take a lot of ongoing care and feeding. In our experience, most clients pick either a Community Cloud for the database component or data vaulting with replication, and their RPO desires fall into three categories: no data loss, 15 minutes, or 24 hours.
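To put those snapshot intervals in business terms, a quick back-of-the-envelope calculation shows the worst-case exposure for each category. The transaction rate below is purely hypothetical; plug in your own.

```python
# Worst-case data loss for snapshot-based protection: if the failure lands
# just before the next snapshot, everything since the last one is at risk.
# The 200 transactions/minute figure is a hypothetical example.

def worst_case_loss(snapshot_interval_min: float, tx_per_min: float) -> float:
    """Transactions at risk in the worst-case failure window."""
    return snapshot_interval_min * tx_per_min

# 15-minute snapshots at 200 transactions/minute:
print(worst_case_loss(15, 200))   # up to 3,000 transactions at risk

# 24-hour snapshots at the same rate:
print(worst_case_loss(24 * 60, 200))   # up to 288,000 transactions at risk
```

Running the numbers this way for your own transaction volume is a good sanity check on which of the three categories you actually belong in.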
Sending Data Offsite
Once you have decided how often to snapshot the data, or whether you need replication, you will need to focus on getting the data offsite. For this you will need some sort of WAN transport tool. Different protocols work better for different solutions; for example, UDP-based transfer tools can move large volumes of data much more effectively than TCP/IP over long distances. You will need to spend some time looking at the solution, calculating your bandwidth requirements, and planning for things like latency, distance, and WAN acceleration based on what the solution requires. It is also possible that your focus is a low RPO within your production data center and you are comfortable getting the data offsite less frequently. In this mode you minimize your RPO for events like hardware failure or corruption, which are more likely than the whole facility burning to the ground. Either way, this is where the Risk Assessment is crucial for determining your objectives in setting your RPO.
The last big variable for lowering your RPO is bandwidth. The bigger your pipe and the higher the quality of the connection, the lower the RPO you can achieve in getting your data offsite. You can take snapshots quite frequently, but if you have a small connection to your hot site you may only be able to send the data once a night. Your local RPO would be small, but your offsite RPO would be large.
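A quick sanity check along those lines: can a snapshot finish uploading before the next one is due? The daily change volume, link speed, and efficiency factor below are all hypothetical figures; substitute your own measurements.

```python
# How long it takes to push one day's changed data offsite over a given
# link. All figures are hypothetical examples; the efficiency factor
# discounts for protocol overhead and link contention.

def transfer_hours(change_gb: float, link_mbps: float,
                   efficiency: float = 0.8) -> float:
    """Hours to move the changed data offsite at the given link speed."""
    megabits = change_gb * 8 * 1000          # GB -> megabits (decimal units)
    return megabits / (link_mbps * efficiency) / 3600

# 50 GB of daily change over a 10 Mbps connection:
print(f"{transfer_hours(50, 10):.1f} hours")   # prints "13.9 hours"
```

At nearly 14 hours per day of transfer, that link can only ever deliver a roughly once-a-day offsite RPO no matter how often the local snapshots run, which is exactly the local-versus-offsite gap described above.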
Our experience is that you should start with a Risk Assessment to determine the RPO goals for each system or business process. Then look at the technology choices for an individual system, or focus on the overall risk of all systems. You will probably want to be granular in your solution selection: you won't have infinite budget to simply replicate everything, and all data is not created equal. As you pick the solutions, match the offsite tools and bandwidth to each one to make sure you have the right aggregate approach. Do all that and you should be able to hit your RPOs at the micro and macro levels and sleep soundly at night.
Are you also concerned about how to calculate and plan for meeting your RTO, or Recovery Time Objective?
If you have these or other concerns, fill out this quick form and we would love to help you analyze and solve the problem.