Sometimes you only need to restore the changes made to a VM and not the full enchilada, and change block tracking (CBT) is great for that. However, there are certain scenarios in which a full VM restore is necessary: hardware failures, major corruption or a full-on-oh-my-gosh disaster recovery event.
When it comes to full VM recovery with Veeam, there are a few different transport modes you should be aware of. I recommend that you factor these considerations into your restore process to maximize speed and efficiency. … You’re testing your restore process regularly, right? Remember, your backups are only as good as your restores, and at Mirazon we spend a lot of time reminding our clients of that.
Before you dig into the transport modes below, identify which one you’re currently leveraging so you can think through whether it’s really the best fit for your environment.
Direct storage access (Direct SAN) transport mode is in play when the backup proxy is connected directly into the SAN fabric or has access to NFS datastores. It’s the best when it comes to backup and restore speeds, since data moves straight between the storage and the backup proxy instead of going through the hypervisor’s network stack. The number-one consideration on the restore side is that you must restore as a thick disk in order to fully enjoy the benefits of your direct SAN access.
The restore will be much faster, but at the cost of storage capacity, since it writes a thick eager zeroed disk (it zeroes out the pre-allocated storage ahead of time). This is a limitation of how VMware writes back to the file system, but it’s also where the speed comes from, since the storage space is already reserved.
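To put rough numbers on that capacity trade-off, here’s a toy sketch. The sizes are made up, so plug in your own:

```python
# Illustrative only: thin vs. eager-zeroed footprint for one disk.
provisioned_gb = 2048   # disk size as provisioned to the guest
used_gb = 500           # what the guest has actually written (the thin footprint)

thin_footprint = used_gb                 # thin: you consume only what's written
eager_zeroed_footprint = provisioned_gb  # eager zeroed: full size reserved up front

print(f"Extra datastore space needed for a thick restore: "
      f"{eager_zeroed_footprint - thin_footprint} GB")
```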
Direct storage access is our preferred transport mode because of the performance gains. However, most environments run thin-provisioned disks, so it may seem backwards to want to restore as thick.
“Jason,” you may ask, “how could I possibly restore my thin disks as thick, burn all that extra space and feel good about it?” Well, the goal here is to make the restore as fast as possible, and I’m not just talking a small speed increase here — this could easily be 20 times faster. Once the server is restored and life is back to normal, you can Storage vMotion the VM back to thin. Bear in mind this process may take a while depending on the size of your VM, but if faster restores are a priority, spending some time on the back end to shrink your disks is a small price to pay. Also consider your overall storage capacity: you need the space up front for the thick disk (plus extra headroom in a disaster where you fully restore, since you’ll briefly hold two copies of the data at once), even if you plan to thin it back down afterwards.
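If you want to script that thin conversion, the move is just a Storage vMotion with a thin-provisioning transform. Here’s a minimal pyVmomi sketch; the vCenter address, credentials, VM name and datastore name are all hypothetical placeholders:

```python
# Minimal sketch: Storage vMotion a restored VM back to thin with pyVmomi.
# "vcenter.example.local", "fileserver01" and "prod-ds01" are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "fileserver01")
ds = find_by_name(vim.Datastore, "prod-ds01")

# transform='sparse' asks vSphere to thin-provision the disks at the target.
spec = vim.vm.RelocateSpec(
    datastore=ds,
    transform=vim.vm.RelocateSpec.Transformation.sparse)
vm.RelocateVM_Task(spec=spec)  # returns a task object you can poll
```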
“Well, Jason, that all sounds great for people who have direct SAN access. I don’t.” Well, then you’ll want to look at …
This mode, known as Virtual Appliance or HotAdd, uses your existing virtual proxies to perform a VMware HotAdd: attaching the disks of the VM being processed to the virtual proxy so the proxy can access them like local disks. Thick disks return optimal restore performance, but you can get away with doing thin disks — you’ll just take a performance hit. Because the disks are HotAdded to the proxy VM, the data is processed locally on the proxy before it gets sent across the network to your repository, which cuts down on the total amount of data traversing your network.
This is your best option across the board for file-level recoveries, since direct storage access doesn’t apply to those.
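Since thick versus thin matters for restore performance, it helps to know what you’re running today. Here’s a small pyVmomi sketch that inventories disk provisioning across your VMs, reusing the si/content connection from the earlier example:

```python
# Sketch: report thin vs. thick provisioning for every VM disk.
# Reuses `content` from the connection sketch above.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:  # skip inaccessible VMs with no config exposed
        continue
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # Flat VMDK backings carry a thinProvisioned flag; others (e.g. RDMs) don't.
            thin = getattr(dev.backing, "thinProvisioned", False)
            print(f"{vm.name:30} {dev.deviceInfo.label:12} "
                  f"{'thin' if thin else 'thick'}")
view.Destroy()
```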
We usually don’t recommend the Network transport mode. It’s the slowest and offers the least functionality. How it works: your Veeam server has no proxies or direct storage access, so all of its communication with VMware has to go through the hosts’ network stack.
If you don’t have proxies enabled, we need to talk. Even if you connect directly to your SAN, you can still benefit from virtual proxies for file-level restores, guest interaction and mount roles. If you’ve already got a full VMware environment, take the time to configure your proxies. Your backups and restores will thank you.
Let’s change gears a bit and focus on Veeam’s Instant VM Recovery feature, which we like to call Instant-On. It approaches the restore from a completely different angle.
Say you’re in the midst of an unplanned outage and restore. You run the math and, from your testing, you know your large file server will probably take about eight hours to restore (cue the brow sweat). What are your other options to get back up and running quicker?
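That eight-hour figure isn’t magic; it’s just data size divided by the effective restore throughput you measured in testing. A toy example with illustrative numbers:

```python
# Back-of-napkin restore-time math; the figures here are illustrative, not benchmarks.
size_tb = 4.0            # file server size
throughput_mb_s = 150.0  # effective restore rate observed in your restore tests

hours = (size_tb * 1024 * 1024) / throughput_mb_s / 3600
print(f"Estimated full restore: {hours:.1f} hours")  # ~7.8 hours with these numbers
```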
If you use Instant-On, you can boot that file server directly from your backup. It works its magic by presenting an NFS mount served up from your Veeam server, so your VMware environment sees it as just another datastore. That lets you boot the backed-up file server as production, then Storage vMotion it back to your production storage while it’s still online and being used. So yes, it works differently than your typical restore functions.
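Because that NFS mount shows up like any other datastore, you can spot it right in your inventory. A quick pyVmomi sketch, again reusing `content` from the connection example; the “VeeamBackup_” name prefix is the default we typically see, so treat it as an assumption for your setup:

```python
# Sketch: list datastores and flag what looks like Veeam's vPower NFS mount.
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    flag = "  <- vPower NFS?" if ds.name.startswith("VeeamBackup_") else ""
    print(f"{ds.name:40} {s.type:6} {s.capacity // 2**30:8} GiB{flag}")
view.Destroy()
```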
The catch we run across frequently: if you’re going to run Instant-On a lot, you must build your Veeam environment with storage powerful enough and a network fast enough to carry production during these events. If you have lots of business functions so mission critical that you need Instant-On, you’ll need hardware with production-level performance behind it.
When you design your backup system, build it around your organization’s restore needs first and restore-point retention capacity second. That means starting from your recovery time objective (RTO), your recovery point objective (RPO) and your organization’s overall tolerance for downtime or degraded performance.
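One way to make that concrete is to check each workload’s RTO against its estimated full-restore time and flag the ones that will need Instant-On (or beefier hardware). All names and numbers below are made up; feed in your own test results:

```python
# Sketch: which workloads fit their RTO with a full restore, and which don't?
workloads = [
    # (name, size in TB, RTO in hours) -- hypothetical values
    ("fileserver01", 4.0, 2.0),
    ("sql01",        1.5, 1.0),
    ("intranet",     0.3, 8.0),
]
throughput_mb_s = 150.0  # effective restore rate from your restore testing

for name, size_tb, rto_h in workloads:
    restore_h = (size_tb * 1024 * 1024) / throughput_mb_s / 3600
    verdict = "full restore fits RTO" if restore_h <= rto_h else "plan on Instant-On"
    print(f"{name:14} ~{restore_h:4.1f} h vs RTO {rto_h:4.1f} h -> {verdict}")
```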