As part of Veeam Backup for Office 365 Update 4, Veeam also introduced a new feature called Object Storage Repository, which is somewhat similar to the object storage support they introduced within B&R but still entirely different at the same time. The similar part is that it allows backup data to be moved to a cloud-based object storage solution; the way Veeam pushes and stores that data, however, is different.
For Office 365 object storage, Veeam supports the following object repositories:
- S3 Compatible Object
- IBM Cloud
- Azure Blob Storage
- Amazon S3
The focus of this blog post will be on Azure Blob Storage. With Microsoft Azure, Veeam supports the following blob storage options:
| Performance tier | Account kind | Storage type | Access tiers |
|---|---|---|---|
| Standard Tier | General-purpose V2 | Blob | Hot, Cool, Archive |
| Standard Tier | General-purpose V1 | Blob | N/A |
| Standard Tier | BlobStorage | Blob (block blobs and append blobs only) | Hot, Cool, Archive |
| Premium Tier | BlockBlobStorage | Blob (block blobs and append blobs only) | N/A |
(FYI: I am still checking whether Veeam supports lifecycle policies on the storage account, so that data can be moved to the Archive tier if it is older than X days.)
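For reference, and independent of whether Veeam honours it, tiering in Azure Blob Storage is driven by a lifecycle-management policy defined on the storage account. The sketch below builds such a policy as plain JSON; the rule name and the 180-day threshold are my own placeholders. Keep in mind that tiering Veeam's backup data behind its back could break restores, so verify support before enabling anything like this.

```python
import json

# Sketch of an Azure Blob lifecycle-management policy (hypothetical rule
# name and threshold). The structure matches Azure's lifecycle rule schema:
# a list of rules, each with filters and tiering actions.
policy = {
    "rules": [
        {
            "name": "archive-old-backups",   # placeholder rule name
            "enabled": True,
            "type": "Lifecycle",
            "definition": {
                # Lifecycle rules apply to block blobs
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        # Move blobs to the Archive tier once they have not
                        # been modified for 180 days (illustrative value)
                        "tierToArchive": {
                            "daysAfterModificationGreaterThan": 180
                        }
                    }
                },
            },
        }
    ]
}

# This JSON is what you would apply to the storage account
print(json.dumps(policy, indent=2))
```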
Now this allows us to store backup data on cheaper storage within Microsoft Azure. As I mentioned previously, this cloud storage support differs from regular B&R in how data is processed and pushed to the cloud-based storage. When setting up Veeam Backup for Office 365, you have the option to extend a regular repository with an object storage repository.
With this setup, Veeam stores a local cache on the backup server but pushes all backup data directly to the object storage, which in this case is Azure. The persistent local cache is useful when you use the Veeam Explorers to open backups located in object storage: Veeam Backup for Microsoft Office 365 uses the cache to retrieve the structure of the backed-up objects of your organizations. That structure is then loaded into the navigation pane of each Veeam Explorer so that you can navigate through it without actually downloading any data from object storage.
The cache holds metadata about the backed-up objects and is created (or updated) during each backup session.
When you select a backup repository that has been extended with object storage, all data is compressed and backed up directly to object storage, and the cache is saved to the extended backup repository for consistency purposes.
Just to give an example of the cost compared to regular disks in Azure, using the Cool tier for 100 TB of backup data:
1: Disk storage in Azure: USD 1,310.72 for a 32 TB disk, which needs to be multiplied by three and would also require an additional solution such as software RAID, so roughly USD 4,000 per month for 100 TB of data.
2: Cool tier blob storage in Azure: USD 1,024.00 per month for 100 TB of data.
Now of course the Archive tier is also a possibility here, but it is not directly recommended because of the latency involved in actually restoring data from archive storage. However, reserved capacity is an option: for instance, once you reach a certain threshold and have data you know you need to keep for 5 or 10 years, you can reserve capacity and reduce the cost of Cool storage from USD 1,024.00 to USD 675.83 per month, if we choose a 3-year reserved capacity for 100 TB of storage.
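The cost figures above can be sanity-checked with some quick arithmetic (using the list prices quoted in this post, not current Azure pricing):

```python
# Monthly cost comparison for ~100 TB of backup data, using the prices
# quoted in this post (not live Azure pricing).
disk_32tb = 1310.72          # USD/month for one 32 TB managed disk
disks_needed = 3             # three 32 TB disks to reach ~100 TB
disk_total = disk_32tb * disks_needed   # the "~4,000 USD" figure

cool_blob = 1024.00          # USD/month, Cool tier, pay-as-you-go
reserved_3yr = 675.83        # USD/month, 3-year reserved capacity

savings_vs_disk = disk_total - cool_blob
reserved_discount_pct = (1 - reserved_3yr / cool_blob) * 100

print(f"Disk total:           {disk_total:.2f} USD/month")
print(f"Blob saving vs disk:  {savings_vs_disk:.2f} USD/month")
print(f"Reserved discount:    {reserved_discount_pct:.1f} %")
```

So even before reserved capacity, Cool blob storage comes in at roughly a quarter of the disk-based price, and the 3-year reservation shaves off about another third.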
Another important aspect of Update 4 is the ability to back up an Office 365 organization using multiple backup accounts to avoid throttling. Here you basically just need a set of Azure AD users; they do not need to be licensed, only to have the correct permissions.
And just to give an example, this is after I changed the configuration from a local repository to a cloud-based object storage repository. Configuration of object storage for the different providers can be viewed here –> https://helpcenter.veeam.com/docs/vbo365/guide/adding_object_storage.html?ver=40