After some discussions with a customer this week, I decided to do a more in-depth analysis of Azure Storage, something I have done many times before. But this time I wanted to evaluate the different options, to give people a better understanding of the impact of different configuration settings.
NOTE: I will follow up with numbers on LRS and Ephemeral disks as well.
I want to mention that during my testing I primarily used DiskSPD to benchmark IOPS and throughput, but used HD Tune Pro to evaluate access time and double-check the performance. It is important to note that each VM SKU in Azure has a QoS limit that determines the maximum theoretical throughput and IOPS a machine can achieve.
If you are using a small VM SKU with a high-IOPS disk, you might not get the IOPS you are paying for on the disk. For instance, if I have a Premium SSD v2 disk configured with 80,000 IOPS but attach it to a Standard_E2as_v5 VM, which can only deliver up to 10,000 IOPS, I am paying for a set of IOPS that I cannot use.
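In other words, the effective limit is simply the minimum of what the disk is provisioned for and what the VM SKU allows. A minimal sketch (the helper is illustrative and the figures are the ones from the example above, not an authoritative SKU table):

```python
# Effective IOPS is capped by both the disk's provisioned value and the VM SKU's QoS limit.
# Illustrative helper, not an Azure API; figures are from the example above.
def effective_iops(disk_provisioned_iops: int, vm_sku_iops_limit: int) -> int:
    """Return the IOPS the guest can actually achieve."""
    return min(disk_provisioned_iops, vm_sku_iops_limit)

# Premium SSD v2 provisioned at 80,000 IOPS, attached to a Standard_E2as_v5 (10,000 IOPS cap):
print(effective_iops(80_000, 10_000))  # 10000 -> 70,000 provisioned IOPS go unused
```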
My example environment used the following configuration.
Standard_E16as_v5, which provides me with up to 800 MB/s of throughput and 40,000 IOPS.
Then I have the following disk layout, with all disks being 1024 GB Premium SSDs (equivalent of P30) (Select a disk type for Azure IaaS VMs – managed disks – Azure Virtual Machines | Microsoft Learn). Note that each P30 disk starts with a base of 5,000 IOPS (combined read and write) and 200 MB/s of throughput, but can burst up to 30,000 IOPS and 1,000 MB/s with disk bursting enabled.
First, it is important to understand how Azure disks work. Premium SSD disks use Azure Storage as the underlying fabric to store the data. This means that all data is separate from the compute, unlike many HCI vendors today, where storage is mostly available on the same host the machine is running on.
All tests were run at least 10 times, and the results are based upon the averages of those runs.
- Premium SSD managed disks using the on-demand bursting model can burst beyond their original provisioned targets (Managed disk bursting – Azure Virtual Machines | Microsoft Learn)
- The IOPS and throughput limits for Azure Premium solid-state drives (SSD), Standard SSDs, and Standard hard disk drives (HDD) that are 513 GiB and larger can be increased by enabling performance plus (Preview – Increase performance of Premium SSDs and Standard SSD/HDDs – Azure Virtual Machines | Microsoft Learn) – and it's free!
- With Accelerated Networking, network traffic that arrives at the Azure Storage network interface (NIC) is forwarded directly to the appliance. Accelerated Networking offloads all network policies that the virtual switch applied and applies them in hardware instead (currently in preview)
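To make the 513 GiB floor for performance plus concrete, here is a small hypothetical helper. The SKU name strings are an assumption modeled on the managed-disk SKU names the az CLI accepts (e.g. Premium_ZRS); this is not an Azure SDK call:

```python
# Performance plus (preview) applies to Premium SSD, Standard SSD and Standard HDD
# managed disks of 513 GiB or larger. Illustrative helper, not part of any Azure SDK.
PERFORMANCE_PLUS_MIN_GIB = 513

# Assumption: managed-disk SKU names as passed to `az disk create --sku`.
ELIGIBLE_SKUS = {"Premium_LRS", "Premium_ZRS",
                 "StandardSSD_LRS", "StandardSSD_ZRS", "Standard_LRS"}

def eligible_for_performance_plus(size_gib: int, sku: str) -> bool:
    """Check the size floor and disk type for the performance plus preview."""
    return size_gib >= PERFORMANCE_PLUS_MIN_GIB and sku in ELIGIBLE_SKUS

print(eligible_for_performance_plus(1024, "Premium_ZRS"))  # True: the 1024 GiB disks used in this post
print(eligible_for_performance_plus(512, "Premium_ZRS"))   # False: below the 513 GiB floor
```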
| Disk type | Max Write I/O per sec | Max Throughput (MB/s) | Max Read I/O per sec | Access time (ms) |
| --- | --- | --- | --- | --- |
| D: Premium SSD ZRS – Bursting + Performance Plus + Accelerated Network + Read/Write Cache | 30,596 | 672 | 32,718 | 0.079 |
| E: Premium SSD ZRS – Bursting + Performance Plus + Accelerated Network | 30,615 | 815 | 30,473 | 1.86 |
| F: Premium SSD ZRS – Performance Plus + Accelerated Network | 8,159 | 304 | 8,158 | 1.92 |
| G: Premium SSD ZRS – Performance Plus | 8,157 | 304 | 8,157 | 2.19 |
| H: Premium SSD ZRS | 5,101 | 203 | 5,099 | 1.02 |
| I: Premium SSD ZRS – Bursting | 25,131 | 585 | 30,550 | 2.02 |
| J: Premium SSD v2 (default: 8,000 IOPS, 300 MB/s) | 8,159 | 305 | 8,160 | 0.330 |
| K: Premium SSD v2 (40,000 IOPS, 1,200 MB/s) | 40,776 | 814 | 40,769 | 0.353 |
| L: Premium SSD ZRS – Accelerated Network | 5,099 | 203 | 5,099 | 2.02 |
There are a couple of things to note here:
- I hit limits multiple times for the different disks. For instance, E: hits the VM SKU limit, since 815 MB/s is the burst limit for this VM SKU. The same happens on Premium SSD v2, where I defined 1,200 MB/s of bandwidth on the disk but still hit the VM SKU limit.
- With Accelerated Networking enabled for ZRS I actually got higher access times than when it was not enabled. This could be because the feature is still in preview, or because performance has not yet been improved for the ZRS layout.
- Premium SSD v2 provides the lowest access times and the best performance in terms of IOPS and throughput, limited only by the VM SKU type. Unfortunately, it can only be used for data disks.
- Performance plus gives a great boost to IOPS/throughput and is also free! (It is still in preview and can currently only be enabled from the command line; the commands below use the Azure CLI.)
az disk create -g resourcegroupname -n nameofdisk --size-gb 1024 --sku Premium_ZRS -l norwayeast --enable-bursting true --performance-plus true --accelerated-network true
az vm disk attach --resource-group db01_group --vm-name db01 --name nameofdisk
Here are the different DiskSPD commands that I used for the benchmark. The test file is created automatically by DiskSPD, which you can download using this PowerShell script.
Script to download DiskSPD
$zipName = "DiskSpd.zip"
$zipPath = "C:\DISKSPD"
$zipFullName = Join-Path $zipPath $zipName
$zipUrl = "https://github.com/microsoft/diskspd/releases/latest/download/" + $zipName
if (-Not (Test-Path $zipPath)) {
New-Item -Path $zipPath -ItemType Directory | Out-Null
}
Invoke-RestMethod -Uri $zipUrl -OutFile $zipFullName
Expand-Archive -Path $zipFullName -DestinationPath $zipPath
DiskSPD commands
4K random write test (IOPS):
diskspd.exe -c200G -w100 -b4K -F4 -r -o128 -W30 -d30 -Sh d:\testfile.dat
128K random write test (throughput):
diskspd.exe -c200G -w100 -b128K -F4 -r -o128 -W30 -d30 -Sh d:\testfile.dat
4K random read test (IOPS; without -w, DiskSPD defaults to 100% reads):
diskspd.exe -c200G -b4K -F4 -r -o128 -W30 -d30 -Sh d:\testfile.dat