Getting started with Azure NetApp Files

Earlier this month, Microsoft, together with NetApp, went GA with Azure NetApp Files, and I've now been fortunate enough to use it for a project.

NOTE: The Azure NetApp Files service is still only available in certain regions, and you need to register the resource provider to get access to the service (this can be done through the CLI or Cloud Shell).

Americas | West US 2 | East US | South Central US

EMEA | West Europe | North Europe

az provider register --namespace Microsoft.NetApp --wait
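
To verify that the registration has gone through, you can query the provider state, which should return Registered:

az provider show --namespace Microsoft.NetApp --query registrationState --output tsv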

So what is Azure NetApp Files? It's basically a managed file service in Azure, which can provide either NFS v3 or SMB v3 based file volumes using NetApp's own file system and hardware from within the Azure datacenters, which can then be accessed from within virtual networks in Azure. In many cases you might have applications or services that are dependent on NFS-based storage, such as HPC workloads, SAP, or container-based applications (which of course also support Azure Storage, but ANF has much lower latency than regular storage accounts).

So unlike Azure Files, this also supports NFS, as well as native AD-based integration for SMB 3 authentication (so no Azure AD DS required, and more native integration). Since this is a new feature it is not yet supported by, for instance, Azure Backup. If you want a backup of a volume that is running on NetApp Files, you need to create a snapshot of the volume, either using REST or the UI in the portal –> https://docs.microsoft.com/en-us/rest/api/netapp/snapshots/create

  • One of the guys over at NetApp has already created a way to use Logic Apps to automate the process of creating backups based upon an automated snapshot schedule –> https://github.com/kirkryan/anfScheduler
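
If you prefer scripting snapshots yourself, the Azure CLI also exposes them under the az netappfiles group (depending on your CLI version); a sketch with placeholder resource names:

az netappfiles snapshot create --resource-group myRG --account-name myANFAccount --pool-name myPool --volume-name myVolume --name manual-snapshot-01 --location westeurope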

Another cool thing with Azure NetApp Files is the support for NetApp Trident –> https://netapp-trident.readthedocs.io/en/stable-v18.07/kubernetes/index.html (ANF backend specifics: https://netapp-trident.readthedocs.io/en/stable-v19.07/kubernetes/operations/tasks/backends/anf.html). Trident integrates natively with Kubernetes and its Persistent Volume framework to allow provisioning and management of volumes from ANF directly from Kubernetes.
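
As a rough sketch of what a Trident backend for ANF looks like (the exact fields are described in the backend docs linked above; all IDs and values here are placeholders), you define a backend file for the azure-netapp-files driver and register it with tridentctl:

cat > backend-anf.json <<'EOF'
{
    "version": 1,
    "storageDriverName": "azure-netapp-files",
    "subscriptionID": "<subscription-id>",
    "tenantID": "<tenant-id>",
    "clientID": "<service-principal-app-id>",
    "clientSecret": "<service-principal-secret>",
    "location": "westeurope",
    "serviceLevel": "Premium"
}
EOF

tridentctl create backend -f backend-anf.json -n trident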

When setting up Azure NetApp Files in Azure it is important to understand the storage hierarchy. Within a subscription you define an Azure NetApp account, which is tied to a region. Underneath a NetApp account you have one or more capacity pools. On the capacity pool you define the amount of storage (minimum 4 TB, maximum 500 TB) and the performance tier you want for that pool of storage.

Conceptual diagram of storage hierarchy
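
To make the hierarchy concrete, creating the account and a 4 TB Premium capacity pool with the Azure CLI looks roughly like this (all names are placeholders, and --size is in TB):

az netappfiles account create --resource-group myRG --name myANFAccount --location westeurope

az netappfiles pool create --resource-group myRG --account-name myANFAccount --name myPool --location westeurope --size 4 --service-level Premium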

The performance tier is split into three different options, and throughput is calculated based upon the service level and the provisioned capacity:

Standard | 16 MB/s throughput per provisioned TB
Premium | 64 MB/s throughput per provisioned TB
Ultra | 128 MB/s throughput per provisioned TB

After this we need to configure a volume. A volume is the actual mount point which can be accessed, and where we configure which protocol to use. When configuring a volume you need to specify a subnet where the mount point will be accessible.

In order to use a subnet for NetApp volumes, you will first need to delegate the subnet to the NetApp service (Microsoft.NetApp/volumes). I recommend creating a dedicated subnet to place the different volumes in.

Subnet delegation
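
Delegating a dedicated subnet with the Azure CLI could look like this (names and address prefix are placeholders):

az network vnet subnet create --resource-group myRG --vnet-name myVnet --name anf-subnet --address-prefixes 10.0.100.0/28 --delegations "Microsoft.NetApp/volumes"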

When a volume is created, you can access it directly from your VNet, from peered VNets in the same region, or from on-premises over a virtual network gateway (ExpressRoute or VPN Gateway).
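
Creating an NFS volume on that delegated subnet could then look something like this (placeholder names again; --usage-threshold is the volume quota in GB):

az netappfiles volume create --resource-group myRG --account-name myANFAccount --pool-name myPool --name myVolume --location westeurope --service-level Premium --usage-threshold 100 --file-path myvolume --vnet myVnet --subnet anf-subnet --protocol-types NFSv3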

NOTE: There are some restrictions that you should be aware of as well.

  • The number of IPs in use in a VNet with Azure NetApp Files (including peered VNets) cannot exceed 1000. This limitation still applies, and you need to open a support ticket if you plan to exceed this number.
  • In each Azure Virtual Network (VNet), only one subnet can be delegated to Azure NetApp Files.
  • A volume consumes one IP address, so plan the number of volumes accordingly.

When the volume is created, you will be given an IP address on that specific subnet which can be used to access it. If you are using NFS, I recommend specifying a custom export policy to ensure that only authorized hosts are allowed to communicate with that specific volume.
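
A sketch of restricting the export policy to a single subnet and then mounting the volume (the IP, CIDR, and names are placeholders):

az netappfiles volume export-policy add --resource-group myRG --account-name myANFAccount --pool-name myPool --name myVolume --rule-index 1 --allowed-clients "10.0.0.0/24" --nfsv3 true --cifs false --unix-read-write true --unix-read-only false

sudo mount -t nfs -o rw,hard,vers=3 10.0.100.4:/myvolume /mnt/anf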

When it comes to monitoring, you don't get a lot of information on the status of the backend infrastructure; you only have a few metrics that you can monitor on the volume side, and no easy way to get health information on the service itself.
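
The volume metrics can at least be pulled programmatically with az monitor; a sketch, assuming the ReadIops metric name and placeholder resource names:

VOLUME_ID=$(az netappfiles volume show --resource-group myRG --account-name myANFAccount --pool-name myPool --name myVolume --query id --output tsv)

az monitor metrics list --resource $VOLUME_ID --metric "ReadIops" --interval PT5M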

Performance

One of the current issues with the service shows up if you are using NetApp Trident to automatically create volumes as part of a pod, i.e. with dynamic volume claims for NetApp volumes. For instance, say you start with the lowest possible volume size, which is 100 GB, on a minimum 4 TB capacity pool. With the lowest cost tier (Standard) that gives you 0.1 TB × 16 MB/s per provisioned TB ≈ 1.6 MB/s of throughput, so if you have multiple pods you need to either adjust the size of the volume itself or adjust the tier level.

100 GB | Standard tier | 1.6 MB/s
100 GB | Premium tier | 6.4 MB/s
100 GB | Ultra tier | 12.8 MB/s

This of course scales with the amount of capacity that is allocated to the volume. But if you are in need of fast storage and small volume sizes, such as with AKS or Docker, I don't consider NetApp Files a good approach.
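
If you do hit the throughput ceiling, one way out is to grow the volume quota, since throughput scales with provisioned size. A hedged sketch, assuming az netappfiles volume update accepts --usage-threshold (in GB) the same way create does:

az netappfiles volume update --resource-group myRG --account-name myANFAccount --pool-name myPool --name myVolume --usage-threshold 500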

And it should be noted that you cannot overcommit on NetApp Files: a volume can scale to 100 TB and a capacity pool to 500 TB, and Azure NetApp Files is billed on provisioned storage capacity, not consumed capacity.

Final thing to consider

If you are using NetApp Files, or you have a vendor delivering it to you as a service through VNet peering, there is a current limitation which affects the total number of IP addresses you can have within a VNet: the number of IPs in use in a VNet with Azure NetApp Files (including peered VNets) cannot exceed 1000. This means that if you have more than 1000 IP addresses in a VNet (together with peered VNets), the NetApp Files service will drop connections and your services will lose connectivity. If you are using AKS or Docker Enterprise, which because of the overlay network can provision a large number of IP addresses, you might need to reconsider how you design your network.

 
