Getting started with Azure NetApp Files


Earlier this month, Microsoft together with NetApp went GA with Azure NetApp Files, and now I’ve been fortunate enough to use it for a project.

NOTE: The Azure NetApp Files service is still only available in certain regions, and you also need to register the resource provider to get access to the service (this can be done through the CLI or Cloud Shell).

Americas | us-west2 | us-east | us-southcentral

EMEA | eu-west | eu-north

az provider register --namespace Microsoft.NetApp --wait

So what is Azure NetApp Files? It’s basically a managed file service in Azure, which can provide either an NFS v3 or SMB v3 based file volume using NetApp’s own file structure and hardware from within the Azure datacenters, which can then be accessed from within virtual networks in Azure. In many cases you might have applications or services that are dependent on NFS-based storage, such as HPC workloads, SAP or container-based applications (which of course also support Azure Storage, but Azure NetApp Files has much lower latency than regular storage accounts).

So unlike Azure Files, this also supports NFS, as well as native AD-based integration for SMB 3 authentication (so no Azure AD DS is required, and the integration is more native). Since this is a new service, it is not yet supported by Azure Backup, for instance. If you want to back up a volume that is running on NetApp Files, you need to create a snapshot of the volume, either using REST or the UI in the portal –> https://docs.microsoft.com/en-us/rest/api/netapp/snapshots/create

  • One of the guys over at NetApp has already created a way to use Logic Apps to automate the process of creating backups based upon an automated snapshot schedule –> https://github.com/kirkryan/anfScheduler
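
As a rough sketch, a snapshot can also be created from the Azure CLI. The resource names below are placeholders, and the exact parameters may vary between CLI versions (check az netappfiles snapshot create --help):

# Create a snapshot of an existing volume (all names here are illustrative)
az netappfiles snapshot create \
    --resource-group anf-rg \
    --account-name anf-account \
    --pool-name anf-pool \
    --volume-name anf-volume \
    --name dailysnapshot01 \
    --location westeurope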

Another cool thing with Azure NetApp Files is the support for NetApp Trident –> https://netapp-trident.readthedocs.io/en/stable-v18.07/kubernetes/index.html (https://netapp-trident.readthedocs.io/en/stable-v19.07/kubernetes/operations/tasks/backends/anf.html). Trident integrates natively with Kubernetes and its Persistent Volume framework to allow for provisioning and management of volumes from ANF directly from Kubernetes.
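
As a rough sketch of what that looks like in practice (assuming Trident is already installed in the cluster and an ANF-backed StorageClass named anf-standard has been defined; both names are placeholders), a persistent volume claim against ANF could look like this:

# Assumes Trident is installed and an ANF-backed StorageClass "anf-standard" exists
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: anf-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: anf-standard
EOF

Trident then provisions a matching ANF volume behind the scenes and binds it to the claim.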

When setting up Azure NetApp Files it is important to understand the storage hierarchy. Within a subscription you define an Azure NetApp account, which is stored within a region. Underneath a NetApp account you have one or more capacity pools. On a capacity pool you define the amount of storage (minimum 4 TB, maximum 500 TB) and the performance you want for that pool of storage.

Conceptual diagram of storage hierarchy

The performance tier is split into three different options, and throughput is calculated based upon the service level:

STANDARD | 16 MB/s throughput per provisioned TB
PREMIUM | 64 MB/s throughput per provisioned TB
ULTRA | 128 MB/s throughput per provisioned TB
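
As a rough sketch with the Azure CLI (resource names are placeholders, and flag names or units may differ slightly between CLI versions), the account and a capacity pool can be created like this:

# Create the NetApp account in one of the supported regions
az netappfiles account create \
    --resource-group anf-rg \
    --account-name anf-account \
    --location westeurope

# Create a 4 TB capacity pool on the Premium service level
az netappfiles pool create \
    --resource-group anf-rg \
    --account-name anf-account \
    --pool-name anf-pool \
    --location westeurope \
    --size 4 \
    --service-level Premium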

After this we need to configure a volume. A volume is the actual mount point which can be accessed, and where we configure which protocol is to be used. When configuring a volume you need to specify a subnet where the mount point will be accessible.

In order to use a subnet for NetApp volumes, you will first need to delegate the NetApp service to the subnet. I recommend that you create a dedicated subnet to place the different volumes in.

Subnet delegation
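
A rough sketch of delegating a subnet and then creating an NFS volume against it with the Azure CLI (names, address ranges and flag spellings are illustrative and may differ between CLI versions):

# Create a dedicated subnet and delegate it to Azure NetApp Files
az network vnet subnet create \
    --resource-group anf-rg \
    --vnet-name anf-vnet \
    --name anf-subnet \
    --address-prefixes 10.0.1.0/28 \
    --delegations "Microsoft.NetApp/volumes"

# Create an NFSv3 volume with a 100 GB quota in the delegated subnet
az netappfiles volume create \
    --resource-group anf-rg \
    --account-name anf-account \
    --pool-name anf-pool \
    --name anf-volume \
    --location westeurope \
    --service-level Premium \
    --usage-threshold 100 \
    --file-path "anfvolume" \
    --vnet anf-vnet \
    --subnet anf-subnet \
    --protocol-types NFSv3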

When a volume is created, you can access it directly from your VNet, from peered VNets in the same region, or from on-premises over a virtual network gateway (ExpressRoute or VPN Gateway).

NOTE: There are some restrictions that you should be aware of as well.

  • The number of IPs in use in a VNet with Azure NetApp Files (including peered VNets) cannot exceed 1000.
  • In each Azure Virtual Network (VNet), only one subnet can be delegated to Azure NetApp Files.
  • Each volume consumes one IP address, so plan the number of volumes accordingly.

When the volume is created, it is given an IP address on that specific subnet which it can be accessed on. If using NFS, I recommend that you specify a custom export policy to ensure that only authorized hosts are allowed to communicate with that specific volume.
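
A rough sketch of both restricting access and mounting the volume (the client range, volume IP and flags below are illustrative; check az netappfiles volume export-policy add --help for the exact parameters in your CLI version):

# Add an export policy rule that only allows a specific client range over NFSv3
az netappfiles volume export-policy add \
    --resource-group anf-rg \
    --account-name anf-account \
    --pool-name anf-pool \
    --volume-name anf-volume \
    --rule-index 1 \
    --allowed-clients "10.0.2.0/24" \
    --nfsv3 true \
    --cifs false \
    --unix-read-write true \
    --unix-read-only false

# On a Linux VM inside the VNet, mount the volume using the IP shown in the portal
sudo mkdir -p /mnt/anf
sudo mount -t nfs -o rw,hard,vers=3,tcp 10.0.1.4:/anfvolume /mnt/anf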

When it comes to monitoring, you don’t get a lot of information on the status of the backend infrastructure; you only have a few metrics that you can monitor on the volume side.

There is, however, no easy way to get health information on the service itself.
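
For the metrics that do exist, you can query them through Azure Monitor. A rough sketch (the resource names are placeholders, and the exact metric names are best confirmed from the list-definitions output):

# Grab the resource ID of the volume
VOLUME_ID=$(az netappfiles volume show \
    --resource-group anf-rg \
    --account-name anf-account \
    --pool-name anf-pool \
    --name anf-volume \
    --query id -o tsv)

# List which metric definitions are available for the volume
az monitor metrics list-definitions --resource "$VOLUME_ID" --output table

# Query one of them, for example the logical (consumed) size of the volume
az monitor metrics list --resource "$VOLUME_ID" --metric "VolumeLogicalSize" --output table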

 
