This is a follow-up to a blog post I wrote earlier about Azure Virtual WAN (https://msandbu.org/azure-virtual-wan-and-putting-the-pieces-together/), combined with some of the new announcements that came out of Ignite.
As a quick refresher on Azure Virtual WAN: with its release, Microsoft now supports SD-WAN functionality out of the box for the “middle-mile” transport, meaning that traffic from one location to another can benefit from using Azure’s core backbone network to transport data between multiple locations. The traffic follows this pattern: branch device -> ISP -> Microsoft network edge -> Microsoft DC (hub VNet) -> Microsoft network edge -> ISP -> branch device.
There have been some updates since I last wrote about this. First off:
- Hub-to-hub support is now available in Azure VWAN Standard (hubs are all connected to each other in a virtual WAN. This implies that a branch, user, or VNet connected to a local hub can communicate with another branch or VNet using the full mesh architecture of the connected hubs. You can also connect VNets within a hub transiting through the virtual hub, as well as VNets across hubs, using the hub-to-hub connected framework.)
- Global VNet Peering is supported
- An Azure Virtual WAN hub can support up to 1,000 S2S connections, 10,000 P2S connections, and 4 ExpressRoute connections simultaneously (far more than regular S2S VPN gateways)
So what makes Azure VWAN different from a regular Site-to-Site VPN connection? It is essentially the connection flow, since all hub addresses are advertised locally to each Microsoft edge. If we, for instance, have a Site-to-Site VPN set up between a site and the hub, the connection flow for VWAN looks like this:
Branch device (Australia) -> ISP -> Microsoft edge (Australia) -> Microsoft DC (hub VNet, West Europe) -> Microsoft edge -> ISP (Norway) -> branch device.
Using a regular VPN Gateway, the flow would look like this:
Branch device (Australia) -> ISP -> Microsoft DC (Gateway VNet, West Europe) -> Microsoft edge -> ISP (Norway) -> branch device.
Meaning that the traffic flow would be a lot less efficient, since the ISP would need to route the traffic all the way from the location in Australia, using whatever route tables are configured there, to reach the gateway running in West Europe. With this you can see that Azure VWAN is a solution for the “middle mile”, i.e. handling the traffic between the user and the destination.
What is the use case for Azure Virtual WAN?
So what is the use case for Azure Virtual WAN compared to other solutions? Azure Virtual WAN is not optimized to handle internet traffic: if you have a forced tunnel on a hub that pushes all traffic across the Azure Virtual WAN, there is no option to configure a breakout for SaaS or other internet traffic. Azure Virtual WAN is only suitable for internal communication with other parts of the business network; internet traffic should still be configured as a local breakout using an SD-WAN vendor at each branch, or another solution. Compared to a regular Azure VPN Gateway, VWAN has some advantages. One is scale: it can handle more throughput, the VWAN gateway is always active/active, and it supports more connections. Another is the optimized traffic path, since all traffic goes to the closest Microsoft edge and then traverses the VWAN hub. What about traffic between two regions in Azure? There the best approach is plain Azure Global VNet Peering, which allows direct region-to-region traffic. You can mix an Azure VWAN hub with Global VNet Peering attached to the hub VNet, then have sites (branches) connected to the hub using S2S connections, which can then reach the other region over the VNet peering. If you do not need to connect branches (so no need for VPN), you do not need Azure VWAN. If you only need service publishing, use services such as Front Door or something else.
How well does Azure Virtual WAN Work?
In a test we did for a customer some months ago, we gathered numbers comparing Azure Virtual WAN to (1) regular internet traffic and (2) MPLS-based connections. In this example the customer had their main office in Oslo, the Azure Virtual WAN hub was running in West Europe with a Site-to-Site connection to that office, and we then connected multiple branches: Japan, US Washington and Australia Canberra. (It should be noted that in this example there is some penalty, since traffic needs to come to the hub in West Europe and then down to Norway again instead of taking the optimized path.)
| Branch | Azure Virtual WAN | Regular internet | MPLS |
| --- | --- | --- | --- |
| Japan | 208 ms | 292 ms | 222 ms |
| Australia | 313 ms | 367 ms | 310 ms |
| USA | 108 ms | 134 ms | 112 ms |
Here we essentially ran some basic TCP probes, with a sample of 10 tests throughout the day. Of course, if most of the services had been running directly in the West Europe datacenter, we would not get that penalty and latency would be even lower. Note that you should keep this reference as part of your documentation: https://docs.microsoft.com/en-us/azure/networking/azure-network-latency
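If you want to reproduce this kind of measurement yourself, here is a minimal sketch of a TCP connect-time probe using only Python's standard library. This is my own illustration, not the exact tooling we used for the customer test, and the host name below is a placeholder for whatever branch endpoint you want to measure:

```python
import socket
import statistics
import time


def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Open a TCP connection and return the time to connect, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only care about the handshake time, so close immediately
    return (time.perf_counter() - start) * 1000.0


def summarize(samples: list[float]) -> dict:
    """Reduce a list of latency samples to min/median/max."""
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }


# Example usage (requires network access; "example.com" is a placeholder):
# samples = [tcp_connect_ms("example.com") for _ in range(10)]
# print(summarize(samples))
```

Note that a TCP connect probe measures one round trip plus connection setup overhead, so it is a rough proxy for latency rather than a precise ICMP-style ping; running the 10 samples spread throughout the day, as we did, smooths out momentary congestion.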
And stay tuned for next week, when ThousandEyes will publish their research on cloud network performance and connectivity here –> https://t.co/F3sKFhCanK?amp=1