In some recent customer projects we have been working on implementing API Management in a hub-and-spoke architecture, where Application Gateway is also part of the design for secure exposure of services located in the different spokes and in on-premises environments. One question that comes up often is: should my API endpoints be publicly available, or should they sit behind an application gateway or something else?
API Management is a service used to publish, secure, transform, maintain, and monitor APIs. It has some security features to protect against certain types of attacks, which I'll come back to in a bit.
Application Gateway provides much of the same functionality to publish, secure, transform, and monitor web services. It also adds capabilities such as load balancing and further security features through its web application firewall (WAF). Both behave like a reverse proxy. APIM provides a policy framework to manipulate requests both inbound and outbound, along with features such as rate limiting and conditional caching, while Application Gateway has more features for rewriting and manipulating traffic at the HTTP protocol level.
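To give a feel for the APIM policy framework mentioned above, here is a minimal inbound policy sketch combining rate limiting and conditional caching. The call limits and cache duration are made-up example values, not recommendations:

```xml
<policies>
    <inbound>
        <base />
        <!-- Throttle each subscription to 100 calls per 60 seconds -->
        <rate-limit calls="100" renewal-period="60" />
        <!-- Look up a cached response, varying by the Accept header -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
            <vary-by-header>Accept</vary-by-header>
        </cache-lookup>
    </inbound>
    <outbound>
        <base />
        <!-- Cache successful responses for 5 minutes -->
        <cache-store duration="300" />
    </outbound>
</policies>
```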
So, some things we need to consider.
- Redundancy and availability
By default, Application Gateway and APIM are only available within a single region (however, APIM does support Availability Zones now; see "Azure API Management support for Availability Zones now generally available" on Azure updates), unless you use the Premium tier of APIM, where the API gateway can be deployed to multiple regions. In most cases you have an API configured against some backend, either an App Service, a Function, or some other API backend, which can of course be running on multiple instances to provide scale. APIM by itself does not provide any load-balancing mechanism.
This means that you are dependent on a highly available backend, or you can use a policy mechanism such as the retry policy to send traffic to another route (see "Azure API Management advanced policies" on Microsoft Docs), but the best approach is to have some form of load balancing in front of the backend. Application Gateway can provide that load balancing for web services, giving high availability. Secondly, with this you can also add the security features of Application Gateway. One issue is that this only solves it for the APIs; we will still need a way to expose the web UI for our application, if we have one.
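The retry approach mentioned above can be sketched as a backend-section policy. The backup backend URL here is a hypothetical example:

```xml
<backend>
    <!-- Retry up to 2 times if the backend returns a 5xx error -->
    <retry condition="@(context.Response.StatusCode >= 500)" count="2" interval="1" first-fast-retry="true">
        <choose>
            <when condition="@(context.Response != null && context.Response.StatusCode >= 500)">
                <!-- Hypothetical secondary backend to fail over to -->
                <set-backend-service base-url="https://backup.example.com/api" />
            </when>
        </choose>
        <forward-request />
    </retry>
</backend>
```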
Then we would have a combination of the two that are exposed publicly.
However, if you place the API gateway within a virtual network, it only becomes accessible from within the VNET, meaning it is no longer publicly reachable. This approach means that you can use APIM for internal consumers as well. It also means that we need some other way to expose the APIs in API Management externally, and here we can use Application Gateway to publish these resources.
This allows us to combine a lot of the security features from both services. Just to give an indication of some of those security features:
- Security mechanisms
Application Gateway v2, built on NGINX, provides a lot of features when it comes to defining security policies, such as custom SSL policies and HTTP rewrite rules to correct and block abuse from someone trying to access certain URLs. There is geo-match filtering to define custom firewall block actions. Then we also have WAF features to protect against the OWASP rules:
- Protection against other common web attacks, such as command injection, HTTP request smuggling, HTTP response splitting, and remote file inclusion.
- Protection against HTTP protocol violations.
- Protection against HTTP protocol anomalies, such as missing Host, User-Agent, and Accept headers.
- Protection against crawlers and scanners.
Not all rules are directly useful for APIs; many are aimed more at websites. Lastly, Application Gateway is a reverse proxy, which means it can do SSL offloading: it terminates the SSL session in order to apply the WAF protection features.
Now there are some limitations when setting up this design with Application Gateway in front.
- Application Gateway supports HTTP/2, but only on the frontend and not toward the backend, while API Management supports HTTP/2 both ways.
- API Management supports mTLS, while Application Gateway does not, since it does SSL termination. This means it will establish a new SSL session to the backend, so it will break any type of SSL-based client authentication.
- API Management also supports Azure AD-based authentication, while Application Gateway does not.
- You will still require some form of load balancing for the backend if you are running, for instance, IaaS workloads; App Services and the like can scale automatically if configured properly.
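As an illustration of the Azure AD-based authentication that APIM supports, here is a minimal validate-jwt policy sketch; the tenant ID and audience are placeholder values you would replace with your own:

```xml
<inbound>
    <base />
    <!-- Reject requests that do not carry a valid Azure AD token -->
    <validate-jwt header-name="Authorization" failed-validation-httpcode="401"
                  failed-validation-error-message="Unauthorized">
        <!-- Placeholder tenant ID -->
        <openid-config url="https://login.microsoftonline.com/your-tenant-id/v2.0/.well-known/openid-configuration" />
        <audiences>
            <!-- Placeholder application ID URI for your API -->
            <audience>api://your-api-app-id</audience>
        </audiences>
    </validate-jwt>
</inbound>
```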
Now there are other means to do authentication, but it can be useful to have one point where all public-facing services are exposed. Also, HTTP/2 support is being rolled out to provide a full HTTP/2 gateway between frontend and backend.
Regarding load balancing, you need to figure out what kind of load balancing mechanism you need depending on the backend target.
Hybrid scenarios with the APIM self-hosted gateway
Another thing is that API Management also supports hybrid deployments using a self-hosted gateway, which means that you can control API access through a containerized gateway hosted in your own environment, while still managing the API configuration through Microsoft Azure.
Now it should be noted that this gateway serves the same APIs, just internally. If you want to use this in combination with a hub-and-spoke topology, you would need to set up a VPN connection between the existing setup and Microsoft Azure.
To set up and configure a self-hosted gateway you need to allow outbound TCP traffic on port 443 toward Azure, and since it is a container it can be run either as a standalone Docker container or on Kubernetes as a pod.
When you create a gateway within API Management you will need to download an environment file that describes the connection information. This is done from the Gateways section of the API Management portal: go into Deployment, download the environment file (env.conf), and then run the Docker command.
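The downloaded env.conf contains the configuration endpoint of your APIM instance and the gateway's access token. A sketch of what it looks like, with placeholder values for the instance name and token:

```ini
# Placeholder APIM instance name and gateway access token
config.service.endpoint=https://your-apim-instance.configuration.azure-api.net
config.service.auth=your-gateway-access-token
```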
Then start the docker container with the environment file.
docker run -d -p 80:8080 -p 443:8081 --name customer01 --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:latest
Once the deployment is done you can use docker logs <container-id> to inspect the container and see that it is connecting to API Management. However, it is important to note that it does not have any APIs attached to it yet, so you must assign APIs to the self-hosted gateway as well, which is also done via the portal.
Yes, I have a cat fact API…
Now you can check that the self-hosted gateway is downloading the API configuration by inspecting the logs using docker logs <container-id> (you can find the container ID by running docker ps), and you should see output like this.
In the portal you can also see that this API is now available on the self-hosted gateway, under the gateway info where it says "self-hosted".
Now we can query the local gateway from the same host where the Docker container is running, using the defined ports, and authenticate using the API subscription key. (NOTE: You can get the Ocp-Apim-Subscription-Key from within the Azure portal.)
curl --insecure -H "Ocp-Apim-Subscription-Key: APIMKEY" https://localhost/hybrid/images/search
And you can see that the API is working as intended from the self-hosted gateway.
APIM provides a flexible architecture, which means that it supports different designs depending on whether you want to provide internal- or external-facing API services, and it also supports hybrid scenarios. If you want to limit your publicly exposed services, you can place the API gateway behind an Application Gateway and focus security on the exposed endpoints on the Application Gateway. However, this will affect the use of other features, such as Azure AD native authentication or mTLS-based authentication. If you have APIM directly exposed, just ensure you have rate limiting or other mechanisms in place to protect your APIs.