
End to End TLS for Azure Front Door and Azure Kubernetes Service – Steven Kang

source link: https://ssbkang.com/2020/08/17/end-to-end-tls-for-azure-front-door-and-azure-kubernetes-service/

Introduction

Whilst exploring options for exposing Azure Kubernetes Service (AKS) container services publicly behind a Web Application Firewall (WAF), I found many references on how to accomplish end-to-end TLS encrypted connections between Azure Application Gateway and AKS (specifically the Application Gateway Ingress Controller, AGIC), but not with Azure Front Door (AFD).

In this post, I will share how I achieved end-to-end TLS connectivity between AFD and AKS, covering the high-level design, the issue I ran into, its resolution, and how I optimised the Azure cost.

Design

In a nutshell, the following is a high-level overview of what I wanted to achieve:

  • Custom frontend domains to expose (they don’t exist, just examples):
    • https://web.ssbkang.com
    • https://api.ssbkang.com
  • Backend pool pointing to the Azure Load Balancer that fronts the ingress controller for the AKS cluster (a sketch of the matching AKS Ingress follows this list):
    • Backend host header left empty so that the request hostname determines this value.
      For instance, the equivalent curl requests would be:
      • curl -vvv -H "Host: web.ssbkang.com" "LB IP Address"
      • curl -vvv -H "Host: api.ssbkang.com" "LB IP Address"
    • HTTPS health probe hitting /healthz using the HEAD method
  • 1 x routing rule (apply URL rewrite and/or caching if required):
    • Accepted frontend protocol: HTTPS only
    • Forwarding protocol: HTTPS only
      Note: Implementing only one routing rule minimises the Azure cost, and it can still scale out to multiple frontend domains / backend pools
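
To make the AKS side of this concrete, here is a minimal sketch of an Ingress that serves both hostnames behind the one backend pool. It is not from the original post: the Service names are hypothetical, and the ingress class annotation assumes the controller is configured with the class public-nginx-ingress (as in the HelmRelease later in this post). Because the backend host header is left empty, AFD forwards the original hostname and NGINX matches it against these host rules.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-web
  namespace: default
  annotations:
    # Newer ingress-nginx setups would use spec.ingressClassName instead
    kubernetes.io/ingress.class: public-nginx-ingress
spec:
  rules:
  - host: web.ssbkang.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app   # hypothetical Service name
            port:
              number: 80
  - host: api.ssbkang.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-app   # hypothetical Service name
            port:
              number: 80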

Issue

I was expecting everything to work smoothly; however, I was faced with the following error in the browser:

Our services aren't available right now.
We're working to restore all services as soon as possible. Please check back soon.

My first impression was that the AFD instance had not fully deployed, but when I dug into it further, I determined that the actual issue was the TLS certificate at the ingress controller level. I had deployed the ingress controller using the official NGINX Helm chart and, by default, it serves a self-signed certificate with the subject name ingress.local. Hence, when AFD health probed the backend (/healthz), the probe must have failed because the certificate subject name did not match the requested hostname.
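
A quick way to confirm this (a sanity check I would suggest, not a step from the original post) is to inspect the certificate the ingress controller actually presents for one of the frontend hostnames:

# Ask the load balancer which certificate it serves for web.ssbkang.com;
# with the default chart settings the subject comes back as CN=ingress.local.
openssl s_client -connect <LB IP address>:443 -servername web.ssbkang.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject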

Resolution

To resolve the above issue, two elements had to be amended: the ingress controller and a public DNS entry for it.

I had initially deployed the ingress controller as a Helm chart leveraging FluxCD and, whilst Googling, I found an extra argument called default-ssl-certificate. This certificate is served as the fallback for any Ingress whose manifest does not specify a tls section.

First of all, I used openssl to extract a crt file and a key file from a pfx certificate:

# Export the certificate only (no private key)
openssl pkcs12 -in wildcard.pfx -clcerts -nokeys -out tls.crt
# Export the private key, unencrypted
openssl pkcs12 -in wildcard.pfx -nocerts -nodes -out tls.key

And then created a secret manifest in my Flux repository:

apiVersion: v1
kind: Secret
metadata:
  name: public-ingress-tls
  namespace: public-ingress
type: kubernetes.io/tls
data:
  tls.crt: {CRT}   # base64-encoded contents of tls.crt
  tls.key: {KEY}   # base64-encoded contents of tls.key
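
The {CRT} and {KEY} placeholders are the base64-encoded file contents. As a side note (not from the original post), one way to generate them on Linux, or to have kubectl render the whole manifest for the Flux repository, is:

# Base64-encode the files for the data fields (single line, no wrapping)
base64 -w0 tls.crt
base64 -w0 tls.key

# Or let kubectl render the full Secret manifest
kubectl create secret tls public-ingress-tls \
  --namespace public-ingress \
  --cert=tls.crt --key=tls.key \
  --dry-run=client -o yaml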

Then I updated the HelmRelease with the default-ssl-certificate argument as below:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: public-nginx-ingress
  namespace: public-ingress
spec:
  releaseName: public-nginx-ingress
  targetNamespace: public-ingress
  chart:
    name: nginx-ingress
    version: 1.39.0
  values:
    controller:
      ingressClass: public-nginx-ingress
      useIngressClassOnly: true
      replicaCount: 3
      nodeSelector:
        beta.kubernetes.io/os: "linux"
      service:
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "false"
      extraArgs:
        default-ssl-certificate: "public-ingress/public-ingress-tls"
    defaultBackend:
      nodeSelector:
        beta.kubernetes.io/os: "linux"

Finally, I added a public DNS record for my public ingress controller, aks-public-ingress.ssbkang.com, and updated the AFD backend pool accordingly.
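
For reference (not from the original post), if the ssbkang.com zone were hosted in Azure DNS, the record could be created like this; the resource group name is hypothetical:

# Create an A record pointing the ingress hostname at the load balancer's public IP
az network dns record-set a add-record \
  --resource-group dns-rg \
  --zone-name ssbkang.com \
  --record-set-name aks-public-ingress \
  --ipv4-address <LB IP address>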

After the DNS registration had finished, voilà, everything started working. Magic 😀

Conclusion

In this blog post, I have discussed how to achieve end-to-end TLS encrypted connectivity between AFD and the AKS ingress controller, including how to optimise Azure cost at the AFD level.

AFD does in fact support TLS termination; however, because the backend (the ingress controller) cannot sit in a private VNet, i.e. it has to be a public endpoint, it is highly recommended to implement end-to-end TLS encryption.
Another security layer to consider is updating the NSG attached to the AKS subnet to accept HTTP/HTTPS traffic only from AFD (using the service tag AzureFrontDoor.Backend). This way, no one can bypass AFD and hit the ingress controller directly.
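
A sketch of such an NSG rule with the Azure CLI (the resource group and NSG names are hypothetical, and you would still want a lower-priority rule denying other inbound traffic):

# Allow HTTP/HTTPS inbound only from Azure Front Door's backend address space
az network nsg rule create \
  --resource-group aks-rg \
  --nsg-name aks-subnet-nsg \
  --name AllowAzureFrontDoorOnly \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureFrontDoor.Backend \
  --destination-port-ranges 80 443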

Hope this helped and if you have any questions or require clarifications, leave a comment 🙂

