How to create an Azure Private Link to a Load Balancer in AKS with Terraform

Andreas Lärfors • 4 minutes • 2021-11-02

Infrastructure-as-Code (IaC) is a great way of managing your infrastructure, increasing maintainability and traceability, among other things. Terraform is a very popular IaC tool, in part because its large ecosystem of providers supports so many different platforms and tools. If you're working with Azure Cloud, for example, you will probably use the Azure Terraform provider (azurerm, so named because it uses the Azure Resource Manager API).

You may need to use more than one provider in order to deploy your entire system. In this blog, we are creating an Azure Kubernetes Service (AKS) Cluster and then deploying a series of Kubernetes Resources in it. The aforementioned azurerm provider can manage Azure Cloud Resources, but its ability to deploy resources into our cluster is limited. Therefore, we are also using the Kubernetes Terraform Provider.
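As a rough sketch, declaring both providers might look something like this (the version constraints are purely illustrative and not taken from the original setup); the kubernetes provider itself is configured against the cluster further down in this post:

terraform {
  required_providers {
    # Manages Azure Cloud resources through the Azure Resource Manager API
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.80"
    }
    # Manages resources inside the Kubernetes (AKS) cluster
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}

provider "azurerm" {
  features {}
}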

In our use case, we are building an AKS cluster with strict security requirements, so it must not expose any public IP addresses. This means the Load Balancers we create will have internal IP addresses and should only be accessible from certain Virtual Networks in our Azure Cloud subscription. We achieve this connectivity via Azure Private Link Service, which allows a Private Endpoint to route traffic to a Load Balancer.

The problem#

So, to the crux of our problem: our Load Balancers are managed by the Kubernetes provider, and once they are created we need to use the azurerm Terraform provider to create the Private Link Services. In order to create a Private Link Service to a Load Balancer, we need to know the ID of the Frontend IP Configuration. This is where things get a little bit tricky, because the Azure Load Balancer and Frontend IP Configurations are created by our Kubernetes cluster when we deploy our Kubernetes Service of type LoadBalancer. The ordering therefore goes:

  1. [azurerm] Create the AKS cluster
  2. [kubernetes] Deploy Kubernetes Service of type LoadBalancer
  3. [azurerm] Fetch the ID of the Azure Load Balancer's Frontend IP Configuration created indirectly in step 2

Side note: an alternative way of creating this setup would be to create the Azure Load Balancers in Azure directly (instead of through a Kubernetes Service). We could then rely on the azurerm provider alone, and if you want to use that method, you can stop reading this blog here. The downside of this approach is that we must also create the backend configuration, routing rules, frontend IP configuration etc. that the Azure Kubernetes Service so nicely takes care of for us if we create the Load Balancer in Kubernetes. When we declare our Load Balancer in AKS, we provide the Selector and Ports configurations and AKS takes care of the rest of the required setup.
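For a sense of what that alternative involves, here is a rough sketch of the pieces you would have to manage yourself (all names and the subnet reference are placeholders; load balancing rules and health probes for every exposed port would be needed on top of this):

resource "azurerm_lb" "manual" {
  name                = "lb-manual-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku                 = "Standard"

  # The frontend IP configuration we would otherwise get for free from AKS
  frontend_ip_configuration {
    name                          = "internal-frontend"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_lb_backend_address_pool" "manual" {
  name            = "backend-pool"
  loadbalancer_id = azurerm_lb.manual.id
}

# ... plus azurerm_lb_rule and azurerm_lb_probe resources for each port,
# and a way to keep the backend pool in sync with the cluster nodes.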

[As a final note on this topic, the application being deployed here is a COTS application whose package includes the Kubernetes Service of type LoadBalancer, so we use it as supplied in order to retain the application support provided by the vendor.]

Back to our problem: Using the Kubernetes Terraform provider to deploy a service of type LoadBalancer means that the attribute reference will be missing Azure-specific resource IDs (and other Azure-specific values) that you might need when creating related objects, such as Private Links. An Azure Cloud Private Link requires a frontend IP configuration ID, for example, and the Kubernetes provider resource "kubernetes_service" exports no such attribute.
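What the kubernetes_service resource does expose, however, is the runtime status of the service, including the IP address Azure assigned to it. Using the resource names from the example further down, that looks like this (the output name here is purely illustrative):

output "prod_loadbalancer_ip" {
  # Private IP address that Azure assigned to the "prod" LoadBalancer service
  value = kubernetes_service.loadbalancer["prod"].status.0.load_balancer.0.ingress.0.ip
}

That IP address is exactly what we will use below to find the matching Azure-side object.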

The solution#

We will fetch the load balancer as an azurerm_lb data source, from which we can read the frontend IP configuration ID.
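Concretely, that data block looks like this (it also appears in the full example further down); it fetches the AKS-managed load balancer, named "kubernetes", from the cluster's node resource group:

# Get the Kubernetes LB as an azurerm object
data "azurerm_lb" "example" {
  name                = "kubernetes"
  resource_group_name = azurerm_kubernetes_cluster.example.node_resource_group
}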

Example#

Given that we are setting up two environments, "prod" and "test", in a single cluster, let's create a load balancer for each ...

variable "environments" {
  description = "List of environments"
  type        = list(string)
  default     = ["prod", "test"]
}

...

# Create a loadbalancer for each environment
resource "kubernetes_service" "loadbalancer" {
  for_each = toset(var.environments)
  metadata {
    name = "${each.key}-loadbalancer"
    ...
  }
  spec {
    selector = {
      ...
    }
    port {
      ...
    }
    type = "LoadBalancer"
  }
}
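One detail worth noting: since the cluster must not expose public IP addresses, each Service typically also carries the Azure internal load balancer annotation so that AKS creates an internal frontend. The elided metadata above might include something along these lines (this annotation is not shown in the original snippet, so treat it as an assumption about the setup):

metadata {
  name = "${each.key}-loadbalancer"
  annotations = {
    # Ask AKS to create an internal (private) load balancer frontend
    "service.beta.kubernetes.io/azure-load-balancer-internal" = "true"
  }
}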

Now we need to create a Private Link Service to each Load Balancer. This is where the magic happens (and it's pretty ugly magic at that):

resource "azurerm_private_link_service" "example" {
  for_each = kubernetes_service.loadbalancer

  name                = "pl-${each.value.metadata.0.name}"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location

  load_balancer_frontend_ip_configuration_ids = [
    data.azurerm_lb.example.frontend_ip_configuration[
      index(
        data.azurerm_lb.example.frontend_ip_configuration.*.private_ip_address,
        each.value.status.0.load_balancer.0.ingress.0.ip
      )
    ].id
  ]
  ...
}

Let's break down what's going on here. We have two representations of our load balancers:

  1. kubernetes_service.loadbalancer - the resource created by the Kubernetes provider (a Kubernetes Service of type LoadBalancer).
  2. data.azurerm_lb.example - a data object fetched after the Kubernetes provider has created the load balancers (a standard Azure Cloud Load Balancer).

Creating our azurerm_private_link_service.example resource requires the load_balancer_frontend_ip_configuration_ids argument. Only data.azurerm_lb.example has this value, but we are iterating over kubernetes_service.loadbalancer. So we need to match the frontend IP configuration in data.azurerm_lb.example to the current kubernetes_service.loadbalancer.

We achieve this using the index( ... ) function in Terraform:

index(
  data.azurerm_lb.example.frontend_ip_configuration.*.private_ip_address,
  each.value.status.0.load_balancer.0.ingress.0.ip
)

The call to index( ... ) above returns the position of the matching element: the frontend IP configuration whose private_ip_address equals the IP of the current kubernetes_service.loadbalancer being iterated over. Note how we must get the latter value from the service's status attribute.
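To illustrate with literal values (the addresses are made up purely for this example):

# index() returns the zero-based position of a value in a list
locals {
  frontend_ips = ["10.240.0.4", "10.240.0.7"]
}

# index(local.frontend_ips, "10.240.0.7") evaluates to 1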

Once we have our index value, we use it to pick out the correct frontend IP configuration object and read its id attribute:

load_balancer_frontend_ip_configuration_ids = [
  data.azurerm_lb.example.frontend_ip_configuration[
    index( ... )
  ].id
]

A bit of a messy solution to a simple problem, but such is sometimes the nature of working with multiple providers in Terraform.
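As an aside, the same lookup could be written with a for expression, which some may find easier to read; this is an equivalent rewrite, not what the original configuration uses:

load_balancer_frontend_ip_configuration_ids = [
  for fic in data.azurerm_lb.example.frontend_ip_configuration :
  fic.id
  if fic.private_ip_address == each.value.status.0.load_balancer.0.ingress.0.ip
]

One behavioural difference: index() fails with an error if no match is found, while the for expression would silently produce an empty list.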

Full example#

variables.tf

variable "environments" {
  description = "List of environments"
  type        = list(string)
  default     = ["prod", "test"]
}

kubernetes.tf

# Create the cluster
resource "azurerm_kubernetes_cluster" "example" {
  name = "aks-example"
  ...
}

# Configure the kubernetes provider
provider "kubernetes" {
  host = azurerm_kubernetes_cluster.example.kube_config.0.host

  client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
}

# Create a loadbalancer for each environment
resource "kubernetes_service" "loadbalancer" {
  for_each = toset(var.environments)
  metadata {
    name = "${each.key}-loadbalancer"
    ...
  }
  spec {
    selector = {
      ...
    }
    port {
      ...
    }
    type = "LoadBalancer"
  }
}

# Get the Kubernetes LB as an azurerm object
data "azurerm_lb" "example" {
  name                = "kubernetes"
  resource_group_name = azurerm_kubernetes_cluster.example.node_resource_group
}

# Create a private link service for each environment
resource "azurerm_private_link_service" "example" {
  for_each = kubernetes_service.loadbalancer

  name = "pl-${each.value.metadata.0.name}"
  ...

  load_balancer_frontend_ip_configuration_ids = [
    data.azurerm_lb.example.frontend_ip_configuration[
      index(
        data.azurerm_lb.example.frontend_ip_configuration.*.private_ip_address,
        each.value.status.0.load_balancer.0.ingress.0.ip
      )
    ].id
  ]

  nat_ip_configuration {
    ...
  }
}
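To complete the picture, a consumer in another virtual network can then connect to one of these private link services through a private endpoint. A rough sketch (the consumer resource group and subnet are placeholders, not part of the example above):

resource "azurerm_private_endpoint" "consumer" {
  name                = "pe-prod-loadbalancer"
  location            = azurerm_resource_group.consumer.location
  resource_group_name = azurerm_resource_group.consumer.name
  subnet_id           = azurerm_subnet.consumer.id

  private_service_connection {
    name                           = "psc-prod-loadbalancer"
    private_connection_resource_id = azurerm_private_link_service.example["prod"].id
    is_manual_connection           = false
  }
}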

Azurerm Terraform provider

Kubernetes Terraform provider

