How to Deploy Applications in Kubernetes using Terraform

How to publish multiple replicas of an Application (from the Docker Registry) and create a NodePort in Kubernetes using Terraform (in 10 seconds)

Publishing Containers in Kubernetes with Terraform

Terraform is a great tool to programmatically define infrastructure (IaC or Infrastructure as Code). Since Kubernetes applications are containerized, their deployment can be done with a small Terraform configuration file that defines the resources that should be created in Kubernetes.

This tutorial shows how to publish an Application and create a NodePort in Kubernetes using Terraform. It only takes 10 seconds!

The application image will be retrieved from the Docker Public Registry, and Terraform will instruct Kubernetes to create 3 replicas and publish its service as a NodePort.

How to deploy applications in Kubernetes using Terraform

  1. Check Kubernetes Cluster Connection Context

    Check the context file using the command kubectl config view.

  2. Test Kubernetes Cluster Connectivity

    Check Kubernetes Cluster Connectivity using the command kubectl get nodes.

  3. Create a Terraform file for Application Deployment in Kubernetes

    Use the Terraform Kubernetes provider and set the config_context to use. Define a kubernetes_deployment resource with the Kubernetes metadata, specs, and container (Docker) image.

  4. Initialize Terraform Kubernetes Provider

    Initialize the Terraform Providers (Terraform Kubernetes provider) with the terraform init command. It will download the required plugins.

  5. Publish the Application In Kubernetes and its NodePort with Terraform

    Run terraform plan and terraform apply to Publish the Application In Kubernetes and create a NodePort (if needed).

  6. Use the Kubernetes Dashboard to Review the Deployment

    The Kubernetes Dashboard shows the number of replicas (Pods) of the application and a Service with the NodePort used to access the application from outside the Kubernetes cluster.

Demo: Publishing Containers in Kubernetes with Terraform:

This demo shows how Terraform is used to deploy an application image from the Docker Public Registry into Kubernetes.


Prerequisites

Check Kubernetes Cluster Connection Context

For our example, we will use an existing Kubernetes cluster connection configuration available at the standard location ~/.kube/config.

The ~/.kube/config file can contain many different contexts; a context defines a cluster, a user, and a name for the context.

Check the context file using the command kubectl config view:


$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.50.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@ditwl-k8s-01
current-context: kubernetes-admin@ditwl-k8s-01
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

The context named kubernetes-admin@ditwl-k8s-01 is shown below as it appears in the ~/.kube/config file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lH....Q0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.50.11:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@ditwl-k8s-01
current-context: kubernetes-admin@ditwl-k8s-01
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJWTJsaU5WWjVrZ1V3....U4rbW9qL1l6V0NJdURnSXZBRU1NZDVIMnBOaHMvcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1XRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpWSUlFcFFJQkFBS0NBUUVBK295RXBYVTZu.....BLRVktLS0tLQo=

We will use the context kubernetes-admin@ditwl-k8s-01 in our Terraform provider definition for Kubernetes.
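
To confirm which context kubectl is currently using, run kubectl config current-context (the output below assumes the context from this tutorial; yours may differ):

$ kubectl config current-context
kubernetes-admin@ditwl-k8s-01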

If your context differs, either update the Terraform file or rename the context using the following commands:

kubectl config get-contexts
kubectl config rename-context <old-context-name> <new-context-name>

In the following example, an existing context named kubernetes-admin@kubernetes is renamed to kubernetes-admin@ditwl-k8s-01:

$ kubectl config get-contexts
CURRENT   NAME                            CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
$ kubectl config rename-context kubernetes-admin@kubernetes kubernetes-admin@ditwl-k8s-01
Context "kubernetes-admin@kubernetes" renamed to "kubernetes-admin@ditwl-k8s-01".
$ kubectl config get-contexts
CURRENT   NAME                            CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@ditwl-k8s-01   kubernetes   kubernetes-admin

Test Kubernetes Cluster Connectivity

Please make sure that your Kubernetes configuration file has the correct credentials by connecting to the cluster with the kubectl command.

jruiz@XPS13:~/git/github/terraform-kubernetes-deploy-app$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
k8s-m-1   Ready    master   3h33m   v1.18.6
k8s-n-1   Ready    <none>   3h30m   v1.18.6
k8s-n-2   Ready    <none>   3h27m   v1.18.6

Write a Terraform file for Application Deployment in Kubernetes

Create the file terraform.tf (or download it from GitHub):

# Copyright (C) 2018 - 2020 IT Wonder Lab (https://www.itwonderlab.com)
#
# This software may be modified and distributed under the terms
# of the MIT license.  See the LICENSE file for details.
# -------------------------------- WARNING --------------------------------
# IT Wonder Lab's best practices for infrastructure include modularizing 
# Terraform configuration. 
# In this example, we define everything in a single file. 
# See other tutorials for Terraform best practices for Kubernetes deployments.
# -------------------------------- WARNING --------------------------------
terraform {
  required_version = "~> 0.12" # Cannot contain interpolations. Means required version >= 0.12 and < 0.13
}
#-----------------------------------------
# Default provider: Kubernetes
#-----------------------------------------
provider "kubernetes" {
  #Context to choose from the config file.
  config_context = "kubernetes-admin@ditwl-k8s-01"
  version = "~> 1.12"
}
#-----------------------------------------
# KUBERNETES DEPLOYMENT COLOR APP
#-----------------------------------------
resource "kubernetes_deployment" "color" {
    metadata {
        name = "color-blue-dep"
        labels = {
            app   = "color"
            color = "blue"
        } //labels
    } //metadata
    
    spec {
        selector {
            match_labels = {
                app   = "color"
                color = "blue"
            } //match_labels
        } //selector
        #Number of replicas
        replicas = 3
        #Template for the creation of the pod
        template { 
            metadata {
                labels = {
                    app   = "color"
                    color = "blue"
                } //labels
            } //metadata
            spec {
                container {
                    image = "itwonderlab/color"   #Docker image name
                    name  = "color-blue"          #Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL).
                    
                    #Block of string name and value pairs to set in the container's environment
                    env { 
                        name = "COLOR"
                        value = "blue"
                    } //env
                    
                    #List of ports to expose from the container.
                    port { 
                        container_port = 8080
                    }//port          
                    
                    resources {
                        limits {
                            cpu    = "0.5"
                            memory = "512Mi"
                        } //limits
                        requests {
                            cpu    = "250m"
                            memory = "50Mi"
                        } //requests
                    } //resources
                } //container
            } //spec
        } //template
    } //spec
} //resource
#-------------------------------------------------
# KUBERNETES DEPLOYMENT COLOR SERVICE NODE PORT
#-------------------------------------------------
resource "kubernetes_service" "color-service-np" {
  metadata {
    name = "color-service-np"
  } //metadata
  spec {
    selector = {
      app = "color"
    } //selector
    session_affinity = "ClientIP"
    port {
      port      = 8080 
      node_port = 30085
    } //port
    type = "NodePort"
  } //spec
} //resource

The terraform block defines the required version for Terraform. We will be using Terraform 0.12.

The provider "kubernetes" block defines the config_context from the ~/.kube/config file that will be used, kubernetes-admin@ditwl-k8s-01, and sets the required version for the Terraform Kubernetes provider.

The kubernetes_deployment resource named color defines a Kubernetes deployment with the following properties:

  • Name = “color-blue-dep”
  • Labels
    • app=”color”
    • color=”blue”
  • Replicas: 3
  • All replicas use a container template for the PODs that:
    • Pulls the Docker image “itwonderlab/color” from the public Docker registry (https://hub.docker.com/u/itwonderlab)
    • Sets an environment variable in the Docker container COLOR with the value “blue”
    • Publishes container port 8080 (the HTTP port used by our color application)
    • Sets CPU and memory limits.

Since we are using the local Kubernetes cluster from our Kubernetes Cluster using Vagrant and Ansible tutorial, a NodePort must be created to expose the application port outside the VirtualBox network.

The kubernetes_service resource creates a NodePort Service that publishes port 8080 of the “color” app as node port 30085 on the public IP of every Kubernetes node. See Using a NodePort in a Kubernetes Cluster on top of VirtualBox for more information.

Initialize Terraform Kubernetes Providers

Initialize the Terraform Kubernetes Provider by running terraform init. It will download the required plugins. This step is needed whenever a new provider is added to the Terraform configuration.

~/git/github/terraform-kubernetes-deploy-app$ terraform init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.12.0...
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Publish the Application In Kubernetes and its NodePort with Terraform

Publish the application by applying the Terraform plan.

Run terraform plan

jruiz@XPS13:~/git/github/terraform-kubernetes-deploy-app$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # kubernetes_deployment.color will be created
  + resource "kubernetes_deployment" "color" {
      + id               = (known after apply)
      + wait_for_rollout = true
      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app"   = "color"
              + "color" = "blue"
            }
          + name             = "color-blue-dep"
          + namespace        = "default"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
      + spec {
          + min_ready_seconds         = 0
          + paused                    = false
          + progress_deadline_seconds = 600
          + replicas                  = 3
 ...
      + spec {
          + cluster_ip                  = (known after apply)
          + external_traffic_policy     = (known after apply)
          + publish_not_ready_addresses = false
          + selector                    = {
              + "app" = "color"
            }
          + session_affinity            = "ClientIP"
          + type                        = "NodePort"
          + port {
              + node_port   = 30085
              + port        = 8080
              + protocol    = "TCP"
              + target_port = (known after apply)
            }
        }
    }
Plan: 2 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Run terraform apply to make the necessary changes in the Kubernetes cluster:

jruiz@XPS13:~/git/github/terraform-kubernetes-deploy-app$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
Terraform will perform the following actions:
  # kubernetes_deployment.color will be created
  + resource "kubernetes_deployment" "color" {
      + id               = (known after apply)
      + wait_for_rollout = true
      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app"   = "color"
              + "color" = "blue"
            }
          + name             = "color-blue-dep"
          + namespace        = "default"
          + resource_version = (known after apply)
          + self_link        = (known after apply)
          + uid              = (known after apply)
        }
      + spec {
          + min_ready_seconds         = 0
          + paused                    = false
          + progress_deadline_seconds = 600
          + replicas                  = 3
...
          + template {
              + metadata {
                  + generation       = (known after apply)
                  + labels           = {
                      + "app"   = "color"
                      + "color" = "blue"
                    }
                  + name             = (known after apply)
                  + resource_version = (known after apply)
                  + self_link        = (known after apply)
                  + uid              = (known after apply)
                }
              + spec {
...
                  + container {
                      + image                    = "itwonderlab/color"
                      + image_pull_policy        = (known after apply)
                      + name                     = "color-blue"
                      + stdin                    = false
                      + stdin_once               = false
                      + termination_message_path = "/dev/termination-log"
                      + tty                      = false
                      + env {
                          + name  = "COLOR"
                          + value = "blue"
                        }
                      + port {
                          + container_port = 8080
                          + protocol       = "TCP"
                        }
 ...
    }
  # kubernetes_service.color-service-np will be created
  + resource "kubernetes_service" "color-service-np" {
      + id                    = (known after apply)
  ...
      + spec {
          + cluster_ip                  = (known after apply)
          + external_traffic_policy     = (known after apply)
          + publish_not_ready_addresses = false
          + selector                    = {
              + "app" = "color"
            }
          + session_affinity            = "ClientIP"
          + type                        = "NodePort"
          + port {
              + node_port   = 30085
              + port        = 8080
              + protocol    = "TCP"
              + target_port = (known after apply)
            }
        }
    }
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
kubernetes_service.color-service-np: Creating...
kubernetes_service.color-service-np: Creation complete after 0s [id=default/color-service-np]
kubernetes_deployment.color: Creating...
kubernetes_deployment.color: Creation complete after 8s [id=default/color-blue-dep]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

The application has been published.
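
You can also verify the new resources from the command line with kubectl. The resource names match the Terraform configuration above; the cluster IP and ages shown here are placeholders and will differ in your cluster:

$ kubectl get deployment color-blue-dep
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
color-blue-dep   3/3     3            3           30s

$ kubectl get service color-service-np
NAME               TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
color-service-np   NodePort   10.x.x.x     <none>        8080:30085/TCP   30s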

Access the App using the NodePort

Open the URL http://192.168.50.11:30085/ to access the Color App (all nodes in the Kubernetes cluster expose the same NodePort, so you can use any of the cluster node IPs, as explained in Using a NodePort in a Kubernetes Cluster on top of VirtualBox).
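
The NodePort can also be tested from a terminal with curl (the node IP is the one from our example cluster; any node IP of the cluster will answer):

$ curl http://192.168.50.11:30085/

The response is served by one of the three Color App replicas.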

Color App published in Kubernetes with Terraform

Now that you have accessed one of the Color App replicas, you can explore the changes that Terraform made in Kubernetes using the Kubernetes Dashboard. You can also modify the Terraform configuration and apply the changes to the deployed application.

Use the Kubernetes Dashboard to Review the Deployment

If you are using our local Kubernetes cluster tutorial, access the Kubernetes Dashboard at the URL https://192.168.50.11:30002/.

Kubernetes Dashboard showing the color app resources (PODs and Services)

The Dashboard shows the 3 replicas (Pods) of the application, a ReplicaSet that tells Kubernetes how many Pods it has to keep alive, and a Service with the NodePort.

Modify the number of replicas to 1

See how Terraform and Kubernetes modify the application's number of replicas without disturbing existing connections by using a rolling update strategy.

Modify the terraform.tf file to change the number of replicas:

replicas = 1
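
Optionally, and not part of the original terraform.tf, the replica count could be exposed as a Terraform (0.12) variable so it can be changed without editing the resource block. A minimal sketch, assuming a hypothetical variable named replicas:

# Hypothetical variable, added at the top of terraform.tf
variable "replicas" {
  description = "Number of Pod replicas for the color deployment"
  default     = 3
}

# Inside the kubernetes_deployment spec block, reference the variable
replicas = var.replicas

With the variable in place, the same change can be applied from the command line with terraform apply -var="replicas=1". The rest of this tutorial keeps the hard-coded value shown above.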

Run terraform plan to see what will change:

jruiz@XPS13:~/git/github/terraform-kubernetes-deploy-app$ terraform plan
Refreshing Terraform state in-memory prior to plan...
...
  ~ update in-place
Terraform will perform the following actions:
  # kubernetes_deployment.color will be updated in-place
  ~ resource "kubernetes_deployment" "color" {
        id               = "default/color-blue-dep"
        wait_for_rollout = true
        metadata {
            annotations      = {}
            generation       = 1
            labels           = {
                "app"   = "color"
                "color" = "blue"
            }
...
      ~ spec {
            min_ready_seconds         = 0
            paused                    = false
            progress_deadline_seconds = 600
          ~ replicas                  = 3 -> 1
            revision_history_limit    = 10
...
            strategy {
                type = "RollingUpdate"
                rolling_update {
                    max_surge       = "25%"
                    max_unavailable = "25%"
                }
            }
 ...
    }
Plan: 0 to add, 1 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Run terraform apply to make the change:

jruiz@XPS13:~/git/github/terraform-kubernetes-deploy-app$ terraform apply
...
Resource actions are indicated with the following symbols:
  ~ update in-place
Terraform will perform the following actions:
  # kubernetes_deployment.color will be updated in-place
  ~ resource "kubernetes_deployment" "color" {
        id               = "default/color-blue-dep"
        wait_for_rollout = true
...
      ~ spec {
            min_ready_seconds         = 0
            paused                    = false
            progress_deadline_seconds = 600
          ~ replicas                  = 3 -> 1
            revision_history_limit    = 10
...
    }
Plan: 0 to add, 1 to change, 0 to destroy.
...
kubernetes_deployment.color: Modifying... [id=default/color-blue-dep]
kubernetes_deployment.color: Modifications complete after 1s [id=default/color-blue-dep]
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Terraform has instructed Kubernetes to change the number of replicas from 3 to 1. If you look at the Dashboard or monitor the Kubernetes cluster during the change, you will notice how Kubernetes applies a rolling update.
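
To follow the rolling update from the command line instead of the Dashboard, kubectl provides a rollout status command; the deployment name is the one created earlier and the output below is approximate:

$ kubectl rollout status deployment/color-blue-dep
deployment "color-blue-dep" successfully rolled out

$ kubectl get pods -l app=color
NAME                              READY   STATUS    RESTARTS   AGE
color-blue-dep-xxxxxxxxxx-xxxxx   1/1     Running   0          5m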



