A tutorial showing how to deploy an application from a Docker Registry into a local Kubernetes cluster created with k3s.
This tutorial shows how to publish an application and create a NodePort in Kubernetes using OpenTofu. It only takes a few minutes!
The application will be retrieved from the Docker Public Registry, and OpenTofu will instruct Kubernetes to create 3 replicas and publish its services using a LoadBalancer and/or a NodePort.
How to deploy applications in Kubernetes using OpenTofu
Install OpenTofu and a local Kubernetes cluster like K3s
Install OpenTofu and your favorite distribution of Kubernetes for local deployment, e.g. see K3s.io.
Check the Kubernetes Context
Use the command kubectl config view
to check Kubernetes cluster connectivity and the context used.
Create an OpenTofu plan file
Create an OpenTofu plan file describing the desired Kubernetes resource deployment.
Initialize OpenTofu Plan
Initialize the OpenTofu plan to download the necessary provider plugins.
Execute OpenTofu Plan and Apply
Check and apply the changes that OpenTofu will perform on the Kubernetes cluster to deploy the application.
Check that the Application has been deployed
Add a Load Balancer or a NodePort to access the Application
Add a Kubernetes Load Balancer or NodePort to access the application web server port from your local machine.
Access the Application with a web browser
Access the Application from a web browser.
Destroy the Kubernetes Application using OpenTofu
Use OpenTofu to remove the Kubernetes resources created by the application.
See the How to Install OpenTofu tutorial to install the latest OpenTofu release.
This tutorial is fully compatible with Terraform: if you prefer to use Terraform, replace tofu with terraform in the commands.
If you already have a local or remote Kubernetes cluster, skip the following instructions for installing a local Kubernetes cluster.
See the How to Install K3s tutorial to install K3s.io, a lightweight Kubernetes cluster.
Check that the Kubernetes cluster is running by requesting information about the nodes:
$ kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
xps13   Ready    control-plane,master   8m23s   v1.27.7+k3s1
Check the name of the Kubernetes context by requesting information about the kubeconfig file:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
In this example the Kubernetes context name is default.
Create a Terraform file named color_app.tf that describes the required OpenTofu version, the Kubernetes provider and its configuration, and the desired state of the application and resources to deploy in Kubernetes using OpenTofu.
Set config_path to the kubeconfig file path if it differs from the standard location (for K3s, config_path = "/etc/rancher/k3s/k3s.yaml") and set config_context if the context to use is not the default one. See the Kubernetes Terraform provider documentation for other provider options.
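As a sketch, a provider block pointing at a non-default kubeconfig and context could look like this (the path and context name below are placeholders, not values from this tutorial):

```hcl
provider "kubernetes" {
  # Kubeconfig file in a non-standard location (placeholder path)
  config_path = "~/.kube/config"

  # Context to use when the kubeconfig defines more than one (placeholder name)
  config_context = "my-k3s-context"
}
```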
This example deploys an application named color from the itwonderlab/color Docker repository. The color app uses the environment variable COLOR
to change the background color of the web page it serves. Adjust the "kubernetes_deployment" resource and the name of the containerized application to your needs.
# Copyright (C) 2018 - 2023 IT Wonder Lab (https://www.itwonderlab.com)
#
# This software may be modified and distributed under the terms
# of the MIT license. See the LICENSE file for details.

# -------------------------------- WARNING --------------------------------
# IT Wonder Lab's best practices for infrastructure include modularizing
# Terraform configuration. In this example, we define everything in a
# single file. See other tutorials for Terraform best practices for
# Kubernetes deployments.
# -------------------------------- WARNING --------------------------------

terraform {
  required_version = "> 1.5"
}

#-----------------------------------------
# Default provider: Kubernetes
#-----------------------------------------
provider "kubernetes" {
  # Kubeconfig file; if using K3s set the path
  config_path = "/etc/rancher/k3s/k3s.yaml"

  # Context to choose from the config file. Change if not default.
  # config_context = "local-k3s"
}

#-----------------------------------------
# KUBERNETES: Deploy App
#-----------------------------------------
resource "kubernetes_deployment" "color" {
  metadata {
    name = "color-blue-dep"
    labels = {
      app   = "color"
      color = "blue"
    } //labels
  } //metadata
  spec {
    selector {
      match_labels = {
        app   = "color"
        color = "blue"
      } //match_labels
    } //selector

    # Number of replicas
    replicas = 3

    # Template for the creation of the pod
    template {
      metadata {
        labels = {
          app   = "color"
          color = "blue"
        } //labels
      } //metadata
      spec {
        container {
          image = "itwonderlab/color" # Docker image name
          name  = "color-blue"        # Name of the container, specified as a DNS_LABEL.
                                      # Each container in a pod must have a unique name.

          # Name/value pairs to set in the container's environment
          env {
            name  = "COLOR"
            value = "blue"
          } //env

          # Ports to expose from the container
          port {
            container_port = 8080
          } //port

          resources {
            requests = {
              cpu    = "250m"
              memory = "50Mi"
            } //requests
          } //resources
        } //container
      } //spec
    } //template
  } //spec
} //resource
OpenTofu needs to download the providers used by the plan into the .terraform directory. Run tofu init
to initialize.
$ tofu init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/kubernetes...
- Installing hashicorp/kubernetes v2.23.0...
- Installed hashicorp/kubernetes v2.23.0 (signed, key ID 0DC64ED093B3E9FF)

Providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://opentofu.org/docs/cli/plugins/signing/

OpenTofu has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that OpenTofu can guarantee to make the same selections by default when
you run "tofu init" in the future.

OpenTofu has been successfully initialized!

You may now begin working with OpenTofu. Try running "tofu plan" to see
any changes that are required for your infrastructure. All OpenTofu commands
should now work.

If you ever set or change modules or backend configuration for OpenTofu,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Run the OpenTofu plan command to see what resources will be created, changed, or destroyed:
$ tofu plan

OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # kubernetes_deployment.color will be created
  + resource "kubernetes_deployment" "color" {
      + id               = (known after apply)
      + wait_for_rollout = true

      + metadata {
          + generation       = (known after apply)
          + labels           = {
              + "app"   = "color"
              + "color" = "blue"
            }
          + name             = "color-blue-dep"
          + namespace        = "default"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + min_ready_seconds         = 0
          + paused                    = false
          + progress_deadline_seconds = 600
          + replicas                  = "3"
          + revision_history_limit    = 10

          + selector {
              + match_labels = {
                  + "app"   = "color"
                  + "color" = "blue"
                }
            }

          + template {
              + metadata {
                  + generation       = (known after apply)
                  + labels           = {
                      + "app"   = "color"
                      + "color" = "blue"
                    }
                  + name             = (known after apply)
                  + resource_version = (known after apply)
                  + uid              = (known after apply)
                }
              + spec {
                  + automount_service_account_token  = true
                  + dns_policy                       = "ClusterFirst"
                  + enable_service_links             = true
                  + host_ipc                         = false
                  + host_network                     = false
                  + host_pid                         = false
                  + hostname                         = (known after apply)
                  + node_name                        = (known after apply)
                  + restart_policy                   = "Always"
                  + scheduler_name                   = (known after apply)
                  + service_account_name             = (known after apply)
                  + share_process_namespace          = false
                  + termination_grace_period_seconds = 30

                  + container {
                      + image                      = "itwonderlab/color"
                      + image_pull_policy          = (known after apply)
                      + name                       = "color-blue"
                      + stdin                      = false
                      + stdin_once                 = false
                      + termination_message_path   = "/dev/termination-log"
                      + termination_message_policy = (known after apply)
                      + tty                        = false

                      + env {
                          + name  = "COLOR"
                          + value = "blue"
                        }

                      + port {
                          + container_port = 8080
                          + protocol       = "TCP"
                        }

                      + resources {
                          + limits   = (known after apply)
                          + requests = {
                              + "cpu"    = "250m"
                              + "memory" = "50Mi"
                            }
                        }
                    }
                }
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

──────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so OpenTofu can't
guarantee to take exactly these actions if you run "tofu apply" now.
Run the OpenTofu apply command to deploy the application into Kubernetes:
$ tofu apply

OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # kubernetes_deployment.color will be created
  + resource "kubernetes_deployment" "color" {
      ... (same execution plan as shown by "tofu plan") ...
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

kubernetes_deployment.color: Creating...
kubernetes_deployment.color: Still creating... [10s elapsed]
kubernetes_deployment.color: Creation complete after 15s [id=default/color-blue-dep]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Check the Kubernetes pods:
$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
color-blue-dep-8656458999-kmfpk   1/1     Running   0          4m10s
color-blue-dep-8656458999-7grbw   1/1     Running   0          4m10s
color-blue-dep-8656458999-9rbjg   1/1     Running   0          4m10s
Check an individual pod: use the NAME of one of the pods to request a detailed description.
$ kubectl describe pod/color-blue-dep-8656458999-kmfpk
Name:             color-blue-dep-8656458999-kmfpk
Namespace:        default
Priority:         0
Service Account:  default
Node:             xps13/192.168.100.105
Start Time:       Mon, 06 Nov 2023 22:53:51 +0800
Labels:           app=color
                  color=blue
                  pod-template-hash=8656458999
Annotations:      <none>
Status:           Running
IP:               10.42.0.10
IPs:
  IP:           10.42.0.10
Controlled By:  ReplicaSet/color-blue-dep-8656458999
Containers:
  color-blue:
    Container ID:   containerd://8585df64e461979b80886fe73c712d8861c24e11d5c767866592ba9d673669d0
    Image:          itwonderlab/color
    Image ID:       docker.io/itwonderlab/color@sha256:7f17f34b41590a7684b4768b8fc4ea8d3d5f111c37d934c1e5fe5ea3567edbec
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 06 Nov 2023 22:54:00 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     250m
      memory:  50Mi
    Environment:
      COLOR:  blue
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9df2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-c9df2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m36s  default-scheduler  Successfully assigned default/color-blue-dep-8656458999-kmfpk to xps13
  Normal  Pulling    4m35s  kubelet            Pulling image "itwonderlab/color"
  Normal  Pulled     4m27s  kubelet            Successfully pulled image "itwonderlab/color" in 8.070953604s (8.070960862s including waiting)
  Normal  Created    4m27s  kubelet            Created container color-blue
  Normal  Started    4m27s  kubelet            Started container color-blue
Our Color application uses port 8080 to publish its internal web server. Kubernetes exposes that port on each pod (replica) running the application, as defined in the containerPort specification of the deployment:

ports:
  - containerPort: 8080
    protocol: TCP

To access the application from outside the Kubernetes cluster, a Load Balancer or a NodePort service is needed to expose the port. Kubernetes routes service traffic to pods whose label keys and values match the service selector; in our example, the label app with the value "color" is used as the selector.
Kubernetes Load Balancer services are usually dependent on the cloud infrastructure provider; e.g. AWS and Azure each have their own external load balancer implementation.
Local Kubernetes implementations usually don't have a LoadBalancer service available, so a NodePort is used instead to publish pod ports. If using K3s, however, the default Load Balancer implementation is the Klipper Service Load Balancer, which can create a local Load Balancer. This tutorial includes both options.
Using a LoadBalancer is preferred as the Load Balancer gives access to multiple Kubernetes nodes.
K3s provides a load balancer known as ServiceLB (formerly Klipper LoadBalancer).
Ports used in the Load Balancer:
- port 18080: the port the Load Balancer service listens on.
- target_port 8080: the container port the service forwards traffic to.
- node_port 30007: the port exposed on each cluster node.
Modify the Terraform file named color_app.tf that describes the desired state of the Kubernetes resources to add a Load Balancer service using the above ports.
#-------------------------------------------------
# KUBERNETES: Add a LoadBalancer
#-------------------------------------------------
resource "kubernetes_service" "color-service-lb" {
  metadata {
    name = "color-service-lb"
  } //metadata
  spec {
    selector = {
      app = "color"
    } //selector
    session_affinity = "ClientIP"
    port {
      port        = 18080
      target_port = 8080
      node_port   = 30007
    } //port
    type = "LoadBalancer"
  } //spec
} //resource
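To have OpenTofu print the address assigned by the Load Balancer after apply, an output block can be added to color_app.tf. This is a sketch based on the kubernetes_service status attribute exported by the Kubernetes provider; the output name is an arbitrary choice:

```hcl
# Print the external IP assigned to the color Load Balancer service
# after "tofu apply" (output name is an arbitrary choice).
output "color_lb_ip" {
  description = "External IP assigned to the color LoadBalancer service"
  value       = kubernetes_service.color-service-lb.status[0].load_balancer[0].ingress[0].ip
}
```

The value can also be queried later with tofu output color_lb_ip.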
Use a NodePort when a LoadBalancer is not available.
Ports used in the NodePort:
- port 8080: the service port inside the cluster.
- node_port 30085: the port exposed on each cluster node for external access.
Modify the Terraform file named color_app.tf that describes the desired state of the Kubernetes resources to add a NodePort service that exposes the application web port to the local machine.
#-------------------------------------------------
# KUBERNETES: Add a NodePort
#-------------------------------------------------
resource "kubernetes_service" "color-service-np" {
  metadata {
    name = "color-service-np"
  } //metadata
  spec {
    selector = {
      app = "color"
    } //selector
    session_affinity = "ClientIP"
    port {
      port      = 8080
      node_port = 30085
    } //port
    type = "NodePort"
  } //spec
} //resource
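To avoid hard-coding the node port, it could be declared as a variable instead; a minimal sketch (the variable name is an assumption, and the default must stay inside the cluster's NodePort range, 30000-32767 by default):

```hcl
# Hypothetical variable to parameterize the NodePort used by the
# color application (name and default are illustrative choices).
variable "color_node_port" {
  description = "NodePort used to expose the color application"
  type        = number
  default     = 30085
}
```

Inside the service's port block, the literal value would then be replaced with node_port = var.color_node_port, and a different port could be passed at apply time with tofu apply -var="color_node_port=30090".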
Run the OpenTofu plan and apply commands again.
Plan:
$ tofu plan
kubernetes_deployment.color: Refreshing state... [id=default/color-blue-dep]

OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # kubernetes_service.color-service-lb will be created
  + resource "kubernetes_service" "color-service-lb" {
      + id                     = (known after apply)
      + status                 = (known after apply)
      + wait_for_load_balancer = true

      + metadata {
          + generation       = (known after apply)
          + name             = "color-service-lb"
          + namespace        = "default"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + allocate_load_balancer_node_ports = true
          + cluster_ip                        = (known after apply)
          + cluster_ips                       = (known after apply)
          + external_traffic_policy           = (known after apply)
          + health_check_node_port            = (known after apply)
          + internal_traffic_policy           = (known after apply)
          + ip_families                       = (known after apply)
          + ip_family_policy                  = (known after apply)
          + publish_not_ready_addresses       = false
          + selector                          = {
              + "app" = "color"
            }
          + session_affinity                  = "ClientIP"
          + type                              = "LoadBalancer"

          + port {
              + node_port   = 30007
              + port        = 18080
              + protocol    = "TCP"
              + target_port = "8080"
            }
        }
    }

  # kubernetes_service.color-service-np will be created
  + resource "kubernetes_service" "color-service-np" {
      + id                     = (known after apply)
      + status                 = (known after apply)
      + wait_for_load_balancer = true

      + metadata {
          + generation       = (known after apply)
          + name             = "color-service-np"
          + namespace        = "default"
          + resource_version = (known after apply)
          + uid              = (known after apply)
        }

      + spec {
          + allocate_load_balancer_node_ports = true
          + cluster_ip                        = (known after apply)
          + cluster_ips                       = (known after apply)
          + external_traffic_policy           = (known after apply)
          + health_check_node_port            = (known after apply)
          + internal_traffic_policy           = (known after apply)
          + ip_families                       = (known after apply)
          + ip_family_policy                  = (known after apply)
          + publish_not_ready_addresses       = false
          + selector                          = {
              + "app" = "color"
            }
          + session_affinity                  = "ClientIP"
          + type                              = "NodePort"

          + port {
              + node_port   = 30085
              + port        = 8080
              + protocol    = "TCP"
              + target_port = (known after apply)
            }
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so OpenTofu can't
guarantee to take exactly these actions if you run "tofu apply" now.
Apply:
Check the assigned ports for Load Balancer and/or the NodePort:
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)           AGE
kubernetes         ClusterIP      10.43.0.1       <none>            443/TCP           14h
color-service-lb   LoadBalancer   10.43.233.32    192.168.100.105   18080:30007/TCP   102s
color-service-np   NodePort       10.43.224.107   <none>            8080:30085/TCP    102
Open http://192.168.100.105:30007/ (for Load Balancer) or http://localhost:30085/ (for NodePort) to access the Application.
Review the value of the HostName shown on the page: it indicates which Kubernetes pod (replica) of the application is serving the web page.
Run tofu destroy to remove the application replicas and the services created in this tutorial.
$ tofu destroy
kubernetes_service.color-service-np: Refreshing state... [id=default/color-service-np]
kubernetes_service.color-service-lb: Refreshing state... [id=default/color-service-lb]
kubernetes_deployment.color: Refreshing state... [id=default/color-blue-dep]

OpenTofu used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  - destroy

OpenTofu will perform the following actions:

  # kubernetes_deployment.color will be destroyed
  - resource "kubernetes_deployment" "color" {
      - id               = "default/color-blue-dep" -> null
      - wait_for_rollout = true -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 1 -> null
          - labels           = {
              - "app"   = "color"
              - "color" = "blue"
            } -> null
          - name             = "color-blue-dep" -> null
          - namespace        = "default" -> null
          - resource_version = "6465" -> null
          - uid              = "74f3f5d6-3a3a-4afc-bc14-1e29d2730e2e" -> null
        }

      - spec {
          - min_ready_seconds         = 0 -> null
          - paused                    = false -> null
          - progress_deadline_seconds = 600 -> null
          - replicas                  = "3" -> null
          - revision_history_limit    = 10 -> null

          - selector {
              - match_labels = {
                  - "app"   = "color"
                  - "color" = "blue"
                } -> null
            }

          - strategy {
              - type = "RollingUpdate" -> null

              - rolling_update {
                  - max_surge       = "25%" -> null
                  - max_unavailable = "25%" -> null
                }
            }

          - template {
              - metadata {
                  - annotations = {} -> null
                  - generation  = 0 -> null
                  - labels      = {
                      - "app"   = "color"
                      - "color" = "blue"
                    } -> null
                }
              - spec {
                  - active_deadline_seconds          = 0 -> null
                  - automount_service_account_token  = true -> null
                  - dns_policy                       = "ClusterFirst" -> null
                  - enable_service_links             = true -> null
                  - host_ipc                         = false -> null
                  - host_network                     = false -> null
                  - host_pid                         = false -> null
                  - node_selector                    = {} -> null
                  - restart_policy                   = "Always" -> null
                  - scheduler_name                   = "default-scheduler" -> null
                  - share_process_namespace          = false -> null
                  - termination_grace_period_seconds = 30 -> null

                  - container {
                      - args                       = [] -> null
                      - command                    = [] -> null
                      - image                      = "itwonderlab/color" -> null
                      - image_pull_policy          = "Always" -> null
                      - name                       = "color-blue" -> null
                      - stdin                      = false -> null
                      - stdin_once                 = false -> null
                      - termination_message_path   = "/dev/termination-log" -> null
                      - termination_message_policy = "File" -> null
                      - tty                        = false -> null

                      - env {
                          - name  = "COLOR" -> null
                          - value = "blue" -> null
                        }

                      - port {
                          - container_port = 8080 -> null
                          - host_port      = 0 -> null
                          - protocol       = "TCP" -> null
                        }

                      - resources {
                          - limits   = {} -> null
                          - requests = {
                              - "cpu"    = "250m"
                              - "memory" = "50Mi"
                            } -> null
                        }
                    }
                }
            }
        }
    }

  # kubernetes_service.color-service-lb will be destroyed
  - resource "kubernetes_service" "color-service-lb" {
      - id                     = "default/color-service-lb" -> null
      - status                 = [
          - {
              - load_balancer = [
                  - {
                      - ingress = [
                          - {
                              - hostname = ""
                              - ip       = "192.168.100.105"
                            },
                        ]
                    },
                ]
            },
        ] -> null
      - wait_for_load_balancer = true -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {} -> null
          - name             = "color-service-lb" -> null
          - namespace        = "default" -> null
          - resource_version = "6510" -> null
          - uid              = "2675c3fc-b414-4fe0-9106-f60ca0457619" -> null
        }

      - spec {
          - allocate_load_balancer_node_ports = true -> null
          - cluster_ip                        = "10.43.184.189" -> null
          - cluster_ips                       = [
              - "10.43.184.189",
            ] -> null
          - external_ips                      = [] -> null
          - external_traffic_policy           = "Cluster" -> null
          - health_check_node_port            = 0 -> null
          - internal_traffic_policy           = "Cluster" -> null
          - ip_families                       = [
              - "IPv4",
            ] -> null
          - ip_family_policy                  = "SingleStack" -> null
          - load_balancer_source_ranges       = [] -> null
          - publish_not_ready_addresses       = false -> null
          - selector                          = {
              - "app" = "color"
            } -> null
          - session_affinity                  = "ClientIP" -> null
          - type                              = "LoadBalancer" -> null

          - port {
              - node_port   = 30007 -> null
              - port        = 18080 -> null
              - protocol    = "TCP" -> null
              - target_port = "8080" -> null
            }

          - session_affinity_config {
              - client_ip {
                  - timeout_seconds = 10800 -> null
                }
            }
        }
    }

  # kubernetes_service.color-service-np will be destroyed
  - resource "kubernetes_service" "color-service-np" {
      - id                     = "default/color-service-np" -> null
      - status                 = [
          - {
              - load_balancer = [
                  - {
                      - ingress = []
                    },
                ]
            },
        ] -> null
      - wait_for_load_balancer = true -> null

      - metadata {
          - annotations      = {} -> null
          - generation       = 0 -> null
          - labels           = {} -> null
          - name             = "color-service-np" -> null
          - namespace        = "default" -> null
          - resource_version = "6490" -> null
          - uid              = "0e8670c6-2f9a-47f0-8129-20a048ab18d0" -> null
        }

      - spec {
          - allocate_load_balancer_node_ports = true -> null
          - cluster_ip                        = "10.43.239.131" -> null
          - cluster_ips                       = [
              - "10.43.239.131",
            ] -> null
          - external_ips                      = [] -> null
          - external_traffic_policy           = "Cluster" -> null
          - health_check_node_port            = 0 -> null
          - internal_traffic_policy           = "Cluster" -> null
          - ip_families                       = [
              - "IPv4",
            ] -> null
          - ip_family_policy                  = "SingleStack" -> null
          - load_balancer_source_ranges       = [] -> null
          - publish_not_ready_addresses       = false -> null
          - selector                          = {
              - "app" = "color"
            } -> null
          - session_affinity                  = "ClientIP" -> null
          - type                              = "NodePort" -> null

          - port {
              - node_port   = 30085 -> null
              - port        = 8080 -> null
              - protocol    = "TCP" -> null
              - target_port = "8080" -> null
            }

          - session_affinity_config {
              - client_ip {
                  - timeout_seconds = 10800 -> null
                }
            }
        }
    }

Plan: 0 to add, 0 to change, 3 to destroy.

Do you really want to destroy all resources?
  OpenTofu will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

kubernetes_service.color-service-np: Destroying... [id=default/color-service-np]
kubernetes_service.color-service-lb: Destroying... [id=default/color-service-lb]
kubernetes_deployment.color: Destroying... [id=default/color-blue-dep]
kubernetes_deployment.color: Destruction complete after 0s
kubernetes_service.color-service-np: Destruction complete after 0s
kubernetes_service.color-service-lb: Destruction complete after 0s

Destroy complete! Resources: 3 destroyed.
IT Wonder Lab tutorials are based on the diverse experience of Javier Ruiz, who founded and bootstrapped a SaaS company in the energy sector. His company, later acquired by a NASDAQ traded company, managed over €2 billion per year of electricity for prominent energy producers across Europe and America. Javier has over 25 years of experience in building and managing IT companies, developing cloud infrastructure, leading cross-functional teams, and transitioning his own company from on-premises, consulting, and custom software development to a successful SaaS model that scaled globally.