Using a NodePort in a Kubernetes Cluster on top of VirtualBox

Kubernetes tutorial explaining how to use a NodePort to publish applications in a Kubernetes cluster running in VirtualBox with Vagrant and Ansible

Accessing applications deployed in a Kubernetes cluster running on VirtualBox requires a NodePort; in a cloud provider, a load balancer would be used instead.

Deploying a Kubernetes Cluster on top of VirtualBox requires a specific network configuration as seen in the Kubernetes Cluster using Vagrant and Ansible and First Steps After Kubernetes Installation tutorials.

[Figure] Networking in a Kubernetes Cluster using VirtualBox: LAN, NAT, Host-Only and Tunnel Kubernetes networks

This tutorial shows how to deploy an application and create a NodePort to publish the application outside the Kubernetes cluster.

Prerequisites:

A running Kubernetes cluster deployed on VirtualBox, as described in the Kubernetes Cluster using Vagrant and Ansible and First Steps After Kubernetes Installation tutorials.

Deploying Applications in Kubernetes

Create an application configuration file that defines the name of the application, its labels, the number of replicas, and the Docker images that are needed. Save the file as nginx.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: my-echo
          image: gcr.io/google_containers/echoserver:1.8

Apply the configuration to the Kubernetes cluster:

$ kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment created

The Docker image, in this example echoserver (deployed under the nginx label), has to be available in the official public Docker Registry. If using a local Docker image, it first needs to be uploaded to the public registry or a private registry; see Pull an Image from a Private Registry or the section Building the Color Application in the Istio Patterns: Traffic Splitting in Kubernetes (Canary Release) tutorial.
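When the image lives in a private registry, the Deployment also needs pull credentials. A minimal sketch, assuming a secret named regcred has already been created with kubectl create secret docker-registry (the secret name and registry placeholders here are illustrative, not part of this tutorial's cluster):

```yaml
# Hypothetical fragment: reference a pull secret in the pod template.
# 'regcred' must exist in the same namespace, created beforehand with:
#   kubectl create secret docker-registry regcred \
#     --docker-server=<registry> --docker-username=<user> \
#     --docker-password=<password>
spec:
  template:
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: my-echo
          image: <registry>/echoserver:1.8
```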

Publishing an Application Outside the Kubernetes Cluster

To access the echo server that we just deployed from outside the cluster, a NodePort needs to be created. The NodePort will allow us to reach each of the echo server replicas (in round-robin) from outside the cluster.

In a Kubernetes cluster hosted in a cloud provider like Azure, Google or AWS a cloud-native Load Balancer will be used instead of the NodePort.
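For reference, the cloud-provider equivalent is a Service of type LoadBalancer; a minimal sketch (the service name is illustrative, and the cloud provider provisions the load balancer and assigns the external IP):

```yaml
# Hypothetical cloud-provider variant of the service below: instead of
# opening a port on every node, the provider fills in EXTERNAL-IP.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-lb
spec:
  type: LoadBalancer
  ports:
    - port: 80          # External load balancer port
      targetPort: 8080  # Application port
      protocol: TCP
  selector:
    app: nginx
```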

Add the following definition to nginx.yaml (the file created before)

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx

Apply the configuration to the Kubernetes cluster:

$ kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment unchanged
service/nginx-service-np created

Check that the NodePort has been created:

$ kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP      10.96.0.1      <none>        443/TCP          7h55m
nginx-service-np   NodePort       10.103.75.9    <none>        8082:30000/TCP   5h43m
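When scripting, the assigned NodePort can be extracted from the kubectl get services output. A small sketch using awk on the sample output above (with a live cluster, kubectl get service nginx-service-np -o jsonpath='{.spec.ports[0].nodePort}' is the more direct route):

```shell
# Sample 'kubectl get services' rows, used here as a stand-in;
# with a live cluster, pipe the kubectl output directly instead.
services_output='kubernetes         ClusterIP      10.96.0.1      <none>        443/TCP          7h55m
nginx-service-np   NodePort       10.103.75.9    <none>        8082:30000/TCP   5h43m'

# Column 5 is PORT(S), e.g. "8082:30000/TCP"; split it on ':' and '/'
# so the second field is the NodePort.
node_port=$(printf '%s\n' "$services_output" \
  | awk '$2 == "NodePort" {split($5, p, /[:\/]/); print p[2]}')
echo "$node_port"   # 30000
```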

Access the application with curl (or with a web browser) from the development environment, using one of the Kubernetes node IPs on the VirtualBox host-only network (192.168.50.11, 192.168.50.12, 192.168.50.13) and the NodePort (30000):

$ curl http://192.168.50.11:30000/
Hostname: nginx-deployment-d7b95894f-2hpjk
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=192.168.116.0
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.11:8080/
Request Headers:
	accept=*/*
	host=192.168.50.11:30000
	user-agent=curl/7.61.0
Request Body:
	-no body in request-

Repeat the request and see that the value of Hostname changes as Kubernetes load-balances across the available pods of the application. The hostname corresponds to the NAME of each of the pods of the nginx deployment:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-d7b95894f-2hpjk   1/1     Running   1          5h44m
nginx-deployment-d7b95894f-49lrh   1/1     Running   1          5h44m
nginx-deployment-d7b95894f-wl497   1/1     Running   1          5h44m
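The round-robin behaviour can also be observed from a shell loop. A sketch, assuming the cluster from this tutorial is reachable at 192.168.50.11; the helper function only parses the Hostname line of the echoserver response:

```shell
# Pull the pod name out of an echoserver response read from stdin.
parse_hostname() { awk '/^Hostname:/ {print $2}'; }

# Hit the NodePort repeatedly and count which pod answered each request.
# --connect-timeout keeps the loop from hanging if the cluster is down.
for i in 1 2 3 4 5 6; do
  curl -s --connect-timeout 2 http://192.168.50.11:30000/ | parse_hostname
done | sort | uniq -c
```

With three replicas and six requests, each pod name should appear roughly twice in the counted output.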

Kubernetes NodePort Diagram

The diagram shows the different elements involved in publishing an application outside the Kubernetes cluster using a NodePort. The Kubernetes master, not shown, runs a proxy as well.

[Figure] NodePort traffic in a Kubernetes Cluster

Network traffic when doing curl http://192.168.50.12:30000/

  1. A request to 192.168.50.12:30000 (node 1, NodePort port) is received by the vboxnet router.
  2. It is routed to the external IP of Kubernetes node 1 on port 30000.
  3. The request is handed to the Kubernetes proxy.
  4. The Kubernetes proxy forwards the request to the cluster IP 10.103.75.9 on port 8082, which has been assigned to nginx-service-np (the Nginx NodePort service).
  5. The Nginx NodePort service sends the request to one of the available pods of the Nginx application; it can be any of the pods, located on the same Kubernetes node or on a different one (like K8N-N-2).


Javier Ruiz

IT Wonder Lab tutorials are based on the rich and diverse experience of Javier Ruiz, who founded and bootstrapped a SaaS company in the energy sector. His company, which was later acquired by a NASDAQ traded company, managed over €2 billion per year of electricity for prominent energy producers across Europe and America. Javier has more than 20 years of experience in building and managing IT companies, developing cloud infrastructure, leading cross-functional teams, and transitioning his own company from on-premises, consulting, and custom software development to a successful SaaS model that scaled globally.
