Using a Persistent NFS Volume for a Postgres Kubernetes Deployment (Using Vagrant, Ansible and VirtualBox)

In this tutorial I present a way to launch a Postgres database instance in our Kubernetes cluster deployed with Vagrant, Ansible and VirtualBox. The Postgres instance will store all its data on a remote NFS server, so the database data survives a Kubernetes cluster rebuild or an unintended disruption of the Postgres pod.

Persistent Volumes in Kubernetes

Kubernetes containers are mostly used for stateless applications, where each instance is disposable and does not keep data that must survive restarts or be available to client sessions, since container storage is ephemeral.

Stateful applications, on the contrary, need to store data and have it available between restarts and sessions. Databases like Postgres or MySQL are typical stateful applications.

Kubernetes provides support for many types of volumes depending on the cloud provider. For our local Kubernetes cluster, the most appropriate and easiest to configure is an NFS volume.

NFS Server Installation

If you don’t have an existing NFS server, it is easy to create a local one for our Kubernetes cluster.

Install the needed Ubuntu packages and create a local directory /mnt/vagrant-kubernetes:
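A minimal sketch, assuming Ubuntu’s nfs-kernel-server package:

    sudo apt-get update
    sudo apt-get install -y nfs-kernel-server
    # -p also creates the data subdirectory used later by the PersistentVolume
    sudo mkdir -p /mnt/vagrant-kubernetes/data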

Edit the /etc/exports file to add the exported local directory, limiting the share to the CIDR 192.168.50.0/24 used by our Kubernetes Vagrant cluster VirtualBox machines:
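For example (the export options are an assumption; rw, sync and no_subtree_check are common choices):

    # /etc/exports
    /mnt/vagrant-kubernetes 192.168.50.0/24(rw,sync,no_subtree_check)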

Start the NFS server:
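For example, on Ubuntu:

    sudo exportfs -ra                         # re-export everything in /etc/exports
    sudo systemctl restart nfs-kernel-server  # (re)start the NFS service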

Kubernetes Deployment File

The Postgres database deployment is composed of the following Kubernetes Objects:

  • ConfigMap: stores common configuration data for the Postgres database server.
  • PersistentVolume: creates an NFS client that connects to the server and makes the NFS share available for volume claims.
  • PersistentVolumeClaim: defines the characteristics of the needed volume; Kubernetes will try to satisfy it using an available PersistentVolume with the requested configuration.
  • Deployment: starts an instance of a postgres Docker container, using the supplied ConfigMap to set the Postgres configuration environment variables and mounting the PersistentVolumeClaim inside the container to replace the PostgreSQL data directory.
  • Service (NodePort): publishes the Postgres server port outside the Kubernetes cluster.

Postgres Kubernetes ConfigMap

The ConfigMap is a key-value store that is available to all Kubernetes nodes. The data will be used to set some environment variables of the Postgres container:

  • POSTGRES_DB: db (the database to be created at startup if it doesn’t exist)
  • POSTGRES_USER: user (the admin user that will be created)
  • POSTGRES_PASSWORD: pass (the password for the admin user that will be created)
  • PGDATA: /var/lib/postgresql/data/k8s (the location for the data to be used upon initialization; we will use the subdirectory k8s for this instance)
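A sketch of the corresponding manifest, using the ConfigMap name psql-itwl-dev-01-cm that the Deployment references later (the exact original file may differ):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: psql-itwl-dev-01-cm
    data:
      POSTGRES_DB: db
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      PGDATA: /var/lib/postgresql/data/k8s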

Postgres Kubernetes PersistentVolume

Defines an NFS PersistentVolume located at:

  • Server: 192.168.50.1 (The IP assigned by VirtualBox to our host)
  • Path: /mnt/vagrant-kubernetes/data (a subdirectory “data” inside the shared folder)

The volume has labels:

  • app: psql
  • ver: itwl-dev-01-pv

It can store up to 1 gigabyte, supports many clients reading and writing at the same time (access mode ReadWriteMany), and is not destroyed after use (Retain reclaim policy).
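A sketch of the PersistentVolume manifest built from the values above (the object name is an assumption):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: psql-itwl-dev-01-pv        # hypothetical object name
      labels:
        app: psql
        ver: itwl-dev-01-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain   # keep the volume after usage
      nfs:
        server: 192.168.50.1                  # the VirtualBox host IP
        path: /mnt/vagrant-kubernetes/data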


Postgres Kubernetes PersistentVolumeClaim

The PersistentVolumeClaim is the object that is assigned to the Deployment. It defines the characteristics the volume must have, and Kubernetes tries to find a corresponding PersistentVolume that satisfies all the requirements.

The PersistentVolumeClaim asks for a volume with the following labels:

  • app: psql
  • ver: itwl-dev-01-pv

It also requires an access mode of ReadWriteMany and at least 1 gigabyte of storage.
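A sketch of the claim, using the name psql-itwl-dev-01-pvc that the Deployment references later:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: psql-itwl-dev-01-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      selector:
        matchLabels:          # match the labels of the PersistentVolume above
          app: psql
          ver: itwl-dev-01-pv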


Postgres Kubernetes Deployment

A regular application deployment descriptor with the following characteristics:

A single replica will be deployed. Istio is not needed here, so the annotation sidecar.istio.io/inject: "false" prevents Istio from injecting its sidecar proxy.

The container uses the latest postgres image from the public Docker registry (https://hub.docker.com/) and:

  • Exports container port 5432.
  • Sets environment variables for the container using the Kubernetes ConfigMap psql-itwl-dev-01-cm.
  • Replaces the directory /var/lib/postgresql/data inside the Postgres container with a volume named pgdatavol.
  • Defines the pgdatavol volume as an instance of the PersistentVolumeClaim psql-itwl-dev-01-pvc defined above (see the manifest sketch after this list).
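A sketch of the Deployment manifest under these assumptions (the object name and the pod label app: psql are assumptions; the ConfigMap and claim names come from above):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: psql-itwl-dev-01            # hypothetical object name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: psql
      template:
        metadata:
          labels:
            app: psql
          annotations:
            sidecar.istio.io/inject: "false"   # keep Istio from injecting its proxy
        spec:
          containers:
            - name: postgres
              image: postgres:latest
              ports:
                - containerPort: 5432
              envFrom:
                - configMapRef:
                    name: psql-itwl-dev-01-cm  # POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD, PGDATA
              volumeMounts:
                - name: pgdatavol
                  mountPath: /var/lib/postgresql/data
          volumes:
            - name: pgdatavol
              persistentVolumeClaim:
                claimName: psql-itwl-dev-01-pvc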

Postgres Kubernetes NodePort Service

Since our local Kubernetes cluster doesn’t have a cloud-provided load balancer, we use the NodePort functionality to access published container ports.

The NodePort will publish Postgres on port 30100 of every Kubernetes master and node:

  • 192.168.50.11:30100
  • 192.168.50.12:30100
  • 192.168.50.13:30100
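A sketch of the Service manifest (the object name is an assumption; the selector assumes the pod label used in the Deployment sketch above):

    apiVersion: v1
    kind: Service
    metadata:
      name: psql-itwl-dev-01-svc        # hypothetical object name
    spec:
      type: NodePort
      selector:
        app: psql                       # route to the Postgres pods
      ports:
        - port: 5432
          targetPort: 5432
          nodePort: 30100               # published on every master and node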

Deploy Postgres in Kubernetes

Create the Kubernetes resources
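Assuming all the manifests above are collected in a single file, for example postgres.yaml (the filename is an assumption):

    kubectl apply -f postgres.yaml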

Check Kubernetes

Check that all resources have been created:
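For example:

    kubectl get configmap,pv,pvc,deployments,services,pods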

See the Kubernetes Postgres pod log:
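For example (replace the placeholder with the actual pod name reported by kubectl get pods):

    kubectl get pods                   # find the Postgres pod name
    kubectl logs <postgres-pod-name>   # show the Postgres startup log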

Test the Kubernetes Postgres database

Use the Postgres client psql (install it with apt-get install -y postgresql-client).

Password is pass
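For example, connecting through the NodePort of the first node:

    psql -h 192.168.50.11 -p 30100 -U user db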

Create a table and add data:
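A hypothetical example (the towns table is an illustration, not from the original):

    CREATE TABLE towns (id serial PRIMARY KEY, name varchar(64));
    INSERT INTO towns (name) VALUES ('Springfield'), ('Shelbyville');
    SELECT * FROM towns;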

Delete and Recreate the Postgres Instance

You can now destroy the Postgres instance and create it again; the data is preserved.
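Assuming the same postgres.yaml file used above:

    kubectl delete -f postgres.yaml
    kubectl apply -f postgres.yaml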

Use the psql client to check that the database tables have been preserved:
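For example, re-running the query against the hypothetical towns table created above:

    psql -h 192.168.50.11 -p 30100 -U user db -c "SELECT * FROM towns;"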

