Tutorial with full source code explaining how to create a Kubernetes cluster with Ansible and Vagrant for local development.

Ansible Playbook for Kubernetes 

Kubernetes (a.k.a. K8s) is the leading platform for container deployment and management.

In this tutorial, I present a way to launch a Kubernetes cluster made up of one master server (API, Scheduler, Controller) and n nodes (Pods, kubelet, proxy, and Docker) running project Calico to implement the Kubernetes networking model.

Vagrant is used to spin up the virtual machines using the Virtualbox hypervisor and to run the Ansible playbooks to configure the Kubernetes cluster environment. The objective is to be able to provision a development and learning cluster with many workers.

Download the code from the IT Wonder Lab public GitHub repository.

This Ansible playbook and Vagrantfile for installing Kubernetes have been possible with help from other blogs (https://kubernetes.io/blog and https://docs.projectcalico.org/). I have included links to the relevant pages in the source code.

After installing your local Kubernetes cluster, add Istio to load balance external and internal traffic, control failures, retries and routing, apply limits, monitor network traffic between services, and add secure communication to your microservices architecture. See the tutorial Installing Istio in Kubernetes with Ansible and Vagrant for local development.

Updates:

  • 26 Sep 2019: Updated Calico networking and network security to release 3.9.
  • 6 June 2019: Fixed an issue where kubectl was not able to recover logs. See the new task “Configure node-ip … at kubelet” in Ansible.

The Ansible playbook follows IT Wonder Lab best practices and can be used to configure a new Kubernetes cluster on a cloud provider or a different hypervisor, as it has no dependencies on VirtualBox or Vagrant.

Creating a Kubernetes Cluster with Vagrant and Ansible 


File Structure

The code used to create a Kubernetes Cluster with Vagrant and Ansible is composed of:

File structure of the Ansible VirtualBox Kubernetes Git repository

  • Kubernetes Ansible roles:
    • roles/add_packages: Ansible playbook to install/remove packages using APT in an Ubuntu system.
    • roles/k8s/common: installs the packages needed by Kubernetes (delegating to add_packages) and configures the settings common to the Kubernetes master and nodes.
    • roles/k8s/master: Ansible playbook to configure a Kubernetes master; it uses the common playbook for the components shared between the Kubernetes master and the nodes.
    • roles/k8s/node: Ansible playbook to configure a Kubernetes node; it uses the common playbook for the components shared between the Kubernetes master and the nodes.
    • k8s.yml: Ansible playbook that uses the Kubernetes Ansible roles (a sketch of a typical layout is shown after this list).
  • Vagrantfile: contains the definition of the machines (CPU, memory, network, Ansible playbook and properties).
  • .vagrant/: hidden directory for Vagrant tracking. It includes a Vagrant-generated inventory file, vagrant_ansible_inventory, that is used by Ansible to match virtual machines and roles.
  • roles/ditwl-k8s-01-join-command: this file is generated by the Kubernetes master and includes a temporary token and the command needed to join Kubernetes nodes to the cluster.
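
A minimal sketch of how a playbook like k8s.yml typically maps the Vagrant-generated inventory groups to the roles (the actual file in the repository may differ slightly):

# k8s.yml (illustrative sketch): apply the master role to the k8s-master group
# and the node role to the k8s-node group created by the Vagrantfile
- hosts: k8s-master
  become: yes
  roles:
    - k8s/master

- hosts: k8s-node
  become: yes
  roles:
    - k8s/node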

Kubernetes Network Overview

The Kubernetes and VirtualBox network will be composed of at least 3 networks, shown in the VirtualBox Kubernetes cluster network diagram:

Diagram: LAN, NAT, Host-Only and tunnel Kubernetes networks

Kubernetes External IPs

The VirtualBox HOST ONLY network is the network used to access the Kubernetes master and nodes from outside the cluster; it can be considered the Kubernetes public network for our development environment.

In the diagram, it is shown in green with connections to each Kubernetes machine and a VirtualBox virtual interface vboxnet:

  • K8S-M-1 at eth1: 192.168.50.11
  • K8S-N-1 at eth1: 192.168.50.12
  • K8S-N-2 at eth1: 192.168.50.13
  • vboxnet0 virtual iface: 192.168.50.1 

VirtualBox creates the necessary routes and the vboxnet0 interface:

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 enx106530cde22a
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 enx106530cde22a
192.168.50.0    0.0.0.0         255.255.255.0   U     0      0        0 vboxnet0
192.168.100.0   0.0.0.0         255.255.255.0   U     100    0        0 enx106530cde22a

Applications published using a Kubernetes NodePort will be available at all the IPs assigned to the Kubernetes servers. For example, an application published at nodePort 30000 can be accessed from outside the Kubernetes cluster at the following URLs:

  • http://192.168.50.11:30000/
  • http://192.168.50.12:30000/
  • http://192.168.50.13:30000/

See how to Publish an Application Outside Kubernetes Cluster.

It is also possible to access the Kubernetes servers by ssh using those IPs.

$ ssh vagrant@192.168.50.11
vagrant@192.168.50.11's password: vagrant
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-29-generic x86_64)
...
Last login: Mon Apr 22 16:45:17 2019 from 10.0.2.2

VirtualBox NAT Network 

The NAT network interface, with the same IP (10.0.2.15) for all servers, is assigned to the first interface of each VirtualBox machine; it is used to access the external world (LAN and Internet) from inside the Kubernetes cluster.

In the diagram, it is shown in yellow with connections to each Kubernetes machine and a NAT router that connects to the LAN and the Internet.

For example, it is used during the Kubernetes cluster configuration to download the needed packages. Since it is a NAT interface it doesn’t allow inbound connections by default.

Kubernetes POD Network

The internal connections between Kubernetes PODs use a tunnel network with IPs in the CIDR range 192.168.112.0/20 (as configured in our Ansible playbook).

In the diagram, it is shown in orange with connections to each Kubernetes machine using tunnel interfaces.

Kubernetes will assign IPs from the POD Network to each POD that it creates. POD IPs are not accessible from outside the Kubernetes cluster and will change when PODs are destroyed and created.

Kubernetes Cluster Network (Cluster-IP)

The Kubernetes Cluster Network is a private IP range used inside the cluster to give each Kubernetes service a dedicated IP. 

In the diagram, it is shown in purple.

As shown in the following example, a different CLUSTER-IP is assigned to each service:

$ kubectl get all
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-d7b95894f-2hpjk   1/1     Running   0          5m47s
pod/nginx-deployment-d7b95894f-49lrh   1/1     Running   0          5m47s
pod/nginx-deployment-d7b95894f-wl497   1/1     Running   0          5m47s

NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes         ClusterIP      10.96.0.1      <none>        443/TCP          137m
service/nginx-service-np   NodePort       10.103.75.9    <none>        8082:30000/TCP   5m47s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   3/3     3            3           5m47s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-d7b95894f   3         3         3       5m47s

Cluster-IPs can’t be accessed from outside the Kubernetes cluster; therefore, a NodePort (or a LoadBalancer in a cloud provider) is created to publish an app. The NodePort uses the Kubernetes external IPs.

See “Publish an Application outside Kubernetes Cluster” for an example of exposing an App in Kubernetes.

Vagrantfile

IMAGE_NAME = "bento/ubuntu-18.04"
K8S_NAME = "ditwl-k8s-01"
MASTERS_NUM = 1
MEM = 2048
CPU = 2
NODES_NUM = 2
IP_BASE = "192.168.50."

Vagrant.configure("2") do |config|
    config.ssh.insert_key = false

    config.vm.provider "virtualbox" do |v|
        v.memory = MEM
        v.cpus = CPU
    end

    (1..MASTERS_NUM).each do |i|      
        config.vm.define "k8s-m-#{i}" do |master|
            master.vm.box = IMAGE_NAME
            master.vm.network "private_network", ip: "#{IP_BASE}#{i + 10}"
            master.vm.hostname = "k8s-m-#{i}"
            master.vm.provision "ansible" do |ansible|
                ansible.playbook = "roles/k8s.yml"
                #Generate Ansible Groups for inventory
                ansible.groups = {
                    "k8s-master" => ["k8s-m-[1:#{MASTERS_NUM}]"],
                    "k8s-node" => ["k8s-n-[1:#{NODES_NUM}]"]
                }
                #Redefine defaults
                ansible.extra_vars = {
                    k8s_cluster_name:       K8S_NAME,                    
                    k8s_master_admin_user:  "vagrant",
                    k8s_master_admin_group: "vagrant",
                    k8s_master_apiserver_advertise_address: "#{IP_BASE}#{i + 10}",
                    k8s_master_node_name: "k8s-m-#{i}",
                    k8s_node_public_ip: "#{IP_BASE}#{i + 10}"
                }                
            end
        end
    end

    (1..NODES_NUM).each do |j|
        config.vm.define "k8s-n-#{j}" do |node|
            node.vm.box = IMAGE_NAME
            node.vm.network "private_network", ip: "#{IP_BASE}#{j + 10 + MASTERS_NUM}"
            node.vm.hostname = "k8s-n-#{j}"
            node.vm.provision "ansible" do |ansible|
                ansible.playbook = "roles/k8s.yml"
                #Generate Ansible Groups for inventory              
                ansible.groups = {
                    "k8s-master" => ["k8s-m-[1:#{MASTERS_NUM}]"],
                    "k8s-node" => ["k8s-n-[1:#{NODES_NUM}]"]
                }                    
                #Redefine defaults
                ansible.extra_vars = {
                    k8s_cluster_name:     K8S_NAME,
                    k8s_node_admin_user:  "vagrant",
                    k8s_node_admin_group: "vagrant",
                    k8s_node_public_ip: "#{IP_BASE}#{j + 10 + MASTERS_NUM}"
                }
            end
        end
    end
end    

Lines 1 to 7 define the configuration properties of the Kubernetes cluster:

  • IMAGE_NAME: the box (virtual machine image) that Vagrant will download and use to create the Kubernetes cluster. We will be using a box identified as “bento/ubuntu-18.04”, which corresponds to a minimal installation of Ubuntu 18.04 packaged by the Bento project.
  • K8S_NAME: the name of our cluster, used to identify the join-command file. In our case, the name is “ditwl-k8s-01”, an acronym for Demo IT Wonder Lab Kubernetes (a.k.a. k8s) plus 01, the cluster number.
  • MASTERS_NUM: number of master nodes; increasing it is how a high-availability Kubernetes cluster would be created (not implemented in the example Ansible code).
  • MEM: amount of memory in megabytes for each k8s node. We are giving 2048 MB to each server.
  • CPU: the number of CPUs available to each k8s node. We are giving 2 CPUs to each server.
  • NODES_NUM: number of workers; the Kubernetes nodes run the pods using a container runtime. In our demo we create 2 nodes.
  • IP_BASE: the first three octets of the IP addresses used to define the VirtualBox host network and to assign IP addresses to the external interface of the Kubernetes machines. The example uses 192.168.50. to produce the following associations:
    • 192.168.50.1 for the vboxnet0
    • 192.168.50.11 for k8s-m-1 (Kubernetes master node 1)
    • 192.168.50.12 for k8s-n-1 (Kubernetes worker node 1)
    • 192.168.50.13 for k8s-n-2 (Kubernetes worker node 2)

Lines 12 to 15 configure the memory and CPU count of the machines.

Masters are created in lines 17 to 40; a loop creates MASTERS_NUM master machines with the following characteristics:

  • The name of the machine is built with the expression “k8s-m-#{i}”, which identifies a machine as a member of Kubernetes (k8s), a master (m), and its number (the loop variable i).
  • Ansible as provisioner using the playbook “roles/k8s.yml”
  • A manually generated Ansible inventory of machines in two groups k8s-master and k8s-node using name patterns:

    • “k8s-master” => [“k8s-m-[1:#{MASTERS_NUM}]”],
    • “k8s-node” => [“k8s-n-[1:#{NODES_NUM}]”]

      The patterns are expanded and written to the Ansible inventory file. This is an Ansible-Vagrant hack: both the masters and the nodes must be listed so that the inventory is regenerated with the changes from any of the machines.
# Generated by Vagrant

k8s-n-2 ansible_host=127.0.0.1 ansible_port=2201 ansible_user='vagrant' ansible_ssh_private_key_file='/home/jruiz/.vagrant.d/insecure_private_key'
k8s-m-1 ansible_host=127.0.0.1 ansible_port=2222 ansible_user='vagrant' ansible_ssh_private_key_file='/home/jruiz/.vagrant.d/insecure_private_key'
k8s-n-1 ansible_host=127.0.0.1 ansible_port=2200 ansible_user='vagrant' ansible_ssh_private_key_file='/home/jruiz/.vagrant.d/insecure_private_key'

[k8s-master]
k8s-m-[1:1]

[k8s-node]
k8s-n-[1:2]
  • Ansible extra vars are defined to modify the default values assigned by the Ansible playbooks.
    • k8s_cluster_name: K8S_NAME
    • k8s_master_admin_user: “vagrant”
    • k8s_master_admin_group: “vagrant”
    • k8s_master_apiserver_advertise_address: “#{IP_BASE}#{i + 10}”
    • k8s_master_node_name: “k8s-m-#{i}”

Worker nodes are created in lines 42 to 64 using code similar to that used for the masters. The name of the worker nodes is built with the expression “k8s-n-#{j}”, which identifies a machine as a member of Kubernetes (k8s), a node (n), and its number (the loop variable j).

Ansible

k8s/master

The k8s/master Ansible playbook creates the Kubernetes master node. It uses the following configuration, which can be redefined in the Vagrantfile (see the Vagrantfile Ansible extra vars):

k8s_master_admin_user:  "ubuntu"
k8s_master_admin_group: "ubuntu"

k8s_master_node_name: "k8s-m"
k8s_cluster_name:     "k8s-cluster"

k8s_master_apiserver_advertise_address: "192.168.101.100"
k8s_master_pod_network_cidr: "192.168.112.0/20"

Since this is a small Kubernetes cluster, the default 192.168.0.0/16 pod network has been changed to 192.168.112.0/20 (a smaller range, 192.168.112.0 – 192.168.127.255). It can be modified either in the Ansible master defaults file or in the Vagrantfile as an extra var.

The role requires the installation of some packages that are common to master and worker Kubernetes nodes. Since there should be an Ansible role for each task, the Ansible meta folder is used to list role dependencies and pass variable values (each role has its own variables, which are assigned at this step). The master Kubernetes Ansible role has a dependency on k8s/common:

dependencies:
  - { role: k8s/common,
      k8s_common_admin_user: "{{k8s_master_admin_user}}",
      k8s_common_admin_group: "{{k8s_master_admin_group}}"
    }  

Once the dependencies are met, the playbook for the master role is executed:

#https://docs.projectcalico.org/v3.6/getting-started/kubernetes/

- name: Configure kubectl
  command: kubeadm init --apiserver-advertise-address="{{ k8s_master_apiserver_advertise_address }}" --apiserver-cert-extra-sans="{{ k8s_master_apiserver_advertise_address }}" --node-name="{{ k8s_master_node_name }}" --pod-network-cidr="{{ k8s_master_pod_network_cidr }}"
  args: 
    creates: /etc/kubernetes/manifests/kube-apiserver.yaml

- name: Create .kube dir for {{ k8s_master_admin_user }} user
  file:
      path: "/home/{{ k8s_master_admin_user }}/.kube"
      state: directory

- name: Copy kube config to {{ k8s_master_admin_user }} home .kube dir 
  copy:
    src: /etc/kubernetes/admin.conf
    dest:  /home/{{ k8s_master_admin_user }}/.kube/config
    remote_src: yes
    owner: "{{ k8s_master_admin_user }}"
    group: "{{ k8s_master_admin_group }}"
    mode: 0660

#Rewrite calico replacing defaults
#https://docs.projectcalico.org/v3.9/getting-started/kubernetes/installation/calico
- name: Rewrite calico.yaml
  template:
    src: calico/3.9/calico.yaml
    dest: /home/{{ k8s_master_admin_user }}/calico.yaml

- name: Install Calico (using Kubernetes API datastore)
  become: false
  command: kubectl apply -f /home/{{ k8s_master_admin_user }}/calico.yaml

# Step 2.6 from https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/
- name: Generate join command
  command: kubeadm token create --print-join-command
  register: join_command

- name: Copy join command for {{ k8s_cluster_name }} cluster to local file
  become: false
  local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./{{ k8s_cluster_name }}-join-command"
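
The calico.yaml template is the upstream Calico 3.9 manifest with the default pool CIDR replaced by the cluster value. The relevant fragment of the template looks roughly like this (an illustrative sketch of the templated environment variable, not the full manifest):

            # Illustrative fragment of the templated calico.yaml: the default
            # Calico pool CIDR is replaced with the Ansible variable
            - name: CALICO_IPV4POOL_CIDR
              value: "{{ k8s_master_pod_network_cidr }}"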


k8s/node

Each worker node needs to be added to the cluster by executing the join command that was generated on the master node. The join command runs kubeadm join against the API server endpoint (located on the Kubernetes master server), passing a token and a hash used to validate the root CA public key.

kubeadm join 192.168.50.11:6443 --token lmnbkq.80h4j8ez0vfktytw --discovery-token-ca-cert-hash sha256:54bbeb6b1a519700ae1f2e53c6f420vd8d4fe2d47ab4dbd7ce1a7f62c457f68a1 

The playbook to install a node is very small; it has a dependency on the k8s/common role:

dependencies:
  - { role: k8s/common,
      k8s_common_admin_user: "{{k8s_node_admin_user}}",
      k8s_common_admin_group: "{{k8s_node_admin_group}}"
    }

Specific worker node tasks:

- name: Copy the join command to {{ k8s_cluster_name }} cluster
  copy: 
    src: "./{{ k8s_cluster_name }}-join-command" 
    dest: /home/{{ k8s_node_admin_user }}/{{ k8s_cluster_name }}-join-command
    owner: "{{ k8s_node_admin_user }}"
    group: "{{ k8s_node_admin_group }}"
    mode: 0760  

- name: Join the node to cluster {{ k8s_cluster_name }}
  command: sh /home/{{ k8s_node_admin_user }}/{{ k8s_cluster_name }}-join-command
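
As written, the join task runs on every provision. A hedged variant makes it idempotent by skipping the command when the node already has a kubelet configuration; the args block below is an assumption and is not part of the original role (kubeadm join writes /etc/kubernetes/kubelet.conf on success):

- name: Join the node to cluster {{ k8s_cluster_name }}
  command: sh /home/{{ k8s_node_admin_user }}/{{ k8s_cluster_name }}-join-command
  args:
    # Skip the join if the node already has a kubelet configuration
    creates: /etc/kubernetes/kubelet.conf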


k8s/common

The k8s/common role is used by the Kubernetes master and worker nodes Ansible playbooks.

Using the Ansible meta folder, the add_packages role is added as a dependency. It installs the packages listed in the Ansible variable k8s_common_add_packages_names, along with their corresponding repositories and public keys:

dependencies:
  - { role: add_packages,
    linux_add_packages_repositories: "{{ k8s_common_add_packages_repositories }}",
    linux_add_packages_keys: "{{ k8s_common_add_packages_keys }}",
    linux_add_packages_names: "{{ k8s_common_add_packages_names }}",
    linux_remove_packages_names: "{{ k8s_common_remove_packages_names }}"
    }

Definition of variables:

k8s_common_add_packages_keys:
- key: https://download.docker.com/linux/ubuntu/gpg
- key: https://packages.cloud.google.com/apt/doc/apt-key.gpg

k8s_common_add_packages_repositories:
- repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ansible_distribution_release}} stable"
- repo: "deb https://apt.kubernetes.io/ kubernetes-xenial main" #k8s not available for Bionic (Ubuntu 18.04)

k8s_common_add_packages_names:
- name: apt-transport-https
- name: curl
- name: docker-ce
- name: docker-ce-cli 
- name: containerd.io
- name: kubeadm 
- name: kubelet 
- name: kubectl

k8s_common_remove_packages_names:
- name: 

k8s_common_admin_user:  "ubuntu"
k8s_common_admin_group: "ubuntu"

The Ansible playbook for k8s/common:

- name: Remove current swaps from fstab
  lineinfile:
    dest: /etc/fstab
    regexp: '^/[\S]+\s+none\s+swap '
    state: absent

- name: Disable swap
  command: swapoff -a
  when: ansible_swaptotal_mb > 0

- name: Add k8s_common_admin_user user to docker group
  user: 
    name: "{{ k8s_common_admin_user }}"
    group: docker

- name: Check that docker service is started
  service: 
        name: docker 
        state: started

- name: Configure node-ip {{ k8s_node_public_ip }} at kubelet
  lineinfile:
    path: '/etc/systemd/system/kubelet.service.d/10-kubeadm.conf'
    line: 'Environment="KUBELET_EXTRA_ARGS=--node-ip={{ k8s_node_public_ip }}"'
    regexp: 'KUBELET_EXTRA_ARGS='
    insertafter: '\[Service\]'
    state: present
  notify:
    - restart kubelet
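
The task notifies a restart kubelet handler that is not listed in this excerpt. A minimal sketch of what such a handler could look like (the handlers/main.yml location and the use of the systemd module are assumptions):

# handlers/main.yml (sketch): reload systemd units and restart kubelet
- name: restart kubelet
  systemd:
    name: kubelet
    daemon_reload: yes
    state: restarted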

roles/add_packages

This Ansible role specializes in the installation and removal of packages:

Steps:

  • Adds repository keys,
  • Adds repositories to the sources lists,
  • Updates the cache if needed (when new repositories are added),
  • Removes and installs packages.
---
- name: Add new repositories keys
  apt_key:
    url='{{item.key}}'
  with_items: "{{ linux_add_packages_keys | default([])}}"
  when: linux_add_packages_keys is defined and not (linux_add_packages_keys is none or linux_add_packages_keys | trim == '')
  register: aptnewkeys

- name: Add new repositories to sources
  apt_repository:
    repo='{{item.repo}}'
  with_items: "{{ linux_add_packages_repositories | default([])}}"
  when: linux_add_packages_repositories is defined and not (linux_add_packages_repositories is none or linux_add_packages_repositories | trim == '')

- name: Force update cache if new keys added
  set_fact:
        linux_add_packages_cache_valid_time: 0
  when: aptnewkeys.changed

- name: Remove packages
  apt:
    name={{ item.name }}
    state=absent
  with_items: "{{ linux_remove_packages_names | default([])}}"
  when: linux_remove_packages_names is defined and not (linux_remove_packages_names is none or linux_remove_packages_names | trim == '')

- name: Install packages
  apt:
    name={{ item.name }}
    state=present
    update_cache=yes
    cache_valid_time={{linux_add_packages_cache_valid_time}}
  with_items: "{{ linux_add_packages_names | default([])}}"
  when: linux_add_packages_names is defined and not (linux_add_packages_names is none or linux_add_packages_names | trim == '')
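
On newer Ansible releases, looping apt with with_items is deprecated in favour of passing a list of names directly. A possible loop-free variant of the install task (a sketch under that assumption, not the code used in this repository):

- name: Install packages (loop-free variant for newer Ansible)
  apt:
    # Build a flat list of package names from the list of dicts
    name: "{{ linux_add_packages_names | default([]) | map(attribute='name') | list }}"
    state: present
    update_cache: yes
    cache_valid_time: "{{ linux_add_packages_cache_valid_time }}"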


First Steps After Kubernetes Installation

We will use the vagrant ssh command to access the servers.

jruiz@XPS13:~/git/github/ansible-vbox-vagrant-kubernetes$ vagrant ssh k8s-m-1
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-29-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Fri Apr 19 16:31:11 UTC 2019

  System load:  0.02              Users logged in:        0
  Usage of /:   5.1% of 61.80GB   IP address for eth0:    10.0.2.15
  Memory usage: 9%                IP address for eth1:    192.168.50.11
  Swap usage:   0%                IP address for docker0: 172.17.0.1
  Processes:    105

140 packages can be updated.
78 updates are security updates.

Last login: Fri Apr 19 14:02:05 2019 from 10.0.2.2
vagrant@k8s-m-1:~$ 

Check Syslog

Open an SSH session to k8s-m-1 with vagrant ssh and check the syslog file for errors:

jruiz@XPS13:~/git/github/ansible-vbox-vagrant-kubernetes$ vagrant ssh k8s-m-1
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-29-generic x86_64)

...
vagrant@k8s-m-1:~$ tail -f /var/log/syslog 
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Reached target Paths.
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Reached target Timers.
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Listening on GnuPG cryptographic agent and passphrase cache.
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Reached target Sockets.
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Reached target Basic System.
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Reached target Default.
Apr 20 14:32:05 k8s-m-1 systemd[1]: Started User Manager for UID 1000.
Apr 20 14:32:05 k8s-m-1 systemd[7025]: Startup finished in 29ms.
^C

Check that Calico is Running 

Open an SSH session to k8s-m-1 with vagrant ssh and execute kubectl get pods --all-namespaces:

jruiz@XPS13:~/git/github/ansible-vbox-vagrant-kubernetes$ vagrant ssh k8s-m-1
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-29-generic x86_64)
...
vagrant@k8s-m-1:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nginx-deployment-6dd86d77d-l9dd9           1/1     Running   1          19h
default       nginx-deployment-6dd86d77d-qr6dl           1/1     Running   1          19h
kube-system   calico-kube-controllers-5cbcccc885-pwgmg   1/1     Running   2          21h
kube-system   calico-node-2cj4q                          1/1     Running   2          21h
kube-system   calico-node-q25j7                          1/1     Running   2          21h
kube-system   calico-node-vkbj5                          1/1     Running   2          21h
kube-system   coredns-fb8b8dccf-nfs4w                    1/1     Running   2          21h
kube-system   coredns-fb8b8dccf-tmrcg                    1/1     Running   2          21h
kube-system   etcd-k8s-m-1                               1/1     Running   2          21h
kube-system   kube-apiserver-k8s-m-1                     1/1     Running   2          21h
kube-system   kube-controller-manager-k8s-m-1            1/1     Running   2          21h
kube-system   kube-proxy-jxfjf                           1/1     Running   2          21h
kube-system   kube-proxy-ljr26                           1/1     Running   2          21h
kube-system   kube-proxy-mdgmb                           1/1     Running   2          21h
kube-system   kube-scheduler-k8s-m-1                     1/1     Running   2          21h
kube-system   kubernetes-dashboard-5f7b999d65-l8bsx      1/1     Running   1          19h


Administer the Kubernetes Cluster from your host

Install kubectl to administer the Kubernetes Cluster from your development host

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

Copy the Kubernetes config to your local home .kube dir

#Create the configuration directory
$ mkdir -p ~/.kube

#Find the SSH port of the k8s-m-1 server
$ vagrant port k8s-m-1
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

    22 (guest) => 2222 (host)

#Copy the file using scp (ssh password is vagrant)
$ scp -P 2222 vagrant@127.0.0.1:/home/vagrant/.kube/config ~/.kube/config
vagrant@127.0.0.1's password: vagrant
config                                                                     100% 5449   118.7KB/s   00:00

List the Kubernetes cluster nodes using kubectl from your development host:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.50.11:6443
KubeDNS is running at https://192.168.50.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes --all-namespaces
NAME      STATUS   ROLES    AGE   VERSION
k8s-m-1   Ready    master   18m   v1.16.0
k8s-n-1   Ready    <none>   13m   v1.16.0
k8s-n-2   Ready    <none>   10m   v1.16.0

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
default       nginx-deployment-6dd86d77d-l9dd9           1/1     Running   2          2d
default       nginx-deployment-6dd86d77d-qr6dl           1/1     Running   2          2d
kube-system   calico-kube-controllers-5cbcccc885-pwgmg   1/1     Running   3          2d2h
kube-system   calico-node-2cj4q                          1/1     Running   3          2d2h
kube-system   calico-node-q25j7                          1/1     Running   3          2d2h
kube-system   calico-node-vkbj5                          1/1     Running   3          2d2h
kube-system   coredns-fb8b8dccf-nfs4w                    1/1     Running   3          2d2h
kube-system   coredns-fb8b8dccf-tmrcg                    1/1     Running   3          2d2h
kube-system   etcd-k8s-m-1                               1/1     Running   3          2d2h
kube-system   kube-apiserver-k8s-m-1                     1/1     Running   3          2d2h
kube-system   kube-controller-manager-k8s-m-1            1/1     Running   3          2d2h
kube-system   kube-proxy-jxfjf                           1/1     Running   3          2d2h
kube-system   kube-proxy-ljr26                           1/1     Running   3          2d2h
kube-system   kube-proxy-mdgmb                           1/1     Running   3          2d2h
kube-system   kube-scheduler-k8s-m-1                     1/1     Running   3          2d2h
kube-system   kubernetes-dashboard-5f7b999d65-l8bsx      1/1     Running   2          2d


Deploying Applications in Kubernetes

Create an Application configuration file that defines the name of the application, labels, number of replicas and the docker images that are needed. Save the file as nginx.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: my-echo
          image: gcr.io/google_containers/echoserver:1.8  

Apply the configuration to the Kubernetes Cluster

$ kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment created

Publish an Application Outside Kubernetes Cluster

In order to access the echo server that we just deployed from outside the cluster, a NodePort needs to be created. The NodePort will allow us to reach each of the nginx echo servers (in round-robin) from outside the cluster.

In a Kubernetes cluster hosted at a cloud provider like Azure, Google or AWS, a cloud-native load balancer would be used instead of the NodePort.
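
For comparison only, a sketch of how the same application might be exposed with a LoadBalancer service at a cloud provider; the name nginx-service-lb and the ports are illustrative and this block is not part of the tutorial’s nginx.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-lb   # hypothetical name, for comparison only
spec:
  type: LoadBalancer       # the cloud provider allocates an external load balancer
  ports:
    - port: 80             # port exposed by the load balancer
      targetPort: 8080     # application port inside the pods
      protocol: TCP
  selector:
    app: nginx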

Add the following definition to nginx.yaml (the file created before)

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP, i.e. http://10.103.75.9:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx

Apply the configuration to the Kubernetes Cluster

$ kubectl apply -f nginx.yaml 
deployment.apps/nginx-deployment unchanged
service/nginx-service-np created

Check that the NodePort has been created:

$ kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP      10.96.0.1      <none>        443/TCP          7h55m
nginx-service-np   NodePort       10.103.75.9    <none>        8082:30000/TCP   5h43m

Access the application with curl (or with a web browser) from the development environment using one of the public Kubernetes Cluster IPs (192.168.50.11, 192.168.50.12, 192.168.50.13) and the nodePort (30000):

$ curl http://192.168.50.11:30000/
Hostname: nginx-deployment-d7b95894f-2hpjk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=192.168.116.0
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.11:8080/

Request Headers:
	accept=*/*
	host=192.168.50.11:30000
	user-agent=curl/7.61.0

Request Body:
	-no body in request-

Repeat the request and see that the value of Hostname changes as Kubernetes sends the requests to the different instances of the application among the available PODs.

The hostname corresponds to the NAME of the different PODs of the Nginx deployment:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-d7b95894f-2hpjk   1/1     Running   1          5h44m
nginx-deployment-d7b95894f-49lrh   1/1     Running   1          5h44m
nginx-deployment-d7b95894f-wl497   1/1     Running   1          5h44m


Kubernetes NodePort Diagram

The diagram shows the different elements involved in publishing an application outside the Kubernetes cluster using a NodePort. The Kubernetes master, not shown, executes a proxy as well.

Diagram: two Kubernetes PODs connected through kube-proxy for NodePort connectivity


Network traffic when doing curl http://192.168.50.12:30000/:

  1. A request to access 192.168.50.12:30000 (node 1, port 30000, the NodePort) is received by the vboxnet router.
  2. It is routed to the external IP of Kubernetes node 1 on port 30000.
  3. The request is sent to the Kubernetes proxy.
  4. The Kubernetes proxy sends the request to the Cluster IP 10.103.75.9 on port 8082, which has been assigned to nginx-service-np (the Nginx NodePort service).
  5. The Nginx NodePort service sends the request to one of the available pods of the Nginx application; it can be any of the pods located on the same Kubernetes node or on a different node (like K8S-N-2).

Useful Vagrant commands

#Create the cluster or start the cluster after a host reboot
vagrant up

#Execute again the Ansible playbook in all the vagrant boxes, useful during development of Ansible playbooks
vagrant provision

#Execute again the Ansible playbook in the Kubernetes node 1
vagrant provision k8s-n-1

#Open an ssh connection to the Kubernetes master
vagrant ssh k8s-m-1

#Open an ssh connection to the Kubernetes node 1
vagrant ssh k8s-n-1

#Open an ssh connection to the Kubernetes node 2
vagrant ssh k8s-n-2

#Stop all Vagrant machines (use vagrant up to start)
vagrant halt

Next steps

After installing Kubernetes using Vagrant, add a service mesh to connect all the microservices: see Installing Istio in Kubernetes under VirtualBox.

     

     
