
A Kubernetes deployment showing off Auto Scaling

This is a more complicated Kubernetes deployment.

  • It builds a custom image
  • It pushes the custom image to Docker Hub
  • The deployment YAML deploys:
    • A NodePort service to make the internal pods reachable externally
    • A Horizontal Pod Autoscaler for the deployment's pods
    • A container running nginx with PHP and some sample web files

Explanation

The first section is a Service of type NodePort. It exposes external TCP port 30080 on every node and forwards traffic to port 80 on the pods. NodePort provides rudimentary load balancing: incoming requests are spread across the pods that match the label given in spec.selector.app.
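
Once the deployment below is applied, you can sanity-check this from any machine that can reach a node. Substitute one of your own node IPs; the 192.168.1.30-35 addresses later in this post are mine:

curl http://192.168.1.31:30080/
# Any node IP works; kube-proxy forwards the request to a matching pod
# no matter which node receives it.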

The HorizontalPodAutoscaler section implements the autoscaler and specifies how it autoscales. (You can watch it work with the commands after this list.)

  • spec.minReplicas and spec.maxReplicas specify the minimum and maximum number of pods that should exist.
  • spec.metrics specifies the target CPU utilization for each pod; above this, the autoscaler springs into action. 10% is only for demo purposes. 50% is more realistic.
  • spec.behavior defines the scaleDown and scaleUp policies.
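
Once everything is deployed, these standard kubectl commands let you watch the autoscaler react to load (nginx is the HPA name from the YAML below):

kubectl get hpa nginx --watch
# or, for current metrics plus the scaling events:
kubectl describe hpa nginx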

The last section defines a deployment of the nginx/PHP pod. Everything here is largely self-explanatory.

Many of these settings are not production worthy; they are tuned to make the autoscaling easy to demo.

Deployment

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      nodePort: 30080
      targetPort: 80
      name: http
      protocol: TCP
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 90
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 10
  behavior:
    scaleDown:
      policies:
      - type: Pods
        value: 5
        periodSeconds: 6
      - type: Percent
        value: 10
        periodSeconds: 6
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 15
        periodSeconds: 6
      - type: Pods
        value: 4
        periodSeconds: 6
      selectPolicy: Max
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: username/image-name
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
            requests:
              cpu: 200m
          # Just spin & wait forever
          command: [ "/bin/bash", "-c", "--" ]
          args: [ "service php7.4-fpm start; service nginx start && while true; do sleep 30; done;" ]

Supplemental files


config/default:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;

        # Add index.php to the list if you are using PHP
        index index.php index.html index.htm index.nginx-debian.html;

        server_name _;


        location ~* \.php$ {
                fastcgi_pass unix:/run/php/php7.4-fpm.sock;
                include         fastcgi_params;
                fastcgi_param   SCRIPT_FILENAME    $document_root$fastcgi_script_name;
                fastcgi_param   SCRIPT_NAME        $fastcgi_script_name;
        }

        location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ =404;
        }
}

webroot/index.html file:

Hello

webroot/phpinfo.php file:

<?php
  echo 'Server IP Address - '.$_SERVER['SERVER_ADDR'];
  echo "<br>";
  phpinfo();
?>

If you use curl, you will see a different pod internal address on each request. A browser tends to reuse its keep-alive connection, so repeated page loads land on the same pod; curl opens a new connection each time.
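
A quick way to see the spreading from the command line (a sketch; replace master_node_ip as in the ab command later in this post — the grep pattern matches the line phpinfo.php echoes):

for i in 1 2 3 4 5; do
  curl -s http://master_node_ip:30080/phpinfo.php | grep -o 'Server IP Address - [0-9.]*'
done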

webroot/prime.php file:

<?php
# Endless prime search -- exists purely to burn CPU for the autoscaler demo.
$i = 2;
$primes = array();
while(true)
{
    $prime=true;
    $sqrt=floor(sqrt($i));
    foreach($primes as $num)
    {
        if($num>$sqrt) break;
        if($i%$num==0)
        {
            $prime=false;
            break;
        }
    }
    if($prime) $primes[] = $i;   # remember the prime so trial division works
    #if($prime) echo "$i\n";
    $i++;
    #if ($i > 100000) exit;
}
echo "Done!";
?>

Building and Pushing your custom image

Dockerfile

# Pinning to Ubuntu 20.04. v22 has some quirks right now.
FROM ubuntu:20.04

# Set timezone
ENV TZ=America/New_York
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Updating and Upgrading
RUN apt-get -y update
RUN apt-get -y upgrade

# Installs happen now
RUN apt-get -y install nginx
RUN apt-get -y install php php-cli php-fpm
#RUN apt-get -y install htop nano curl wget net-tools
RUN apt-get -y autoremove

RUN sed -i 's/cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/g' /etc/php/7.4/fpm/php.ini

# Copy config files into container
COPY config/default /etc/nginx/sites-available/default
COPY webroot/* /var/www/html/

# Expose port
EXPOSE 80/tcp

# Start services and hang out.
# Note: this ENTRYPOINT does not execute in Kubernetes, because the
# Deployment supplies its own command/args, which override it.
ENTRYPOINT service php7.4-fpm start && service nginx start && bash

Build the image and push it to Docker Hub

docker build --tag "username/image-name" .
sudo docker push username/image-name
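
If the push fails with an authentication error, log in to Docker Hub first (replace username with your own Docker Hub account):

sudo docker login -u username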

Creating your deployment

kubectl apply -f deployment.yaml
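
To confirm all three objects came up (standard kubectl queries; nginx is the name used throughout the YAML):

kubectl get svc,hpa,deploy nginx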

Installing Apache Bench

On a different VM, install Apache Bench

sudo apt-get install htop apache2-utils

Monitoring your cluster

On a different VM, start the monitoring script (“Monitoring a Kubernetes cluster”)
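
If you don't have that script handy, a rough stand-in is a watch loop over the pods and the autoscaler:

watch kubectl get pods,hpa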

Creating traffic

On the VM where you installed the Apache Bench utility, run this command.

ab -s 90 -n 15 -c 10 http://master_node_ip:30080/prime.php

  • -s 90: a 90-second timeout for each request. prime.php takes a while.
  • -n 15: 15 requests in total.
  • -c 10: 10 concurrent requests.

The autoscaler takes a while to scale up or down. The metrics server only gathers data about every 15 seconds.
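
You can see the same numbers the autoscaler sees; this relies on the metrics server, which this cluster is already running:

kubectl top pods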


Networking

Here are the pods I created:

$ k get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
php-apache-5b56f9df94-82m68   1/1     Running   0          50m   10.244.2.204   worker2   <none>           <none>
php-apache-5b56f9df94-c8nfq   1/1     Running   0          50m   10.244.1.179   worker1   <none>           <none>
php-apache-5b56f9df94-vxtpb   1/1     Running   0          52m   10.244.4.18    worker4   <none>           <none>

Those pods sit in the 10.244.0.0/16 pod network (the flannel default), while the nodes themselves are on the 192.168.1.0/24 LAN. Listing across all namespaces shows there are a lot more IP addresses assigned:

$ k get pods -o wide -A
NAMESPACE      NAME                              READY   STATUS      RESTARTS      AGE   IP             NODE      NOMINATED NODE   READINESS GATES
default        php-apache-5b56f9df94-82m68       1/1     Running     0             54m   10.244.2.204   worker2   <none>           <none>
default        php-apache-5b56f9df94-c8nfq       1/1     Running     0             54m   10.244.1.179   worker1   <none>           <none>
default        php-apache-5b56f9df94-vxtpb       1/1     Running     0             55m   10.244.4.18    worker4   <none>           <none>
kube-flannel   kube-flannel-ds-56qmg             1/1     Running     9 (61m ago)   22d   192.168.1.30   master1   <none>           <none>
kube-flannel   kube-flannel-ds-5hlmw             1/1     Running     8 (61m ago)   21d   192.168.1.32   worker2   <none>           <none>
kube-flannel   kube-flannel-ds-8pnww             1/1     Running     9 (61m ago)   21d   192.168.1.33   worker3   <none>           <none>
kube-flannel   kube-flannel-ds-92gk2             1/1     Running     5 (61m ago)   18d   192.168.1.34   worker4   <none>           <none>
kube-flannel   kube-flannel-ds-n7t4n             1/1     Running     9 (61m ago)   22d   192.168.1.31   worker1   <none>           <none>
kube-flannel   kube-flannel-ds-qs9fn             1/1     Running     7 (61m ago)   18d   192.168.1.35   worker5   <none>           <none>
kube-system    coredns-565d847f94-jdgmt          1/1     Running     9 (61m ago)   22d   10.244.0.20    master1   <none>           <none>
kube-system    coredns-565d847f94-jtn7k          1/1     Running     9 (61m ago)   22d   10.244.0.21    master1   <none>           <none>
kube-system    etcd-master1                      1/1     Running     9 (61m ago)   22d   192.168.1.30   master1   <none>           <none>
kube-system    kube-apiserver-master1            1/1     Running     9 (61m ago)   22d   192.168.1.30   master1   <none>           <none>
kube-system    kube-controller-manager-master1   1/1     Running     9 (61m ago)   22d   192.168.1.30   master1   <none>           <none>
kube-system    kube-proxy-6pbcb                  1/1     Running     8 (61m ago)   21d   192.168.1.32   worker2   <none>           <none>
kube-system    kube-proxy-fm58w                  1/1     Running     8 (61m ago)   21d   192.168.1.33   worker3   <none>           <none>
kube-system    kube-proxy-gx4bm                  1/1     Running     8 (61m ago)   22d   192.168.1.31   worker1   <none>           <none>
kube-system    kube-proxy-h6m5w                  1/1     Running     5 (61m ago)   18d   192.168.1.34   worker4   <none>           <none>
kube-system    kube-proxy-jps9v                  1/1     Running     9 (61m ago)   22d   192.168.1.30   master1   <none>           <none>
kube-system    kube-proxy-jstzb                  1/1     Running     6 (61m ago)   18d   192.168.1.35   worker5   <none>           <none>
kube-system    kube-scheduler-master1            1/1     Running     9 (61m ago)   22d   192.168.1.30   master1   <none>           <none>
kube-system    metrics-server-668fd5f9b8-v4zsh   1/1     Running     7 (61m ago)   21d   10.244.3.191   worker3   <none>           <none>
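
The 10.244.x.y pod addresses come from per-node subnets. In a kubeadm/flannel setup like this one, each node's slice is recorded in spec.podCIDR, so you can read it straight off the Node objects:

kubectl get nodes -o custom-columns=NAME:.metadata.name,POD-CIDR:.spec.podCIDR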

Within a pod:

  • One pod can talk to any other pod.
  • One pod can talk to all of the nodes (192.168.1.30-35).
  • One pod can talk to anything outside the cluster, even the internet (see the sketch after this list).
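
You can verify all three from inside a pod. This is a sketch: it assumes you uncommented the curl install line in the Dockerfile, and the pod name and addresses come from the listings above (the external URL is just an example):

kubectl exec -it php-apache-5b56f9df94-82m68 -- bash
# then, inside the pod:
curl -s  http://10.244.1.179/                  # a pod on another node
curl -sI http://192.168.1.34:30080/ | head -1  # a node, via the NodePort
curl -sI https://www.google.com/ | head -1     # the internet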

Within a pod, here are the subnet, default gateway, and routing table:

# ip -o -f inet addr show | awk '/scope global/{sub(/[^.]+\//,"0/",$4);print $4}'
10.244.2.0/24
# ip route
default via 10.244.2.1 dev eth0
10.244.0.0/16 via 10.244.2.1 dev eth0
10.244.2.0/24 dev eth0  proto kernel  scope link  src 10.244.2.204