Kubernetes density test report
- Abstract
This document is the report for the Kubernetes_density_test_plan.
Environment description
This report is collected on the hardware described in intel_mirantis_performance_lab_1.
Software
Kubernetes is installed with the Kargo deployment tool on Ubuntu 16.04.1.
- Node roles:
- node1: minion+master+etcd
- node2: minion+master+etcd
- node3: minion+etcd
- node4: minion
- Software versions (test case #1):
- OS: Ubuntu 16.04.1 LTS (Xenial Xerus)
- Kernel: 4.4.0-47
- Docker: 1.12.1
- Kubernetes: 1.4.3
- Software versions (test case #2):
- OS: Ubuntu 16.04.1 LTS (Xenial Xerus)
- Kernel: 4.4.0-36
- Docker: 1.13.1
- Kubernetes: 1.5.3
Reports
Test Case #1: Maximum pods per node
Pod startup time is measured with the help of the MMM (MySQL/Master/Minions) testing suite. To schedule all pods on a single node, the original replication controller for minions is updated with a scheduler hint. To do this, add the following lines to the template's spec section:
nodeSelector:
kubernetes.io/hostname: node4
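Applied programmatically, the scheduler hint above amounts to a small patch of the replication controller manifest. A minimal Python sketch (the helper name and the manifest shape are ours, following the Kubernetes v1 ReplicationController API; this is not code from the test suite):

```python
# Sketch: pin all pods of a replication controller to one node by
# adding a nodeSelector to the pod template's spec section.
# The manifest is handled as a plain dict, as it would be after
# loading the RC definition from YAML.

def add_node_selector(manifest, hostname):
    """Insert the kubernetes.io/hostname scheduler hint (hypothetical helper)."""
    spec = manifest.setdefault("spec", {})
    template_spec = spec.setdefault("template", {}).setdefault("spec", {})
    template_spec["nodeSelector"] = {"kubernetes.io/hostname": hostname}
    return manifest

rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "minion"},
    "spec": {"replicas": 50, "template": {"spec": {}}},
}
add_node_selector(rc, "node4")
print(rc["spec"]["template"]["spec"]["nodeSelector"])
# -> {'kubernetes.io/hostname': 'node4'}
```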
Pod status from the Kubernetes point of view is retrieved with the kubectl tool. The process is automated with kubectl_mon.py <kubectl-mon/kubectl_mon.py>, which produces output in CSV format. Charts are created by the pod_stats.py <kubectl-mon/pod_stats.py> script.
Every measurement starts with an empty namespace. Then a Kubernetes replication controller is created with the specified number of pods. We collect each pod's report time and kubectl stats. The summary data is presented below.
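From status samples like those in the kubectl CSV output, per-pod startup latency can be derived as the time from when a pod is first seen until it reaches the Running state. A sketch of that computation (the three-column row layout is an assumption, not the documented format of kubectl_mon.py):

```python
# Sketch: compute per-pod startup latency from CSV rows of the form
# (timestamp, pod name, status). The sample data below is illustrative.
import csv
import io

SAMPLE = """\
10.0,mmm-minion-abcde,Pending
10.5,mmm-minion-fghij,Pending
12.0,mmm-minion-abcde,Running
15.5,mmm-minion-fghij,Running
"""

def startup_times(csv_text):
    """Map each pod to (first Running timestamp) - (first seen timestamp)."""
    first_seen, running_at = {}, {}
    for ts, pod, status in csv.reader(io.StringIO(csv_text)):
        ts = float(ts)
        first_seen.setdefault(pod, ts)
        if status == "Running" and pod not in running_at:
            running_at[pod] = ts
    return {pod: running_at[pod] - first_seen[pod] for pod in running_at}

print(startup_times(SAMPLE))
# -> {'mmm-minion-abcde': 2.0, 'mmm-minion-fghij': 5.0}
```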
Detailed Stats
50 pods
Start replication controller with 50 pods
Terminate replication controller with 50 pods
100 pods
Start replication controller with 100 pods
Terminate replication controller with 100 pods
200 pods
Start replication controller with 200 pods
Terminate replication controller with 200 pods
400 pods
Start replication controller with 400 pods
Note: In this experiment all pods successfully reported; however, from the Kubernetes API point of view fewer than 60 pods were in the Running state. The number of pods reported as Running increased slowly over time, but too slowly to treat the process as successful.
Terminate replication controller with 400 pods
Scale by 100 pods steps
In this experiment we scale the replication controller up in steps of 100 pods. Each scaling step is invoked after all pods are reported as running. On step 3 (201-300 pods) the process became significantly slower, and we started scaling the replication controller down. The full cycle is visualized below.
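The up-then-down cycle can be expressed as a sequence of target replica counts, each applied (for example, with kubectl scale) once all pods report as running. A small sketch of that sequence; the step values are illustrative of the procedure described above, not read from the test scripts:

```python
# Sketch: target replica counts for scaling an RC up in 100-pod
# steps to a ceiling, then back down to zero in the same steps.

def scale_steps(step=100, top=300):
    up = list(range(step, top + 1, step))       # 100, 200, 300
    down = list(range(top - step, -1, -step))   # 200, 100, 0
    return up + down

print(scale_steps())
# -> [100, 200, 300, 200, 100, 0]
```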
System metrics from the API nodes and the minion are shown below.
Full Kubernetes stats are available online.
Test Case #2: Measure Kubelet capacity
Pod startup time is measured with the help of the MMM (MySQL/Master/Minions) testing suite. The original code was updated to automatically create charts of pod status as pods start up or shut down. To schedule all pods on a single node, the original replication controller for minions is updated with a scheduler hint. To do this, add the following lines to the template's spec section:
nodeSelector:
kubernetes.io/hostname: <node>
Pod status from the Kubernetes point of view is retrieved with the kubectl tool. The process is automated with kubectl_mon_v2.py <kubectl-mon/kubectl_mon_v2.py>, which collects information about each pod's status and sends it to a database. Charts are created by the updated MMM (MySQL/Master/Minions) testing suite.
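The recording side of that pipeline can be sketched as inserting timestamped status samples into a table keyed by pod name. The schema and the use of SQLite here are our assumptions for illustration; the actual database used by kubectl_mon_v2.py is not specified in this report:

```python
# Sketch: store pod status samples in a database table so that a
# charting tool can later query counts per status over time.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pod_status (ts REAL, pod TEXT, status TEXT)")

def record(ts, pod, status):
    """Append one (timestamp, pod, status) sample (hypothetical schema)."""
    db.execute("INSERT INTO pod_status VALUES (?, ?, ?)", (ts, pod, status))
    db.commit()

record(10.0, "mmm-minion-abcde", "Pending")
record(12.0, "mmm-minion-abcde", "Running")

running = db.execute(
    "SELECT COUNT(*) FROM pod_status WHERE status = 'Running'"
).fetchone()[0]
print(running)
# -> 1
```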
Every measurement starts with an empty namespace. Then a Kubernetes replication controller is created with the specified number of pods. We collect each pod's report time and kubectl stats. The summary data is presented below.
Detailed Stats
Note: You can download these reports in HTML format here <data/reports.tar.bz2>
50 pods (~1 pod per core) on 50 nodes
Start replication controller with 50 pods on 50 nodes
Terminate replication controller with 50 pods on 50 nodes
100 pods (~2 pods per core) on 50 nodes
Start replication controller with 100 pods on 50 nodes
Terminate replication controller with 100 pods on 50 nodes
200 pods (~4 pods per core) on 50 nodes
Start replication controller with 200 pods on 50 nodes
Terminate replication controller with 200 pods on 50 nodes
50 pods (~1 pod per core) on 100 nodes
Start replication controller with 50 pods on 100 nodes
Terminate replication controller with 50 pods on 100 nodes
100 pods (~2 pods per core) on 100 nodes
Start replication controller with 100 pods on 100 nodes
Terminate replication controller with 100 pods on 100 nodes
200 pods (~4 pods per core) on 100 nodes
Start replication controller with 200 pods on 100 nodes
Terminate replication controller with 200 pods on 100 nodes
50 pods (~1 pod per core) on 200 nodes
Start replication controller with 50 pods on 200 nodes
Terminate replication controller with 50 pods on 200 nodes
100 pods (~2 pods per core) on 200 nodes
Start replication controller with 100 pods on 200 nodes
Terminate replication controller with 100 pods on 200 nodes
200 pods (~4 pods per core) on 200 nodes
Start replication controller with 200 pods on 200 nodes
Note: The Docker service froze on 27 nodes
Terminate replication controller with 200 pods on 200 nodes
400 pods (~8 pods per core) on 50 nodes
Start replication controller with 400 pods on 50 nodes
Terminate replication controller with 400 pods on 50 nodes