Fixups to deployment guide

I just ran through the deployment guide to deploy a new kolla-kube
and encountered some issues with the docs on the way. Here are the
tweaks I had to make. In addition, this commit also adds a skip option
for kubernetes v1.9.x.

Co-Authored-By: Yushiro FURUKAWA <y.furukawa_2@jp.fujitsu.com>
Change-Id: I189871946db3c76e09c5b4cfd2a5f6a1aeee8899
This commit is contained in:
Michael Still 2017-11-30 21:30:29 +11:00 committed by Yushiro FURUKAWA
parent 22ed0c232d
commit 957bbdd1da
1 changed file with 154 additions and 98 deletions


@@ -158,7 +158,7 @@ Install Kubernetes 1.6.4 or later and other dependencies::
sudo apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
-Centos and Ubuntu
+CentOS and Ubuntu
-----------------
Enable and start Docker::
@@ -174,13 +174,19 @@ Enable the proper CGROUP driver::
CGROUP_DRIVER=$(sudo docker info | grep "Cgroup Driver" | awk '{print $3}')
sudo sed -i "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER |g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-Centos and Ubuntu
+CentOS and Ubuntu
-----------------
Setup the DNS server with the service CIDR::
sudo sed -i 's/10.96.0.10/10.3.3.10/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
+Add an option for kubelet to skip the 'running with swap on' check:
+.. code-block:: bash
+   sudo sed -i '/^\[Service\]$/a Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
.. note::
Kubernetes uses x.x.x.10 as the DNS server. The Kolla developers don't
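The cgroup-driver rewrite above can be sanity-checked in isolation. This sketch applies the same `sed` substitution to a single sample `Environment=` line instead of the real `10-kubeadm.conf`; the sample line and the `systemd` driver value are illustrative assumptions, not the exact file contents:

```shell
# Illustrative sample of one Environment line from 10-kubeadm.conf;
# real file contents will differ.
line='Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"'
# On a live host this comes from: sudo docker info | grep "Cgroup Driver" | awk '{print $3}'
CGROUP_DRIVER=systemd
# Same substitution as the guide, applied to the sample line only.
rewritten=$(echo "$line" | sed "s|KUBELET_KUBECONFIG_ARGS=|KUBELET_KUBECONFIG_ARGS=--cgroup-driver=$CGROUP_DRIVER |g")
echo "$rewritten"
```

Note the trailing space inside the replacement, which keeps the injected flag separated from the flags that follow it.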
@@ -203,7 +209,7 @@ Enable and start docker and kubelet::
Deploy Kubernetes with kubeadm::
-sudo kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24
+sudo kubeadm init --pod-network-cidr=10.1.0.0/16 --service-cidr=10.3.3.0/24 --ignore-preflight-errors=all
.. note::
@@ -219,8 +225,16 @@ Deploy Kubernetes with kubeadm::
If the following issue occurs after running this command:
-`[preflight] Some fatal errors occurred:
-/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set
-to 1`
+[init] Using Kubernetes version: v1.9.2
+[init] Using Authorization modes: [Node RBAC]
+[preflight] Running pre-flight checks.
+[WARNING FileExisting-crictl]: crictl not found in system path
+[preflight] Some fatal errors occurred:
+[ERROR Swap]: running with swap on is not supported. Please
+disable swap
+[preflight] If you know what you are doing, you can make a
+check non-fatal with `--ignore-preflight-errors=...`
There are two work-arounds:
@@ -228,10 +242,12 @@ Deploy Kubernetes with kubeadm::
`net.bridge.bridge-nf-call-iptables = 1` to
``/etc/sysctl.conf``
- Type `sysctl -p` to apply the settings from /etc/sysctl.conf
-- Type `sysctl net.bridge.bridge-nf-call-ip6tables` and
+- Type `sysctl net.bridge.bridge-nf-call-ip6tables` and
`sysctl net.bridge.bridge-nf-call-iptables` to verify the values are set to 1.
-- Or alternatively Run with `--skip-preflight-checks`. This runs
-the risk of missing other issues that may be flagged.
+- Or alternatively run with the following options. This runs the risk of missing
+other issues that may be flagged.
+  - For Kubernetes before **v1.8.5**: `--skip-preflight-checks`
+  - For Kubernetes **v1.9.0**: `--ignore-preflight-errors=all`
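The first work-around above can be exercised without touching the live system. This sketch writes the two bridge settings to a scratch file standing in for `/etc/sysctl.conf`, so it runs unprivileged; on a real host you would append to the actual file and then run `sudo sysctl -p`:

```shell
# Scratch file standing in for /etc/sysctl.conf so this can run unprivileged.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Verify both bridge settings are present and set to 1.
hits=$(grep -c 'bridge-nf-call.* = 1' "$conf")
echo "$hits"
rm -f "$conf"
```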
Load the kubedm credentials into the system::
@@ -250,10 +266,10 @@ CNI drivers may be used if they are properly configured.
Deploy the Canal CNI driver::
-curl -L https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.6/rbac.yaml -o rbac.yaml
+curl -L https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/rbac.yaml -o rbac.yaml
kubectl apply -f rbac.yaml
-curl -L https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.6/canal.yaml -o canal.yaml
+curl -L https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/1.7/canal.yaml -o canal.yaml
sed -i "s@10.244.0.0/16@10.1.0.0/16@" canal.yaml
kubectl apply -f canal.yaml
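The `sed` call above swaps Canal's default pod CIDR for the one passed to `kubeadm init`. To see the substitution in isolation, here it is applied to a hypothetical one-line fragment of canal.yaml; the `CALICO_IPV4POOL_CIDR` key is only an illustrative stand-in for whichever lines actually carry the CIDR in the downloaded manifest:

```shell
# Hypothetical manifest fragment; the real canal.yaml is much larger.
fragment='CALICO_IPV4POOL_CIDR: "10.244.0.0/16"'
# Same substitution as the guide; @ is used as the sed delimiter because
# the pattern itself contains slashes.
rewritten=$(echo "$fragment" | sed "s@10.244.0.0/16@10.1.0.0/16@")
echo "$rewritten"
```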
@@ -290,15 +306,15 @@ Verify DNS works properly by running below command within the busybox container:
This should return a nslookup result without error::
-$ kubectl run -i -t $(uuidgen) --image=busybox --restart=Never
-Waiting for pod default/33c30c3b-8130-408a-b32f-83172bca19d0 to be running, status is Pending, pod ready: false
+kubectl run -i -t $(uuidgen) --image=busybox --restart=Never
If you don't see a command prompt, try pressing enter.
-/ # nslookup kubernetes
-Server:    10.3.3.10
-Address 1: 10.3.3.10 kube-dns.kube-system.svc.cluster.local
+# nslookup kubernetes
+Server:    10.3.3.10
+Address 1: 10.3.3.10 kube-dns.kube-system.svc.cluster.local
-Name:      kubernetes
-Address 1: 10.3.3.1 kubernetes.default.svc.cluster.local
+Name:      kubernetes
+Address 1: 10.3.3.1 kubernetes.default.svc.cluster.local
/ #
.. warning::
@@ -314,8 +330,8 @@ Step 3: Deploying kolla-kubernetes
Override default RBAC settings::
-kubectl update -f <(cat <<EOF
-apiVersion: rbac.authorization.k8s.io/v1alpha1
+kubectl apply -f <(cat <<EOF
+apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: cluster-admin
@@ -348,10 +364,31 @@ Verify both the client and server version of Helm are consistent::
helm version
-Install repositories necessary to install packaging::
+Install repositories necessary to install packaging:
+CentOS
+------
+.. code-block:: bash
+   sudo yum install -y epel-release ansible python-pip python-devel
+Ubuntu
+------
+Pre-configuration for the Ansible installation:
+.. code-block:: bash
+   sudo apt update; sudo apt install -y software-properties-common
+   sudo apt-add-repository ppa:ansible/ansible
+Install Ansible:
+.. code-block:: bash
+   sudo apt install -y ansible python-pip python-dev
.. note::
You may find it helpful to create a directory to contain the files downloaded
@@ -370,11 +407,12 @@ Clone kolla-kubernetes::
Install kolla-ansible and kolla-kubernetes::
-sudo pip install -U kolla-ansible/ kolla-kubernetes/
+sudo pip install -U ./kolla-ansible/
+sudo pip install -U ./kolla-kubernetes/
Copy default Kolla configuration to /etc::
-sudo cp -aR /usr/share/kolla-ansible/etc_examples/kolla /etc
+sudo cp -aR /usr/local/share/kolla-ansible/etc_examples/kolla /etc
Copy default kolla-kubernetes configuration to /etc::
@@ -395,11 +433,12 @@ Label the AIO node as the compute and controller node::
.. warning:
-The kolla-kubernetes deliverable has two configuration files. This is a little
-clunky and we know about the problem :) We are working on getting all configuration
-into cloud.yaml. Until that is fixed the variable in globals.yml `kolla_install_type`
-must have the same contents as the variable in cloud.yaml `install_type`. In this
-document we use the setting `source` although `binary` could also be used.
+The kolla-kubernetes deliverable has two configuration files. This is a
+little clunky and we know about the problem :) We are working on getting
+all configuration into cloud.yaml. Until that is fixed the variable in
+globals.yml `kolla_install_type` must have the same contents as the
+variable in cloud.yaml `install_type`. In this document we use the setting
+`source` although `binary` could also be used.
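Until that consolidation lands, a quick check that the two settings agree can save a broken deploy. This is a hedged sketch only, run against toy stand-ins for the two files; `grep`/`sed` patterns this naive may need adjusting for real configs:

```shell
# Toy stand-ins for /etc/kolla/globals.yml and the local cloud.yaml.
globals=$(mktemp); cloud=$(mktemp)
echo 'kolla_install_type: "source"' > "$globals"
echo 'install_type: "source"' > "$cloud"
# Extract the quoted value from each file and compare.
a=$(grep kolla_install_type "$globals" | sed 's/.*"\(.*\)".*/\1/')
b=$(grep install_type "$cloud" | sed 's/.*"\(.*\)".*/\1/')
[ "$a" = "$b" ] && echo "install types match: $a"
rm -f "$globals" "$cloud"
```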
Modify Kolla ``/etc/kolla/globals.yml`` configuration file::
@@ -409,6 +448,10 @@ Modify Kolla ``/etc/kolla/globals.yml`` configuration file::
Neutron interface name. E.g: `eth1`. This is the external
interface that Neutron will use. It must not have an IP address
assigned to it.
+3. If you are deploying AIO, set `kolla_internal_vip_address` in
+   `/etc/kolla/globals.yml` to an IP address that is reachable from the
+   host; the IP address assigned to `network_interface` is a simple choice.
+   In addition, set `enable_haproxy` to `no`.
Add required configuration to the end of ``/etc/kolla/globals.yml``::
@@ -470,8 +513,11 @@ QEMU libvirt functionality and enable a workaround for a bug in libvirt::
Generate the default configuration::
-ansible-playbook -e ansible_python_interpreter=/usr/bin/python -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla ansible/site.yml
+pushd /usr/local/share/kolla-kubernetes/
+sudo ansible-playbook -e ansible_python_interpreter=/usr/bin/python \
+    -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml \
+    -e CONFIG_DIR=/etc/kolla ./ansible/site.yml
+popd
Generate the Kubernetes secrets and register them with Kubernetes::
@@ -494,7 +540,7 @@ Build all Helm microcharts, service charts, and metacharts::
kolla-kubernetes/tools/helm_build_all.sh .
-Check that all Helm images have been built by verifying the number is > 150::
+Check that all Helm images have been built by verifying the number is > 200::
ls | grep ".tgz" | wc -l
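The count check above is just `ls` piped through `grep`. This sketch runs the same pipeline against a scratch directory holding a known number of fake chart tarballs (the file names are made up), so the expected count is deterministic:

```shell
# Scratch directory with three fake .tgz charts and one non-chart file.
dir=$(mktemp -d)
touch "$dir/chart-a.tgz" "$dir/chart-b.tgz" "$dir/chart-c.tgz" "$dir/README"
# grep -c counts matching lines directly, equivalent to grep | wc -l.
count=$(ls "$dir" | grep -c ".tgz")
echo "$count"
rm -rf "$dir"
```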
@@ -502,62 +548,67 @@ Create a local cloud.yaml file for the deployment of the charts::
cat <<EOF > cloud.yaml
global:
-  kolla:
-    all:
-      docker_registry: docker.io
-      image_tag: "4.0.0"
-      kube_logger: false
-      external_vip: "192.168.7.105"
-      base_distro: "centos"
-      install_type: "source"
-      tunnel_interface: "docker0"
-    keystone:
-      all:
-        admin_port_external: "true"
-        dns_name: "192.168.7.105"
-      public:
-        all:
-          port_external: "true"
-    rabbitmq:
-      all:
-        cookie: 67
-    glance:
-      api:
-        all:
-          port_external: "true"
-    cinder:
-      api:
-        all:
-          port_external: "true"
-      volume_lvm:
-        all:
-          element_name: cinder-volume
-        daemonset:
-          lvm_backends:
-          - '192.168.7.105': 'cinder-volumes'
-    ironic:
-      conductor:
-        daemonset:
-          selector_key: "kolla_conductor"
-    nova:
-      placement_api:
-        all:
-          port_external: true
-      novncproxy:
-        all:
-          port: 6080
-          port_external: true
-    openvswitch:
-      all:
-        add_port: true
-        ext_bridge_name: br-ex
-        ext_interface_name: enp1s0f1
-        setup_bridge: true
-    horizon:
-      all:
-        port_external: true
+  kolla:
+    all:
+      docker_registry: docker.io
+      image_tag: "4.0.0"
+      kube_logger: false
+      external_vip: "192.168.7.105"
+      base_distro: "centos"
+      install_type: "source"
+      tunnel_interface: "docker0"
+    keystone:
+      all:
+        admin_port_external: "true"
+        dns_name: "192.168.7.105"
+        port: 5000
+      public:
+        all:
+          port_external: "true"
+    rabbitmq:
+      all:
+        cookie: 67
+    glance:
+      api:
+        all:
+          port_external: "true"
+    cinder:
+      api:
+        all:
+          port_external: "true"
+      volume_lvm:
+        all:
+          element_name: cinder-volume
+        daemonset:
+          lvm_backends:
+          - '192.168.7.105': 'cinder-volumes'
+    ironic:
+      conductor:
+        daemonset:
+          selector_key: "kolla_conductor"
+    nova:
+      placement_api:
+        all:
+          port_external: true
+      novncproxy:
+        all:
+          port: 6080
+          port_external: true
+    openvswitch:
+      all:
+        add_port: true
+        ext_bridge_name: br-ex
+        ext_interface_name: enp1s0f1
+        setup_bridge: true
+    horizon:
+      all:
+        port_external: true
EOF
.. warning::
base_distro: ``ubuntu`` does not currently work. Use ``centos``.
.. warning::
This file is populated with several values that will need to
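One way to find the site-specific values that need editing is to grep for the sample addresses and interface names. This sketch runs against a two-line stand-in fragment rather than the real cloud.yaml, and the pattern list is only a starting point:

```shell
# Stand-in fragment containing two of the site-specific sample values
# (the real cloud.yaml has several more occurrences).
sample=$(mktemp)
printf 'external_vip: "192.168.7.105"\next_interface_name: enp1s0f1\n' > "$sample"
# Count lines carrying values that must be adapted to the local site.
hits=$(grep -cE '192\.168\.7\.105|enp1s0f1|br-ex' "$sample")
echo "$hits"
rm -f "$sample"
```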
@@ -701,6 +752,11 @@ Create a floating IP address and add to the VM::
openstack server add floating ip demo1 $(openstack floating ip create public1 -f value -c floating_ip_address)
+.. warning::
+   This command doesn't work correctly due to the following bug:
+   https://bugs.launchpad.net/python-openstackclient/+bug/1701079
Troubleshooting and Tear Down
=============================
@@ -715,13 +771,13 @@ TroubleShooting
Determine IP and port information::
-$ kubectl get svc -n kube-system
+kubectl get svc -n kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
canal-etcd 10.3.3.100 <none> 6666/TCP 16h
kube-dns 10.3.3.10 <none> 53/UDP,53/TCP 16h
tiller-deploy 10.3.3.7 <none> 44134/TCP 16h
-$ kubectl get svc -n kolla
+kubectl get svc -n kolla
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cinder-api 10.3.3.6 10.240.43.81 8776/TCP 15h
glance-api 10.3.3.150 10.240.43.81 9292/TCP 15h
@@ -742,7 +798,7 @@ Determine IP and port information::
View all k8's namespaces::
-$ kubectl get namespaces
+kubectl get namespaces
NAME STATUS AGE
default Active 16h
kolla Active 15h
@@ -756,14 +812,14 @@ Kolla Describe a pod in full detail::
View all deployed services::
-$ kubectl get deployment -n kube-system
+kubectl get deployment -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-dns 1 1 1 1 20h
tiller-deploy 1 1 1 1 20h
View configuration maps::
-$ kubectl get configmap -n kube-system
+kubectl get configmap -n kube-system
NAME DATA AGE
canal-config 4 20h
cinder-control.v1 1 20h
@@ -784,13 +840,13 @@ View configuration maps::
General Cluster information::
-$ kubectl cluster-info
+kubectl cluster-info
Kubernetes master is running at https://192.168.122.2:6443
KubeDNS is running at https://192.168.122.2:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
View all jobs::
-$ kubectl get jobs --all-namespaces
+kubectl get jobs --all-namespaces
NAMESPACE NAME DESIRED SUCCESSFUL AGE
kolla cinder-create-db 1 1 20h
kolla cinder-create-keystone-endpoint-admin 1 1 20h
@@ -801,7 +857,7 @@ View all jobs::
View all deployments::
-$ kubectl get deployments --all-namespaces
+kubectl get deployments --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kolla cinder-api 1 1 1 1 20h
kolla glance-api 1 1 1 1 20h
@@ -818,13 +874,13 @@ View all deployments::
View secrets::
-$ kubectl get secrets
+kubectl get secrets
NAME TYPE DATA AGE
default-token-3dzfp kubernetes.io/service-account-token 3 20h
View docker images::
-$ sudo docker images
+sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/kubernetes-helm/tiller v2.3.1 38527daf791d 7 days ago 56 MB
quay.io/calico/cni v1.6.2 db2dedf2181a 2 weeks ago 65.08 MB
@@ -888,16 +944,16 @@ Access Horizon GUI
------------------
1. Determine Horizon `EXTERNAL IP` Address::
-$ kubectl get svc horizon --namespace=kolla
+kubectl get svc horizon --namespace=kolla
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
horizon 10.3.3.237 10.240.43.175 80/TCP 1d
2. Determine username and password from keystone::
-$ cat ~/keystonerc_admin | grep OS_USERNAME
+cat ~/keystonerc_admin | grep OS_USERNAME
export OS_USERNAME=admin
-$ cat ~/keystonerc_admin | grep OS_PASSWORD
+cat ~/keystonerc_admin | grep OS_PASSWORD
export OS_PASSWORD=Sr6XMFXvbvxQCJ3Cib1xb0gZ3lOtBOD8FCxOcodU
3. Run a browser that has access to your network, and access Horizon
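The two `grep` commands in step 2 pull exported variables out of the rc file by eye; to script the same extraction, cut on `=`. This sketch runs against a toy rc file with placeholder values, not the real `~/keystonerc_admin`:

```shell
# Toy keystonerc file; both values are placeholders.
rc=$(mktemp)
printf 'export OS_USERNAME=admin\nexport OS_PASSWORD=placeholder\n' > "$rc"
# Take everything after the '=' on the matching line.
user=$(grep OS_USERNAME "$rc" | cut -d= -f2)
echo "$user"
rm -f "$rc"
```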