Merge "Cleanup troubleshooting docs"
commit 3af383279c

@@ -390,6 +390,12 @@ created, using the following commands:

    kubectl exec -n ceph ceph-mon-0 -- ceph osd pool create images 8
    kubectl exec -n ceph ceph-mon-0 -- ceph osd pool create vms 8

The number of placement groups can be altered by replacing the 8
to meet your needs. It is important to note that using too large
a number for your placement groups may result in Ceph
becoming unhealthy. For more information on this topic, see
Ceph's documentation `here <http://docs.ceph.com/docs/master/rados/operations/placement-groups/#choosing-the-number-of-placement-groups>`_.
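
As a quick sanity check (a minimal sketch reusing the monitor pod and pool
names from the commands above), you can confirm the placement group count a
pool ended up with:

::

    kubectl exec -n ceph ceph-mon-0 -- ceph osd pool get images pg_num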

MariaDB Installation and Verification
-------------------------------------

@@ -8,34 +8,6 @@ deploying Charts in this repository.

Galera Cluster
==============

**CHART:** openstack-helm/mariadb (when ``developer-mode: false``)

MariaDB is a ``StatefulSet`` (``PetSets`` have been retired in
Kubernetes v1.5.0). As such, it initiates a 'seed' which is used to
deploy MariaDB members via `affinity/anti-affinity
<https://kubernetes.io/docs/user-guide/node-selection/>`__
features. Ceph uses this as well, so you will notice the
following behavior:

::

    NAMESPACE   NAME                 READY     STATUS              RESTARTS   AGE   IP              NODE
    openstack   mariadb-0            0/1       Running             0          28s   10.25.49.199    kubenode05
    openstack   mariadb-seed-0ckf4   1/1       Running             0          48s   10.25.162.197   kubenode01
    openstack   mariadb-0            1/1       Running             0          1m    10.25.49.199    kubenode05
    openstack   mariadb-1            0/1       Pending             0          0s    <none>
    openstack   mariadb-1            0/1       Pending             0          0s    <none>          kubenode04
    openstack   mariadb-1            0/1       ContainerCreating   0          0s    <none>          kubenode04
    openstack   mariadb-1            0/1       Running             0          3s    10.25.178.74    kubenode04

What you're seeing is the output of
``kubectl get pods -o wide --all-namespaces``, which is used to monitor
the seed host preparing each of the MariaDB/Galera members in order:
mariadb-0, then mariadb-1, then mariadb-2. This process can take up to a
few minutes, so be patient.
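
If you only want to follow the MariaDB members themselves, a narrower watch
works too (a minimal sketch, using the ``app=mariadb`` label shown on the pod
later in these docs):

::

    kubectl get pods -n openstack -l app=mariadb -o wide -w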

To test MariaDB, do the following:

::

@@ -50,32 +22,3 @@ To test MariaDB, do the following:

    | performance_schema |
    +--------------------+
    admin@kubenode01:~/projects/openstack-helm$

Now you can see that MariaDB is loaded, with databases intact! If you're
at this point, the rest of the installation is easy. You can run the
following to check on Galera:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl describe po/mariadb-0 -n openstack
    Name:           mariadb-0
    Namespace:      openstack
    Node:           kubenode05/192.168.3.25
    Start Time:     Fri, 23 Dec 2016 16:15:49 -0500
    Labels:         app=mariadb
                    galera=enabled
    Status:         Running
    IP:             10.25.49.199
    Controllers:    StatefulSet/mariadb
    ...
    ...
    ...
      FirstSeen   LastSeen   Count   From                   SubObjectPath              Type     Reason      Message
      ---------   --------   -----   ----                   -------------              ----     ------      -------
      5s          5s         1       {default-scheduler }                              Normal   Scheduled   Successfully assigned mariadb-0 to kubenode05
      3s          3s         1       {kubelet kubenode05}   spec.containers{mariadb}   Normal   Pulling     pulling image "quay.io/stackanetes/stackanetes-mariadb:newton"
      2s          2s         1       {kubelet kubenode05}   spec.containers{mariadb}   Normal   Pulled      Successfully pulled image "quay.io/stackanetes/stackanetes-mariadb:newton"
      2s          2s         1       {kubelet kubenode05}   spec.containers{mariadb}   Normal   Created     Created container with docker id f702bd7c11ef; Security:[seccomp=unconfined]
      2s          2s         1       {kubelet kubenode05}   spec.containers{mariadb}   Normal   Started     Started container with docker id f702bd7c11ef

Here you can see that Galera is enabled.
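
Beyond the pod labels, a quick way to confirm that the cluster itself has
formed is to query the Galera ``wsrep`` status variables from one of the
members (a minimal sketch, reusing the same root credentials as the MariaDB
verification command elsewhere in these docs):

::

    kubectl exec mariadb-0 -it -n openstack -- mysql -u root -p password -e "SHOW STATUS LIKE 'wsrep_cluster_size';"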

@@ -1,64 +0,0 @@

Diagnosing the problem
----------------------

In order to protect your general sanity, we've included a curated list
of verification and troubleshooting steps that may help you avoid some
potential issues while developing OpenStack-Helm.

MariaDB
~~~~~~~

To verify the state of MariaDB, use the following command:

::

    $ kubectl exec mariadb-0 -it -n openstack -- mysql -u root -p password -e 'show databases;'
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    +--------------------+
    $

Helm Server/Repository
~~~~~~~~~~~~~~~~~~~~~~

Sometimes you will run into Helm server or repository issues. For our
purposes, it's mostly safe to whack these. If you are developing
charts for other projects, use at your own risk (you most likely know
how to resolve these issues already).

To check for a running instance of Helm Server:

::

    $ ps -a | grep "helm serve"
    29452 ttys004    0:00.23 helm serve .
    35721 ttys004    0:00.00 grep --color=auto helm serve

Kill the "helm serve" running process:

::

    $ kill 29452
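
If you prefer not to look up the PID by hand, ``pkill`` is an equivalent
shortcut (assuming it is available on your workstation):

::

    $ pkill -f "helm serve"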

To clear out previous Helm repositories and reinstall a local
repository:

::

    $ helm repo list
    NAME            URL
    stable          https://kubernetes-charts.storage.googleapis.com/
    local           http://localhost:8879/charts
    $
    $ helm repo remove local

This allows you to re-add your local repository, if you ever need to do
these steps:

::

    $ helm repo add local http://localhost:8879/charts
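
Note that ``helm repo add`` will only succeed if something is listening on
``localhost:8879`` again. If you killed ``helm serve`` above, restart it
first (a minimal sketch, assuming the Helm v2 client shown in the ``ps``
output above):

::

    $ helm serve &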

@@ -8,8 +8,6 @@ Sometimes things go wrong. These guides will help you solve many common issues w
   :maxdepth: 2

   database
   development
   networking
   persistent-storage

Getting help

@@ -1,9 +0,0 @@

==========
Networking
==========

This guide is to help users debug any networking issues when deploying
Charts in this repository.

Diagnosing the problem
======================

@@ -3,81 +3,15 @@ Persistent Storage
==================

This guide is to help users debug any general storage issues when
deploying charts in this repository.

Ceph
====

**CHART:** openstack-helm/ceph

Ceph Deployment Status
~~~~~~~~~~~~~~~~~~~~~~

Ceph Validating PVC
~~~~~~~~~~~~~~~~~~~

To validate persistent volume claim (PVC) creation, we've placed a test
manifest in the ``./test/`` directory. Deploy this PVC and explore the
deployment:

::

    admin@kubenode01:~$ kubectl get pvc -o wide --all-namespaces -w
    NAMESPACE   NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    ceph        pvc-test   Bound     pvc-bc768dea-c93e-11e6-817f-001fc69c26d1   1Gi        RWO           9h
    admin@kubenode01:~$

The output above indicates that the PVC is 'bound' correctly. Now
digging deeper:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl describe pvc pvc-test -n ceph
    Name:           pvc-test
    Namespace:      ceph
    StorageClass:   general
    Status:         Bound
    Volume:         pvc-bc768dea-c93e-11e6-817f-001fc69c26d1
    Labels:         <none>
    Capacity:       1Gi
    Access Modes:   RWO
    No events.
    admin@kubenode01:~/projects/openstack-helm$

We can see that we have a VolumeID, and the 'capacity' is 1Gi. It is a
'general' storage class. It is just a simple test. You can safely delete
this test by issuing the following:

::

    admin@kubenode01:~/projects/openstack-helm$ kubectl delete pvc pvc-test -n ceph
    persistentvolumeclaim "pvc-test" deleted
    admin@kubenode01:~/projects/openstack-helm$

Ceph Validating StorageClass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Next we can look at the storage class, to make sure that it was created
correctly:

::

    admin@kubenode01:~$ kubectl describe storageclass/general
    Name:           general
    IsDefaultClass: No
    Annotations:    <none>
    Provisioner:    kubernetes.io/rbd
    Parameters:     adminId=admin,adminSecretName=pvc-ceph-conf-combined-storageclass,adminSecretNamespace=ceph,monitors=ceph-mon.ceph:6789,pool=rbd,userId=admin,userSecretName=pvc-ceph-client-key
    No events.
    admin@kubenode01:~$

The parameters are what we're looking for here. If we see parameters
passed to the StorageClass correctly, we will see the
``ceph-mon.ceph:6789`` hostname/port, things like ``userid``, and
appropriate secrets used for volume claims. This all looks great, and it
is time to check Ceph itself.

Ceph Validation
~~~~~~~~~~~~~~~

First, we want to validate that Ceph is working correctly. This
can be done with the following ceph command:

::

@@ -96,26 +30,55 @@ can be done with the following ceph command:

    admin@kubenode01:~$

Use one of your Ceph Monitors to check the status of the cluster. A
couple of things to note above; our health is ``HEALTH_OK``, we have 3
mons, we've established a quorum, and we can see that all of our OSDs
are up and in the OSD map.
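
If you want to re-check the OSD picture on its own, the monitor pod can
report it directly (a generic sketch in the same ``kubectl exec`` style as
the pool commands elsewhere in these docs, not necessarily the exact status
command referred to above):

::

    kubectl exec -n ceph ceph-mon-0 -- ceph osd stat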

For Glance and Cinder to operate, you will need to create some storage
pools for these systems. Additionally, Nova can be configured to use a
pool as well, but this is off by default.

::

    kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create volumes 128
    kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create images 128

Nova storage would be added like this:

::

    kubectl exec -n ceph -it ceph-mon-0 ceph osd pool create vms 128

Choosing the amount of storage is up to you and can be changed by
replacing the 128 to meet your needs. We are now ready to install our
next chart, MariaDB.

PVC Preliminary Validation
~~~~~~~~~~~~~~~~~~~~~~~~~~

Before proceeding, it is important to ensure that you have deployed a
client key in the namespace in which you wish to fulfill
``PersistentVolumeClaims``. To verify that your deployment namespace has
a client key:

::

    admin@kubenode01: $ kubectl get secret -n openstack
    NAME                  TYPE                                   DATA      AGE
    default-token-nvl10   kubernetes.io/service-account-token    3         7d
    pvc-ceph-client-key   kubernetes.io/rbd                      1         6m

Without this, your RBD-backed PVCs will never reach the ``Bound`` state. For
more information, see how to `activate namespace for ceph <../install/multinode.html#activating-control-plane-namespace-for-ceph>`_.

Note: This step is not relevant for PVCs within the same namespace Ceph
was deployed in.
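
If a claim does get stuck in ``Pending``, the events shown by
``kubectl describe`` usually point at the missing key (a generic check,
with a hypothetical claim name):

::

    kubectl describe pvc my-claim -n openstack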

Ceph Validating PVC Operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To validate persistent volume claim (PVC) creation, we've placed a test
manifest `here <https://raw.githubusercontent.com/openstack/openstack-helm/master/tests/pvc-test.yaml>`_.
Deploy this manifest and verify the job completes successfully.
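
A minimal sketch of doing that (``kubectl create -f`` accepts a URL
directly; the ``pvc-test`` job name is an assumption based on the manifest's
file name):

::

    kubectl create -f https://raw.githubusercontent.com/openstack/openstack-helm/master/tests/pvc-test.yaml
    # job name assumed from the manifest file name
    kubectl get jobs --all-namespaces | grep pvc-test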

Ceph Validating StorageClass
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Next we can look at the storage class, to make sure that it was created
correctly:

::

    admin@kubenode01:~$ kubectl describe storageclass/general
    Name:           general
    IsDefaultClass: No
    Annotations:    <none>
    Provisioner:    kubernetes.io/rbd
    Parameters:     adminId=admin,adminSecretName=pvc-ceph-conf-combined-storageclass,adminSecretNamespace=ceph,monitors=ceph-mon.ceph:6789,pool=rbd,userId=admin,userSecretName=pvc-ceph-client-key
    No events.
    admin@kubenode01:~$

The parameters are what we're looking for here. If we see parameters
passed to the StorageClass correctly, we will see the
``ceph-mon.ceph:6789`` hostname/port, things like ``userid``, and
appropriate secrets used for volume claims.
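
As a follow-up check, you can confirm that the secrets named in the
``Parameters`` line actually exist (a minimal sketch, using ``openstack`` as
the claim namespace from the preliminary validation step above):

::

    kubectl get secret pvc-ceph-conf-combined-storageclass -n ceph
    kubectl get secret pvc-ceph-client-key -n openstack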