Document a common Ceph bootstrap failure scenario

Saw a user with this problem recently in IRC, and have encountered it
myself frequently. If the Ceph bootstrap fails midway, subsequent
deploys will commonly fail on the 'fetching ceph keyrings' task.

Change-Id: I97176aa0904cd3153dfafe468f5cf94c95175ff7
Paul Bourke 2016-06-27 12:25:18 +01:00
parent 01d183da27
commit 49f32cd3c6
1 changed file with 20 additions and 0 deletions


@@ -185,3 +185,23 @@ The default pool Ceph creates is named **rbd**. It is safe to remove this pool:

::

    docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it

Troubleshooting
===============

Deploy fails during 'Fetching Ceph keyrings ... No JSON object could be decoded'
--------------------------------------------------------------------------------

If an initial deploy of Ceph fails, perhaps due to improper configuration
or a similar issue, the cluster will be left partially formed and will
need to be reset before a successful deploy is possible.
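
The presence of the stale volume can be confirmed on each monitor node
before removing it. This check is not part of the original procedure; it
is a minimal sketch, assuming the Docker CLI is available on the host:

::

    docker volume ls | grep ceph_mon_config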

To reset the cluster, the operator should remove the `ceph_mon_config`
volume from each Ceph monitor node:

::

    ansible \
      -i ansible/inventory/multinode \
      -a 'docker volume rm ceph_mon_config' \
      ceph-mon
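
Once the volume has been removed from all monitor nodes, the deploy can
be run again. A typical invocation, assuming the same multinode
inventory used above, would be:

::

    kolla-ansible deploy -i ansible/inventory/multinode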