From 49f32cd3c6da656132333115fce4e38f5d779b16 Mon Sep 17 00:00:00 2001
From: Paul Bourke
Date: Mon, 27 Jun 2016 12:25:18 +0100
Subject: [PATCH] Document a common Ceph bootstrap failure scenario

Saw a user with this problem recently in IRC, and have encountered it
myself frequently. If the Ceph bootstrap fails mid-way, subsequent
deploys will commonly fail on the 'fetching ceph keyrings' task.

Change-Id: I97176aa0904cd3153dfafe468f5cf94c95175ff7
---
 doc/ceph-guide.rst | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/doc/ceph-guide.rst b/doc/ceph-guide.rst
index 1f892e13ba..8a88f78b94 100644
--- a/doc/ceph-guide.rst
+++ b/doc/ceph-guide.rst
@@ -185,3 +185,23 @@ The default pool Ceph creates is named **rbd**. It is safe to remove this pool:
 ::
 
     docker exec ceph_mon ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
+
+Troubleshooting
+===============
+
+Deploy fails during 'Fetching Ceph keyrings ... No JSON object could be decoded'
+--------------------------------------------------------------------------------
+
+If an initial deploy of Ceph fails, for example due to improper configuration,
+the cluster will be left partially formed and must be reset before a
+subsequent deploy can succeed.
+
+To do this, the operator should remove the ``ceph_mon_config`` volume from
+each Ceph monitor node:
+
+::
+
+    ansible \
+      -i ansible/inventory/multinode \
+      -a 'docker volume rm ceph_mon_config' \
+      ceph-mon
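
As a quick sanity check before re-running the deploy, the reset can be
confirmed by listing the remaining Docker volumes on each monitor node. This
is a sketch only, assuming the same multinode inventory and ``ceph-mon``
group used in the command above:

::

    # ceph_mon_config should no longer appear in the output on any node
    ansible \
      -i ansible/inventory/multinode \
      -a 'docker volume ls' \
      ceph-mon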