Merge "Add ceph docs"

This commit is contained in:
Jenkins 2016-10-20 13:09:36 +00:00 committed by Gerrit Code Review
commit b30741f6bb
3 changed files with 300 additions and 1 deletion

doc/source/ceph.rst Normal file

@@ -0,0 +1,149 @@
.. _ceph:

====================
Ceph and Swift guide
====================

This guide provides instructions for adding Ceph and Swift support to a CCP
deployment.

.. NOTE:: It is expected that an external Ceph cluster is already available and
   accessible from all k8s nodes. If you don't have a Ceph cluster but still
   want to try CCP with Ceph, you can use the :doc:`ceph_cluster` guide to
   deploy a simple three-node Ceph cluster.

Ceph
~~~~

Prerequisites
-------------

You need to ensure that these pools are created:

* images
* volumes
* vms

Also make sure that the "glance" and "cinder" users are created and have these
permissions (an example of creating the pools and users is sketched below):

::

    client.cinder
        caps: [mon] allow r
        caps: [osd] allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images
    client.glance
        caps: [mon] allow r
        caps: [osd] allow rwx pool=images, allow rwx pool=vms
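
If the pools or users do not exist yet, here is a minimal sketch of commands
you could run on any Ceph node. It mirrors the commands in the
:doc:`ceph_cluster` guide; the placement-group count of 64 is only an
assumption, so tune it for your cluster:

::

    # create the pools CCP expects (pg count of 64 is just an example)
    ceph osd pool create images 64
    ceph osd pool create volumes 64
    ceph osd pool create vms 64

    # create the users with the capabilities listed above
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images, allow rwx pool=vms'
    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'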

Deploy CCP with Ceph
--------------------

In order to deploy CCP with Ceph, you have to edit the ``ccp.yaml`` file:

::

    configs:
      ceph:
        fsid: "FSID_OF_THE_CEPH_CLUSTER"
        mon_host: "CEPH_MON_HOSTNAME"
      cinder:
        ceph:
          enable: true
          key: "CINDER_CEPH_KEY"
          rbd_secret_uuid: "RANDOM_UUID"
      glance:
        ceph:
          enable: true
          key: "GLANCE_CEPH_KEY"
      nova:
        ceph:
          enable: true

Example:

::

    configs:
      ceph:
        fsid: "afca8524-2c47-4b81-a0b7-2300e62212f9"
        mon_host: "10.90.0.5"
      cinder:
        ceph:
          enable: true
          key: "AQBShfJXID9pFRAAm4VLpbNXa4XJ9zgAh7dm2g=="
          rbd_secret_uuid: "b416770d-f3d4-4ac9-b6db-b6a7ac1c61c0"
      glance:
        ceph:
          enable: true
          key: "AQBShfJXzXyNBRAA5kqXzCKcFoPBn2r6VDYdag=="
      nova:
        ceph:
          enable: true

- ``fsid`` - Should be the same as the ``fsid`` variable in the Ceph cluster's
  ``ceph.conf`` file.
- ``mon_host`` - Should contain any Ceph mon node IP or hostname.
- ``key`` - Should be taken from the corresponding Ceph user. You can use the
  ``ceph auth list`` command on a Ceph node to fetch a list of all users and
  their keys.
- ``rbd_secret_uuid`` - Should be randomly generated. You can use the
  ``uuidgen`` command for this. A sketch of gathering all these values follows
  the list.
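
As a rough sketch, you could gather the values on any Ceph node like this
(assuming the default ``/etc/ceph/ceph.conf`` location):

::

    # fsid for the "ceph" section of ccp.yaml
    grep fsid /etc/ceph/ceph.conf

    # keys for the "cinder" and "glance" sections
    ceph auth get-key client.cinder
    ceph auth get-key client.glance

    # a fresh random uuid for rbd_secret_uuid (can be run anywhere)
    uuidgen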

Make sure that your deployment topology has a ``cinder`` service. You could
use ``etc/topology-with-ceph-example.yaml`` as a reference.

Now you're ready to deploy CCP with Ceph support.
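
For example, a sketch of the deployment step itself, assuming the ``ccp``
command-line client is installed and that you pass your ``ccp.yaml`` via
``--config-file`` (the exact invocation is an assumption; adjust it to however
you normally run CCP):

::

    # assumes ccp.yaml is in the current directory; adjust the path as needed
    ccp --config-file ccp.yaml deploy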

Swift
~~~~~

Prerequisites
-------------

Make sure that your deployment topology has a ``radosgw`` service. You could
use ``etc/topology-with-ceph-example.yaml`` as a reference.

Deploy CCP with Swift
---------------------

.. NOTE:: Currently, in CCP, only Glance supports Swift as a backend.

In order to deploy CCP with Swift, you have to edit the ``ccp.yaml`` file:

::

    ceph:
      fsid: "FSID_OF_THE_CEPH_CLUSTER"
      mon_host: "CEPH_MON_HOSTNAME"
    radosgw:
      key: "RADOSGW_CEPH_KEY"
    glance:
      swift:
        enable: true
        store_create_container_on_put: true

Example:

::

    ceph:
      fsid: "afca8524-2c47-4b81-a0b7-2300e62212f9"
      mon_host: "10.90.0.2,10.90.0.3,10.90.0.4"
    radosgw:
      key: "AQBIGP5Xs6QFCRAAkCf5YWeBHBlaj6S1rkcCYA=="
    glance:
      swift:
        enable: true
        store_create_container_on_put: true
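
The ``radosgw`` key can be fetched the same way as the other keys; a minimal
sketch, assuming the gateway user is named ``client.radosgw.gateway`` as in the
:doc:`ceph_cluster` guide:

::

    # prints only the key, ready to paste into ccp.yaml
    ceph auth get-key client.radosgw.gateway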

Troubleshooting
---------------

If the Glance image upload fails, you should check a few things (a sketch of
fetching pod logs follows the list):

- Glance-api pod logs
- Radosgw pod logs
- Keystone pod logs
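
A minimal sketch of pulling those logs with ``kubectl`` (the ``ccp`` namespace
is an assumption; use whichever namespace your CCP deployment runs in):

::

    # find the relevant pods (namespace "ccp" is an assumption)
    kubectl --namespace ccp get pods | grep -E 'glance|radosgw|keystone'

    # then inspect a specific pod, e.g. a glance-api pod
    kubectl --namespace ccp logs <glance-api-pod-name>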

doc/source/ceph_cluster.rst Normal file

@@ -0,0 +1,148 @@
.. _ceph_cluster:

=======================
Ceph cluster deployment
=======================

.. WARNING:: This setup is very simple, limited, and not suitable for real
   production use. Use it as an example only.

Using this guide you'll deploy a three-node Ceph cluster with RadosGW.

Prerequisites
~~~~~~~~~~~~~

- Three nodes with at least one unused disk available (you can verify this
  with ``lsblk``, as sketched after the node list below).
- In this example we're going to use Ubuntu 16.04. If you're using a different
  OS, you have to adjust the following configs and commands to suit it.

In this doc we refer to these nodes as:

- ceph_node_hostname1
- ceph_node_hostname2
- ceph_node_hostname3
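
To double-check that each node really has an unused disk, a quick sketch
(``/dev/sdb`` is just the example device used later in ``group_vars/osds``):

::

    # run on every node; the disk you plan to give to ceph should show
    # no partitions and no mountpoints
    lsblk /dev/sdb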

Installation
~~~~~~~~~~~~

::

    sudo apt install ansible
    git clone https://github.com/ceph/ceph-ansible.git

.. NOTE:: You'll need `this patch <https://github.com/ceph/ceph-ansible/pull/1011/>`__
   for proper radosgw setup.
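
If that patch is not yet in the branch you cloned, one way to pull it into
your local clone is via GitHub's ``pull/<id>/head`` ref (a sketch; run it
inside the ``ceph-ansible`` directory and resolve any conflicts yourself):

::

    git fetch origin pull/1011/head
    git merge FETCH_HEAD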

Configuration
~~~~~~~~~~~~~

cd into the ceph-ansible directory:

::

    cd ceph-ansible

Create ``group_vars/all`` with:

::

    ceph_origin: upstream
    ceph_stable: true
    ceph_stable_key: https://download.ceph.com/keys/release.asc
    ceph_stable_release: jewel
    ceph_stable_repo: "http://download.ceph.com/debian-{{ ceph_stable_release }}"
    cephx: true
    generate_fsid: false
    # Pre-created static fsid
    fsid: afca8524-2c47-4b81-a0b7-2300e62212f9
    # interface which ceph should use
    monitor_interface: NAME_OF_YOUR_INTERNAL_IFACE
    monitor_address: 0.0.0.0
    journal_size: 1024
    # network which you want to use for ceph
    public_network: 10.90.0.0/24
    cluster_network: "{{ public_network }}"

Make sure you change the ``NAME_OF_YOUR_INTERNAL_IFACE`` placeholder to the
actual interface name, like ``eth0`` or ``ens*`` in modern OSs. A sketch of
finding the right name follows.
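
One way to find the interface that carries the 10.90.0.0/24 network used in
this example (a sketch; run on each node):

::

    # lists addresses with their interface names; pick the one on 10.90.0.x
    ip -o -4 addr show | grep '10\.90\.0\.'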

Create ``group_vars/osds`` with:

::

    fsid: afca8524-2c47-4b81-a0b7-2300e62212f9
    # Devices to use in ceph on all osd nodes.
    # Make sure the disk is empty and unused.
    devices:
      - /dev/sdb
    # Journal placement option.
    # This one means that the journal will be on the same drive, in another partition.
    journal_collocation: true

Create ``group_vars/mons`` with:

::

    fsid: afca8524-2c47-4b81-a0b7-2300e62212f9
    monitor_secret: AQAjn8tUwBpnCRAAU8X0Syf+U8gfBvnbUkDPyg==

Create an inventory file with:

::

    [mons]
    ceph_node_hostname1
    ceph_node_hostname2
    ceph_node_hostname3

    [osds]
    ceph_node_hostname1
    ceph_node_hostname2
    ceph_node_hostname3

Deploy
~~~~~~

Make sure you have passwordless ssh key access to each node (see the sketch
below if you still need to set that up), then run:

::

    ansible-playbook -e 'host_key_checking=False' -i inventory_file site.yml.sample
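
If key-based access is not configured yet, a minimal sketch (the ``ubuntu``
user name is an assumption; use whichever account ansible should log in as):

::

    ssh-keygen -t rsa            # skip if you already have a key
    ssh-copy-id ubuntu@ceph_node_hostname1
    ssh-copy-id ubuntu@ceph_node_hostname2
    ssh-copy-id ubuntu@ceph_node_hostname3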

Check Ceph deployment
~~~~~~~~~~~~~~~~~~~~~

Go to any Ceph node and run with root permissions:

::

    ceph -s

``health`` should be HEALTH_OK. HEALTH_WARN signifies a non-critical error;
check the description of the error to get an idea of how to fix it. HEALTH_ERR
signifies a critical error or a failed deployment.
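
When the status is HEALTH_WARN or HEALTH_ERR, ``ceph health detail`` usually
gives a more specific hint about which check is failing:

::

    ceph health detail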

Configure pools and users
~~~~~~~~~~~~~~~~~~~~~~~~~

On any Ceph node run:

::

    rados mkpool images
    rados mkpool volumes
    rados mkpool vms

::

    ceph auth get-or-create client.glance osd 'allow rwx pool=images, allow rwx pool=vms' mon 'allow r' -o /etc/ceph/ceph.client.glance.keyring
    ceph auth get-or-create client.cinder osd 'allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images' mon 'allow r' -o /etc/ceph/ceph.client.cinder.keyring
    ceph auth get-or-create client.radosgw.gateway osd 'allow rwx' mon 'allow rwx' -o /etc/ceph/ceph.client.radosgw.keyring

To list all users with their permissions and keys, run:

::

    ceph auth list

Now you're ready to use this Ceph cluster with CCP.


@@ -23,6 +23,8 @@ Advanced topics
   :maxdepth: 1

   deploying_multiple_parallel_environments
   ceph
   ceph_cluster
Developer docs
--------------
@@ -44,7 +46,7 @@ Design docs
design/index
Indices and tables
==================
------------------
* :ref:`genindex`
* :ref:`modindex`