Doc: Configuration: Remove some outdated Ceph info

A few small cleanups:
  - Remove irrelevant text about linux kernel
  - Remove recommendations about file systems
  - Remove outdated note about config option

Closes-Bug: #1716991
Change-Id: I0c1593a72473f0db5fb8b5e4d436fee4c9f5c62a
Eric Harney
2017-10-03 11:12:30 -04:00
parent 68c668cfc8
commit e4dfc98378


@@ -10,10 +10,9 @@ Ceph is a massively scalable, open source, distributed storage system.
 It is comprised of an object store, block store, and a POSIX-compliant
 distributed file system. The platform can auto-scale to the exabyte
 level and beyond. It runs on commodity hardware, is self-healing and
-self-managing, and has no single point of failure. Ceph is in the Linux
-kernel and is integrated with the OpenStack cloud operating system. Due
-to its open-source nature, you can install and use this portable storage
-platform in public or private clouds.
+self-managing, and has no single point of failure. Due to its open-source
+nature, you can install and use this portable storage platform in
+public or private clouds.

 .. figure:: ../../figures/ceph-architecture.png
@@ -32,9 +31,9 @@ components:
    OSD (physical or logical storage unit for your data).
    You must run this daemon on each server in your cluster. For each
    OSD, you can have an associated hard drive disk. For performance
-   purposes, pool your hard drive disk with raid arrays, logical volume
-   management (LVM), or B-tree file system (Btrfs) pooling. By default,
-   the following pools are created: data, metadata, and RBD.
+   purposes, pool your hard drive disk with raid arrays, or logical volume
+   management (LVM). By default, the following pools are created: data,
+   metadata, and RBD.

 *Meta-Data Server (MDS)*
    Stores metadata. MDSs build a POSIX file
@@ -50,19 +49,6 @@ components:
    the data. In an ideal setup, you must run at least three ``ceph-mon``
    daemons on separate servers.

-Ceph developers recommend XFS for production deployments, Btrfs for
-testing, development, and any non-critical deployments. Btrfs has the
-correct feature set and roadmap to serve Ceph in the long-term, but XFS
-and ext4 provide the necessary stability for todays deployments.
-
-.. note::
-
-   If using Btrfs, ensure that you use the correct version (see `Ceph
-   Dependencies <http://ceph.com/docs/master/start/os-recommendations/.>`__).
-
-For more information about usable file systems, see
-`ceph.com/ceph-storage/file-system/ <http://ceph.com/ceph-storage/file-system/>`__.
-
 Ways to store, use, and expose data
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -101,9 +87,4 @@ Driver options
 The following table contains the configuration options supported by the
 Ceph RADOS Block Device driver.

-.. note::
-
-   The ``volume_tmp_dir`` option has been deprecated and replaced by
-   ``image_conversion_dir``.
-
 .. include:: ../../tables/cinder-storage_ceph.inc
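
For context, the table included above (cinder-storage_ceph.inc) documents the RBD
driver options that are set in cinder.conf. A minimal sketch of a Ceph-backed
backend section follows; the pool name, user, directory, and secret UUID are
illustrative assumptions, not values taken from this commit:

    [DEFAULT]
    enabled_backends = ceph
    # image_conversion_dir replaces the deprecated volume_tmp_dir option
    image_conversion_dir = /var/lib/cinder/conversion

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    # Ceph pool and cephx user dedicated to Cinder volumes (assumed names)
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # UUID of the libvirt secret holding the cephx key on compute nodes
    rbd_secret_uuid = <libvirt secret UUID>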