Transfer Ceph page to RST format

Signed-off-by: Ramona Rautenberg <rautenberg@osism.tech>
Change-Id: Ie040566682d2cac2d9a84ac19281d222398502ac
Ramona Rautenberg 2022-07-01 11:56:15 +02:00
parent 97066f8399
commit ad0d6f0141
1 changed file with 32 additions and 4 deletions


@ -1,6 +1,34 @@
================================
Large Scale SIG/CephAndOpenStack
================================

.. note:: Work in progress.

Ceph is often used in combination with OpenStack, which raises a number of questions when scaling.

FAQ
---

Q: Is it better to have a single large Ceph cluster for a given datacenter or availability zone, or multiple smaller clusters to limit the impact in case of failure?

A:
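
Whichever way that trade-off falls, OpenStack can consume several Ceph clusters side by side, since each cluster can be declared as its own Cinder backend. A minimal sketch of such a multi-backend configuration (the backend names, pool name, and per-cluster ``ceph.conf`` paths are illustrative assumptions, not values from this page):

.. code-block:: ini

   # /etc/cinder/cinder.conf -- one backend section per Ceph cluster
   [DEFAULT]
   enabled_backends = ceph-az1,ceph-az2

   [ceph-az1]
   volume_driver = cinder.volume.drivers.rbd.RBDDriver
   volume_backend_name = ceph-az1
   rbd_pool = volumes
   # hypothetical per-cluster configuration file
   rbd_ceph_conf = /etc/ceph/az1.conf
   rbd_user = cinder

   [ceph-az2]
   volume_driver = cinder.volume.drivers.rbd.RBDDriver
   volume_backend_name = ceph-az2
   rbd_pool = volumes
   # hypothetical per-cluster configuration file
   rbd_ceph_conf = /etc/ceph/az2.conf
   rbd_user = cinder
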

Q: How can Ceph cluster performance be optimized when the cluster mixes high-performance SSDs and standard HDDs?

A:
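
One widely used technique for mixed-media clusters is CRUSH device classes: Ceph labels each OSD as ``hdd`` or ``ssd`` automatically, and a CRUSH rule per class keeps latency-sensitive pools on flash only. A sketch, assuming two placeholder pools named ``vms`` and ``backups``:

.. code-block:: console

   # Inspect the auto-detected device class of each OSD
   $ ceph osd tree

   # Create one replicated CRUSH rule per device class
   $ ceph osd crush rule create-replicated fast default host ssd
   $ ceph osd crush rule create-replicated slow default host hdd

   # Pin each pool to the media that matches its workload
   $ ceph osd pool set vms crush_rule fast
   $ ceph osd pool set backups crush_rule slow
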

Q: Should erasure coding be used to increase resilience? What is its impact on performance in a VM use case?

A:
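
As background for this question: erasure coding trades capacity savings against extra CPU work and write latency, and RBD (the usual VM disk path) can place only image *data* on an erasure-coded pool, with metadata kept in a replicated pool and partial overwrites enabled. A sketch, assuming a k=4/m=2 profile and placeholder pool and image names:

.. code-block:: console

   # Profile: 4 data chunks + 2 coding chunks, spread across hosts
   $ ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host

   # Erasure-coded data pool; RBD requires partial overwrites on it
   $ ceph osd pool create ec-data 64 64 erasure ec-4-2
   $ ceph osd pool set ec-data allow_ec_overwrites true

   # Metadata stays in a replicated pool (here an assumed existing
   # pool named "volumes"), data goes to the EC pool
   $ rbd create --size 100G --data-pool ec-data volumes/vm-disk-01
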

Q: How should a Ceph cluster (number of nodes, configuration, enabled features, ...) be tuned for large OpenStack deployments (> 100 compute nodes)?

A:
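
One concrete sizing knob behind this question is the placement-group count. A common rule of thumb targets on the order of 100 PGs per OSD, divided by the replica count and rounded to a power of two; recent Ceph releases can also adjust this automatically through the ``pg_autoscaler`` manager module. A sketch (``vms`` is a placeholder pool name):

.. code-block:: console

   # Rule of thumb: total PGs ~= (OSD count * 100) / replica size,
   # rounded to a power of two and divided among the pools.
   # Example: 500 OSDs, size=3 -> 500 * 100 / 3 ~= 16667 -> 16384.

   # Or let Ceph manage pg_num per pool instead of sizing it by hand
   $ ceph mgr module enable pg_autoscaler
   $ ceph osd pool set vms pg_autoscale_mode on
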

Resources
---------

* https://www.openstack.org/videos/summits/sydney-2017/the-dos-and-donts-for-ceph-and-openstack
* https://www.youtube.com/watch?v=OopRMUYiY5E
* https://www.youtube.com/watch?v=21LF2LC58MM
* https://www.youtube.com/watch?v=0i7ew3XXb7Q