==================
Ceph and OpenStack
==================

Ceph is often used in combination with OpenStack, which raises a number of questions when scaling.

FAQ
===

Q: Is it better to have a single large Ceph cluster for a given datacenter or availability zone, or multiple smaller clusters to limit the impact in case of failure?

A:

Q: How to optimize Ceph cluster performance when the cluster is composed of high-performance SSDs and standard HDDs?

A:
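
As an illustration, a common technique is to rely on CRUSH device classes
(available since Ceph Luminous) so that fast and slow pools never share
OSDs. A minimal sketch, assuming OSDs report the usual ``ssd`` and ``hdd``
classes; the rule and pool names are illustrative, not a SIG recommendation:

.. code-block:: bash

   # OSDs report their detected device class (hdd, ssd, nvme)
   ceph osd tree

   # One replicated CRUSH rule per device class,
   # with host as the failure domain
   ceph osd crush rule create-replicated fast-rule default host ssd
   ceph osd crush rule create-replicated slow-rule default host hdd

   # Bind each pool to the matching rule so SSD and HDD
   # traffic stay separated
   ceph osd pool create volumes-ssd 128
   ceph osd pool set volumes-ssd crush_rule fast-rule
   ceph osd pool create volumes-hdd 128
   ceph osd pool set volumes-hdd crush_rule slow-rule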

Q: Should erasure coding be used to increase resilience? What's its impact on performance in a VM use case?

A:
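
As an illustration of the trade-off, an erasure-coded pool with k=4 data
chunks and m=2 coding chunks stores data at 1.5x raw overhead (versus 3x
for triple replication) and survives two simultaneous host failures, at
the cost of extra CPU work and write latency. A minimal sketch for an RBD
(VM disk) use case; the pool and profile names are illustrative:

.. code-block:: bash

   # Profile: 4 data chunks + 2 coding chunks, spread across hosts
   ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host

   # Erasure-coded data pool; RBD needs partial overwrites enabled
   ceph osd pool create volumes-data 128 128 erasure ec-4-2
   ceph osd pool set volumes-data allow_ec_overwrites true

   # RBD image metadata must still live in a replicated pool;
   # only the data objects go to the erasure-coded pool
   ceph osd pool create volumes-meta 32
   rbd pool init volumes-meta
   rbd create --size 10G --data-pool volumes-data volumes-meta/test-image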

Q: How to optimize a Ceph cluster (number of nodes, configuration, enabled features...) in large OpenStack clusters (> 100 compute nodes)?

A:
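
As an illustration, two knobs that matter at this scale are placement
group sizing and recovery throttling. A minimal sketch, assuming a
Nautilus or later cluster; the pool name and the values are illustrative
starting points, not tuned recommendations:

.. code-block:: bash

   # Let the autoscaler keep PG counts proportional to pool usage
   ceph osd pool autoscale-status
   ceph osd pool set volumes pg_autoscale_mode on

   # Throttle backfill and recovery so rebalancing does not starve
   # client I/O coming from hundreds of compute nodes
   ceph config set osd osd_max_backfills 1
   ceph config set osd osd_recovery_max_active 1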

Resources
=========