Enable Ceph charts to be rack aware for CRUSH

Add support for a rack-level CRUSH map. Rack-level CRUSH support is
enabled by selecting the "rack_replicated_rule" CRUSH rule.

Change-Id: I4df224f2821872faa2eddec2120832e9a22f4a7c
Matthew Heler
2018-11-16 12:20:52 -06:00
parent 5d356f9265
commit 5ce9f2eb3b
5 changed files with 46 additions and 9 deletions


@@ -107,6 +107,18 @@ conf:
    osd_mount_options_xfs: "rw,noatime,largeio,inode64,swalloc,logbufs=8,logbsize=256k,allocsize=4M"
    osd_journal_size: 10240
  pool:
    default:
      # NOTE(supamatt): Accepted values are:
      #   same_host for a single node
      #   replicated_rule for a multi-node cluster
      #   rack_replicated_rule for a multi-node cluster spanning multiple (>=3) racks
      # The Ceph cluster must be in a healthy state before the rule is changed.
      crush_rule: replicated_rule
      # NOTE(supamatt): By default, use the first 8 characters of the hostname to
      # define the rack-type bucket names for CRUSH.
      rack_regex: "1-8"
  storage:
    # NOTE(portdirect): for homogeneous clusters the `osd` key can be used to
    # define OSD pods that will be deployed across the cluster.
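With the defaults above in place, rack-aware replication is requested through a values override. A minimal sketch, assuming a deployment with at least three racks and hostnames whose first 8 characters identify the rack (the nesting mirrors the hunk above):

```yaml
# Values override enabling the rack-level CRUSH rule added by this commit.
conf:
  pool:
    default:
      # Requires >=3 racks and a healthy Ceph cluster before the rule applies.
      crush_rule: rack_replicated_rule
      # Assumed: the first 8 characters of each hostname encode the rack.
      rack_regex: "1-8"
```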