ceph: ceph.conf set max pgs per osd

When installing StarlingX in the AIO-SX configuration, the Ceph
cluster enters the "too many PGs per OSD" health warning state because
the small number of available OSDs is combined with the number of
pools times pg_num and the replication factor.

In Ceph Luminous the default value of mon_max_pg_per_osd is 200, which
is below the minimum required for StarlingX AIO-SX.

See https://ceph.com/community/new-luminous-pg-overdose-protection/
"""
  There is now a mon_max_pg_per_osd limit (default: 200) that prevents
  you from creating new pools or adjusting pg_num or replica count for
  existing pools if it pushes you over the configured limit (as
  determined by dividing the total number of PG instances by the total
  number of “in” OSDs).
"""

To fix this issue we need to set higher values for mon_max_pg_per_osd
and osd_max_pg_per_osd_hard_ratio.
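
On a cluster that is already running, the same limits could also be
raised at runtime, for example with injectargs (a sketch only; note
that injected values do not persist across daemon restarts, so the
ceph.conf change below is still needed):

  ceph tell mon.* injectargs '--mon_max_pg_per_osd 2048'
  ceph tell osd.* injectargs '--osd_max_pg_per_osd_hard_ratio 1.2'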

Story: 2003605
Task: 28860

Depends-On: I7b534e31868e53ec479c2321d6883604c12aa6d3
Change-Id: I302c850191d8ca9548ee12053f803df5abfdd5b4
Signed-off-by: Daniel Badea <daniel.badea@windriver.com>

@@ -38,6 +38,13 @@
# Use Hammer's report interval default value
osd_mon_report_interval_max = 120
# Configure max PGs per OSD to cover the worst-case scenario of all
# StarlingX deployments, i.e. AIO-SX with one OSD. Otherwise the
# default value provided by Ceph Mimic leads to a "too many PGs per
# OSD" health warning as the pools needed by stx-openstack are created.
mon_max_pg_per_osd = 2048
osd_max_pg_per_osd_hard_ratio = 1.2
[osd]
osd_mkfs_type = xfs
osd_mkfs_options_xfs = "-f"
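
After the updated ceph.conf is in place and the daemons have been
restarted, one way to confirm the running daemons picked up the new
values (the mon id below is a placeholder) is to query them over the
admin socket:

  ceph daemon mon.<id> config get mon_max_pg_per_osd
  ceph daemon osd.0 config get osd_max_pg_per_osd_hard_ratio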