Reduce the default values for Ceph pgs
Required to keep Ceph working once we move to Luminous 12.2.1

Change-Id: I8d3e56f2053c939ea313c60cc04c0ff79dd27d25
Closes-Bug: 1763356
commit 36f33f089b (parent cc6960dd04)
@@ -723,8 +723,11 @@ ceph_rule: "default host {{ 'indep' if ceph_pool_type == 'erasure' else 'firstn'
 ceph_cache_rule: "cache host firstn"

 # Set the pgs and pgps for pool
-ceph_pool_pg_num: 128
-ceph_pool_pgp_num: 128
+# WARNING! These values are dependent on the size and shape of your cluster -
+# the default values are not suitable for production use. Please refer to the
+# Kolla Ceph documentation for more information.
+ceph_pool_pg_num: 8
+ceph_pool_pgp_num: 8

 #####################
 # VMware support
@@ -126,6 +126,18 @@ RadosGW is optional, enable it in ``/etc/kolla/globals.yml``:

 .. end

+.. note::
+
+   Regarding number of placement groups (PGs)
+
+   Kolla sets very conservative values for the number of PGs per pool
+   (`ceph_pool_pg_num` and `ceph_pool_pgp_num`). This is in order to ensure
+   the majority of users will be able to deploy Ceph out of the box. It is
+   *highly* recommended to consult the official Ceph documentation regarding
+   these values before running Ceph in any kind of production scenario.
+
+.. end
+
 RGW requires a healthy cluster in order to be successfully deployed. On initial
 start up, RGW will create several pools. The first pool should be in an
 operational state to proceed with the second one, and so on. So, in the case of
@@ -267,8 +267,11 @@ kolla_internal_vip_address: "10.10.10.254"
 #enable_ceph_rgw_keystone: "no"

 # Set the pgs and pgps for pool
-#ceph_pool_pg_num: 128
-#ceph_pool_pgp_num: 128
+# WARNING! These values are dependent on the size and shape of your cluster -
+# the default values are not suitable for production use. Please refer to the
+# Kolla Ceph documentation for more information.
+#ceph_pool_pg_num: 8
+#ceph_pool_pgp_num: 8

 #############################
 # Keystone - Identity Options
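Since the new defaults are deliberately tiny, operators of anything larger than a proof-of-concept cluster will want to override them. A minimal sketch of such an override in /etc/kolla/globals.yml follows; it is illustrative only and not part of this change. The value 128 simply restores the previous default, so size pg_num and pgp_num against the official Ceph placement group guidance for your own OSD count and pool layout before deploying.

    # Illustrative override only: choose values suited to your cluster.
    ceph_pool_pg_num: 128
    ceph_pool_pgp_num: 128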
releasenotes/notes/reduce-ceph-pgs-27e88e3b6e3b809c.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
+---
+issues:
+  - |
+    As of Ceph Luminous 12.2.1 the maximum number of PGs per OSD before the
+    monitor issues a warning has been reduced from 300 to 200 PGs. In addition,
+    Ceph now fails with an error rather than a warning in the case of exceeding
+    the max value.
+    In order to allow Kolla to continue to be used out of the box we have
+    reduced the default values for pg_num and pgp_num from 128 to 8. This will
+    allow a deploy of Kolla with all possible services enabled and then some,
+    with the minimum recommended three OSDs. Operators are *highly*
+    recommended to review the Ceph documentation regarding these values in
+    order to ensure optimal performance for their own cluster.
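To see why the old default tripped the new limit while the new one does not, here is a rough back-of-the-envelope check (the pool counts are assumptions chosen for illustration, not figures from this change), using the usual accounting of PGs per OSD as the sum over pools of pg_num times replica count, divided by the number of OSDs:

    # Rough PG-per-OSD accounting for the minimum 3-OSD, 3-replica case.
    # Pool counts are hypothetical and only illustrate the 200 PG/OSD cap.
    # old default:  2 pools * 128 PGs * 3 replicas / 3 OSDs = 256 PGs/OSD  (over the cap)
    # new default: 20 pools *   8 PGs * 3 replicas / 3 OSDs = 160 PGs/OSD  (under the cap)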