Deprecate CephPoolDefaultPgNum and CephPoolDefaultSize

CephPoolDefaultPgNum and CephPoolDefaultSize no longer affect the
Ceph deployment, and pools are created without a pg_num unless one
is provided per pool by overriding CephPools. The parameters are
still present but have no effect.

If you have already deployed a Ceph RBD cluster, would you expect an
overcloud deployment which happened after that to change the default PG
number or replica count? That kind of thing should be decoupled from
overcloud deployment and handled between 'openstack overcloud ceph deploy'
and 'openstack overcloud deploy'. Pool creation during overcloud
deployment is another matter, however.

It's no longer required to pass a PG number when creating pools, so don't
pass one by default. When you create pools, you should set a
target_size_ratio (or PG number) by overriding CephPools. Otherwise you'll
inherit Ceph's, not OpenStack's, default PG and replica values, which vary
per Ceph release (now 32 PGs in Pacific).
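
Such an override could look roughly like the following environment-file
sketch (the pool names and ratios mirror the volumes/images/vms example
this commit adds to the CephPools parameter description; the exact pool
names depend on your deployment):

```yaml
parameter_defaults:
  CephPools:
    # target_size_ratio is the fraction of cluster capacity the pool is
    # expected to consume; pg_autoscale_mode uses it to pick pg_num up front.
    - name: volumes
      target_size_ratio: 0.4
      application: rbd
    - name: images
      target_size_ratio: 0.1
      application: rbd
    - name: vms
      target_size_ratio: 0.3
      application: rbd
```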

Update the CephPools example comment to show the use of target_size_ratio;
volumes 40%, images 10%, vms 30% (which leaves 20% of space free) so a
user can copy it and fill it in without worrying about PG numbers.
If they overlook it and go with all defaults (no target_size_ratio
and the default pg_num), then pg_autoscale_mode will fix it later, though
"all data that is written will end up moving approximately once after
it is written ... but the overhead of ignorance is at least bounded
and reasonable." [1]

[1] https://ceph.io/en/news/blog/2019/new-in-nautilus-pg-merging-and-autotuning
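
As a rough back-of-the-envelope sketch of what the autoscaler aims for:
it targets on the order of 100 PG replicas per OSD (the
mon_target_pg_per_osd default), splits that budget across pools in
proportion to target_size_ratio, and rounds to a power of two. The
function below is my simplification for illustration only; the real
autoscaler also applies minimums and only changes pg_num when the
estimate is off by a large factor.

```python
import math

def estimate_pg_num(target_size_ratio, num_osds, replica_size,
                    target_pg_per_osd=100):
    """Approximate the pg_autoscaler's target pg_num for one pool.

    Budget ~target_pg_per_osd PG replicas per OSD, give this pool its
    target_size_ratio share, divide by the replication factor, and
    round to the nearest power of two.
    """
    raw = target_size_ratio * target_pg_per_osd * num_osds / replica_size
    return 2 ** max(0, round(math.log2(raw)))

# e.g. a 12-OSD cluster with 3x replication and the 40/10/30 split above:
print(estimate_pg_num(0.4, 12, 3))  # volumes
print(estimate_pg_num(0.1, 12, 3))  # images
print(estimate_pg_num(0.3, 12, 3))  # vms
```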

Depends-On: I18a898ad7f6fcd3818bac707ce34a93303a59430
Depends-On: Ief1a40161a1c57be8fd038b473a6f21feb9aa8fc
Depends-On: I982dedb53582fbd76391165c3ca72954c129b84a
Change-Id: Ieb16a502a46bb53b75e6159e9ad56c98800e2b80
John Fulton 2022-06-03 18:25:42 +00:00
parent 0be8830b33
commit 66ba918c74
7 changed files with 43 additions and 34 deletions

@@ -113,14 +113,10 @@ parameter_defaults:
     - gnocchi://?archive_policy=ceilometer-high-rate
   CeilometerQdrPublishEvents: true
   ManageEventPipeline: true
   Debug: true
   DockerPuppetDebug: True
-  CephPoolDefaultPgNum: 8
-  CephPoolDefaultSize: 1
   CephPools:
     - name: altrbd
-      pg_num: 8
       rule_name: replicated_rule
   #NOTE: These ID's and keys should be regenerated for
   # a production deployment. What is here is suitable for

@@ -59,15 +59,12 @@ parameter_defaults:
   ManagePipeline: true
   Debug: true
   DeployedCeph: true
-  CephPoolDefaultPgNum: 8
-  CephPoolDefaultSize: 1
   CephEnableDashboard: true
   CephDashboardPort: 8445
   GrafanaDashboardPort: 3200
   CinderRbdExtraPools: altrbd,pool2,pool3
   CephPools:
     - name: altrbd
-      pg_num: 8
       rule_name: replicated_rule
       application: rbd
   #NOTE: These ID's and keys should be regenerated for

@@ -33,8 +33,6 @@ parameter_defaults:
   GlanceSparseUploadEnabled: true
   ManagePolling: true
   Debug: true
-  CephPoolDefaultPgNum: 8
-  CephPoolDefaultSize: 1
   CephEnableDashboard: false
   CephDashboardPort: 8445
   CephAdmVerbose: true

@@ -75,11 +75,8 @@ parameter_defaults:
     nova::compute::libvirt::virt_type: qemu
     octavia::controller::connection_retry_interval: 10
   Debug: true
-  CephPoolDefaultPgNum: 8
-  CephPoolDefaultSize: 1
   CephPools:
     - name: altrbd
-      pg_num: 8
       rule_name: replicated_rule
   CephClusterFSID: '4b5c8c0a-ff60-454b-a1b4-9747aa737d19'
   CephClusterName: mycephcluster

@@ -33,8 +33,6 @@ parameter_defaults:
     8CF1A7EA-7B4B-4433-AC83-17675514B1B8: {"foo2": "bar2"}
   Debug: true
   HideSensitiveLogs: false
-  CephPoolDefaultPgNum: 8
-  CephPoolDefaultSize: 1
   #NOTE: These ID's and keys should be regenerated for
   # a production deployment. What is here is suitable for
   # developer and CI testing only.

@@ -74,16 +74,24 @@ parameters:
     description: >
       Enable Ceph msgr2 secure mode to enable on-wire encryption between Ceph
       daemons and also between Ceph clients and daemons.
-  CephPoolDefaultPgNum:
-    description: default pg_num to use for the RBD pools
-    type: number
-    default: 16
   CephPools:
     description: >
-      It can be used to override settings for one of the predefined pools, or to create
-      additional ones. Example:
-      [{"name": "volumes", "pg_num": 64, "rule_name": "replicated_rule"},
-      {"name": "vms", "target_size_ratio": "0.4", "rule_name": "replicated_rule"}]
+      Used to override settings (mainly target_size_ratio or pg_num) in pools.
+      Pacific has pg_autoscale_mode enabled by default so set target_size_ratio
+      as a percentage of the expected data consumption. The example below sets
+      cinder volumes 40%, glance images 10%, nova vms 30% (20% of space free).
+      Not set by default but overrides are encouraged to avoid data rebalancing.
+      For example,
+        CephPools:
+          - name: volumes
+            target_size_ratio: 0.4
+            application: rbd
+          - name: images
+            target_size_ratio: 0.1
+            application: rbd
+          - name: vms
+            target_size_ratio: 0.3
+            application: rbd
     default: []
     type: json
   CinderRbdPoolName:
@@ -167,10 +175,6 @@ parameters:
     hidden: true
     constraints:
     - allowed_pattern: "^[a-zA-Z0-9+/]{38}==$"
-  CephPoolDefaultSize:
-    description: default minimum replication for RBD copies
-    type: number
-    default: 3
   ManilaCephFSDataPoolName:
     default: manila_data
     type: string
@@ -197,6 +201,16 @@ parameters:
   ContainerCephDaemonImage:
     description: image
     type: string
+  # start DEPRECATED options for compatibility with older versions
+  CephPoolDefaultPgNum:
+    description: default pg_num to use for the RBD pools
+    type: number
+    default: 16
+  CephPoolDefaultSize:
+    description: default minimum replication for RBD copies
+    type: number
+    default: 3
+  # end DEPRECATED options for compatibility with older versions
   ContainerImageRegistryCredentials:
     type: json
     hidden: true
@@ -354,7 +368,9 @@ parameters:
 parameter_groups:
 - label: deprecated
   description: Do not use deprecated params, they will be removed.
-  parameters: []
+  parameters:
+  - CephPoolDefaultPgNum
+  - CephPoolDefaultSize
 conditions:
   custom_registry_host:
@@ -414,10 +430,7 @@ resources:
     properties:
       type: json
      value:
-        vars:
-          osd_pool_default_size: {get_param: CephPoolDefaultSize}
-          osd_pool_default_pg_num: {get_param: CephPoolDefaultPgNum}
-          osd_pool_default_pgp_num: {get_param: CephPoolDefaultPgNum}
+        vars: {}
   CephAdmVars:
     type: OS::Heat::Value
@@ -522,12 +535,9 @@ outputs:
         - true
         - false
         extra_pools: {get_param: CephPools}
-        pg_num: {get_param: CephPoolDefaultPgNum}
         manila_pools:
           data: {get_param: ManilaCephFSDataPoolName}
           metadata: {get_param: ManilaCephFSMetadataPoolName}
-          data_pg_num: {get_param: CephPoolDefaultPgNum}
-          metadata_pg_num: {get_param: CephPoolDefaultPgNum}
         ceph_keys:
           openstack_client:
             name: {get_param: CephClientUserName}

@@ -0,0 +1,13 @@
+---
+deprecations:
+  - |
+    CephPoolDefaultPgNum and CephPoolDefaultSize have been deprecated and no
+    longer affect the Ceph deployment because the Ceph deployment is run before
+    these parameters are used. I.e. Ceph is deployed by TripleO via 'openstack
+    overcloud ceph deploy' which does not use these parameters. It is no longer
+    required to pass a PG number when creating Ceph pools but it is recommended
+    to use CephPools to override target_size_ratio (or PG number) so pools do
+    not inherit the default PG and replica values of the given Ceph release.
+    Since Ceph pg_autoscale_mode is enabled by default in Pacific, PG numbers
+    will adjust themselves correctly. However, data migration can be reduced by
+    setting target_size_ratio (or PG number) in advance.