66ba918c74
CephPoolDefaultPgNum and CephPoolDefaultSize no longer affect the Ceph deployment; pools are created without a pg_num unless one is provided _per_ pool by overriding CephPools. The parameters are still present but have no effect.

If you have already deployed a Ceph RBD cluster, would you expect a subsequent overcloud deployment to change the default PG number or replica count? That kind of setting should be decoupled from overcloud deployment and handled between 'openstack overcloud ceph deploy' and 'openstack overcloud deploy'. Pool creation during overcloud deployment is another matter, however. Since it is no longer required to pass a PG number when creating pools, do not pass one by default.

When you create pools, set a target_size_ratio (or a PG number) by overriding CephPools; otherwise you inherit Ceph's, not OpenStack's, default PG and replica values for that Ceph release (now 32 PGs in Pacific). Update the CephPools example comment to show use of target_size_ratio: volumes 40%, images 10%, vms 30% (which leaves 20% of space free), so the user can copy it and fill it in without worrying about PG numbers. If they overlook it and go with all defaults (no target_size_ratio and the default pg_num), then pg_autoscale_mode will fix it later, though "all data that is written will end up moving approximately once after it is written ... but the overhead of ignorance is at least bounded and reasonable." [1]

[1] https://ceph.io/en/news/blog/2019/new-in-nautilus-pg-merging-and-autotuning

Depends-On: I18a898ad7f6fcd3818bac707ce34a93303a59430
Depends-On: Ief1a40161a1c57be8fd038b473a6f21feb9aa8fc
Depends-On: I982dedb53582fbd76391165c3ca72954c129b84a
Change-Id: Ieb16a502a46bb53b75e6159e9ad56c98800e2b80
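For illustration, a minimal sketch of the kind of CephPools override described above. The pool names, application tags, and ratios are only examples; check the updated example comment in the templates for the exact keys CephPools accepts.

    parameter_defaults:
      CephPools:
        # ~40% of usable space for Cinder volumes (illustrative ratio)
        - name: volumes
          target_size_ratio: 0.4
          application: rbd
        # ~10% for Glance images
        - name: images
          target_size_ratio: 0.1
          application: rbd
        # ~30% for Nova ephemeral disks; ~20% is left unallocated
        - name: vms
          target_size_ratio: 0.3
          application: rbd

With target_size_ratio set per pool, pg_autoscale_mode can size PGs up front instead of rebalancing data after it is written.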
Files changed:

- ceph-base.yaml
- ceph-client.yaml
- ceph-external.yaml
- ceph-grafana.yaml
- ceph-ingress.yaml
- ceph-mds.yaml
- ceph-mgr.yaml
- ceph-mon.yaml
- ceph-nfs.yaml
- ceph-osd.yaml
- ceph-rbdmirror.yaml
- ceph-rgw.yaml