d2f6c88f90
The ceph.conf file packaged in the Ceph RPM sets 'osd_pool_default_size = 2'. This is a valid initial value for most deployments. The exception is the AIO-SX single-OSD installation (our default minimum AIO-SX configuration), where this value produces a HEALTH_WARN reporting "Degraded data redundancy".

This commit sets 'osd_pool_default_size' based on the deployment type and specifically sets it to '1' for the AIO-SX. This provides a HEALTH_OK cluster on controller unlock. If/when additional OSDs are added, the 'system storage-backend-modify' command can be used to change the replication factor and provide a higher level of data redundancy.

This change removes the long-standing need to run the following command when provisioning the AIO-SX:

    ceph osd pool ls | xargs -i ceph osd pool set {} size 1

This also enables automatic loading of the platform-integ-apps k8s application and the subsequent loading of the rbd-provisioner for persistent volume claims on the AIO-SX.

Change-Id: I901b339f1c7770aa16a7bbfecf193d0c1e5e9eaa
Story: 2005424
Task: 33471
Signed-off-by: Robert Church <robert.church@windriver.com>
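A minimal sketch of what this amounts to on an AIO-SX system, assuming the option lands in the [global] section of ceph.conf and that the Ceph backend is named 'ceph-store' with a 'replication' capability key (both are assumptions for illustration, not confirmed by this commit):

    # ceph.conf fragment for a single-OSD AIO-SX deployment (assumed section placement)
    [global]
    osd_pool_default_size = 1

    # After additional OSDs are added, raise the replication factor, e.g.:
    # (backend name and capability key are assumptions; verify with the sysinv help output)
    system storage-backend-modify ceph-store replication=2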
Files changed:
    build_srpm.data
    puppet-manifests.spec