Update Ceph container CPU/memory limits in Ceph scenarios

Ceph containers are started with `docker run --memory`
and `docker run --cpus` to limit their memory and CPU
resources. The defaults for OSD and MDS containers were
recently increased [1] to values better suited to
production, but this change keeps them at lower values
for CI.

[1] https://github.com/ceph/ceph-ansible/pull/2304
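
A minimal sketch of how these limits are wired through
`CephAnsibleExtraConfig` in a CI environment file is below. Only the
memory-limit keys are added by this change; the commented CPU-limit
keys are an assumption based on ceph-ansible's naming convention.

```yaml
parameter_defaults:
  CephAnsibleExtraConfig:
    # Passed by ceph-ansible to `docker run --memory` for each daemon.
    ceph_osd_docker_memory_limit: '1g'
    ceph_mds_docker_memory_limit: '1g'
    # Hypothetical CPU counterparts, mapping to `docker run --cpus`;
    # not part of this change.
    # ceph_osd_docker_cpu_limit: 1
    # ceph_mds_docker_cpu_limit: 1
```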

Change-Id: I5b5cf5cc52907af092bea5e162d4b577ee05c23a
Related-Bug: 1741499
John Fulton 2018-01-05 15:26:22 -05:00
parent 7e148af75f
commit d68619a26e
2 changed files with 4 additions and 0 deletions


@@ -125,6 +125,8 @@ parameter_defaults:
       rule_name: ''
   CephAnsibleExtraConfig:
     centos_package_dependencies: []
+    ceph_osd_docker_memory_limit: '1g'
+    ceph_mds_docker_memory_limit: '1g'
   CephAnsibleSkipTags: ''
 #NOTE: These ID's and keys should be regenerated for
 # a production deployment. What is here is suitable for


@@ -104,6 +104,8 @@ parameter_defaults:
   CephPoolDefaultSize: 1
   CephAnsibleExtraConfig:
     centos_package_dependencies: []
+    ceph_osd_docker_memory_limit: '1g'
+    ceph_mds_docker_memory_limit: '1g'
   CephAnsibleSkipTags: ''
 #NOTE: These ID's and keys should be regenerated for
 # a production deployment. What is here is suitable for