Ceph uprev v13.2.2 Mimic

* ceph: update crushmap for ceph mimic

* puppet-ceph: remove ceph jewel rest-api configuration

    In Ceph Mimic (v13.2.2) the REST API is provided by the ceph-mgr
    restful plugin. Remove the configuration that targeted the
    standalone ceph-rest-api from Ceph Jewel (v10.2.6).

* puppet-ceph: enable mgr-restful-plugin

    Ceph configuration is under puppet control. The ceph-mgr restful
    plugin is started by the mgr-restful-plugin script.

    Log output when starting mgr-restful-plugin, and log the executed
    commands in the puppet log.

* puppet-ceph: pass osdid to ceph::osd when creating resources

    ceph::osd needs to be created with the same OSD ID that is
    already present in the sysinv database.
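
    A minimal sketch of that mapping (the record field names here are
    hypothetical, not the actual sysinv schema):

    ```python
    def osd_resource_params(stor):
        """Build ceph::osd resource parameters from a sysinv storage
        record so puppet creates the OSD with the ID already stored in
        the sysinv database (field names are illustrative only)."""
        return {
            stor["disk_path"]: {
                "osdid": stor["osd_id"],
                "journal_path": stor.get("journal_path"),
            }
        }
    ```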

* puppet-ceph: update ceph.conf with osd device path

* puppet-ceph: fix aio-dx unlock issue caused by ceph-mon

* puppet-ceph: ensure radosgw systemd service is not started

    Make sure radosgw service is not accidentally
    started by systemd.

* puppet-sm: provision mgr-restful-plugin

    After mgr-restful-plugin is enabled by ceph.pp, SM will monitor
    and control its status.

* sysinv-common: ceph use status instead of overall_status

    'overall_status' is deprecated in Ceph Mimic. Use 'status' instead.
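
    A sketch of the corresponding check (the JSON shape follows
    `ceph health --format json`; treat the exact keys as an
    assumption):

    ```python
    import json

    def ceph_health_status(response_text):
        """Return the cluster health from a `ceph health --format json`
        reply. Mimic reports it under 'status'; 'overall_status' is
        deprecated and kept only for backward compatibility."""
        body = json.loads(response_text)
        return body.get("status") or body.get("overall_status")
    ```

    For example, `ceph_health_status('{"status": "HEALTH_WARN"}')`
    returns "HEALTH_WARN", and a pre-Mimic reply carrying only
    'overall_status' still resolves via the fallback.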

* sysinv-common: ceph incorrect parsing of osd_crush_tree output

    len(body) is used to iterate over the OSD crush tree, which is
    incorrect because the crush tree is stored under body['output'],
    not at the top level of the response dictionary.
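
    In sketch form (the reply layout mirrors a ceph-rest-api style
    response; the exact fields are an assumption):

    ```python
    def crush_tree_names(body):
        """Collect node names from a crush-tree response dict.

        The tree is the list stored under body['output'], so iterating
        with range(len(body)) would walk the handful of top-level keys
        of the wrapper dict instead of the tree nodes."""
        return [node["name"] for node in body["output"]]
    ```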

* sysinv-common: ceph refactor crushmap_tiers_add

    Refactor crushmap_tiers_add() to always check/create missing
    ceph tiers and corresponding crush rules. This is currently
    gated by tier.status == constants.SB_TIER_STATUS_DEFINED

* sysinv-conductor: remove api/v0.1 from ceph api endpoint

    The "restapi base url" option (ceph.conf) was removed in Ceph
    Mimic, so drop the /api/v0.1 base URL from the endpoint.

* sysinv-conductor: ceph convert pool quota None to zero

    On non-Kubernetes setups kube_pool_gib is None, which raises an
    exception when used in integer arithmetic.
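
    The fix amounts to coercing None to zero before the arithmetic; a
    minimal sketch (the helper name is illustrative):

    ```python
    def total_pool_quota_gib(cinder_pool_gib, kube_pool_gib):
        """Sum pool quotas, treating an unset (None) quota as 0 GiB so
        the integer arithmetic cannot raise a TypeError."""
        return (cinder_pool_gib or 0) + (kube_pool_gib or 0)
    ```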

* sysinv-conductor: remove unused update_ceph_services

    update_ceph_services() is triggering application of
    runtime manifests but that's no longer supported on
    stx/containers.

    Removing dead/unused code.

* helm: rbd-provisioner setup kube-rbd pool

    Ceph Mimic no longer supports "ceph osd pool set <pool-name>
    crush_rule <ruleset>" with a numeric ruleset value. Crush
    rule name should be used instead.

    Starting with Ceph Luminous, pools require application tags to be
    configured with "ceph osd pool application enable <pool-name>
    <app-name>", otherwise a Ceph health warning is reported.

    Enable "rbd" application on "kube-rbd" pool.

* sysinv-helm: remove custom ceph_config_helper_image

    Remove custom ceph config helper image needed to adapt
    upstream helm charts to using Ceph Jewel release. Because
    we're using Ceph Mimic this helper image is no longer
    needed.

* sysinv-helm: ceph use rule name instead of id

    Set the Ceph OSD pool crush_rule by name (the Jewel release used
    a numerical crush ruleset value).

Story: 2003605
Task: 24932

Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
Signed-off-by: Ovidiu Poncea <ovidiu.poncea@windriver.com>
Signed-off-by: Dehao Shang <dehao.shang@intel.com>
Signed-off-by: Yong Hu <yong.hu@intel.com>
Signed-off-by: Daniel Badea <daniel.badea@windriver.com>

Depends-On: Ibfbecf0a8beb38009b9d7192ca9455a841402040
Change-Id: Ia322e5468026842d86e738ece82afd803dec315c
Daniel Badea 2019-01-31 14:39:59 +08:00 committed by dbadea
parent 3dfa5a4514
commit fd128876ba
1 changed file with 2 additions and 3 deletions


@@ -60,13 +60,12 @@ data:
     set -ex
-    # Get the ruleset from the rule name.
-    ruleset=$(ceph osd crush rule dump $POOL_CRUSH_RULE_NAME | grep "\"ruleset\":" | grep -Eo '[0-9]*')
     # Make sure the pool exists.
     ceph osd pool stats $POOL_NAME || ceph osd pool create $POOL_NAME $POOL_CHUNK_SIZE
     # Set pool configuration.
+    ceph osd pool application enable $POOL_NAME rbd
     ceph osd pool set $POOL_NAME size $POOL_REPLICATION
-    ceph osd pool set $POOL_NAME crush_rule $ruleset
+    ceph osd pool set $POOL_NAME crush_rule $POOL_CRUSH_RULE_NAME
     if [[ -z $USER_ID && -z $CEPH_USER_SECRET ]]; then
       msg="No need to create secrets for pool $POOL_NAME"