Add Ceph/RBD support to playbooks

Currently the playbooks do not allow Ceph to be configured as a backend
for Cinder, Glance or Nova. This commit adds a new role, ceph_client,
to perform the required host configuration, and updates the service
roles to include the necessary configuration file changes.
This commit requires that a Ceph cluster already exists and does not
make any changes to that cluster.

ceph_client role, run on the OpenStack service hosts (example overrides below)
  - configures the Ceph apt repo
  - installs any required Ceph dependencies
  - copies the ceph.conf file and appropriate keyring file to /etc/ceph
  - creates the necessary libvirt secrets
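
A minimal, illustrative set of user_variables.yml overrides to point the
role at an existing cluster (the monitor addresses are examples only;
ceph_conf_file is optional because, by default, the role fetches
ceph.conf from one of the monitors):

  ceph_apt_repo_url_region: "www"   # or "eu" for the Netherlands based mirror
  ceph_stable_release: hammer
  ceph_mons:
    - 10.16.5.40
    - 10.16.5.41
    - 10.16.5.42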

os_glance role
glance-api.conf will set the following options for Ceph (example overrides below):
  - [DEFAULT]/show_image_direct_url
  - [glance_store]/stores
  - [glance_store]/rbd_store_pool
  - [glance_store]/rbd_store_user
  - [glance_store]/rbd_store_ceph_conf
  - [glance_store]/rbd_store_chunk_size
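
For instance, pointing Glance at Ceph only requires overriding
glance_default_store in user_variables.yml; the remaining values in this
sketch simply restate the new role defaults:

  glance_default_store: rbd
  glance_ceph_client: glance
  glance_rbd_store_pool: images
  glance_rbd_store_chunk_size: 8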

os_nova role
nova.conf will set the following options for Ceph (example overrides below):
  - [libvirt]/rbd_user
  - [libvirt]/rbd_secret_uuid
  - [libvirt]/images_type
  - [libvirt]/images_rbd_pool
  - [libvirt]/images_rbd_ceph_conf
  - [libvirt]/inject_password
  - [libvirt]/inject_key
  - [libvirt]/inject_partition
  - [libvirt]/live_migration_flag
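
A sketch of the corresponding user_variables.yml overrides, assuming the
Cinder Ceph client and its UUID are reused for Nova (which matches the
new defaults):

  nova_libvirt_images_rbd_pool: vms
  nova_ceph_client: "{{ cinder_ceph_client }}"
  nova_ceph_client_uuid: "{{ cinder_ceph_client_uuid }}"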

os_cinder is not updated because Ceph is simply defined as another
backend, and the backend configuration is generated from the
cinder_backends dictionary. For an example backend configuration, see
etc/openstack_deploy/openstack_user_config.yml.example.

pw-token-gen.py is updated so that variables ending in 'uuid' are
assigned a generated UUID (illustrated below).
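
For illustration, the new cinder_ceph_client_uuid entry in
user_secrets.yml is left empty and filled in by the script; the
generated value below is only an example:

  # before running pw-token-gen.py
  cinder_ceph_client_uuid:
  # after: any null key ending in 'uuid' receives a generated UUID
  cinder_ceph_client_uuid: 457eb676-33da-42ec-9a8c-9293d545c337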

DocImpact
Implements: blueprint ceph-block-devices
Closes-Bug: #1455238
Change-Id: Ie484ce0bbb93adc53c30be32f291aa5058b20028
Serge van Ginderachter 2015-05-07 16:41:18 +02:00 committed by git-harry
parent 98153efac1
commit b878370a0b
27 changed files with 713 additions and 5 deletions

View File

@ -615,6 +615,30 @@
# netapp_login: username
# netapp_password: password
#
#
# Example:
#
# Use the ceph RBD backend in availability zone 'cinderAZ_3':
#
# container_vars:
# cinder_storage_availability_zone: cinderAZ_3
# cinder_default_availability_zone: cinderAZ_1
# cinder_backends:
# limit_container_types: cinder_volume
# volumes_hdd:
# volume_driver: cinder.volume.drivers.rbd.RBDDriver
# rbd_pool: volumes_hdd
# rbd_ceph_conf: /etc/ceph/ceph.conf
# rbd_flatten_volume_from_snapshot: 'false'
# rbd_max_clone_depth: 5
# rbd_store_chunk_size: 4
# rados_connect_timeout: -1
# glance_api_version: 2
# volume_backend_name: volumes_hdd
# rbd_user: "{{ cinder_ceph_client }}"
# rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
#
#
# --------
#
# Level: log_hosts (required)

View File

@ -43,6 +43,8 @@ cinder_container_mysql_password:
cinder_service_password:
cinder_v2_service_password:
cinder_profiler_hmac_key:
## Ceph/rbd: a UUID to be used by libvirt to refer to the client.cinder user
#cinder_ceph_client_uuid:
## Glance Options
glance_container_mysql_password:

View File

@ -25,7 +25,9 @@ glance_ceilometer_enabled: False
nova_ceilometer_enabled: False
## Glance Options
# Set default_store to "swift" if using Cloud Files or swift backend
# Set glance_default_store to "swift" if using Cloud Files or swift backend
# or "rbd" if using ceph backend; the latter will trigger ceph to get
# installed on glance
glance_default_store: file
glance_notification_driver: noop
@ -33,8 +35,23 @@ glance_notification_driver: noop
# `publicURL` to communicate with swift over the public network
glance_swift_store_endpoint_type: internalURL
# Ceph client user for glance to connect to the ceph cluster
#glance_ceph_client: glance
# Ceph pool name for Glance to use
#glance_rbd_store_pool: images
#glance_rbd_store_chunk_size: 8
## Nova
# When nova_libvirt_images_rbd_pool is defined, ceph will be installed on nova
# hosts.
#nova_libvirt_images_rbd_pool: vms
# by default we assume you use rbd for both cinder and nova; as libvirt
# needs to access both volumes (cinder) and boot disks (nova) we default
# to reusing the cinder_ceph_client
# only change this if you use ceph for boot disks but not for volumes
#nova_ceph_client:
#nova_ceph_client_uuid:
# This defaults to KVM, if you are deploying on a host that is not KVM capable
# change this to your hypervisor type: IE "qemu", "lxc".
# nova_virt_type: kvm
@ -51,6 +68,26 @@ glance_swift_store_endpoint_type: internalURL
#glance_swift_store_container: "NAME_OF_SWIFT_CONTAINER"
#glance_swift_store_region: "NAME_OF_REGION"
## Cinder
# Ceph client user for cinder to connect to the ceph cluster
#cinder_ceph_client: cinder
## Ceph
#ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror
#ceph_stable_release: hammer
#
# Enable these if you use ceph rbd for at least one component (glance, cinder, nova)
# ceph cluster-id, override with the correct uuid!
#ceph_fsid: d4ab416b-490c-4ab9-87e8-2364b524e0f2
#ceph_conf:
# global:
# fsid: '{{ ceph_fsid }}'
# mon_initial_members: 'mon1.example.local,mon2.example.local,mon3.example.local'
# mon_host: '10.16.5.40,10.16.5.41,10.16.5.42'
# auth_cluster_required: cephx
# auth_service_required: cephx
# auth_client_required: cephx
## Apache SSL Settings
# These do not need to be configured unless you're creating certificates for
# services running behind Apache (currently, Horizon and Keystone).

View File

@ -97,6 +97,8 @@ nova_service_adminurl: "{{ nova_service_adminuri }}/v2/%(tenant_id)s"
nova_service_region: "{{ service_region }}"
nova_metadata_port: 8775
nova_keystone_auth_plugin: password
nova_ceph_client: '{{ cinder_ceph_client }}'
nova_ceph_client_uuid: '{{ cinder_ceph_client_uuid | default() }}'
## Neutron
@ -171,6 +173,24 @@ horizon_service_region: "{{ service_region }}"
heat_service_region: "{{ service_region }}"
## Cinder
# cinder_backend_rbd_inuse: True if current host has an rbd backend
cinder_backend_rbd_inuse: '{{ (cinder_backends|default("")|to_json).find("cinder.volume.drivers.rbd.RBDDriver") != -1 }}'
# cinder_backends_rbd_inuse: true if at least 1 cinder_backend on any
# cinder_volume host uses Ceph RBD
# http://stackoverflow.com/questions/9486393/jinja2-change-the-value-of-a-variable-inside-a-loop
cinder_backends_rbd_inuse: >
{% set _var = {'rbd_inuse': False} %}{%
for host in groups.cinder_volume %}{%
if hostvars[host].cinder_backend_rbd_inuse | bool %}{%
if _var.update({'rbd_inuse': True }) %}{%
endif %}{%
endif %}{%
endfor %}{{
_var.rbd_inuse }}
cinder_ceph_client: cinder
## OpenStack Openrc
openrc_os_auth_url: "{{ keystone_service_internalurl_v3 }}"
openrc_os_password: "{{ keystone_auth_admin_password }}"

View File

@ -109,6 +109,11 @@
- cinder-logs
roles:
- { role: "os_cinder", tags: [ "os-cinder" ] }
- role: "ceph_client"
openstack_service_system_user: "{{ cinder_system_user_name }}"
tags:
- "cinder-ceph-client"
- "ceph-client"
- role: "rsyslog_client"
rsyslog_client_log_dir: "/var/log/cinder"
rsyslog_client_config_name: "99-cinder-rsyslog-client.conf"

View File

@ -75,6 +75,11 @@
roles:
- { role: "os_glance", tags: [ "os-glance" ] }
- { role: "openstack_openrc", tags: [ "openstack-openrc" ] }
- role: "ceph_client"
openstack_service_system_user: "{{ glance_system_user_name }}"
tags:
- "glance-ceph-client"
- "ceph-client"
- role: "rsyslog_client"
rsyslog_client_log_dir: "/var/log/glance"
rsyslog_client_config_name: "99-glance-rsyslog-client.conf"

View File

@ -91,6 +91,11 @@
- nova-logs
roles:
- { role: "os_nova", tags: [ "os-nova" ] }
- role: "ceph_client"
openstack_service_system_user: "{{ nova_system_user_name }}"
tags:
- "nova-ceph-client"
- "ceph-client"
- { role: "openstack_openrc", tags: [ "openstack-openrc" ] }
- role: "rsyslog_client"
rsyslog_client_log_dir: "/var/log/nova"

View File

@ -0,0 +1,80 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# to use Ceph in OSAD, you need to
# - have the needed pools and a client user (for glance, cinder and/or nova)
#   pre-provisioned in your ceph cluster; OSAD assumes root access to
#   the monitor hosts
# - configure / override the following defaults in OSAD's user config
# - some ceph specific vars are (also) part of other role defaults:
# * glance
# * nova
# - cinder gets configured with ceph if there are cinder backends defined with
# the rbd driver (see openstack_user_config.yml.example)
# Ceph GPG Keys
ceph_gpg_keys:
- key_name: 'ceph'
keyserver: 'hkp://keyserver.ubuntu.com:80'
fallback_keyserver: 'hkp://p80.pool.sks-keyservers.net:80'
hash_id: '0x7ebfdd5d17ed316d'
# Ceph Repositories
ceph_apt_repo_url_region: "www" # or "eu" for Netherlands based mirror
ceph_stable_release: hammer
ceph_apt_repo_url: "http://{{ ceph_apt_repo_url_region }}.ceph.com/debian-{{ ceph_stable_release }}/"
ceph_apt_repo:
repo: "deb {{ ceph_apt_repo_url }} {{ ansible_lsb.codename }} main"
state: "present"
ceph_apt_pinned_packages: [{ package: "*", release: Inktank, priority: 1001 }]
# Ceph Authentication
cephx: true
# Ceph Monitors
# A list of the IP addresses for your Ceph monitors
ceph_mons: []
# Path to local ceph.conf file
# Leave this commented to obtain a ceph.conf from one of the monitors defined in ceph_mons
#ceph_conf_file: |
# [global]
# fsid = 4037aa5f-abde-4378-9470-f73dbd6ceaba
# mon_initial_members = mon1.example.local,mon2.example.local,mon3.example.local
# mon_host = 10.16.5.40,10.16.5.41,10.16.5.42
# auth_cluster_required = cephx
# auth_service_required = cephx
# auth_client_required = cephx
# Ceph client usernames for glance and cinder+nova
glance_ceph_client: glance
cinder_ceph_client: cinder
# by default we assume you use rbd for both cinder and nova; as libvirt
# needs to access both volumes (cinder) and boot disks (nova) we default
# to reusing the cinder_ceph_client
# only change this if you use ceph for boot disks but not for volumes
nova_ceph_client: '{{ cinder_ceph_client }}'
# overridden in user_secrets:
nova_ceph_client_uuid: 457eb676-33da-42ec-9a8c-9293d545c337
cephkeys_access_group: cephkeys
openstack_service_system_user: null
ceph_cinder_service_names:
- cinder-volume
ceph_nova_service_names:
- nova-compute
ceph_glance_service_names:
- glance-api

View File

@ -0,0 +1,25 @@
# Copyright 2014, Rackspace US, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Restart os services
service:
name: "{{ item.1 }}"
state: restarted
pattern: "{{ item.1 }}"
with_subelements:
- ceph_components
- service
when: inventory_hostname in groups[item.0.component]
failed_when: false

View File

@ -0,0 +1,6 @@
---
dependencies:
- role: apt_package_pinning
apt_pinned_packages: "{{ ceph_apt_pinned_packages }}"
tags:
- ceph-pre-preinstall

View File

@ -0,0 +1,43 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# if glance, cinder and/or nova use rbd, ceph_mons needs to be defined
- assert:
that:
- ceph_mons != []
- ceph_mons | list == ceph_mons
tags:
- ceph-config
- ceph-auth
- include: ceph_preinstall.yml
tags: ceph-preinstall
- include: ceph_install.yml
tags: ceph-install
- include: ceph_get_mon_host.yml
tags:
- ceph-config
- ceph-auth
- include: ceph_config.yml
tags: ceph-config
- include: ceph_auth.yml
when: >
cephx | bool
tags: ceph-auth

View File

@ -0,0 +1,135 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## Ceph client keyrings
#TODO: also be able to create users, keys and pools on ceph
- name: Retrieve keyrings for openstack clients from ceph cluster
# the first get makes sure the client exists, so the second command only
# runs when it does; the trick is that the two commands produce different
# output, and only the second produces output suitable for a keyring;
# the ceph admin should have already created the user
shell: ceph auth get client.{{ item.1 }} >/dev/null && ceph auth get-or-create client.{{ item.1 }}
with_subelements:
- ceph_components
- client
when: inventory_hostname in groups[item.0.component]
always_run: true
changed_when: false
delegate_to: '{{ ceph_mon_host }}'
register: ceph_client_keyrings
until: ceph_client_keyrings|success
tags:
- ceph-auth-client-keyrings
- name: Create cephkeys_access_group group
group:
name: "{{ cephkeys_access_group }}"
- name: Add OpenStack service to cephkeys_access_group group
user:
name: "{{ openstack_service_system_user }}"
groups: "{{ cephkeys_access_group }}"
append: yes
- name: Provision ceph client keyrings
# TODO: do we really need a template for this? what's the added value
# compared to ceph get-or-create ... ... -o file?
template:
src: ceph.client.keyring.j2
dest: /etc/ceph/ceph.client.{{ item.item.1 }}.keyring
backup: true
owner: root
# TODO
group: "{{ cephkeys_access_group }}"
# ideally the permissions would be 0600 and the owner/group would be either
# glance, nova or cinder. For keys that require access by different users
# (the cinder one) we should probably create a group 'cephkeys' and add
# nova/cinder to it.
# If I'm correct, the use case for multiple users is on the compute nodes,
# where access is needed by the libvirt-qemu and nova users
mode: 0640
with_items: ceph_client_keyrings.results
when: not item|skipped and inventory_hostname in groups[item.item.0.component]
notify:
- Restart os services
tags:
- ceph-auth-client-keyrings
## Ceph nova client libvirt secret
- name: Retrieve nova secret from the ceph cluster
command: ceph auth get-key client.{{ nova_ceph_client }}
when: inventory_hostname in groups.nova_compute
always_run: true
changed_when: false
failed_when: false
delegate_to: '{{ ceph_mon_host }}'
register: ceph_nova_secret
tags:
- ceph-auth-nova-libvirt-secret
- name: Check if nova secret is defined in libvirt
shell: virsh secret-list|grep {{ nova_ceph_client_uuid }}
when: inventory_hostname in groups.nova_compute
always_run: true
failed_when: false
changed_when: false
register: libvirt_nova_defined
tags:
- ceph-auth-nova-libvirt-secret
- name: Provide xml file to create the secret
template:
src: secret.xml.j2
dest: /tmp/nova-secret.xml
when: inventory_hostname in groups.nova_compute and libvirt_nova_defined.rc is defined and libvirt_nova_defined.rc != 0
tags:
- ceph-auth-nova-libvirt-secret
- name: Define libvirt nova secret
command: virsh secret-define --file /tmp/nova-secret.xml
when: inventory_hostname in groups.nova_compute and libvirt_nova_defined.rc is defined and libvirt_nova_defined.rc != 0
notify:
- Restart os services
tags:
- ceph-auth-nova-libvirt-secret
- name: Check if nova secret value is set in libvirt
command: virsh secret-get-value {{ nova_ceph_client_uuid }}
when: inventory_hostname in groups.nova_compute
always_run: true
failed_when: false
changed_when: false
register: libvirt_nova_set
tags:
- ceph-auth-nova-libvirt-secret
- name: Set nova secret value in libvirt
shell: virsh secret-set-value --secret {{ nova_ceph_client_uuid }} --base64 {{ ceph_nova_secret.stdout }}
when: >
inventory_hostname in groups.nova_compute and libvirt_nova_set.rc is defined
and
(libvirt_nova_set.rc != 0
or
(libvirt_nova_set.rc == 0 and libvirt_nova_set.stdout != ceph_nova_secret.stdout)
)
notify:
- Restart os services
tags:
- ceph-auth-nova-libvirt-secret

View File

@ -0,0 +1,61 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Provide ceph configuration directory
file:
dest: /etc/ceph
state: directory
owner: root
group: root
mode: 0755
tags:
- ceph-config-create-dir
- name: Get ceph.conf and store contents when ceph_conf_file is not defined
command: |
cat /etc/ceph/ceph.conf
register: ceph_conf_content_mon
delegate_to: '{{ ceph_mon_host }}'
changed_when: false
when: ceph_conf_file is not defined
tags:
- ceph-config-get-config
- name: Register ceph_conf fact when ceph_conf_file is not defined
set_fact:
ceph_conf: "{{ ceph_conf_content_mon.stdout }}"
when: ceph_conf_file is not defined
tags:
- ceph-config-get-config
- name: Register ceph_conf fact when ceph_conf_file is defined
set_fact:
ceph_conf: "{{ ceph_conf_file }}"
when: ceph_conf_file is defined
tags:
- ceph-config-get-config
- name: Create ceph.conf from mon host
copy:
content: '{{ ceph_conf }}'
dest: /etc/ceph/ceph.conf
owner: root
group: root
mode: 0644
notify:
- Restart os services
tags:
- ceph-config-create-config

View File

@ -0,0 +1,40 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# look for 1 ceph monitor host that is up
- name: Verify Ceph monitors are up
# using netcat instead of wait_for allows checking both the rc and the
# output; the rc is not available when using wait_for + failed_when: false
# failed_when: false is needed so we do not lose any hosts, as this check
# expects some to be down.
local_action: command nc -w 1 {{ item }} 22
with_items: '{{ ceph_mons }}'
changed_when: false
failed_when: false
register: ceph_mon_upcheck
tags:
- ceph-config-create-config
- ceph-auth-client-keyrings
- ceph-auth-nova-libvirt-secret
- name: Set ceph_mon_host to an online monitor host
set_fact:
ceph_mon_host: '{{ item.item }}'
when: item.rc == 0 and "OpenSSH" in item.stdout
with_items: ceph_mon_upcheck.results
tags:
- ceph-config-create-config
- ceph-auth-client-keyrings
- ceph-auth-nova-libvirt-secret

View File

@ -0,0 +1,32 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Install ceph packages
apt:
name: '{{ item.1 }}'
state: latest
update_cache: yes
cache_valid_time: 600
register: install_packages
until: install_packages|success
retries: 5
delay: 2
with_subelements:
- ceph_components
- package
when: inventory_hostname in groups[item.0.component]
notify:
- Restart os services

View File

@ -0,0 +1,54 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- name: Add ceph apt-keys
apt_key:
id: "{{ item.hash_id }}"
keyserver: "{{ item.keyserver }}"
state: "present"
register: add_keys
until: add_keys|success
ignore_errors: True
retries: 5
delay: 2
with_items: ceph_gpg_keys
tags:
- ceph-apt-keys
- name: Add ceph apt-keys using fallback keyserver
apt_key:
id: "{{ item.hash_id }}"
keyserver: "{{ item.fallback_keyserver }}"
state: "present"
register: add_keys_fallback
until: add_keys_fallback|success
retries: 5
delay: 2
with_items: ceph_gpg_keys
when: add_keys|failed and item.fallback_keyserver is defined
tags:
- ceph-apt-keys
- name: Add ceph repo(s)
apt_repository:
repo: "{{ ceph_apt_repo.repo }}"
state: "{{ ceph_apt_repo.state }}"
register: add_repos
until: add_repos|success
retries: 5
delay: 2
tags:
- ceph-repos

View File

@ -0,0 +1,29 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- include: ceph_all.yml
when: >
(inventory_hostname in groups['glance_api']
and glance_default_store == 'rbd')
or
(inventory_hostname in groups['cinder_volume']
and cinder_backend_rbd_inuse|bool)
or
(inventory_hostname in groups['nova_compute']
and (cinder_backends_rbd_inuse|bool or
nova_libvirt_images_rbd_pool is defined))
tags:
- ceph-client

View File

@ -0,0 +1,2 @@
# {{ ansible_managed }}
{{ item.stdout }}

View File

@ -0,0 +1,7 @@
# {{ ansible_managed }}
{% for section in ceph_conf %}
[{{ section }}]
{% for key, value in ceph_conf[section]|dictsort %}
{{ key }} = {{ value }}
{% endfor %}
{% endfor %}

View File

@ -0,0 +1,5 @@
# {{ ansible_managed }}
Package: *
Pin: release o=Inktank
Pin-Priority: 1001

View File

@ -0,0 +1,7 @@
<!-- {{ ansible_managed }} -->
<secret ephemeral='no' private='no'>
<uuid>{{ nova_ceph_client_uuid }}</uuid>
<usage type='ceph'>
<name>client.{{ nova_ceph_client }} secret</name>
</usage>
</secret>

View File

@ -0,0 +1,43 @@
---
# Copyright 2015, Serge van Ginderachter <serge@vanginderachter.be>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# http://ceph.com/docs/master/rbd/rbd-openstack/
ceph_components:
- component: glance_api
package:
- python-ceph
client:
- '{{ glance_ceph_client }}'
service: '{{ ceph_glance_service_names }}'
- component: cinder_volume
package:
- ceph # TODO: remove this once http://tracker.ceph.com/issues/11388 is resolved
- ceph-common
- python-ceph
client:
- '{{ cinder_ceph_client }}'
service: '{{ ceph_cinder_service_names }}'
- component: nova_compute
package:
- libvirt-bin
- ceph # TODO: remove this once http://tracker.ceph.com/issues/11388 is resolved
- ceph-common
- python-ceph
client:
- '{{ nova_ceph_client }}'
service: '{{ ceph_nova_service_names }}'

View File

@ -32,12 +32,13 @@ glance_system_shell: /bin/false
glance_system_comment: glance system user
glance_system_user_home: "/var/lib/{{ glance_system_user_name }}"
glance_flavor: "keystone+cachemanagement"
glance_registry_host: "{{ internal_lb_vip_address }}"
glance_ceilometer_notification_driver: messagingv2
glance_notification_driver: noop
glance_rpc_backend: glance.openstack.common.rpc.impl_kombu
glance_default_store: file
glance_flavor: "{% if glance_default_store == 'rbd' %}keystone{% else %}keystone+cachemanagement{% endif %}"
glance_show_image_direct_url: "{{ glance_default_store == 'rbd' }}"
## API options
@ -133,6 +134,12 @@ glance_policy_dirs: policy.d
# "add_image": ""
# "delete_image": ""
## Ceph rbd Options
glance_ceph_client: glance
glance_rbd_store_pool: images
glance_rbd_store_user: '{{ glance_ceph_client }}'
glance_rbd_store_chunk_size: 8
# Common apt packages
glance_apt_packages:
- rpcbind

View File

@ -48,6 +48,9 @@ scrub_time = 43200
scrubber_datadir = {{ glance_system_user_home }}/scrubber/
image_cache_dir = {{ glance_system_user_home }}/cache/
# defaults to true if RBD is used as default store
show_image_direct_url = {{ glance_show_image_direct_url }}
[task]
task_executor = {{ glance_task_executor }}
@ -104,6 +107,12 @@ swift_store_large_object_size = {{ glance_swift_store_large_object_size }}
swift_store_large_object_chunk_size = {{ glance_swift_store_large_object_chunk_size }}
swift_store_retry_get_count = 5
swift_store_endpoint_type = {{ glance_swift_store_endpoint_type }}
{% elif glance_default_store == "rbd" %}
stores = glance.store.rbd.Store,glance.store.http.Store,glance.store.cinder.Store
rbd_store_pool = {{ glance_rbd_store_pool }}
rbd_store_user = {{ glance_rbd_store_user }}
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = {{ glance_rbd_store_chunk_size }}
{% endif %}
[profiler]

View File

@ -235,6 +235,11 @@ nova_scheduler_program_name: nova-scheduler
# If you want to regenerate the nova users SSH keys, on each run, set this var to True
# Otherwise keys will be generated on the first run and not regenerated each run.
nova_recreate_keys: False
# Nova Ceph rbd
# Enable and define nova_libvirt_images_rbd_pool to use rbd as the nova backend
#nova_libvirt_images_rbd_pool: vms
nova_ceph_client: '{{ cinder_ceph_client }}'
nova_ceph_client_uuid: 517a4663-3927-44bc-9ea7-4a90e1cd4c66
## General Neutron configuration
# If ``nova_osapi_compute_workers`` is unset the system will use half the number of available VCPUS to

View File

@ -239,3 +239,19 @@ use_virtio_for_bridges = True
cpu_mode = {{ nova_cpu_mode }}
virt_type = {{ nova_virt_type }}
remove_unused_resized_minimum_age_seconds = {{ nova_remove_unused_resized_minimum_age_seconds }}
{% if cinder_backends_rbd_inuse|bool or nova_libvirt_images_rbd_pool is defined %}
rbd_user = {{ nova_ceph_client }}
rbd_secret_uuid = {{ nova_ceph_client_uuid }}
{% endif %}
{% if nova_libvirt_images_rbd_pool is defined %}
# ceph rbd support
images_type = rbd
images_rbd_pool = {{ nova_libvirt_images_rbd_pool }}
images_rbd_ceph_conf = /etc/ceph/ceph.conf
inject_password = false
inject_key = false
inject_partition = -2
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST"
{% endif %}

View File

@ -20,6 +20,7 @@ import hashlib
import os
import random
import tarfile
import uuid
from Crypto import Random
try:
@ -63,7 +64,7 @@ class CredentialGenerator(object):
func = getattr(self, '_%s_gen' % pw_type)
return func(encoded_bytes=encoded_bytes)
else:
raise SystemExit('Unknown secrete type passed. [ %s ]' % pw_type)
raise SystemExit('Unknown secret type passed. [ %s ]' % pw_type)
@staticmethod
def _random_bytes():
@ -135,8 +136,8 @@ def main():
This will open a file that was specified on the command line. The file
specified is assumed to be in valid YAML format, which is used in ansible.
When the YAML file will be processed and any key with a null value that
ends with 'password', 'token', or 'key' will have a generated password set
as the value.
ends with 'password', 'token', 'key' or 'uuid' will have a generated
password set as the value.
The main function will create a backup of all changes in the file as a
tarball in the same directory as the file specified.
@ -176,6 +177,9 @@ def main():
elif entry.endswith('key'):
changed = True
user_vars[entry] = generator.generator(pw_type='key')
elif entry.endswith('uuid'):
changed = True
user_vars[entry] = str(uuid.uuid4())
elif entry.startswith('swift_hash_path'):
changed = True
user_vars[entry] = generator.generator(pw_type='key')