Configuring the Block (cinder) storage service (optional)
By default, the Block (cinder) storage service installs on the host itself using the LVM backend.
Note
While this is the default for cinder, using the LVM backend results in a Single Point of Failure.
The LVM back end must run on the host; however, most other
back ends can be deployed inside a container. If the storage back ends
deployed within your environment can run inside containers, we
recommend setting is_metal: False in the
env.d/cinder.yml file.
Note
Due to a limitation of the container system, you must deploy the volume service directly on the host when using back ends that depend on iSCSI, for example storage appliances configured to use the iSCSI protocol.
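A minimal sketch of that override in /etc/openstack_deploy/env.d/cinder.yml (the same override appears again in the back-end-specific sections below):

container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false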
NFS backend
Edit /etc/openstack_deploy/openstack_user_config.yml and
configure the NFS client on each storage node if the NetApp backend is
configured to use an NFS storage protocol.
For each storage node, add one cinder_backends block underneath a new
container_vars section. container_vars are used to allow container/host
individualized configuration. Each cinder back end is defined with a unique
key, for example nfs-volume1, which later represents a unique cinder
back end and volume type.

container_vars:
  cinder_backends:
    nfs-volume1:

Configure the appropriate cinder volume back end name:

volume_backend_name: NFS_VOLUME1

Configure the appropriate cinder NFS driver:

volume_driver: cinder.volume.drivers.nfs.NfsDriver

Configure the location of the file that lists the shares available to the
block storage service. This configuration file must include
nfs_shares_config:

nfs_shares_config: FILENAME_NFS_SHARES

Replace FILENAME_NFS_SHARES with the location of the share configuration
file. For example, /etc/cinder/nfs_shares_volume1.

Define mount options for the NFS mount. For example:

nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"

Configure one or more NFS shares:

shares:
  - { ip: "HOSTNAME", share: "PATH_TO_NFS_VOLUME" }

Replace HOSTNAME with the IP address or hostname of the NFS server, and
PATH_TO_NFS_VOLUME with the absolute path to an existing and accessible
NFS share (excluding the IP address or hostname).
The following is a full configuration example of a cinder NFS back end
with the volume back end name NFS_VOLUME1. The cinder playbooks
automatically add a custom volume type named nfs-volume1, as in this
example:
container_vars:
  cinder_backends:
    nfs-volume1:
      volume_backend_name: NFS_VOLUME1
      volume_driver: cinder.volume.drivers.nfs.NfsDriver
      nfs_shares_config: /etc/cinder/nfs_shares_volume1
      nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
      shares:
        - { ip: "1.2.3.4", share: "/vol1" }
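The file referenced by nfs_shares_config is a plain-text list of NFS exports, one per line in HOST:PATH form. A minimal sketch of /etc/cinder/nfs_shares_volume1 matching the example above (the host and export path are illustrative):

1.2.3.4:/vol1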
Backup
You can configure cinder to back up volumes to Object Storage (swift).
Enable the default configuration to back up volumes to a swift
installation accessible within your environment. Alternatively, you can
set cinder_service_backup_swift_url and other variables to
back up to an external swift installation.
Add or edit the following line in the
/etc/openstack_deploy/user_variables.yml file and set the value to True:

cinder_service_backup_program_enabled: True

By default, cinder uses the access credentials of the user initiating the
backup. Default values are set in the
/opt/openstack-ansible/playbooks/roles/os_cinder/defaults/main.yml file.
You can override those defaults by setting variables in
/etc/openstack_deploy/user_variables.yml to change how cinder performs
backups. Add and edit any of the following variables in the
/etc/openstack_deploy/user_variables.yml file:

...
cinder_service_backup_swift_auth: per_user
# Options include 'per_user' or 'single_user'. We default to
# 'per_user' so that backups are saved to a user's swift
# account.
cinder_service_backup_swift_url:
# This is your swift storage url when using 'per_user', or keystone
# endpoint when using 'single_user'. When using 'per_user', you
# can leave this as empty or as None to allow cinder-backup to
# obtain a storage url from environment.
cinder_service_backup_swift_auth_version: 2
cinder_service_backup_swift_user:
cinder_service_backup_swift_tenant:
cinder_service_backup_swift_key:
cinder_service_backup_swift_container: volumebackups
cinder_service_backup_swift_object_size: 52428800
cinder_service_backup_swift_retry_attempts: 3
cinder_service_backup_swift_retry_backoff: 2
cinder_service_backup_compression_algorithm: zlib
cinder_service_backup_metadata_version: 2
During installation of cinder, the backup service is configured.
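After the playbooks run, you can exercise the backup service with the OpenStack client; the volume and backup names below are illustrative:

openstack volume backup create --name nightly-backup VOLUME_NAME
openstack volume backup list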
Using Ceph for cinder backups
You can deploy Ceph to hold cinder volume backups. To get started,
set the cinder_service_backup_driver Ansible variable:
cinder_service_backup_driver: cinder.backup.drivers.ceph

Configure the Ceph user and the pool to use for backups. The defaults are shown here:

cinder_service_backup_ceph_user: cinder-backup
cinder_service_backup_ceph_pool: backups

Availability zones
Create multiple availability zones to manage cinder storage hosts.
Edit the /etc/openstack_deploy/openstack_user_config.yml
and /etc/openstack_deploy/user_variables.yml files to set
up availability zones.
For each cinder storage host, configure the availability zone under the
container_vars stanza:

cinder_storage_availability_zone: CINDERAZ

Replace CINDERAZ with a suitable name. For example, cinderAZ_2.

If more than one availability zone is created, configure the default
availability zone for all the hosts by creating a
cinder_default_availability_zone in your
/etc/openstack_deploy/user_variables.yml file:

cinder_default_availability_zone: CINDERAZ_DEFAULT

Replace CINDERAZ_DEFAULT with a suitable name. For example, cinderAZ_1.
The default availability zone should be the same for all cinder hosts.
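A minimal sketch of how the two settings fit together, assuming two storage hosts; the host names, the second IP address, and the zone names are illustrative.

In openstack_user_config.yml:

storage_hosts:
  storage1:
    ip: 172.29.236.16
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
  storage2:
    ip: 172.29.236.17
    container_vars:
      cinder_storage_availability_zone: cinderAZ_2

In user_variables.yml:

cinder_default_availability_zone: cinderAZ_1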
OpenStack Dashboard (horizon) configuration for cinder
You can configure variables to set the behavior for cinder volume management in OpenStack Dashboard (horizon). By default, no horizon configuration is set.
- The default destination availability zone is nova if you use multiple availability zones and cinder_default_availability_zone has no definition. Volume creation with horizon might fail if there is no availability zone named nova. Set cinder_default_availability_zone to an appropriate availability zone name so that Any availability zone works in horizon.
- horizon does not populate the volume type by default. On the new volume page, a request for the creation of a volume with the default parameters fails. Set cinder_default_volume_type so that a volume creation request without an explicit volume type succeeds.
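For example, in /etc/openstack_deploy/user_variables.yml (the volume type name below is illustrative and must match a volume type that exists in your deployment, such as one created for a cinder_backends entry):

cinder_default_volume_type: nfs-volume1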
Configuring cinder to use LVM
List the container_vars that contain the storage options for the target host.

Note

The vars related to the cinder availability zone and the
limit_container_types are optional.

To configure an LVM back end, use the following example:
storage_hosts:
  Infra01:
    ip: 172.29.236.16
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
      cinder_default_availability_zone: cinderAZ_1
      cinder_backends:
        lvm:
          volume_backend_name: LVM_iSCSI
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
          iscsi_ip_address: "{{ cinder_storage_address }}"
        limit_container_types: cinder_volume
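The lvm back end expects the volume group named above (cinder-volumes) to already exist on the storage host. A minimal sketch of creating it with the standard LVM tools, assuming /dev/sdb is an unused block device:

pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb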
To use another back end in a container instead of on bare metal, copy the
env.d/cinder.yml file to /etc/openstack_deploy/env.d/cinder.yml and change
the is_metal: true stanza under the cinder_volumes_container properties
to is_metal: false.

Alternatively, you can selectively override it, like this:
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false

Configuring cinder to use Ceph
For cinder to use Ceph, you must configure both the API and the back end. When using any form of network storage (iSCSI, NFS, Ceph) for cinder, the API containers can be considered back-end servers, so a separate storage host is not required.
Copy the env.d/cinder.yml file to /etc/openstack_deploy/env.d/cinder.yml
and change the is_metal: true stanza under the cinder_volumes_container
properties to is_metal: false.

Alternatively, you can selectively override it, like this:
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: false

List the target hosts on which to deploy the cinder API. We recommend using a minimum of three target hosts for this service:
storage-infra_hosts:
  infra1:
    ip: 172.29.236.101
  infra2:
    ip: 172.29.236.102
  infra3:
    ip: 172.29.236.103

To configure an RBD back end, use the following example:
container_vars:
  cinder_storage_availability_zone: cinderAZ_3
  cinder_default_availability_zone: cinderAZ_1
  cinder_backends:
    limit_container_types: cinder_volume
    rbd_backend:
      volume_driver: cinder.volume.drivers.rbd.RBDDriver
      rbd_pool: volumes
      rbd_ceph_conf: /etc/ceph/ceph.conf
      rbd_flatten_volume_from_snapshot: 'false'
      rbd_max_clone_depth: 5
      rbd_store_chunk_size: 4
      rados_connect_timeout: 30
      volume_backend_name: rbd_backend
      rbd_user: "{{ cinder_ceph_client }}"
      rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
The following example sets cinder to use the cinder-volumes pool.
The example uses cephx authentication and requires an existing
cinder account for the cinder-volumes pool.
In user_variables.yml:
ceph_mons:
  - 172.29.244.151
  - 172.29.244.152
  - 172.29.244.153
In openstack_user_config.yml:
storage_hosts:
  infra1:
    ip: 172.29.236.101
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
  infra2:
    ip: 172.29.236.102
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
  infra3:
    ip: 172.29.236.103
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        rbd:
          volume_group: cinder-volumes
          volume_driver: cinder.volume.drivers.rbd.RBDDriver
          volume_backend_name: rbd
          rbd_pool: cinder-volumes
          rbd_ceph_conf: /etc/ceph/ceph.conf
          rbd_user: cinder
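The example above assumes that the cinder-volumes pool and the cinder cephx client already exist on the Ceph cluster. A minimal sketch of creating them with the ceph CLI; the placement-group count and the capability string are illustrative and should be adjusted for your cluster:

ceph osd pool create cinder-volumes 128
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow rwx pool=cinder-volumes'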
A complete working example of Ceph setup and integration with cinder (nova and glance included) is also available.
Configuring cinder to use Dell EqualLogic
To use the Dell EqualLogic volume driver as a back end, edit the
/etc/openstack_deploy/openstack_user_config.yml file and
configure the storage nodes that will use it.
Define the following parameters.
Add the delleqlx stanza under the cinder_backends stanza for each storage node:

cinder_backends:
  delleqlx:

Specify the volume back end name:

volume_backend_name: DellEQLX_iSCSI

Use the Dell EQLX SAN iSCSI driver:

volume_driver: cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver

Specify the SAN IP address:

san_ip: ip_of_dell_storage

Specify the SAN username (default: grpadmin):

san_login: grpadmin

Specify the SAN password:

san_password: password

Specify the group name for pools (default: group-0):

eqlx_group_name: group-0

Specify the pool where cinder will create volumes and snapshots (default: default):

eqlx_pool: default

Ensure the openstack_user_config.yml configuration is accurate:

storage_hosts:
  Infra01:
    ip: infra_host_ip
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        delleqlx:
          volume_backend_name: DellEQLX_iSCSI
          volume_driver: cinder.volume.drivers.dell_emc.ps.PSSeriesISCSIDriver
          san_ip: ip_of_dell_storage
          san_login: grpadmin
          san_password: password
          eqlx_group_name: group-0
          eqlx_pool: default
Note
For more details about available configuration options, see https://docs.openstack.org/ocata/config-reference/block-storage/drivers/dell-equallogic-driver.html
Configuring cinder to use a NetApp appliance
To use a NetApp storage appliance back end, edit the
/etc/openstack_deploy/openstack_user_config.yml file and
configure each storage node that will use it.
Note
Ensure that the NAS Team enables httpd.admin.access.
Add the netapp stanza under the cinder_backends stanza for each storage node:

cinder_backends:
  netapp:

The options in subsequent steps fit under the netapp stanza. The back end name is arbitrary and becomes a volume type within cinder.

Configure the storage family:

netapp_storage_family: STORAGE_FAMILY

Replace STORAGE_FAMILY with ontap_7mode for Data ONTAP operating in 7-mode or ontap_cluster for Data ONTAP operating as a cluster.

Configure the storage protocol:

netapp_storage_protocol: STORAGE_PROTOCOL

Replace STORAGE_PROTOCOL with iscsi for iSCSI or nfs for NFS.

For the NFS protocol, specify the location of the configuration file that lists the shares available to cinder:

nfs_shares_config: FILENAME_NFS_SHARES

Replace FILENAME_NFS_SHARES with the location of the share configuration file. For example, /etc/cinder/nfs_shares.

Configure the server:

netapp_server_hostname: SERVER_HOSTNAME

Replace SERVER_HOSTNAME with the hostnames of both NetApp controllers.

Configure the server API port:

netapp_server_port: PORT_NUMBER

Replace PORT_NUMBER with 80 for HTTP or 443 for HTTPS.

Configure the server credentials:

netapp_login: USER_NAME
netapp_password: PASSWORD

Replace USER_NAME and PASSWORD with the appropriate values.

Select the NetApp driver:

volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver

Configure the volume back end name:

volume_backend_name: BACKEND_NAME

Replace BACKEND_NAME with a value that provides a hint for the cinder scheduler. For example, NETAPP_iSCSI.

Ensure the openstack_user_config.yml configuration is accurate:

storage_hosts:
  Infra01:
    ip: 172.29.236.16
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        netapp:
          netapp_storage_family: ontap_7mode
          netapp_storage_protocol: nfs
          netapp_server_hostname: 111.222.333.444
          netapp_server_port: 80
          netapp_login: openstack_cinder
          netapp_password: password
          volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
          volume_backend_name: NETAPP_NFS

For netapp_server_hostname, specify the IP address of the Data ONTAP server. Set netapp_storage_protocol to iscsi or nfs depending on the configuration, and set netapp_server_port to 80 for HTTP or 443 for HTTPS.

The cinder-volume.yml playbook automatically installs the nfs-common package across the hosts, transitioning from an LVM to a NetApp back end.
Configuring cinder qos specs
Deployers may optionally define the variable
cinder_qos_specs to create qos specs. This variable is a
list of dictionaries that contain the options for each qos spec. cinder
volume-types may be assigned to a qos spec by defining the key
cinder_volume_types in the desired qos spec dictionary. For example:
cinder_qos_specs:
  - name: high-iops
    options:
      consumer: front-end
      read_iops_sec: 2000
      write_iops_sec: 2000
    cinder_volume_types:
      - volumes-1
      - volumes-2
  - name: low-iops
    options:
      consumer: front-end
      write_iops_sec: 100
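After the cinder playbooks run, you can verify the resulting qos specs and their volume type associations with the OpenStack client:

openstack volume qos list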
Shared storage and synchronized UID/GID
Specify a custom UID for the cinder user and GID for the cinder group to ensure they are identical on each host. This is helpful when using shared storage on Compute nodes because it allows instances to migrate without filesystem ownership failures.
By default, Ansible creates the cinder user and group without specifying the UID or GID. To specify custom values for the UID or GID, set the following Ansible variables:
cinder_system_user_uid = <specify a UID>
cinder_system_group_gid = <specify a GID>

Warning
Setting this value after deploying an environment with OpenStack-Ansible can cause failures, errors, and general instability. These values should only be set once before deploying an OpenStack environment and then never changed.
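A minimal /etc/openstack_deploy/user_variables.yml sketch, with illustrative UID and GID values chosen before the first deployment:

cinder_system_user_uid: 4000
cinder_system_group_gid: 4000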