OpenStack-Ansible Installation Guide
Configuring the Block Storage service (optional)
By default, the Block Storage service installs on the host itself using the LVM back end. Although LVM is the default for cinder, note that an LVM back end is a single point of failure. Because the volume service is deployed directly to the host, is_metal is true when using LVM.
Configuring Cinder to use LVM
List the container_vars that contain the storage options for this target host. The vars related to the cinder availability zone and limit_container_types are optional.

To configure an LVM back end, use the following example:
storage_hosts:
  Infra01:
    ip: 172.29.236.16
    container_vars:
      cinder_storage_availability_zone: cinderAZ_1
      cinder_default_availability_zone: cinderAZ_1
      cinder_backends:
        lvm:
          volume_backend_name: LVM_iSCSI
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
          iscsi_ip_address: "{{ storage_address }}"
        limit_container_types: cinder_volume
If you would rather use another back end (such as Ceph or NetApp) in a container instead of on bare metal, edit /etc/openstack_deploy/env.d/cinder.yml and remove the is_metal: true stanza under the cinder_volumes_container properties.
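As a sketch of what to remove (the exact layout of env.d/cinder.yml can vary between releases, so treat this as illustrative):

```yaml
# /etc/openstack_deploy/env.d/cinder.yml (illustrative layout)
container_skel:
  cinder_volumes_container:
    properties:
      is_metal: true   # delete this line so cinder-volume deploys in a container
```

With the property removed, the cinder-volume service is deployed into an LXC container on the next playbook run.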
Configuring Cinder to use Ceph
For cinder to use Ceph, both the API and the back end must be configured.
List of target hosts on which to deploy the cinder API. It is recommended that a minimum of three target hosts are used for this service.
storage-infra_hosts:
  infra1:
    ip: 172.29.236.101
  infra2:
    ip: 172.29.236.102
  infra3:
    ip: 172.29.236.103

List of target hosts on which to deploy the cinder volume service. It is recommended that at least one target host is configured for this service. Typically, this list contains target hosts that do not reside in other levels of the configuration.
storage_hosts:
  storage1:
    ip: 172.29.236.121
  ...

To configure an RBD back end, use the following example:
container_vars:
  cinder_storage_availability_zone: cinderAZ_3
  cinder_default_availability_zone: cinderAZ_1
  cinder_backends:
    limit_container_types: cinder_volume
    volumes_hdd:
      volume_driver: cinder.volume.drivers.rbd.RBDDriver
      rbd_pool: volumes_hdd
      rbd_ceph_conf: /etc/ceph/ceph.conf
      rbd_flatten_volume_from_snapshot: 'false'
      rbd_max_clone_depth: 5
      rbd_store_chunk_size: 4
      rados_connect_timeout: -1
      volume_backend_name: volumes_hdd
      rbd_user: "{{ cinder_ceph_client }}"
      rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}"
Configuring Cinder to use a NetApp appliance
To use a NetApp storage appliance back end, edit the
/etc/openstack_deploy/openstack_user_config.yml file and
configure each storage node that will use it:
Ensure that the NAS team enables the httpd.admin.access option on the NetApp appliance.
Add the netapp stanza under the cinder_backends stanza for each storage node:

cinder_backends:
  netapp:

The options in subsequent steps fit under the netapp stanza. The back-end name is arbitrary and becomes a volume type within the Block Storage service.
Configure the storage family:
netapp_storage_family: STORAGE_FAMILY

Replace STORAGE_FAMILY with ontap_7mode for Data ONTAP operating in 7-mode, or ontap_cluster for Data ONTAP operating as a cluster.

Configure the storage protocol:
netapp_storage_protocol: STORAGE_PROTOCOL

Replace STORAGE_PROTOCOL with iscsi for iSCSI or nfs for NFS.

For the NFS protocol, you must also specify the location of the configuration file that lists the shares available to the Block Storage service:
nfs_shares_config: SHARE_CONFIG

Replace SHARE_CONFIG with the location of the share configuration file, for example /etc/cinder/nfs_shares.

Configure the server:
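The share configuration file itself is a plain-text list of NFS exports, one per line. A hypothetical /etc/cinder/nfs_shares might look like this (addresses and export paths are placeholders):

```
192.0.2.10:/vol/cinder_nfs_01
192.0.2.11:/vol/cinder_nfs_02
```

Each entry takes the form host:/export_path; the Block Storage service mounts every listed share and schedules volumes across them.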
netapp_server_hostname: SERVER_HOSTNAME

Replace SERVER_HOSTNAME with the hostnames for both NetApp controllers.

Configure the server API port:
netapp_server_port: PORT_NUMBER

Replace PORT_NUMBER with 80 for HTTP or 443 for HTTPS.

Configure the server credentials:
netapp_login: USER_NAME
netapp_password: PASSWORD

Replace USER_NAME and PASSWORD with the appropriate values.

Select the NetApp driver:
volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver

Configure the volume back-end name:
volume_backend_name: BACKEND_NAME

Replace BACKEND_NAME with a suitable value that provides a hint for the Block Storage scheduler, for example NETAPP_iSCSI.

Check that the
openstack_user_config.yml configuration is accurate:

storage_hosts:
  Infra01:
    ip: 172.29.236.16
    container_vars:
      cinder_backends:
        limit_container_types: cinder_volume
        netapp:
          netapp_storage_family: ontap_7mode
          netapp_storage_protocol: nfs
          netapp_server_hostname: 111.222.333.444
          netapp_server_port: 80
          netapp_login: openstack_cinder
          netapp_password: password
          volume_driver: cinder.volume.drivers.netapp.common.NetAppDriver
          volume_backend_name: NETAPP_NFS

For netapp_server_hostname, specify the IP address of the Data ONTAP server. Set netapp_storage_protocol to iscsi or nfs depending on the configuration, and netapp_server_port to 80 if using HTTP or 443 if using HTTPS.
The cinder-volume.yml playbook automatically installs the nfs-common package across the hosts when transitioning from an LVM to a NetApp back end.