Install and configure a storage node

This section describes how to install and configure storage nodes for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device, /dev/sdb, that contains a suitable partition table with one partition, /dev/sdb1, occupying the entire device. The service provisions logical volumes on this device using the LVM driver and provides them to instances via the iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.

To configure prerequisites

You must configure the storage node before you install and configure the volume service on it. Similar to the controller node, the storage node contains one network interface on the management network. The storage node also needs an empty block storage device of a size suitable for your environment. For more information, see .

1. Configure the management interface:

   IP address: 10.0.0.41
   Network mask: 255.255.255.0 (or /24)
   Default gateway: 10.0.0.1

2. Set the hostname of the node to block1.

3. Copy the contents of the /etc/hosts file from the controller node to the storage node and add the following to it:

   # block1
   10.0.0.41       block1

   Also add this content to the /etc/hosts file on all other nodes in your environment.

4. Install and configure NTP using the instructions in .

5. If you intend to use non-raw image types such as QCOW2 and VMDK, install the QEMU support package:

   # apt-get install qemu
   # yum install qemu
   # zypper install qemu

6. Install the LVM packages:

   # apt-get install lvm2
   # yum install lvm2

   Some distributions include LVM by default.

7. Start the LVM metadata service and configure it to start when the system boots:

   # systemctl enable lvm2-lvmetad.service
   # systemctl start lvm2-lvmetad.service

8. Create the LVM physical volume /dev/sdb1:

   # pvcreate /dev/sdb1
   Physical volume "/dev/sdb1" successfully created

   If your system uses a different device name, adjust these steps accordingly.

9. Create the LVM volume group cinder-volumes:

   # vgcreate cinder-volumes /dev/sdb1
   Volume group "cinder-volumes" successfully created

   The Block Storage service creates logical volumes in this volume group.

10. Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:

    In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

    devices {
    ...
    filter = [ "a/sdb/", "r/.*/"]

    Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters.

    If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

    filter = [ "a/sda/", "a/sdb/", "r/.*/"]

    Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

    filter = [ "a/sda/", "r/.*/"]
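Before moving on to the Block Storage packages, you can sanity-check the LVM configuration. The following commands are a minimal check, assuming the /dev/sdb1 partition, the cinder-volumes volume group, and the filter shown above; adjust the device and group names if yours differ:

   # pvdisplay /dev/sdb1
   # vgdisplay cinder-volumes
   # vgs -vvvv

The first two commands should report the physical volume and volume group you created. The verbose vgs output includes the device-scanning steps, which helps confirm that devices excluded by the filter, such as the operating system disk, are no longer scanned.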
To install and configure Block Storage volume components

1. Install the packages:

   # apt-get install cinder-volume python-mysqldb
   # yum install openstack-cinder targetcli python-oslo-db python-oslo-log MySQL-python
   # zypper install openstack-cinder-volume tgt python-mysql

2. Edit the /etc/cinder/cinder.conf file and complete the following actions. (A consolidated sketch of the resulting file appears after this procedure.)

   In the [database] section, configure database access:

   [database]
   ...
   connection = mysql://cinder:CINDER_DBPASS@controller/cinder

   Replace CINDER_DBPASS with the password you chose for the Block Storage database.

   In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

   [DEFAULT]
   ...
   rpc_backend = rabbit

   [oslo_messaging_rabbit]
   ...
   rabbit_host = controller
   rabbit_userid = openstack
   rabbit_password = RABBIT_PASS

   Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

   In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

   [DEFAULT]
   ...
   auth_strategy = keystone

   [keystone_authtoken]
   ...
   auth_uri = http://controller:5000
   auth_url = http://controller:35357
   auth_plugin = password
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = cinder
   password = CINDER_PASS

   Replace CINDER_PASS with the password you chose for the cinder user in the Identity service. Comment out or remove any other options in the [keystone_authtoken] section.

   In the [DEFAULT] section, configure the my_ip option:

   [DEFAULT]
   ...
   my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

   Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture.

   In the [lvm] section, configure the LVM back end with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service. Use the tgtadm helper on distributions that provide the tgt service, or the lioadm helper on distributions that use the LIO target (targetcli):

   [lvm]
   ...
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_group = cinder-volumes
   iscsi_protocol = iscsi
   iscsi_helper = tgtadm

   [lvm]
   ...
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_group = cinder-volumes
   iscsi_protocol = iscsi
   iscsi_helper = lioadm

   In the [DEFAULT] section, enable the LVM back end:

   [DEFAULT]
   ...
   enabled_backends = lvm

   Back-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.

   In the [DEFAULT] section, configure the location of the Image service:

   [DEFAULT]
   ...
   glance_host = controller

   In the [oslo_concurrency] section, configure the lock path:

   [oslo_concurrency]
   ...
   lock_path = /var/lock/cinder

   (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

   [DEFAULT]
   ...
   verbose = True

Alternatively, on distributions whose packages configure the Block Storage service during installation, the procedure is shorter:

1. Install the packages:

   # apt-get install cinder-volume python-mysqldb

   Respond to the prompts for database management, Identity service credentials, service endpoint registration, and message broker credentials.

2. Respond to the prompts for the volume group to associate with the Block Storage service. The configuration script scans for volume groups and attempts to use the first one it finds. If your system contains only the cinder-volumes volume group, the script should automatically choose it.
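For reference, the options set in the manual configuration procedure above combine into an /etc/cinder/cinder.conf similar to the following. This is a sketch rather than a complete file: real files contain many commented defaults, the passwords remain placeholders, my_ip uses the example address 10.0.0.41, and iscsi_helper shows the tgtadm variant (substitute lioadm if your distribution uses the LIO target):

   [DEFAULT]
   rpc_backend = rabbit
   auth_strategy = keystone
   my_ip = 10.0.0.41
   enabled_backends = lvm
   glance_host = controller
   verbose = True

   [database]
   connection = mysql://cinder:CINDER_DBPASS@controller/cinder

   [oslo_messaging_rabbit]
   rabbit_host = controller
   rabbit_userid = openstack
   rabbit_password = RABBIT_PASS

   [keystone_authtoken]
   auth_uri = http://controller:5000
   auth_url = http://controller:35357
   auth_plugin = password
   project_domain_id = default
   user_domain_id = default
   project_name = service
   username = cinder
   password = CINDER_PASS

   [lvm]
   volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
   volume_group = cinder-volumes
   iscsi_protocol = iscsi
   iscsi_helper = tgtadm

   [oslo_concurrency]
   lock_path = /var/lock/cinder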
To finalize installation

1. Restart the Block Storage volume service, including its dependencies:

   # service tgt restart
   # service cinder-volume restart

   On distributions that use systemd, instead start the Block Storage volume service and its dependencies and configure them to start when the system boots. Use target.service on distributions that install the LIO target (targetcli), or tgtd.service on distributions that use tgt:

   # systemctl enable openstack-cinder-volume.service target.service
   # systemctl start openstack-cinder-volume.service target.service

   # systemctl enable openstack-cinder-volume.service tgtd.service
   # systemctl start openstack-cinder-volume.service tgtd.service

2. By default, the Ubuntu packages create an SQLite database. Because this configuration uses a SQL database server, remove the SQLite database file:

   # rm -f /var/lib/cinder/cinder.sqlite
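To confirm that the storage node registered with the Block Storage service, you can list the service components from the controller node. This is a quick check rather than a full verification procedure, and it assumes an admin credentials file (referred to as admin-openrc.sh elsewhere in this guide) and the Block Storage client on the controller; adjust the file name to whatever your environment actually uses:

   $ source admin-openrc.sh
   $ cinder service-list

The listing should include a cinder-volume entry for the storage node (typically shown as block1@lvm, combining the hostname and the back-end name) with a state of up. If the entry is missing or down, review the Block Storage log files under /var/log/cinder/ on the storage node.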