Our installation guide walks through configuring storage nodes
using the LVM driver. LVM now defaults to thin provisioning,
which requires thin-provisioning-tools to be installed on the
host. As a result, following our instructions as written leads
to failures when thin provisioning operations are attempted.
This change adds the required package to each platform's
installation instructions so the necessary tools are present.
It also adds device-mapper-persistent-data to bindep for Red
Hat based platforms, which were previously missing the thin
provisioning tools.
The tools appear to be installed by default on SUSE platforms.
Change-Id: I2a84ae99d71c3551814197917d114057430858b7
Closes-bug: #1738409
Closes-bug: #1740262
(cherry picked from commit 78fa04624d)
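The per-platform packages named in the commit message can be checked for and installed roughly as follows. This is an illustrative sketch, not part of the change itself: probing for the thin_check binary is just one way to confirm the tools are present, and the package names shown are the standard distribution ones.

```shell
# Check whether the LVM thin-provisioning tools are already on the host.
# thin_check ships in "device-mapper-persistent-data" on Red Hat based
# platforms and in "thin-provisioning-tools" on Debian/Ubuntu.
if command -v thin_check >/dev/null 2>&1; then
    echo "thin provisioning tools present"
else
    echo "thin provisioning tools missing"
    # On Red Hat based platforms:
    #   yum install device-mapper-persistent-data
    # On Debian/Ubuntu:
    #   apt-get install thin-provisioning-tools
fi
```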
Install and configure a storage node
Prerequisites
Before you install and configure the Block Storage service on the storage node, you must prepare the storage device.
Note
Perform these steps on the storage node.
Install the supporting utility packages:
Install the LVM packages:
# yum install lvm2 device-mapper-persistent-data

Start the LVM metadata service and configure it to start when the system boots:

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
Note
Some distributions include LVM by default.
Create the LVM physical volume /dev/sdb:

# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

Create the LVM volume group cinder-volumes:

# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

The Block Storage service creates logical volumes in this volume group.
Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:

In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

devices {
...
filter = [ "a/sdb/", "r/.*/"]

Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters.

Warning

If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/"]
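The accept/reject semantics of the filter array can be sketched as a small first-match function. This is a simplified model for illustration, not LVM's actual implementation: lvm_filter is a hypothetical helper that tries each pattern in order and returns the verdict of the first match, accepting devices no pattern matches.

```shell
# Simplified model of LVM filter evaluation: each item is "a/REGEX/" (accept)
# or "r/REGEX/" (reject); the first pattern that matches the device decides.
lvm_filter() {
    dev="$1"; shift
    for item in "$@"; do
        kind=${item%%/*}              # "a" or "r"
        pat=${item#*/}; pat=${pat%/}  # regex between the slashes
        if printf '%s\n' "$dev" | grep -Eq "$pat"; then
            if [ "$kind" = "a" ]; then echo accept; else echo reject; fi
            return 0
        fi
    done
    echo accept                       # unmatched devices are accepted
}

lvm_filter /dev/sdb "a/sdb/" "r/.*/"   # prints "accept"
lvm_filter /dev/sda "a/sdb/" "r/.*/"   # prints "reject"
```

This makes the ordering rule concrete: placing r/.*/ last rejects everything the earlier accept patterns did not match, which is why the guide requires it as the final item.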
Install and configure components
Install the packages:
# yum install openstack-cinder targetcli python-keystone

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

[database]
# ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

Replace CINDER_DBPASS with the password you chose for the Block Storage database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller

Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_PASS

Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture.

In the [lvm] section, configure the LVM back end with the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI service. If the [lvm] section does not exist, create it:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

In the [DEFAULT] section, enable the LVM back end:

[DEFAULT]
# ...
enabled_backends = lvm

Note

Back-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.

In the [DEFAULT] section, configure the location of the Image service API:

[DEFAULT]
# ...
glance_api_servers = http://controller:9292

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
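Taken together, the edits above produce a cinder.conf along these lines. The sketch below writes the combined fragments to a temporary file so the result can be inspected as a whole; the example_* passwords stand in for the CINDER_DBPASS, RABBIT_PASS, and CINDER_PASS values you chose, and 10.0.0.41 is only the example management address.

```shell
# Assemble the combined cinder.conf fragments from this guide into one file.
# The passwords below are placeholders: substitute your own values.
CINDER_DBPASS=example_db_pass
RABBIT_PASS=example_rabbit_pass
CINDER_PASS=example_cinder_pass
CONF=$(mktemp)

cat > "$CONF" <<EOF
[DEFAULT]
transport_url = rabbit://openstack:${RABBIT_PASS}@controller
auth_strategy = keystone
my_ip = 10.0.0.41
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:${CINDER_DBPASS}@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = ${CINDER_PASS}

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
EOF

grep -c '^\[' "$CONF"   # prints 5 (five configuration sections)
```

Reviewing the generated file before copying its contents into /etc/cinder/cinder.conf makes it easy to spot a placeholder you forgot to replace.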
Finalize installation
Start the Block Storage volume service including its dependencies and configure them to start when the system boots:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service