Install and configure a storage node
This section describes how to install and configure storage nodes for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device /dev/sdb that contains a suitable partition table with one partition /dev/sdb1 occupying the entire device. The service provisions logical volumes on this device using the Logical Volume Manager (LVM) driver and provides them to instances via iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.
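Before you begin, you can confirm that the node presents the expected device layout. A quick check, assuming the device names used in this guide (the sizes shown are example output and will vary):
# lsblk /dev/sdb
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  100G  0 disk
└─sdb1   8:17   0  100G  0 part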
To configure prerequisites
You must configure the storage node before you install and configure the volume service on it. Similar to the controller node, the storage node contains one network interface on the management network. The storage node also needs an empty block storage device of suitable size for your environment. For more information, see the basic environment section <basic_environment>.
Configure the management interface:
IP address: 10.0.0.41
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
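As an illustration only, on a node that uses a classic Debian/Ubuntu-style /etc/network/interfaces file, the management interface configuration might look like the following sketch. The interface name eth0 is an assumption, and the syntax differs on other distributions; see the basic environment section for distro-specific instructions:
# The management network interface (example; adjust the interface name)
auto eth0
iface eth0 inet static
    address 10.0.0.41
    netmask 255.255.255.0
    gateway 10.0.0.1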
Set the hostname of the node to block1.
Copy the contents of the /etc/hosts file from the controller node to the storage node and add the following to it:
# block1
10.0.0.41       block1
Also add this content to the /etc/hosts file on all other nodes in your environment.
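You can verify name resolution for the new entry from any node, for example:
# getent hosts block1
10.0.0.41       block1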
Install and configure Network Time Protocol (NTP) using the instructions in the section called "Other nodes" <basics-ntp-other-nodes>.
obs
If you intend to use non-raw image types such as QCOW2 and VMDK, install the QEMU support package:
# zypper install qemu
Install the LVM packages:
# zypper install lvm2
rdo
If you intend to use non-raw image types such as QCOW2 and VMDK, install the QEMU support package:
# yum install qemu
Install the LVM packages:
# yum install lvm2
Note
Some distributions include LVM by default.
Start the LVM metadata service and configure it to start when the system boots:
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
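To confirm that the metadata service started correctly, you can check its status; the output should report it as active (running):
# systemctl status lvm2-lvmetad.service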
ubuntu
If you intend to use non-raw image types such as QCOW2 and VMDK, install the QEMU support package:
# apt-get install qemu
Install the LVM packages:
# apt-get install lvm2
Note
Some distributions include LVM by default.
Create the LVM physical volume /dev/sdb1:
# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
Note
If your system uses a different device name, adjust these steps accordingly.
Create the LVM volume group cinder-volumes:
# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
The Block Storage service creates logical volumes in this volume group.
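You can verify the physical volume and volume group before continuing. The sizes shown here are example output and will reflect your actual device:
# pvs
  PV         VG             Fmt  Attr PSize   PFree
  /dev/sdb1  cinder-volumes lvm2 a--  100.00g 100.00g
# vgs cinder-volumes
  VG             #PV #LV #SN Attr   VSize   VFree
  cinder-volumes   1   0   0 wz--n- 100.00g 100.00g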
Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:
In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:
devices {
...
filter = [ "a/sdb/", "r/.*/"]
Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters.
Warning
If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:
filter = [ "a/sda/", "r/.*/"]
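After editing the filter, a quick sanity check is to rescan for physical volumes; only the devices you accepted should appear (the size shown is example output):
# pvscan
  PV /dev/sdb1   VG cinder-volumes   lvm2 [100.00 GiB / 100.00 GiB free]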
Install and configure Block Storage volume components
obs
Install the packages:
# zypper install openstack-cinder-volume tgt python-mysql
rdo
Install the packages:
# yum install openstack-cinder targetcli python-oslo-db \
  python-oslo-log MySQL-python
ubuntu
Install the packages:
# apt-get install cinder-volume python-mysqldb
Edit the /etc/cinder/cinder.conf file and complete the following actions:
In the [database] section, configure database access:
[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
Replace CINDER_DBPASS with the password you chose for the Block Storage database.
In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:
[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS
Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.
Note
Comment out or remove any other options in the [keystone_authtoken] section.
In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture <overview-example-architectures>.
obs or ubuntu
In the [lvm] section, configure the LVM back end with the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI service:
[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
rdo
In the [lvm] section, configure the LVM back end with the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI service:
[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
In the [DEFAULT] section, enable the LVM back end:
[DEFAULT]
...
enabled_backends = lvm
Note
Back-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.
In the [DEFAULT] section, configure the location of the Image service:
[DEFAULT]
...
glance_host = controller
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
...
lock_path = /var/lock/cinder
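Depending on the packaging, the lock directory may not exist yet. If it does not, creating it and giving the cinder user ownership avoids startup failures; the cinder user and group names are packaging defaults and may differ on your distribution:
# mkdir -p /var/lock/cinder
# chown cinder:cinder /var/lock/cinder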
(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:
[DEFAULT]
...
verbose = True
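For reference, after completing these steps the edited portions of /etc/cinder/cinder.conf on an RDO-based node might look like the following sketch. Passwords are placeholders, and other default options in the file are omitted; on openSUSE and Ubuntu, iscsi_helper is tgtadm instead:
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.41
enabled_backends = lvm
glance_host = controller
verbose = True

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS

[oslo_concurrency]
lock_path = /var/lock/cinder

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm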
To finalize installation
obs
Start the Block Storage volume service including its dependencies and configure them to start when the system boots:
# systemctl enable openstack-cinder-volume.service tgtd.service
# systemctl start openstack-cinder-volume.service tgtd.service
rdo
Start the Block Storage volume service including its dependencies and configure them to start when the system boots:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
ubuntu
Restart the Block Storage volume service including its dependencies:
# service tgt restart
# service cinder-volume restart
By default, the Ubuntu packages create an SQLite database. Because this configuration uses a SQL database server, remove the SQLite database file:
# rm -f /var/lib/cinder/cinder.sqlite
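Regardless of distribution, you can confirm that the volume service registered itself by listing the Block Storage service agents from the controller node, assuming the admin-openrc.sh credentials file created earlier in this guide; the cinder-volume binary on host block1 should appear with state up:
$ source admin-openrc.sh
$ cinder service-list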