From 07dc1fcc3796259e1b73c63d1a0ae0365bd8c1b9 Mon Sep 17 00:00:00 2001 From: Matthew Kassawara Date: Fri, 17 Oct 2014 20:57:56 -0500 Subject: [PATCH] Improve install guide cinder chapter I improved the cinder chapter of the installation guide as follows: 1) Renamed storage node files, titles, and XML IDs to conform with standards. 2) Rewrote introductory content to increase depth. 3) Clarified requirements for controller and storage nodes. 4) Added steps to configure the storage node operating system prior to installing the volume service. 5) Rewrote LVM filter content because the original content was vague and confusing. 6) Added more command output. 7) Added 'cinder list' command to verify section. I eventually want to restructure the architecture and basic environment content to integrate organization and configuration of optional nodes. Adding steps to configure the storage node operating system in this chapter temporarily fills a void. Change-Id: Iaa404ee7b3fcbc0a14450cab6ae378f698890d7d Implements: blueprint installation-guide-improvements --- doc/install-guide/ch_cinder.xml | 24 +- ...xml => section_cinder-controller-node.xml} | 11 +- doc/install-guide/section_cinder-node.xml | 269 ---------------- .../section_cinder-storage-node.xml | 291 ++++++++++++++++++ doc/install-guide/section_cinder-verify.xml | 18 +- 5 files changed, 325 insertions(+), 288 deletions(-) rename doc/install-guide/{section_cinder-controller.xml => section_cinder-controller-node.xml} (97%) delete mode 100644 doc/install-guide/section_cinder-node.xml create mode 100644 doc/install-guide/section_cinder-storage-node.xml diff --git a/doc/install-guide/ch_cinder.xml b/doc/install-guide/ch_cinder.xml index a62400b442..fa843404c2 100644 --- a/doc/install-guide/ch_cinder.xml +++ b/doc/install-guide/ch_cinder.xml @@ -5,17 +5,21 @@ version="5.0" xml:id="ch_cinder"> Add the Block Storage service - The OpenStack Block Storage service works through the - interaction of a series of daemon processes named 
cinder-* that reside persistently on - the host machine or machines. You can run the binaries from a - single node or across multiple nodes. You can also run them on the - same node as other OpenStack services. The following sections - introduce Block Storage service components and concepts and show - you how to configure and install the Block Storage service. + The OpenStack Block Storage service provides block storage devices + to instances using various backends. The Block Storage API and scheduler + services run on the controller node and the volume service runs on one + or more storage nodes. Storage nodes provide volumes to instances using + local block storage devices or SAN/NAS backends with the appropriate + drivers. For more information, see the + Configuration Reference. + + This chapter omits the backup manager because it depends on the + Object Storage service. + - - + +
Next steps diff --git a/doc/install-guide/section_cinder-controller.xml b/doc/install-guide/section_cinder-controller-node.xml similarity index 97% rename from doc/install-guide/section_cinder-controller.xml rename to doc/install-guide/section_cinder-controller-node.xml index 8a6860e33d..f3dae844bc 100644 --- a/doc/install-guide/section_cinder-controller.xml +++ b/doc/install-guide/section_cinder-controller-node.xml @@ -7,13 +7,8 @@ Install and configure controller node This section describes how to install and configure the Block Storage service, code-named cinder, on the controller node. This - optional service requires at least one additional node to provide - storage volumes created by the - logical volume manager (LVM) - and served over - iSCSI transport. + service requires at least one additional storage node that provides + volumes to instances. To configure prerequisites Before you install and configure the Block Storage service, you must @@ -36,7 +31,7 @@ database: GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ IDENTIFIED BY 'CINDER_DBPASS'; -mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ +GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ IDENTIFIED BY 'CINDER_DBPASS'; Replace CINDER_DBPASS with a suitable password. diff --git a/doc/install-guide/section_cinder-node.xml b/doc/install-guide/section_cinder-node.xml deleted file mode 100644 index bc2d015f3b..0000000000 --- a/doc/install-guide/section_cinder-node.xml +++ /dev/null @@ -1,269 +0,0 @@ - -
- - Configure a Block Storage service node - After you configure the services on the controller node, - configure a Block Storage service node, which contains the disk - that serves volumes. - You can configure OpenStack to use various storage systems. - This procedure uses LVM as an example. - - To configure the operating system - - Refer to the instructions in - to configure the operating system. Note the following differences - from the installation instructions for the controller node: - - - Set the host name to block1 and use - 10.0.0.41 as IP address on the management - network interface. Ensure that the IP addresses and host - names for both controller node and Block Storage service - node are listed in the /etc/hosts file - on each system. - - - Follow the instructions in to synchronize the time from the controller node. - - - - - - To create a logical volume - - Install the LVM packages: - # apt-get install lvm2 - # yum install lvm2 - - Some distributions include LVM by default. - - - - Start the LVM metadata service and configure it to start when the - system boots: - # systemctl enable lvm2-lvmetad.service -# systemctl start lvm2-lvmetad.service - - - Create the LVM physical volume and volume group. This guide - assumes a second disk /dev/sdb is being used - for this purpose: - # pvcreate /dev/sdb -# vgcreate cinder-volumes /dev/sdb - - - In the devices section in the - /etc/lvm/lvm.conf file, add the filter entry - r/.*/ to prevent LVM from scanning devices - used by virtual machines: - devices { -... -filter = [ "a/sda1/", "a/sdb/", "r/.*/"] -... -} - - You must add the required physical volumes for LVM on the - Block Storage host. Run the pvdisplay - command to get a list of physical volumes. - - Each item in the filter array starts with either an - a for accept, or an r - for reject. The physical volumes on the Block Storage host have - names that begin with a. The array must end - with "r/.*/" to reject any device not - listed. 
- In this example, the /dev/sda1 volume is - where the volumes for the operating system for the node - reside, while /dev/sdb is the volume - reserved for cinder-volumes. - - - - Install and configure Block Storage service node components - - Install the packages for the Block Storage service: - # apt-get install cinder-volume python-mysqldb - # yum install openstack-cinder targetcli python-oslo-db MySQL-python - # zypper install openstack-cinder-volume tgt python-mysql - - - Respond to the debconf prompts about the database - management, [keystone_authtoken] settings, - and RabbitMQ credentials. - Enter the same details as you did for your Block Storage service - controller node. - Another screen prompts you for the volume-group to use. The Debian - package configuration script detects every active volume group - and tries to use the first one it sees, provided that the - lvm2 package was - installed before Block Storage. This should be the case if you - configured the volume group first, as this guide recommends. - If you have only one active volume group on your Block - Storage service node, its name is automatically detected when you install the cinder-volume package. If no volume-group is available when you install - cinder-common, you - must use dpkg-reconfigure to manually - configure or re-configure cinder-common. - - - Edit the /etc/cinder/cinder.conf file - and complete the following actions: - - - In the [database] section, configure - database access: - [database] -... -connection = mysql://cinder:CINDER_DBPASS@controller/cinder - Replace CINDER_DBPASS with - the password you chose for the Block Storage database. - - - In the [DEFAULT] section, configure - RabbitMQ message broker access: - [DEFAULT] -... -rpc_backend = rabbit -rabbit_host = controller -rabbit_password = RABBIT_PASS - Replace RABBIT_PASS with the - password you chose for the guest account in - RabbitMQ. 
- - - In the [DEFAULT] and - [keystone_authtoken] sections, - configure Identity service access: - [DEFAULT] -... -auth_strategy = keystone - -[keystone_authtoken] -... -auth_uri = http://controller:5000/v2.0 -identity_uri = http://controller:35357 -admin_tenant_name = service -admin_user = cinder -admin_password = CINDER_PASS - Replace CINDER_PASS with the - password you chose for the cinder user in the - Identity service. - - Comment out any auth_host, - auth_port, and - auth_protocol options because the - identity_uri option replaces them. - - - - In the [DEFAULT] section, configure the - my_ip option: - [DEFAULT] -... -my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - Replace - MANAGEMENT_INTERFACE_IP_ADDRESS with - the IP address of the management network interface on your - storage node, typically 10.0.0.41 for the first node in the - example - architecture. - - - In the [DEFAULT] section, configure the - location of the Image Service: - [DEFAULT] -... -glance_host = controller - - - In the [DEFAULT] section, configure Block - Storage to use the lioadm iSCSI - service: - [DEFAULT] -... -iscsi_helper = lioadm - - - (Optional) To assist with troubleshooting, - enable verbose logging in the [DEFAULT] - section: - [DEFAULT] -... -verbose = True - - - - - Due to a packaging bug, the Block Storage service cannot - execute commands with administrative privileges using the - sudo command. Run the following command to - resolve this issue: - # cp /etc/sudoers.d/cinder_sudoers /etc/sudoers.d/cinder_sudoers.orig -# sed -i 's,/etc/cinder/rootwrap.conf,/etc/cinder/rootwrap.conf *,g' \ - /etc/sudoers.d/cinder_sudoers - For more information, see the - bug report. - - - - To finalize installation - - Restart the Block Storage services with the new - settings: - # service tgt restart -# service cinder-volume restart - - - By default, the Ubuntu packages create a SQLite database. 
- Because this configuration uses a SQL database server, remove - the SQLite database file: - # rm -f /var/lib/cinder/cinder.sqlite - - - Enable the target service: - # systemctl enable target.service - - - Start the target service: - # systemctl start target.service - - - Start and configure the Block Storage services to start -- when the system boots: - On SLES: - # service tgtd start -# chkconfig tgtd on - On openSUSE: -# systemctl enable tgtd.service -# systemctl start tgtd.service - - - Start and configure the cinder volume service to start - when the system boots: - # systemctl enable openstack-cinder-volume.service -# systemctl start openstack-cinder-volume.service - - - Start and configure the cinder volume service to start - when the system boots: - On SLES: - # service openstack-cinder-volume start -# chkconfig openstack-cinder-volume on - On openSUSE: - # systemctl enable openstack-cinder-volume.service -# systemctl start openstack-cinder-volume.service - - -
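Both the removed section above and the new storage-node section in this patch rely on LVM filter arrays of the form `[ "a/sdb/", "r/.*/" ]`, where entries are evaluated in order and the first match wins. As a rough illustration of those accept/reject semantics, the following hypothetical shell helper (not part of LVM; the function name and the grep-based matching are simplifications of LVM's unanchored regex search) evaluates a device name against filter entries:

```shell
#!/bin/sh
# Hypothetical illustration (not part of LVM): mimic how an lvm.conf
# filter array is evaluated. Entries are tried in order; "a" accepts,
# "r" rejects; a device that matches no entry is accepted by default.
lvm_filter() {
  dev="$1"; shift
  for entry in "$@"; do
    action=${entry%%/*}                            # "a" or "r"
    pattern=$(printf '%s' "$entry" | cut -d/ -f2)  # regex between the slashes
    if printf '%s' "$dev" | grep -Eq "$pattern"; then
      if [ "$action" = "a" ]; then echo accept; else echo reject; fi
      return
    fi
  done
  echo accept
}

lvm_filter /dev/sdb 'a/sdb/' 'r/.*/'    # prints "accept"
lvm_filter /dev/sdc 'a/sdb/' 'r/.*/'    # prints "reject"
```

Note that in real lvm.conf syntax each entry needs its closing delimiter (for example `"a/sda/"`, not `"a/sda"`), since the character after `a` or `r` delimits the regular expression.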
diff --git a/doc/install-guide/section_cinder-storage-node.xml b/doc/install-guide/section_cinder-storage-node.xml new file mode 100644 index 0000000000..f4fd541033 --- /dev/null +++ b/doc/install-guide/section_cinder-storage-node.xml @@ -0,0 +1,291 @@ + +
+ + Install and configure a storage node + This section describes how to install and configure storage nodes + for the Block Storage service. For simplicity, this configuration + references one storage node with an empty local block storage device + /dev/sdb. The service provisions logical volumes + on this device using the LVM driver and provides + them to instances via + iSCSI transport. You can follow these instructions with + minor modifications to horizontally scale your environment with + additional storage nodes. + + To configure prerequisites + You must configure the storage node before you install and + configure the volume service on it. Similar to the controller node, + the storage node contains one network interface on the + management network. The storage node also + needs an empty block storage device of suitable size for your + environment. For more information, see + . + + Configure the management interface: + IP address: 10.0.0.41 + Network mask: 255.255.255.0 (or /24) + Default gateway: 10.0.0.1 + + + Set the hostname of the node to + block1. + + + Copy the contents of the /etc/hosts file from + the controller node to the storage node and add the following + to it: + # block1 +10.0.0.41 block1 + Also add this content to the /etc/hosts file + on all other nodes in your environment. + + + Install and configure + NTP + using the instructions in + . + + + Install the LVM packages: + # apt-get install lvm2 + # yum install lvm2 + + Some distributions include LVM by default. + + + + Start the LVM metadata service and configure it to start when the + system boots: + # systemctl enable lvm2-lvmetad.service +# systemctl start lvm2-lvmetad.service + + + Create the LVM physical volume /dev/sdb: + # pvcreate /dev/sdb + Physical volume "/dev/sdb" successfully created + + If your system uses a different device name, adjust these + steps accordingly. 
+ + + + Create the LVM volume group + cinder-volumes: + # vgcreate cinder-volumes /dev/sdb + Volume group "cinder-volumes" successfully created + The Block Storage service creates logical volumes in this + volume group. + + + Only instances can access Block Storage volumes. However, the + underlying operating system manages the devices associated with + the volumes. By default, the LVM volume scanning tool scans the + /dev directory for block storage devices that + contain volumes. If tenants use LVM on their volumes, the scanning + tool detects these volumes and attempts to cache them, which can cause + a variety of problems with both the underlying operating system + and tenant volumes. You must reconfigure LVM to scan only the devices + that contain the cinder-volumes volume group. Edit + the /etc/lvm/lvm.conf file and complete the + following actions: + + + In the devices section, add a filter + that accepts the /dev/sdb device and rejects + all other devices: + devices { +... +filter = [ "a/sdb/", "r/.*/"] + Each item in the filter array begins with a + for accept or r for + reject and includes a regular expression + for the device name. The array must end with + r/.*/ to reject any remaining + devices. You can use the vgs -vvvv + command to test filters. + + If your storage nodes use LVM on the operating system disk, + you must also add the associated device to the filter. For + example, if the /dev/sda device contains + the operating system: + filter = [ "a/sda/", "a/sdb/", "r/.*/"] + Similarly, if your compute nodes use LVM on the operating + system disk, you must also modify the filter in the + /etc/lvm/lvm.conf file on those nodes to + include only the operating system disk. 
For example, if the + /dev/sda device contains the operating + system: + filter = [ "a/sda/", "r/.*/"] + + + + + + + Install and configure Block Storage volume components + + Install the packages: + # apt-get install cinder-volume python-mysqldb + # yum install openstack-cinder targetcli python-oslo-db MySQL-python + # zypper install openstack-cinder-volume tgt python-mysql + + + Edit the /etc/cinder/cinder.conf file + and complete the following actions: + + + In the [database] section, configure + database access: + [database] +... +connection = mysql://cinder:CINDER_DBPASS@controller/cinder + Replace CINDER_DBPASS with + the password you chose for the Block Storage database. + + + In the [DEFAULT] section, configure + RabbitMQ message broker access: + [DEFAULT] +... +rpc_backend = rabbit +rabbit_host = controller +rabbit_password = RABBIT_PASS + Replace RABBIT_PASS with the + password you chose for the guest account in + RabbitMQ. + + + In the [DEFAULT] and + [keystone_authtoken] sections, + configure Identity service access: + [DEFAULT] +... +auth_strategy = keystone + +[keystone_authtoken] +... +auth_uri = http://controller:5000/v2.0 +identity_uri = http://controller:35357 +admin_tenant_name = service +admin_user = cinder +admin_password = CINDER_PASS + Replace CINDER_PASS with the + password you chose for the cinder user in the + Identity service. + + Comment out any auth_host, + auth_port, and + auth_protocol options because the + identity_uri option replaces them. + + + + In the [DEFAULT] section, configure the + my_ip option: + [DEFAULT] +... +my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS + Replace + MANAGEMENT_INTERFACE_IP_ADDRESS with + the IP address of the management network interface on your + storage node, typically 10.0.0.41 for the first node in the + example + architecture. + + + In the [DEFAULT] section, configure the + location of the Image Service: + [DEFAULT] +... 
+glance_host = controller + + + In the [DEFAULT] section, configure Block + Storage to use the lioadm iSCSI + service: + [DEFAULT] +... +iscsi_helper = lioadm + + + (Optional) To assist with troubleshooting, + enable verbose logging in the [DEFAULT] + section: + [DEFAULT] +... +verbose = True + + + + + Due to a packaging bug, the Block Storage service cannot + execute commands with administrative privileges using the + sudo command. Run the following command to + resolve this issue: + # cp /etc/sudoers.d/cinder_sudoers /etc/sudoers.d/cinder_sudoers.orig +# sed -i 's,/etc/cinder/rootwrap.conf,/etc/cinder/rootwrap.conf *,g' \ + /etc/sudoers.d/cinder_sudoers + For more information, see the + bug report. + + + + Install and configure Block Storage volume components + + Install the packages: + # apt-get install cinder-volume python-mysqldb + + + Respond to the prompts for + database management, + Identity service + credentials, + service endpoint + registration, and + message broker + credentials. + + + Respond to the prompts for the volume group to associate with the + Block Storage service. The script scans for volume groups and + attempts to use the first one. If your system only contains the + cinder-volumes volume group, the script should + automatically choose it. 
+ + + + To finalize installation + + Restart the Block Storage volume service including its + dependencies: + # service tgt restart +# service cinder-volume restart + + + Start the Block Storage volume service including its dependencies + and configure them to start when the system boots: + # systemctl enable openstack-cinder-volume.service target.service +# systemctl start openstack-cinder-volume.service target.service + On SLES: + # service tgtd start +# chkconfig tgtd on +# service openstack-cinder-volume start +# chkconfig openstack-cinder-volume on + On openSUSE: + # systemctl enable openstack-cinder-volume.service tgtd.service +# systemctl start openstack-cinder-volume.service tgtd.service + + + By default, the Ubuntu packages create an SQLite database. + Because this configuration uses a SQL database server, remove + the SQLite database file: + # rm -f /var/lib/cinder/cinder.sqlite + + +
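Taken together, the cinder.conf edits in the new storage-node section amount to a file along these lines (a sketch only: the uppercase placeholders must be replaced with your own passwords, and 10.0.0.41 assumes the first storage node in the example architecture):

```ini
[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
my_ip = 10.0.0.41
glance_host = controller
iscsi_helper = lioadm
verbose = True

[database]
connection = mysql://cinder:CINDER_DBPASS@controller/cinder

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
```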
diff --git a/doc/install-guide/section_cinder-verify.xml b/doc/install-guide/section_cinder-verify.xml index b5b013896f..76612396ad 100644 --- a/doc/install-guide/section_cinder-verify.xml +++ b/doc/install-guide/section_cinder-verify.xml @@ -14,9 +14,25 @@ Perform these commands on the controller node. + + Source the admin credentials to gain access to + admin-only CLI commands: + $ source admin-openrc.sh + + + List service components to verify successful launch of each + process: + $ cinder service-list ++------------------+------------+------+---------+-------+----------------------------+-----------------+ +| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | ++------------------+------------+------+---------+-------+----------------------------+-----------------+ +| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None | +| cinder-volume | block1 | nova | enabled | up | 2014-10-18T01:30:57.000000 | None | ++------------------+------------+------+---------+-------+----------------------------+-----------------+ + Source the demo tenant credentials to perform - these steps as a non-administrative tenant: + the following steps as a non-administrative tenant: $ source demo-openrc.sh
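The `cinder service-list` output added to the verify section can also be checked mechanically. The sketch below is a hypothetical helper, not an official cinder tool; here it runs against a saved copy of the table (Updated_at and Disabled Reason columns omitted for brevity) and reports whether every listed service is up:

```shell
#!/bin/sh
# Hypothetical check (not an official cinder tool): scan `cinder service-list`
# table output and exit non-zero if any service's State column is not "up".
all_up() {
  awk -F'|' '
    NF > 2 && $2 !~ /Binary/ {    # data rows only; skip borders and header
      state = $6; gsub(/ /, "", state)
      if (state != "up") down = 1
    }
    END { exit down }'
}

# Saved output resembling the verify section of this patch:
sample='+------------------+------------+------+---------+-------+
| Binary           | Host       | Zone | Status  | State |
+------------------+------------+------+---------+-------+
| cinder-scheduler | controller | nova | enabled | up    |
| cinder-volume    | block1     | nova | enabled | up    |
+------------------+------------+------+---------+-------+'

printf '%s\n' "$sample" | all_up && echo "all cinder services up"
```

Against a live deployment you would pipe `cinder service-list` itself into `all_up`; a cinder-volume service reported as down typically points at connectivity or configuration problems on the storage node.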