
Install and configure a storage node

This section describes how to install and configure storage nodes for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device. The instructions use /dev/sdb, but you can substitute a different value for your particular node.

The service provisions logical volumes on this device using the Logical Volume Manager (LVM) driver and provides them to instances via the iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.
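
Before you begin, confirm which local disk is empty and available for Block Storage. One way, assuming a standard Linux environment, is to list the block devices with lsblk; in this illustrative output, /dev/sdb has no partitions and is the device this guide uses:

  # lsblk
  NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  sda      8:0    0   40G  0 disk
  └─sda1   8:1    0   40G  0 part /
  sdb      8:16   0  100G  0 disk

The device names and sizes shown here are examples only; substitute the values for your node.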

Prerequisites

Before you install and configure the Block Storage service on the storage node, you must prepare the storage device.

Note

Perform these steps on the storage node.

  1. Install the supporting utility packages:

    obs

    • Install the LVM packages:

      # zypper install lvm2
    • (Optional) If you intend to use non-raw image types such as QCOW2 and VMDK, install the QEMU package:

      # zypper install qemu
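
      The Block Storage service uses the qemu-img tool from this package to convert non-raw images to raw format before writing them to new volumes. For example, the kind of conversion it performs looks like this (the file names are hypothetical):

      # qemu-img convert -f qcow2 -O raw image.qcow2 image.raw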

    rdo

    • Install the LVM packages:

      # yum install lvm2
    • Start the LVM metadata service and configure it to start when the system boots:

      # systemctl enable lvm2-lvmetad.service
      # systemctl start lvm2-lvmetad.service

    ubuntu

    # apt install lvm2

    Note

    Some distributions include LVM by default.
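
    To check whether the LVM tools are already present, you can query the version. This is an optional check; any version output means the tools are installed:

    # lvm version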

  2. Create the LVM physical volume /dev/sdb:

    # pvcreate /dev/sdb
    
    Physical volume "/dev/sdb" successfully created
  3. Create the LVM volume group cinder-volumes:

    # vgcreate cinder-volumes /dev/sdb
    
    Volume group "cinder-volumes" successfully created

    The Block Storage service creates logical volumes in this volume group.
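
    To verify the volume group, run vgs. The size shown here is illustrative and depends on your device:

    # vgs cinder-volumes
      VG             #PV #LV #SN Attr   VSize   VFree
      cinder-volumes   1   0   0 wz--n- 100.00g 100.00g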

  4. Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them, which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:

    • In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

      devices {
      ...
      filter = [ "a/sdb/", "r/.*/"]

      Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters; a simpler verification example appears at the end of this step.

      Warning

      If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

      filter = [ "a/sda/", "a/sdb/", "r/.*/"]

      Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

      filter = [ "a/sda/", "r/.*/"]

Install and configure components

obs

  1. Install the packages:

    # zypper install openstack-cinder-volume tgt

rdo

  1. Install the packages:

    # yum install openstack-cinder targetcli python-keystone

ubuntu or debian

  1. Install the packages:

    # apt install cinder-volume
  2. Edit the /etc/cinder/cinder.conf file and complete the following actions:

    • In the [database] section, configure database access:

      [database]
      # ...
      connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

      Replace CINDER_DBPASS with the password you chose for the Block Storage database.

    • In the [DEFAULT] section, configure RabbitMQ message queue access:

      [DEFAULT]
      # ...
      transport_url = rabbit://openstack:RABBIT_PASS@controller

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

    • In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

      [DEFAULT]
      # ...
      auth_strategy = keystone
      
      [keystone_authtoken]
      # ...
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = cinder
      password = CINDER_PASS

      Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

      Note

      Comment out or remove any other options in the [keystone_authtoken] section.

    • In the [DEFAULT] section, configure the my_ip option:

      [DEFAULT]
      # ...
      my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture.

    obs or ubuntu

    • In the [lvm] section, configure the LVM back end with the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI service:

      [lvm]
      # ...
      volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
      volume_group = cinder-volumes
      iscsi_protocol = iscsi
      iscsi_helper = tgtadm

    rdo

    • In the [lvm] section, configure the LVM back end with the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI service. If the [lvm] section does not exist, create it:

      [lvm]
      volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
      volume_group = cinder-volumes
      iscsi_protocol = iscsi
      iscsi_helper = lioadm
    • In the [DEFAULT] section, enable the LVM back end:

      [DEFAULT]
      # ...
      enabled_backends = lvm

      Note

      Back-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.

    • In the [DEFAULT] section, configure the location of the Image service API:

      [DEFAULT]
      # ...
      glance_api_servers = http://controller:9292
    • In the [oslo_concurrency] section, configure the lock path:

      [oslo_concurrency]
      # ...
      lock_path = /var/lib/cinder/tmp
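
    Taken together, these edits produce a cinder.conf along the following lines. This consolidated sketch shows the RDO variant with the lioadm helper; the placeholder passwords and the 10.0.0.41 address are the example values used throughout this guide:

      [DEFAULT]
      transport_url = rabbit://openstack:RABBIT_PASS@controller
      auth_strategy = keystone
      my_ip = 10.0.0.41
      enabled_backends = lvm
      glance_api_servers = http://controller:9292

      [database]
      connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

      [keystone_authtoken]
      auth_uri = http://controller:5000
      auth_url = http://controller:35357
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = default
      user_domain_name = default
      project_name = service
      username = cinder
      password = CINDER_PASS

      [lvm]
      volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
      volume_group = cinder-volumes
      iscsi_protocol = iscsi
      iscsi_helper = lioadm

      [oslo_concurrency]
      lock_path = /var/lib/cinder/tmp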

obs

  3. Create the /etc/tgt/conf.d/cinder.conf file with the following data:

    include /var/lib/cinder/volumes/*

Finalize installation

obs

  • Start the Block Storage volume service including its dependencies and configure them to start when the system boots:

    # systemctl enable openstack-cinder-volume.service tgtd.service
    # systemctl start openstack-cinder-volume.service tgtd.service

rdo

  • Start the Block Storage volume service including its dependencies and configure them to start when the system boots:

    # systemctl enable openstack-cinder-volume.service target.service
    # systemctl start openstack-cinder-volume.service target.service

ubuntu or debian

  • Restart the Block Storage volume service including its dependencies:

    # service tgt restart
    # service cinder-volume restart
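
Regardless of distribution, you can confirm that the new cinder-volume service registered itself with the controller. Run the following on the controller node with admin credentials loaded; the admin-openrc file and the block1 host name follow this guide's conventions, and the output shown is illustrative:

  $ . admin-openrc
  $ openstack volume service list
  +------------------+------------+------+---------+-------+----------------------------+
  | Binary           | Host       | Zone | Status  | State | Updated At                 |
  +------------------+------------+------+---------+-------+----------------------------+
  | cinder-scheduler | controller | nova | enabled | up    | 2017-01-11T21:02:03.000000 |
  | cinder-volume    | block1@lvm | nova | enabled | up    | 2017-01-11T21:02:05.000000 |
  +------------------+------------+------+---------+-------+----------------------------+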