diff --git a/doc/install-guide/source/cinder-controller-node.rst b/doc/install-guide/source/cinder-controller-install.rst
similarity index 62%
rename from doc/install-guide/source/cinder-controller-node.rst
rename to doc/install-guide/source/cinder-controller-install.rst
index a5d94c0780..4e9b8be72b 100644
--- a/doc/install-guide/source/cinder-controller-node.rst
+++ b/doc/install-guide/source/cinder-controller-install.rst
@@ -1,17 +1,18 @@
-=====================================
+.. _cinder-controller:
+
 Install and configure controller node
-=====================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This section describes how to install and configure the Block
 Storage service, code-named cinder, on the controller node. This
 service requires at least one additional storage node that provides
 volumes to instances.
 
-To configure prerequisites
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+Prerequisites
+-------------
 
 Before you install and configure the Block Storage service, you
-must create a database, service credentials, and API endpoint.
+must create a database, service credentials, and API endpoints.
 
 #. To create the database, complete these steps:
 
@@ -54,46 +55,29 @@ must create a database, service credentials, and API endpoint.
 
      .. code-block:: console
 
-        $ openstack user create --password-prompt cinder
+        $ openstack user create --domain default --password-prompt cinder
         User Password:
         Repeat User Password:
-        +----------+----------------------------------+
-        | Field    | Value                            |
-        +----------+----------------------------------+
-        | email    | None                             |
-        | enabled  | True                             |
-        | id       | 881ab2de4f7941e79504a759a83308be |
-        | name     | cinder                           |
-        | username | cinder                           |
-        +----------+----------------------------------+
+        +-----------+----------------------------------+
+        | Field     | Value                            |
+        +-----------+----------------------------------+
+        | domain_id | default                          |
+        | enabled   | True                             |
+        | id        | bb279f8ffc444637af38811a5e1f0562 |
+        | name      | cinder                           |
+        +-----------+----------------------------------+
 
    * Add the ``admin`` role to the ``cinder`` user:
 
      .. code-block:: console
 
        $ openstack role add --project service --user cinder admin
-       +-------+----------------------------------+
-       | Field | Value                            |
-       +-------+----------------------------------+
-       | id    | cd2cb9a39e874ea69e5d4b896eb16128 |
-       | name  | admin                            |
-       +-------+----------------------------------+
 
-   * Create the ``cinder`` service entities:
+     .. note::
 
-     .. code-block:: console
+        This command provides no output.
 
-       $ openstack service create --name cinder \
-         --description "OpenStack Block Storage" volume
-       +-------------+----------------------------------+
-       | Field       | Value                            |
-       +-------------+----------------------------------+
-       | description | OpenStack Block Storage          |
-       | enabled     | True                             |
-       | id          | 1e494c3e22a24baaafcaf777d4d467eb |
-       | name        | cinder                           |
-       | type        | volume                           |
-       +-------------+----------------------------------+
+   * Create the ``cinderv2`` service entity:
 
      .. code-block:: console
 
@@ -104,63 +88,65 @@ must create a database, service credentials, and API endpoint.
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
-       | id          | 16e038e449c94b40868277f1d801edb5 |
+       | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+
 
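As an optional sanity check at this point (not a step introduced by this patch), you can list the registered services and confirm that a ``cinderv2`` entry with type ``volumev2`` appears; this assumes the ``admin`` credentials from the earlier sections are still loaded:

.. code-block:: console

   $ openstack service list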
-   .. note::
-
-      The Block Storage service requires both the ``volume`` and
-      ``volumev2`` services. However, both services use the same API
-      endpoint that references the Block Storage version 2 API.
-
 #. Create the Block Storage service API endpoints:
 
    .. code-block:: console
 
-      $ openstack endpoint create \
-        --publicurl http://controller:8776/v2/%\(tenant_id\)s \
-        --internalurl http://controller:8776/v2/%\(tenant_id\)s \
-        --adminurl http://controller:8776/v2/%\(tenant_id\)s \
-        --region RegionOne \
-        volume
+      $ openstack endpoint create --region RegionOne \
+        volumev2 public http://controller:8776/v2/%\(tenant_id\)s
       +--------------+-----------------------------------------+
       | Field        | Value                                   |
       +--------------+-----------------------------------------+
-      | adminurl     | http://controller:8776/v2/%(tenant_id)s |
-      | id           | d1b7291a2d794e26963b322c7f2a55a4        |
-      | internalurl  | http://controller:8776/v2/%(tenant_id)s |
-      | publicurl    | http://controller:8776/v2/%(tenant_id)s |
+      | enabled      | True                                    |
+      | id           | 513e73819e14460fb904163f41ef3759        |
+      | interface    | public                                  |
       | region       | RegionOne                               |
-      | service_id   | 1e494c3e22a24baaafcaf777d4d467eb        |
-      | service_name | cinder                                  |
-      | service_type | volume                                  |
-      +--------------+-----------------------------------------+
-
-   .. code-block:: console
-
-      $ openstack endpoint create \
-        --publicurl http://controller:8776/v2/%\(tenant_id\)s \
-        --internalurl http://controller:8776/v2/%\(tenant_id\)s \
-        --adminurl http://controller:8776/v2/%\(tenant_id\)s \
-        --region RegionOne \
-        volumev2
-      +--------------+-----------------------------------------+
-      | Field        | Value                                   |
-      +--------------+-----------------------------------------+
-      | adminurl     | http://controller:8776/v2/%(tenant_id)s |
-      | id           | 097b4a6fc8ba44b4b10d4822d2d9e076        |
-      | internalurl  | http://controller:8776/v2/%(tenant_id)s |
-      | publicurl    | http://controller:8776/v2/%(tenant_id)s |
-      | region       | RegionOne                               |
-      | service_id   | 16e038e449c94b40868277f1d801edb5        |
+      | region_id    | RegionOne                               |
+      | service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
       | service_name | cinderv2                                |
       | service_type | volumev2                                |
+      | url          | http://controller:8776/v2/%(tenant_id)s |
       +--------------+-----------------------------------------+
 
-To install and configure Block Storage controller components
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+      $ openstack endpoint create --region RegionOne \
+        volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
+      +--------------+-----------------------------------------+
+      | Field        | Value                                   |
+      +--------------+-----------------------------------------+
+      | enabled      | True                                    |
+      | id           | 6436a8a23d014cfdb69c586eff146a32        |
+      | interface    | internal                                |
+      | region       | RegionOne                               |
+      | region_id    | RegionOne                               |
+      | service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
+      | service_name | cinderv2                                |
+      | service_type | volumev2                                |
+      | url          | http://controller:8776/v2/%(tenant_id)s |
+      +--------------+-----------------------------------------+
+
+      $ openstack endpoint create --region RegionOne \
+        volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
+      +--------------+-----------------------------------------+
+      | Field        | Value                                   |
+      +--------------+-----------------------------------------+
+      | enabled      | True                                    |
+      | id           | e652cf84dd334f359ae9b045a2c91d96        |
+      | interface    | admin                                   |
+      | region       | RegionOne                               |
+      | region_id    | RegionOne                               |
+      | service_id   | eb9fd245bdbc414695952e93f29fe3ac        |
+      | service_name | cinderv2                                |
+      | service_type | volumev2                                |
+      | url          | http://controller:8776/v2/%(tenant_id)s |
+      +--------------+-----------------------------------------+
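Before moving on, a suggested (optional) check is to list the endpoints you just created; the output should contain three ``volumev2`` rows, one each for the public, internal, and admin interfaces, all pointing at ``http://controller:8776/v2/%(tenant_id)s``:

.. code-block:: console

   $ openstack endpoint list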
+
+Install and configure components
+--------------------------------
 
 .. only:: obs
 
@@ -186,23 +172,7 @@ To install and configure Block Storage controller components
 
          # apt-get install cinder-api cinder-scheduler python-cinderclient
 
-2.
-
-   .. only:: rdo
-
-      .. Workaround for https://bugzilla.redhat.com/show_bug.cgi?id=1212900.
-
-      Copy the ``/usr/share/cinder/cinder-dist.conf`` file
-      to ``/etc/cinder/cinder.conf``.
-
-      .. code-block:: console
-
-         # cp /usr/share/cinder/cinder-dist.conf /etc/cinder/cinder.conf
-         # chown -R cinder:cinder /etc/cinder/cinder.conf
-
-
-
-   Edit the ``/etc/cinder/cinder.conf`` file and complete the
+2. Edit the ``/etc/cinder/cinder.conf`` file and complete the
    following actions:
 
    * In the ``[database]`` section, configure database access:
 
@@ -294,15 +264,31 @@ To install and configure Block Storage controller components
 
       # su -s /bin/sh -c "cinder-manage db sync" cinder
 
-To finalize installation
-~~~~~~~~~~~~~~~~~~~~~~~~
+Configure Compute to use Block Storage
+--------------------------------------
+
+* Edit the ``/etc/nova/nova.conf`` file and add the following
+  to it:
+
+  .. code-block:: ini
+
+     [cinder]
+     os_region_name = RegionOne
+
+Finalize installation
+---------------------
 
 .. only:: obs or rdo
 
+   #. Restart the Compute API service:
+
+      .. code-block:: console
+
+         # systemctl restart openstack-nova-api.service
+
    #. Start the Block Storage services and configure them to start
       when the system boots:
 
-
      .. code-block:: console
 
        # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
 
@@ -310,6 +296,12 @@ To finalize installation
 
 .. only:: ubuntu
 
+   #. Restart the Compute API service:
+
+      .. code-block:: console
+
+         # service nova-api restart
+
    #. Restart the Block Storage services:
 
      .. code-block:: console
diff --git a/doc/install-guide/source/cinder-next-steps.rst b/doc/install-guide/source/cinder-next-steps.rst
index a24f75e2d3..828a58268e 100644
--- a/doc/install-guide/source/cinder-next-steps.rst
+++ b/doc/install-guide/source/cinder-next-steps.rst
@@ -1,3 +1,5 @@
+.. _cinder-next-steps:
+
 ==========
 Next steps
 ==========
diff --git a/doc/install-guide/source/cinder-storage-node.rst b/doc/install-guide/source/cinder-storage-install.rst
similarity index 74%
rename from doc/install-guide/source/cinder-storage-node.rst
rename to doc/install-guide/source/cinder-storage-install.rst
index 7d0b278362..ea6b13a09d 100644
--- a/doc/install-guide/source/cinder-storage-node.rst
+++ b/doc/install-guide/source/cinder-storage-install.rst
@@ -1,127 +1,90 @@
-====================================
+.. _cinder-storage:
+
 Install and configure a storage node
-====================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 This section describes how to install and configure storage nodes
 for the Block Storage service. For simplicity, this configuration
-references one storage node with an empty local block storage device
-``/dev/sdb`` that contains a suitable partition table with
-one partition ``/dev/sdb1`` occupying the entire device.
+references one storage node with an empty local block storage device.
+The instructions use ``/dev/sdb``, but you can substitute a different
+value for your particular node.
+
 The service provisions logical volumes on this device using the
 :term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
 to instances via :term:`iSCSI` transport. You can follow these
 instructions with minor modifications to horizontally scale your
 environment with additional storage nodes.
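One optional check before following the steps below: confirm that the storage node really has the empty block device that the instructions assume. ``lsblk`` lists the attached block devices; ``/dev/sdb`` here is just the example name used throughout this guide:

.. code-block:: console

   # lsblk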
 
-To configure prerequisites
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+Prerequisites
+-------------
 
-You must configure the storage node before you install and
-configure the volume service on it. Similar to the controller node,
-the storage node contains one network interface on the
-:term:`management network`. The storage node also
-needs an empty block storage device of suitable size for your
-environment. For more information, see :ref:`environment`.
+Before you install and configure the Block Storage service on the
+storage node, you must prepare the storage device.
 
-#. Configure the management interface:
+.. note::
 
-   * IP address: ``10.0.0.41``
+   Perform these steps on the storage node.
 
-   * Network mask: ``255.255.255.0`` (or ``/24``)
+#. Install the supporting utility packages:
 
-   * Default gateway: ``10.0.0.1``
+   .. only:: obs
 
-#. Set the hostname of the node to ``block1``.
+      * Install the LVM packages:
 
-#. .. include:: shared/edit_hosts_file.txt
+        .. code-block:: console
 
-#. Install and configure :term:`NTP` using the instructions in
-   :ref:`environment-ntp`.
+           # zypper install lvm2
 
-.. only:: obs
+      * (Optional) If you intend to use non-raw image types such as QCOW2
+        and VMDK, install the QEMU package:
 
-   5. If you intend to use non-raw image types such as QCOW2 and VMDK,
-      install the QEMU support package:
+        .. code-block:: console
 
-      .. code-block:: console
+           # zypper install qemu
 
-         # zypper install qemu
+   .. only:: rdo
 
-   6. Install the LVM packages:
+      * Install the LVM packages:
 
-.. only:: rdo
+        .. code-block:: console
 
-   5. If you intend to use non-raw image types such as QCOW2 and VMDK,
-      install the QEMU support package:
+           # yum install lvm2
 
-      .. code-block:: console
+      * Start the LVM metadata service and configure it to start when the
+        system boots:
 
-         # yum install qemu
+        .. code-block:: console
 
-   6. Install the LVM packages:
+           # systemctl enable lvm2-lvmetad.service
+           # systemctl start lvm2-lvmetad.service
 
-      .. code-block:: console
+   .. only:: ubuntu
 
-         # yum install lvm2
+      .. code-block:: console
 
-      .. note::
-
-         Some distributions include LVM by default.
-
-      Start the LVM metadata service and configure it to start when the
-      system boots:
-
-      .. code-block:: console
-
-         # systemctl enable lvm2-lvmetad.service
-         # systemctl start lvm2-lvmetad.service
-
-.. only:: ubuntu
-
-   5. If you intend to use non-raw image types such as QCOW2 and VMDK,
-      install the QEMU support package:
-
-      .. code-block:: console
-
-         # apt-get install qemu
-
-      .. note::
-
-         Some distributions include LVM by default.
-
-   6. Install the LVM packages:
-
-      .. code-block:: console
-
-         # apt-get install lvm2
-
-      .. note::
-
-         Some distributions include LVM by default.
-
-
-7. Create the LVM physical volume ``/dev/sdb1``:
-
-   .. code-block:: console
-
-      # pvcreate /dev/sdb1
-      Physical volume "/dev/sdb1" successfully created
+         # apt-get install lvm2
 
    .. note::
 
-      If your system uses a different device name, adjust these
-      steps accordingly.
+      Some distributions include LVM by default.
 
-8. Create the LVM volume group ``cinder-volumes``:
+#. Create the LVM physical volume ``/dev/sdb``:
 
    .. code-block:: console
 
-      # vgcreate cinder-volumes /dev/sdb1
+      # pvcreate /dev/sdb
+      Physical volume "/dev/sdb" successfully created
+
+#. Create the LVM volume group ``cinder-volumes``:
+
+   .. code-block:: console
+
+      # vgcreate cinder-volumes /dev/sdb
       Volume group "cinder-volumes" successfully created
 
   The Block Storage service creates logical volumes in this volume
   group.
 
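If you want to double-check the LVM layer before continuing (an optional step, not part of the change above), ``pvs`` and ``vgs`` summarize the physical volume and volume group you just created:

.. code-block:: console

   # pvs /dev/sdb
   # vgs cinder-volumes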
-9. Only instances can access Block Storage volumes. However, the
+#. Only instances can access Block Storage volumes. However, the
    underlying operating system manages the devices associated with
    the volumes. By default, the LVM volume scanning tool scans the
    ``/dev`` directory for block storage devices that
 
@@ -167,8 +130,8 @@ environment. For more information, see :ref:`environment`.
 
          filter = [ "a/sda/", "r/.*/"]
 
-Install and configure Block Storage volume components
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Install and configure components
+--------------------------------
 
 .. only:: obs
 
@@ -340,8 +303,8 @@ Install and configure Block Storage volume components
       ...
       verbose = True
 
-To finalize installation
-~~~~~~~~~~~~~~~~~~~~~~~~
+Finalize installation
+---------------------
 
 .. only:: obs
 
diff --git a/doc/install-guide/source/cinder-verify.rst b/doc/install-guide/source/cinder-verify.rst
index 688942c533..39f1587794 100644
--- a/doc/install-guide/source/cinder-verify.rst
+++ b/doc/install-guide/source/cinder-verify.rst
@@ -1,8 +1,7 @@
 .. _cinder-verify:
 
-================
 Verify operation
-================
+~~~~~~~~~~~~~~~~
 
 Verify operation of the Block Storage service.
 
@@ -10,13 +9,6 @@ Verify operation of the Block Storage service.
 
    Perform these commands on the controller node.
 
-#. In each client environment script, configure the Block Storage
-   client to use API version 2.0:
-
-   .. code-block:: console
-
-      $ echo "export OS_VOLUME_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
-
 #. Source the ``admin`` credentials to gain access to admin-only
    CLI commands:
 
diff --git a/doc/install-guide/source/cinder.rst b/doc/install-guide/source/cinder.rst
index a10e1dffa5..460cb690ea 100644
--- a/doc/install-guide/source/cinder.rst
+++ b/doc/install-guide/source/cinder.rst
@@ -1,3 +1,5 @@
+.. _cinder:
+
 =============================
 Add the Block Storage service
 =============================
@@ -5,8 +7,8 @@ Add the Block Storage service
 .. toctree::
 
    common/get_started_block_storage.rst
-   cinder-controller-node.rst
-   cinder-storage-node.rst
+   cinder-controller-install.rst
+   cinder-storage-install.rst
    cinder-verify.rst
    cinder-next-steps.rst
 
diff --git a/doc/install-guide/source/environment-networking-storage-cinder.rst b/doc/install-guide/source/environment-networking-storage-cinder.rst
new file mode 100644
index 0000000000..81323bd4cd
--- /dev/null
+++ b/doc/install-guide/source/environment-networking-storage-cinder.rst
@@ -0,0 +1,25 @@
+Block storage node (Optional)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you want to deploy the Block Storage service, configure one
+additional storage node.
+
+Configure network interfaces
+----------------------------
+
+#. Configure the management interface:
+
+   * IP address: ``10.0.0.41``
+
+   * Network mask: ``255.255.255.0`` (or ``/24``)
+
+   * Default gateway: ``10.0.0.1``
+
+Configure name resolution
+-------------------------
+
+#. Set the hostname of the node to ``block1``.
+
+#. .. include:: shared/edit_hosts_file.txt
+
+#. Reboot the system to activate the changes.
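Once the new node is up, a simple connectivity check (suggested here, not introduced by the patch) is to ping it by name from the controller node, which also exercises the ``/etc/hosts`` entry added above:

.. code-block:: console

   # ping -c 4 block1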
diff --git a/doc/install-guide/source/environment-networking.rst b/doc/install-guide/source/environment-networking.rst
index 4474c12e33..03e91592cd 100644
--- a/doc/install-guide/source/environment-networking.rst
+++ b/doc/install-guide/source/environment-networking.rst
@@ -122,5 +122,6 @@ the controller node.
 
    environment-networking-controller.rst
    environment-networking-compute.rst
+   environment-networking-storage-cinder.rst
    environment-networking-storage-swift.rst
    environment-networking-verify.rst
diff --git a/doc/install-guide/source/launch-instance-cinder.rst b/doc/install-guide/source/launch-instance-cinder.rst
index cf70ef9f02..6a12709ebe 100644
--- a/doc/install-guide/source/launch-instance-cinder.rst
+++ b/doc/install-guide/source/launch-instance-cinder.rst
@@ -18,25 +18,31 @@ Create a volume
    .. code-block:: console
 
       $ cinder create --display-name volume1 1
-      +---------------------+--------------------------------------+
-      | Property            | Value                                |
-      +---------------------+--------------------------------------+
-      | attachments         | []                                   |
-      | availability_zone   | nova                                 |
-      | bootable            | false                                |
-      | created_at          | 2015-09-22T13:36:19.457750           |
-      | display_description | None                                 |
-      | display_name        | volume1                              |
-      | encrypted           | False                                |
-      | id                  | 0a816b7c-e578-4290-bb74-c13b8b90d4e7 |
-      | metadata            | {}                                   |
-      | multiattach         | false                                |
-      | size                | 1                                    |
-      | snapshot_id         | None                                 |
-      | source_volid        | None                                 |
-      | status              | creating                             |
-      | volume_type         | None                                 |
-      +---------------------+--------------------------------------+
+      +---------------------------------------+--------------------------------------+
+      | Property                              | Value                                |
+      +---------------------------------------+--------------------------------------+
+      | attachments                           | []                                   |
+      | availability_zone                     | nova                                 |
+      | bootable                              | false                                |
+      | consistencygroup_id                   | None                                 |
+      | created_at                            | 2015-10-12T16:02:29.000000           |
+      | description                           | None                                 |
+      | encrypted                             | False                                |
+      | id                                    | 09e3743e-192a-4ada-b8ee-d35352fa65c4 |
+      | metadata                              | {}                                   |
+      | multiattach                           | False                                |
+      | name                                  | volume1                              |
+      | os-vol-tenant-attr:tenant_id          | ed0b60bf607743088218b0a533d5943f     |
+      | os-volume-replication:driver_data     | None                                 |
+      | os-volume-replication:extended_status | None                                 |
+      | replication_status                    | disabled                             |
+      | size                                  | 1                                    |
+      | snapshot_id                           | None                                 |
+      | source_volid                          | None                                 |
+      | status                                | creating                             |
+      | user_id                               | 58126687cbcc4888bfa9ab73a2256f27     |
+      | volume_type                           | None                                 |
+      +---------------------------------------+--------------------------------------+
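While waiting for the volume, you can also poll it directly (an optional check, not part of the patch); ``cinder show volume1`` reports the ``status`` field, which moves from ``creating`` to ``available``, or to ``error`` if something went wrong:

.. code-block:: console

   $ cinder show volume1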
 
 #. After a short time, the volume status should change from ``creating``
    to ``available``:
 
@@ -44,11 +50,11 @@ Create a volume
    .. code-block:: console
 
       $ cinder list
-      +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
-      | ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
-      +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
-      | 0a816b7c-e578-4290-bb74-c13b8b90d4e7 | available | volume1      | 1    | -           | false    |             |
-      +--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
+      +--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
+      | ID                                   | Status    | Name    | Size | Volume Type | Bootable | Multiattach | Attached to |
+      +--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
+      | 09e3743e-192a-4ada-b8ee-d35352fa65c4 | available | volume1 | 1    | -           | false    | False       |             |
+      +--------------------------------------+-----------+---------+------+-------------+----------+-------------+-------------+
 
 Attach the volume to an instance
 --------------------------------
@@ -64,19 +70,19 @@ Attach the volume to an instance
 
    **Example**
 
-   Attach the ``0a816b7c-e578-4290-bb74-c13b8b90d4e7`` volume to the
+   Attach the ``09e3743e-192a-4ada-b8ee-d35352fa65c4`` volume to the
    ``public-instance`` instance:
 
    .. code-block:: console
 
-      $ nova volume-attach public-instance1 0a816b7c-e578-4290-bb74-c13b8b90d4e7
+      $ nova volume-attach public-instance 09e3743e-192a-4ada-b8ee-d35352fa65c4
       +----------+--------------------------------------+
       | Property | Value                                |
       +----------+--------------------------------------+
       | device   | /dev/vdb                             |
       | id       | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
       | serverId | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
-      | volumeId | 0a816b7c-e578-4290-bb74-c13b8b90d4e7 |
+      | volumeId | 09e3743e-192a-4ada-b8ee-d35352fa65c4 |
       +----------+--------------------------------------+
 
 #. List volumes:
@@ -87,7 +93,7 @@ Attach the volume to an instance
       +--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
       | ID                                   | Status    | Display Name | Size | Volume Type | Attached to                          |
       +--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
-      | 158bea89-07db-4ac2-8115-66c0d6a4bb48 | in-use    |              | 1    | -           | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
+      | 09e3743e-192a-4ada-b8ee-d35352fa65c4 | in-use    |              | 1    | -           | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
       +--------------------------------------+-----------+--------------+------+-------------+--------------------------------------+
 
 #. Access your instance using SSH and use the ``fdisk`` command to verify
@@ -118,7 +124,8 @@ Attach the volume to an instance
 
    .. note::
 
-      You must create a partition table and file system to use the volume.
+      You must create a file system on the device and mount it
+      to use the volume.
 
 For more information about how to manage volumes, see the
 `OpenStack User Guide