diff --git a/doc/admin-guide-cloud-rst/source/blockstorage-troubleshoot.rst b/doc/admin-guide-cloud-rst/source/blockstorage-troubleshoot.rst
index 3504134941..0de2f37d22 100644
--- a/doc/admin-guide-cloud-rst/source/blockstorage-troubleshoot.rst
+++ b/doc/admin-guide-cloud-rst/source/blockstorage-troubleshoot.rst
@@ -6,20 +6,19 @@
 This section provides useful tips to help you troubleshoot your Block Storage installation.
 
 .. toctree::
-   :maxdepth: 2
+   :maxdepth: 1
 
    ts_cinder_config.rst
    ts_multipath_warn.rst
    ts_vol_attach_miss_sg_scan.rst
    ts_non_existent_host.rst
    ts_non_existent_vlun.rst
-
+   ts-eql-volume-size.rst
+   ts-HTTP-bad-req-in-cinder-vol-log.rst
+   ts-duplicate-3par-host.rst
+   ts-failed-attach-vol-after-detach.rst
+   ts-failed-attach-vol-no-sysfsutils.rst
+   ts-failed-connect-vol-FC-SAN.rst
 
 .. TODO (MZ) Convert and include the following sections
-   include: blockstorage/section_ts_eql_volume_size.xml
-   include: blockstorage/section_ts_HTTP_bad_req_in_cinder_vol_log.xml
-   include: blockstorage/section_ts_duplicate_3par_host.xml
-   include: blockstorage/section_ts_failed_attach_vol_after_detach.xml
-   include: blockstorage/section_ts_failed_attach_vol_no_sysfsutils.xml
-   include: blockstorage/section_ts_failed_connect_vol_FC_SAN.xml
    include: blockstorage/section_ts_no_emulator_x86_64.xml
diff --git a/doc/admin-guide-cloud-rst/source/ts-HTTP-bad-req-in-cinder-vol-log.rst b/doc/admin-guide-cloud-rst/source/ts-HTTP-bad-req-in-cinder-vol-log.rst
new file mode 100644
index 0000000000..6b234cdd02
--- /dev/null
+++ b/doc/admin-guide-cloud-rst/source/ts-HTTP-bad-req-in-cinder-vol-log.rst
@@ -0,0 +1,46 @@
+.. highlight:: console
+   :linenothreshold: 5
+
+HTTP bad request in cinder volume log
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem
+-------
+
+These errors appear in the :file:`cinder-volume.log` file::
+
+   2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status
+   2013-05-03 15:16:33 DEBUG [hp3parclient.http]
+   REQ: curl -i https://10.10.22.241:8080/api/v1/cpgs -X GET -H "X-Hp3Par-Wsapi-Sessionkey: 48dc-b69ed2e5
+   f259c58e26df9a4c85df110c-8d1e8451" -H "Accept: application/json" -H "User-Agent: python-3parclient"
+
+   2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP:{'content-length': 311, 'content-type': 'text/plain',
+   'status': '400'}
+
+   2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP BODY:Second simultaneous read on fileno 13 detected.
+   Unless you really know what you're doing, make sure that only one greenthread can read any particular socket.
+   Consider using a pools.Pool. If you do know what you're doing and want to disable this error,
+   call eventlet.debug.hub_multiple_reader_prevention(False)
+
+   2013-05-03 15:16:33 ERROR [cinder.manager] Error during VolumeManager._report_driver_status: Bad request (HTTP 400)
+   Traceback (most recent call last):
+   File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 167, in periodic_tasks task(self, context)
+   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 690, in _report_driver_status volume_stats =
+   self.driver.get_volume_stats(refresh=True)
+   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py", line 77, in get_volume_stats stats =
+   self.common.get_volume_stats(refresh, self.client)
+   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_common.py", line 421, in get_volume_stats cpg =
+   client.getCPG(self.config.hp3par_cpg)
+   File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 231, in getCPG cpgs = self.getCPGs()
+   File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 217, in getCPGs response, body = self.http.get('/cpgs')
+   File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 255, in get return self._cs_request(url, 'GET', **kwargs)
+   File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 224, in _cs_request **kwargs)
+   File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 198, in _time_request resp, body = self.request(url, method, **kwargs)
+   File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 192, in request raise exceptions.from_response(resp, body)
+   HTTPBadRequest: Bad request (HTTP 400)
+
+Solution
+--------
+
+Update your copy of the :file:`hp_3par_fc.py` driver to a version that
+contains the synchronization code.
diff --git a/doc/admin-guide-cloud-rst/source/ts-duplicate-3par-host.rst b/doc/admin-guide-cloud-rst/source/ts-duplicate-3par-host.rst
new file mode 100644
index 0000000000..e4ad454377
--- /dev/null
+++ b/doc/admin-guide-cloud-rst/source/ts-duplicate-3par-host.rst
@@ -0,0 +1,27 @@
+.. highlight:: console
+   :linenothreshold: 5
+
+Duplicate 3PAR host
+~~~~~~~~~~~~~~~~~~~
+
+Problem
+-------
+
+This error may be caused by a volume being exported outside of OpenStack
+using a host name different from the system name that OpenStack expects.
+This error could be displayed with the IQN if the host was exported
+using iSCSI::
+
+   Duplicate3PARHost: 3PAR Host already exists: Host wwn 50014380242B9750 \
+   already used by host cld4b5ubuntuW(id = 68. The hostname must be called\
+   'cld4b5ubuntu'.
+
+Solution
+--------
+
+Change the 3PAR host name to match the one that OpenStack expects. The
+3PAR host constructed by the driver uses just the local hostname, not
+the fully qualified domain name (FQDN) of the compute host. For example,
+if the FQDN was *myhost.example.com*, just *myhost* would be used as the
+3PAR hostname. IP addresses are not allowed as host names on the 3PAR
+storage server.
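+
+For example, the following commands, run on the compute host, show the
+FQDN and the short host name; the host name used here is only
+illustrative. Only the short form should appear as the 3PAR host name:
+
+.. code-block:: console
+
+   $ hostname -f
+   myhost.example.com
+   $ hostname -s
+   myhost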
diff --git a/doc/admin-guide-cloud-rst/source/ts-eql-volume-size.rst b/doc/admin-guide-cloud-rst/source/ts-eql-volume-size.rst
new file mode 100644
index 0000000000..5828eb883b
--- /dev/null
+++ b/doc/admin-guide-cloud-rst/source/ts-eql-volume-size.rst
@@ -0,0 +1,195 @@
+.. highlight:: console
+   :linenothreshold: 5
+
+Addressing discrepancies in reported volume sizes for EqualLogic storage
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem
+-------
+
+There is a discrepancy between the actual volume size in EqualLogic
+(EQL) storage, the image size in the Image service, and the size
+reported in the OpenStack database. This could lead to confusion if a
+user is creating volumes from an image that was uploaded from an EQL
+volume (through the Image service). The image size is slightly larger
+than the target volume size; this is because EQL size reporting accounts
+for additional storage used by EQL for internal volume metadata.
+
+To reproduce the issue, follow the steps in the following procedure.
+
+This procedure assumes that the EQL array is provisioned, and that
+appropriate configuration settings have been included in
+:file:`/etc/cinder/cinder.conf` to connect to the EQL array.
+
+Create a new volume. Note the ID and size of the volume. In the
+following example, the ID and size are
+``74cf9c04-4543-47ae-a937-a9b7c6c921e7`` and ``1``, respectively:
+
+.. code-block:: console
+
+   $ cinder create --display-name volume1 1
+
+   +-----------------------+-------------------------------------------+
+   | Property              | Value                                     |
+   +-----------------------+-------------------------------------------+
+   | attachments           | []                                        |
+   | availability_zone     | nova                                      |
+   | bootable              | false                                     |
+   | created_at            | 2014-03-21T18:31:54.248775                |
+   | display_description   | None                                      |
+   | display_name          | volume1                                   |
+   | id                    | 74cf9c04-4543-47ae-a937-a9b7c6c921e7      |
+   | metadata              | {}                                        |
+   | size                  | 1                                         |
+   | snapshot_id           | None                                      |
+   | source_volid          | None                                      |
+   | status                | creating                                  |
+   | volume_type           | None                                      |
+   +-----------------------+-------------------------------------------+
+
+Verify the volume size on the EQL array by using its command-line
+interface.
+
+The actual size (``VolReserve``) is 1.01 GB. The EQL Group Manager
+should also report a volume size of 1.01 GB::
+
+   eql> volume select volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
+   eql (volume_volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7)> show
+   _______________________________ Volume Information ________________________________
+   Name: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
+   Size: 1GB
+   VolReserve: 1.01GB
+   VolReserveInUse: 0MB
+   ReplReserveInUse: 0MB
+   iSCSI Alias: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
+   iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-19f91850c-067000000b4532cl-volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
+   ActualMembers: 1
+   Snap-Warn: 10%
+   Snap-Depletion: delete-oldest
+   Description:
+   Snap-Reserve: 100%
+   Snap-Reserve-Avail: 100% (1.01GB)
+   Permission: read-write
+   DesiredStatus: online
+   Status: online
+   Connections: 0
+   Snapshots: 0
+   Bind:
+   Type: not-replicated
+   ReplicationReserveSpace: 0MB
+
+Create a new image from this volume:
+
+.. code-block:: console
+
+   $ cinder upload-to-image --disk-format raw \
+     --container-format bare volume1 image_from_volume1
+
+   +---------------------+---------------------------------------+
+   | Property            | Value                                 |
+   +---------------------+---------------------------------------+
+   | container_format    | bare                                  |
+   | disk_format         | raw                                   |
+   | display_description | None                                  |
+   | id                  | 74cf9c04-4543-47ae-a937-a9b7c6c921e7  |
+   | image_id            | 3020a21d-ba37-4495-8899-07fc201161b9  |
+   | image_name          | image_from_volume1                    |
+   | size                | 1                                     |
+   | status              | uploading                             |
+   | updated_at          | 2014-03-21T18:31:55.000000            |
+   | volume_type         | None                                  |
+   +---------------------+---------------------------------------+
+
+When you uploaded the volume in the previous step, the Image service
+reported the volume's size as ``1`` (GB). However, when using
+``glance image-list`` to list the image, the displayed size is
+1085276160 bytes, or roughly 1.01 GB:
+
++-----------------------+---------+-----------+--------------+--------------+
+| Name                  | Disk    | Container | Size         | Status       |
+|                       | Format  | Format    |              |              |
++=======================+=========+===========+==============+==============+
+| image\_from\_volume1  | raw     | bare      | *1085276160* | active       |
++-----------------------+---------+-----------+--------------+--------------+
+
+|
+
+Create a new volume using the previous image
+(``image_id 3020a21d-ba37-4495-8899-07fc201161b9`` in this example) as
+the source. Set the target volume size to 1 GB; this is the size
+reported by the ``cinder`` tool when you uploaded the volume to the
+Image service:
+
+.. code-block:: console
+
+   $ cinder create --display-name volume2 \
+     --image-id 3020a21d-ba37-4495-8899-07fc201161b9 1
+   ERROR: Invalid input received: Size of specified image 2 is larger
+   than volume size 1. (HTTP 400) (Request-ID: req-4b9369c0-dec5-4e16-a114-c0cdl6bSd210)
+
+The attempt to create a new volume based on the size reported by the
+``cinder`` tool will then fail.
+
+Solution
+--------
+
+To work around this problem, increase the target size of the new volume
+to the next whole number. In the problem example, you created a 1 GB
+volume to be used as a volume-backed image, so a new volume using this
+volume-backed image should use a size of 2 GB:
+
+.. code-block:: console
+
+   $ cinder create --display-name volume2 \
+     --image-id 3020a21d-ba37-4495-8899-07fc201161b9 2
+
+   +---------------------+--------------------------------------+
+   | Property            | Value                                |
+   +---------------------+--------------------------------------+
+   | attachments         | []                                   |
+   | availability_zone   | nova                                 |
+   | bootable            | false                                |
+   | created_at          | 2014-03-21T19:25:31.564482           |
+   | display_description | None                                 |
+   | display_name        | volume2                              |
+   | id                  | 64e8eb18-d23f-437b-bcac-b352afa6843a |
+   | image_id            | 3020a21d-ba37-4495-8899-07fc201161b9 |
+   | metadata            | []                                   |
+   | size                | 2                                    |
+   | snapshot_id         | None                                 |
+   | source_volid        | None                                 |
+   | status              | creating                             |
+   | volume_type         | None                                 |
+   +---------------------+--------------------------------------+
+
+.. note::
+
+   The dashboard suggests a suitable size when you create a new volume
+   based on a volume-backed image.
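+
+If you want to compute the target size rather than read it from the
+dashboard, one way (shown purely as an illustration) is to round the
+byte count reported by ``glance image-list`` up to the next whole
+gibibyte:
+
+.. code-block:: console
+
+   $ python -c 'print((1085276160 + 2**30 - 1) // 2**30)'
+   2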
+
+You can then verify the new volume on the EQL array::
+
+   eql> volume select volume-64e8eb18-d23f-437b-bcac-b352afa6843a
+   eql (volume_volume-64e8eb18-d23f-437b-bcac-b352afa6843a)> show
+   ______________________________ Volume Information _______________________________
+   Name: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
+   Size: 2GB
+   VolReserve: 2.01GB
+   VolReserveInUse: 1.01GB
+   ReplReserveInUse: 0MB
+   iSCSI Alias: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
+   iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-e3091850e-eae000000b7S32cl-volume-64e8eb18-d23f-437b-bcac-b352afa6843a
+   ActualMembers: 1
+   Snap-Warn: 10%
+   Snap-Depletion: delete-oldest
+   Description:
+   Snap-Reserve: 100%
+   Snap-Reserve-Avail: 100% (2GB)
+   Permission: read-write
+   DesiredStatus: online
+   Status: online
+   Connections: 1
+   Snapshots: 0
+   Bind:
+   Type: not-replicated
+   ReplicationReserveSpace: 0MB
diff --git a/doc/admin-guide-cloud-rst/source/ts-failed-attach-vol-after-detach.rst b/doc/admin-guide-cloud-rst/source/ts-failed-attach-vol-after-detach.rst
new file mode 100644
index 0000000000..dfa5c659c9
--- /dev/null
+++ b/doc/admin-guide-cloud-rst/source/ts-failed-attach-vol-after-detach.rst
@@ -0,0 +1,35 @@
+.. highlight:: console
+   :linenothreshold: 5
+
+Failed to attach volume after detaching
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem
+-------
+
+Failed to attach a volume after detaching the same volume.
+
+Solution
+--------
+
+You must change the device name in the ``nova volume-attach`` command.
+The VM might not clean up after a ``nova volume-detach`` command runs.
+This example shows how the ``nova volume-attach`` command fails when you
+use the ``vdb``, ``vdc``, or ``vdd`` device names, which are still
+present under :file:`/dev/disk/by-path/`::
+
+   # ls -al /dev/disk/by-path/
+   total 0
+   drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
+   drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
+   lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
+   lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
+   lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
+   lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
+   lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
+   lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
+   lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
+   lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1
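+
+In that case, pick a device name that does not already appear in the
+listing, for example ``vde`` here. The server and volume IDs shown below
+are placeholders:
+
+.. code-block:: console
+
+   $ nova volume-attach INSTANCE_ID VOLUME_ID /dev/vde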
+
+You might also have this problem after attaching and detaching the same
+volume from the same VM with the same mount point multiple times. In
+this case, restart the KVM host.
diff --git a/doc/admin-guide-cloud-rst/source/ts-failed-attach-vol-no-sysfsutils.rst b/doc/admin-guide-cloud-rst/source/ts-failed-attach-vol-no-sysfsutils.rst
new file mode 100644
index 0000000000..22d5781d2f
--- /dev/null
+++ b/doc/admin-guide-cloud-rst/source/ts-failed-attach-vol-no-sysfsutils.rst
@@ -0,0 +1,28 @@
+.. highlight:: console
+   :linenothreshold: 5
+
+Failed to attach volume, systool is not installed
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem
+-------
+
+This warning and error occur if you do not have the required
+``sysfsutils`` package installed on the compute node::
+
+   WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb\
+   admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool\
+   is not installed
+   ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin\
+   admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
+   [instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-47\
+   7a-be9b-47c97626555c]
+   Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.
+
+Solution
+--------
+
+Run the following command on the compute node to install the
+``sysfsutils`` package::
+
+   # apt-get install sysfsutils
diff --git a/doc/admin-guide-cloud-rst/source/ts-failed-connect-vol-FC-SAN.rst b/doc/admin-guide-cloud-rst/source/ts-failed-connect-vol-FC-SAN.rst
new file mode 100644
index 0000000000..b6a1c6c727
--- /dev/null
+++ b/doc/admin-guide-cloud-rst/source/ts-failed-connect-vol-FC-SAN.rst
@@ -0,0 +1,29 @@
+.. highlight:: console
+   :linenothreshold: 5
+
+Failed to connect volume in FC SAN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Problem
+-------
+
+The compute node failed to connect to a volume in a Fibre Channel (FC)
+SAN configuration. The WWN may not be zoned correctly in the FC SAN that
+links the compute host to the storage array::
+
+   ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin\
+   demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd\
+   6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3\
+   d5f3]
+   Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while\
+   attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4\
+   bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
+   Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The\
+   server has either erred or is incapable of performing the requested\
+   operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00)
+
+Solution
+--------
+
+The network administrator must configure the FC SAN fabric by correctly
+zoning the WWN (port names) from your compute node HBAs.
diff --git a/doc/admin-guide-cloud-rst/source/ts_cinder_config.rst b/doc/admin-guide-cloud-rst/source/ts_cinder_config.rst
index e392aa453b..5e6835819a 100644
--- a/doc/admin-guide-cloud-rst/source/ts_cinder_config.rst
+++ b/doc/admin-guide-cloud-rst/source/ts_cinder_config.rst
@@ -1,9 +1,8 @@
 .. highlight:: ini
    :linenothreshold: 1
 
-============================================
 Troubleshoot the Block Storage configuration
-============================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Most Block Storage errors are caused by incorrect volume configurations
 that result in volume creation failures. To resolve these failures,