diff --git a/doc/admin-guide-cloud/ch_blockstorage.xml b/doc/admin-guide-cloud/ch_blockstorage.xml index e1fd4844a3..c209247f3a 100644 --- a/doc/admin-guide-cloud/ch_blockstorage.xml +++ b/doc/admin-guide-cloud/ch_blockstorage.xml @@ -123,5 +123,15 @@ + + + + + + + + + + diff --git a/doc/admin-guide-cloud/section_ts_HTTP_bad_req_in_cinder_vol_log.xml b/doc/admin-guide-cloud/section_ts_HTTP_bad_req_in_cinder_vol_log.xml new file mode 100644 index 0000000000..390a65a0af --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_HTTP_bad_req_in_cinder_vol_log.xml @@ -0,0 +1,44 @@ + +
+ HTTP bad request in cinder volume log
+ Problem + The following errors are in the cinder-volume.log file. + 2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status +2013-05-03 15:16:33 DEBUG [hp3parclient.http] +REQ: curl -i https://10.10.22.241:8080/api/v1/cpgs -X GET -H "X-Hp3Par-Wsapi-Sessionkey: 48dc-b69ed2e5 +f259c58e26df9a4c85df110c-8d1e8451" -H "Accept: application/json" -H "User-Agent: python-3parclient" + +2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP:{'content-length': 311, 'content-type': 'text/plain', +'status': '400'} + +2013-05-03 15:16:33 DEBUG [hp3parclient.http] RESP BODY:Second simultaneous read on fileno 13 detected. +Unless you really know what you're doing, make sure that only one greenthread can read any particular socket. +Consider using a pools.Pool. If you do know what you're doing and want to disable this error, +call eventlet.debug.hub_multiple_reader_prevention(False) + +2013-05-03 15:16:33 ERROR [cinder.manager] Error during VolumeManager._report_driver_status: Bad request (HTTP 400) +Traceback (most recent call last): +File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 167, in periodic_tasks task(self, context) +File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 690, in _report_driver_status volume_stats = +self.driver.get_volume_stats(refresh=True) +File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py", line 77, in get_volume_stats stats = +self.common.get_volume_stats(refresh, self.client) +File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/san/hp/hp_3par_common.py", line 421, in get_volume_stats cpg = +client.getCPG(self.config.hp3par_cpg) +File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 231, in getCPG cpgs = self.getCPGs() +File "/usr/lib/python2.7/dist-packages/hp3parclient/client.py", line 217, in getCPGs response, body = self.http.get('/cpgs') +File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 255, in get return self._cs_request(url, 
'GET', **kwargs) +File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 224, in _cs_request **kwargs) +File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 198, in _time_request resp, body = self.request(url, method, **kwargs) +File "/usr/lib/python2.7/dist-packages/hp3parclient/http.py", line 192, in request raise exceptions.from_response(resp, body) +HTTPBadRequest: Bad request (HTTP 400) +
+
+ Solution + Update your copy of the hp_3par_fc.py driver to a version that + contains the synchronization code. +
+
+ diff --git a/doc/admin-guide-cloud/section_ts_attach_vol_fail_not_JSON.xml b/doc/admin-guide-cloud/section_ts_attach_vol_fail_not_JSON.xml new file mode 100644 index 0000000000..3de41afc56 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_attach_vol_fail_not_JSON.xml @@ -0,0 +1,20 @@ + +
+ Nova volume attach error, not JSON serializable +
+ Problem + When you attach a volume to a VM, an error with a stack trace appears in /var/log/nova/nova-volume.log. The "not JSON serializable" message is caused by an RPC response timeout. +
+
+ Solution + Make sure your iptables allow port 3260 communication on the ISC controller. Run the + following command. + + $ iptables -I INPUT <Last Rule No> -p tcp --dport 3260 -j ACCEPT + If the port communication is properly configured, you can try running the following + command. $ service iptables stop + If you try these solutions and still get the RPC response timeout, you probably have + an ISC controller and KVM host incompatibility issue. Make sure they are + compatible.
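If the rule position is unclear, the insertion index can be derived from the current INPUT chain. A minimal sketch, assuming a sample `iptables -L INPUT --line-numbers` layout (the sample rules and the resulting index are illustrative, not output from a real host):

```shell
# Sketch: pick the line number of the first REJECT rule in the INPUT chain
# so the iSCSI (port 3260) ACCEPT rule can be inserted before it.
# The listing below is a sample; on a real host use:
#   sudo iptables -L INPUT --line-numbers -n
sample='1    ACCEPT     all  --  0.0.0.0/0   0.0.0.0/0
2    REJECT     all  --  0.0.0.0/0   0.0.0.0/0'
rule_no=$(printf '%s\n' "$sample" | awk '/REJECT/ {print $1; exit}')
echo "iptables -I INPUT $rule_no -p tcp --dport 3260 -j ACCEPT"
# prints: iptables -I INPUT 2 -p tcp --dport 3260 -j ACCEPT
```

On a real controller, the printed command would be run with sudo; the rule number simply has to come before any terminal REJECT or DROP rule.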
+ diff --git a/doc/admin-guide-cloud/section_ts_cinder_config.xml b/doc/admin-guide-cloud/section_ts_cinder_config.xml old mode 100755 new mode 100644 diff --git a/doc/admin-guide-cloud/section_ts_duplicate_3par_host.xml b/doc/admin-guide-cloud/section_ts_duplicate_3par_host.xml new file mode 100644 index 0000000000..2afeabbf19 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_duplicate_3par_host.xml @@ -0,0 +1,21 @@ + +
+ Duplicate 3PAR host +
+ Problem + This error could be caused by a volume being exported outside of OpenStack using a + host name different from the system name that OpenStack expects. This error could be displayed with the IQN if the host was exported using iSCSI. + Duplicate3PARHost: 3PAR Host already exists: Host wwn 50014380242B9750 already used by host cld4b5ubuntuW(id = 68. The hostname must be called 'cld4b5ubuntu'. +
+
+ Solution + Change the 3PAR host name to match the one that OpenStack expects. The 3PAR host + constructed by the driver uses just the local hostname, not the fully qualified domain + name (FQDN) of the compute host. For example, if the FQDN was + myhost.example.com, just myhost would be + used as the 3PAR hostname. IP addresses are not allowed as host names on the 3PAR + storage server. +
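The short-name derivation described above can be sketched, using the example FQDN `myhost.example.com` from the text:

```shell
# Sketch: the 3PAR host name is the local (short) host name, not the FQDN.
fqdn=myhost.example.com      # example FQDN from the text
short=${fqdn%%.*}            # strip everything from the first dot onward
echo "$short"                # prints: myhost
```

On the compute host itself, `hostname -s` reports the same short name the driver uses.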
+
+ diff --git a/doc/admin-guide-cloud/section_ts_failed_attach_vol_after_detach.xml b/doc/admin-guide-cloud/section_ts_failed_attach_vol_after_detach.xml new file mode 100644 index 0000000000..383e52aa30 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_failed_attach_vol_after_detach.xml @@ -0,0 +1,32 @@ + +
+ Failed to attach volume after detaching +
+ Problem + Failed to attach a volume after detaching the same volume. +
+
+ Solution + You need to change the device name on the nova-attach call. The VM may + not clean up after a nova-detach operation. In the following example from + the VM, the nova-attach call will fail if the device names + vdb, vdc, or vdd are + used.# ls -al /dev/disk/by-path/
+total 0
+drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
+drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
+lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
+lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
+lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
+lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
+lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
+lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
+lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
+lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1
+ You may also have this problem after attaching and detaching the same volume from the + same VM with the same mount point multiple times. In this case, restarting the KVM host + may fix the problem. +
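One way to pick a safe device name is to derive the names already in use and take the next free one; a hedged sketch using the device names from the example listing above:

```shell
# Sketch: given the virtio disk names already in use (from the example
# by-path listing above), find the next free vdX name to pass on attach.
used='vda vdb vdc vdd'       # names taken in the example above
next=
for l in a b c d e f g h; do
  case " $used " in
    *" vd$l "*) ;;                 # name already taken
    *) next="vd$l"; break ;;       # first free name
  esac
done
echo "next free device: $next"     # prints: next free device: vde
```

On a real VM, the `used` list would come from `ls -al /dev/disk/by-path/` as shown in the example.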
+
+ diff --git a/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml b/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml new file mode 100644 index 0000000000..6355e8e5a9 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_failed_attach_vol_no_sysfsutils.xml @@ -0,0 +1,23 @@ + +
+ Failed to attach volume, systool is not installed +
+ Problem + This warning and error occur if you do not have the required + sysfsutils package installed on the Compute node. + WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool is not installed
+ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
+[instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-477a-be9b-47c97626555c]
+Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk. +
+
+ Solution + Run the following command on the Compute node to install the + sysfsutils package. + + $ sudo apt-get install sysfsutils + +
+
+ diff --git a/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml b/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml new file mode 100644 index 0000000000..52652c0136 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_failed_connect_vol_FC_SAN.xml @@ -0,0 +1,20 @@ + +
+ Failed to connect volume in FC SAN +
+ Problem + The Compute node failed to connect to a volume in a Fibre Channel (FC) SAN configuration. + The WWN may not be zoned correctly in your FC SAN that links the Compute host to the + storage array. + ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
+Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
+Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The server has either erred or is incapable of performing the requested operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00) +
+
+ Solution + The network administrator must configure the FC SAN fabric by correctly zoning the WWN + (port names) from your Compute node HBAs. +
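To give the network administrator the WWPNs to zone, they can be read from sysfs on the Compute node; a sketch (the fallback message is illustrative and appears on hosts without FC HBAs):

```shell
# Sketch: list the WWPNs of the Compute node's FC HBAs from sysfs.
# The fc_host entries exist only on hosts with FC HBAs present.
wwpns=$(cat /sys/class/fc_host/host*/port_name 2>/dev/null)
echo "${wwpns:-no FC HBAs found}"
```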
+
+ diff --git a/doc/admin-guide-cloud/section_ts_failed_sched_create_vol.xml b/doc/admin-guide-cloud/section_ts_failed_sched_create_vol.xml new file mode 100644 index 0000000000..3f4d7d2b02 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_failed_sched_create_vol.xml @@ -0,0 +1,23 @@ + +
+ Failed to schedule and create volume +
+ Problem + The following warning appears in the cinder-scheduler.log file when + a volume type and extra specs are defined and the volume is in an error state. + WARNING cinder.scheduler.manager [req-b6ef1628-fdc5-49e9-a40a-79d5fcedcfef c0c1ccd20639448c9deea5fe4c112a42 c8b023257513436f 8b303269988b2e7b|req-b6ef1628-fdc5-49e9-a40a-79d5fcedcfef
+c0c1ccd20639448c9deea5fe4c112a42 c8b023257513436f 8b303269988b2e7b]
+Failed to schedule_create_volume: No valid host was found. +
+
+ Solution + Enable the option + scheduler_driver=cinder.scheduler.simple.SimpleScheduler in the + /etc/cinder/cinder.conf file and restart the + cinder-scheduler service. The + scheduler_driver defaults to + cinder.scheduler.filter_scheduler.FilterScheduler. +
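The resulting cinder.conf entry might look like the following fragment (a sketch; placement in the [DEFAULT] section is assumed):

```ini
# /etc/cinder/cinder.conf fragment (sketch): fall back to the simple
# scheduler; the default is cinder.scheduler.filter_scheduler.FilterScheduler.
[DEFAULT]
scheduler_driver = cinder.scheduler.simple.SimpleScheduler
```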
+
+ diff --git a/doc/admin-guide-cloud/section_ts_multipath_warn.xml b/doc/admin-guide-cloud/section_ts_multipath_warn.xml old mode 100755 new mode 100644 diff --git a/doc/admin-guide-cloud/section_ts_no_emulator_x86_64.xml b/doc/admin-guide-cloud/section_ts_no_emulator_x86_64.xml new file mode 100644 index 0000000000..9121a40c0d --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_no_emulator_x86_64.xml @@ -0,0 +1,20 @@ + +
+ Cannot find suitable emulator for x86_64 +
+ Problem + When you attempt to create a VM, the VM goes into the BUILD + state and then into the ERROR state. +
+
+ Solution + On the KVM host, run cat /proc/cpuinfo. Make sure the vmx + or svm flag is set. + Follow the instructions in the + + enabling KVM section of the Configuration + Reference to enable hardware virtualization + support in your BIOS. +
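The flag check can be scripted; a sketch that tests a sample flags line (on a real host, substitute the output of `grep flags /proc/cpuinfo` for the sample string):

```shell
# Sketch: check a cpuinfo flags line for hardware virtualization support:
# vmx = Intel VT-x, svm = AMD-V. The flags string below is a sample.
flags='fpu vme de pse msr pae mce svm lm'
case " $flags " in
  *" vmx "*|*" svm "*) hv=supported ;;
  *) hv="missing - enable VT-x/AMD-V in the BIOS" ;;
esac
echo "hardware virtualization: $hv"
# prints: hardware virtualization: supported
```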
+
diff --git a/doc/admin-guide-cloud/section_ts_non_existent_host.xml b/doc/admin-guide-cloud/section_ts_non_existent_host.xml new file mode 100644 index 0000000000..aede225cc0 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_non_existent_host.xml @@ -0,0 +1,22 @@ + +
+ Non-existent host +
+ Problem + This error could be caused by a volume being exported outside of OpenStack using a + host name different from the system name that OpenStack expects. This error could be + displayed with the IQN if the host was exported using iSCSI. + 2013-04-19 04:02:02.336 2814 ERROR cinder.openstack.common.rpc.common [-] Returning exception Not found (HTTP 404)
+NON_EXISTENT_HOST - HOST '10' was not found to caller. +
+
+ Solution + Host names constructed by the driver use just the local hostname, not the fully + qualified domain name (FQDN) of the Compute host. For example, if the FQDN was + myhost.example.com, just myhost would be + used as the 3PAR hostname. IP addresses are not allowed as host names on the 3PAR + storage server. +
+
+ diff --git a/doc/admin-guide-cloud/section_ts_non_existent_vlun.xml b/doc/admin-guide-cloud/section_ts_non_existent_vlun.xml new file mode 100644 index 0000000000..12514a0f77 --- /dev/null +++ b/doc/admin-guide-cloud/section_ts_non_existent_vlun.xml @@ -0,0 +1,18 @@ + +
+ Non-existent VLUN +
+ Problem + This error occurs if the 3PAR host exists with the correct host name that the + OpenStack cinder drivers expect but the volume was created in a different domain. + HTTPNotFound: Not found (HTTP 404) NON_EXISTENT_VLUN - VLUN 'osv-DqT7CE3mSrWi4gZJmHAP-Q' was not found. +
+
+ Solution + Either update the hp3par_domain configuration option to use + the domain in which the 3PAR host currently resides, or move the 3PAR host to the + domain in which the volume was created. +
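A hypothetical cinder.conf fragment illustrating the first option (the domain name SampleDomain is an illustration, not a value from the text):

```ini
# /etc/cinder/cinder.conf fragment (sketch): point the driver at the
# domain in which the volume was created. "SampleDomain" is hypothetical.
hp3par_domain = SampleDomain
```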
+
+ diff --git a/doc/admin-guide-cloud/section_ts_vol_attach_miss_sg_scan.xml b/doc/admin-guide-cloud/section_ts_vol_attach_miss_sg_scan.xml old mode 100755 new mode 100644