[admin-guide] Fix rst markups for cinder troubleshoot files

Change-Id: I5fe44a54d14d41547ba311eb7561bede46b3394c
venkatamahesh 2015-12-15 10:03:46 +05:30
parent a68ac6d95a
commit aa38be0b06
12 changed files with 212 additions and 197 deletions


@@ -5,7 +5,9 @@ HTTP bad request in cinder volume log
Problem
~~~~~~~
These errors appear in the ``cinder-volume.log`` file:
.. code-block:: console
2013-05-03 15:16:33 INFO [cinder.volume.manager] Updating volume status
2013-05-03 15:16:33 DEBUG [hp3parclient.http]
@@ -40,5 +42,5 @@ These errors appear in the :file:`cinder-volume.log` file::
Solution
~~~~~~~~
You need to update your copy of the ``hp_3par_fc.py`` driver which
contains the synchronization code.


@@ -8,18 +8,20 @@ Problem
This error may be caused by a volume being exported outside of OpenStack
using a host name different from the system name that OpenStack expects.
This error could be displayed with the :term:`IQN` if the host was exported
using iSCSI:
.. code-block:: console
Duplicate3PARHost: 3PAR Host already exists: Host wwn 50014380242B9750 \
already used by host cld4b5ubuntuW(id = 68. The hostname must be called\
'cld4b5ubuntu'.
Solution
~~~~~~~~
Change the 3PAR host name to match the one that OpenStack expects. The
3PAR host constructed by the driver uses just the local host name, not
the fully qualified domain name (FQDN) of the compute host. For example,
if the FQDN was *myhost.example.com*, just *myhost* would be used as the
3PAR host name. IP addresses are not allowed as host names on the 3PAR
storage server.
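As a quick sanity check (the FQDN below is only an example value), the short name the driver derives from an FQDN can be reproduced with plain shell parameter expansion:

```shell
# Hypothetical FQDN; the driver uses only the part before the first dot.
fqdn=myhost.example.com
# Strip everything after the first dot to get the short host name.
echo "${fqdn%%.*}"
```

On a live compute host, ``hostname -s`` prints the same short name.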


@@ -7,17 +7,17 @@ Problem
There is a discrepancy between the actual volume size in EqualLogic
(EQL) storage, the image size in the Image service, and what is
reported to the OpenStack database. This could lead to confusion
if a user is creating volumes from an image that was uploaded from an EQL
volume (through the Image service). The image size is slightly larger
than the target volume size; this is because EQL size reporting accounts
for additional storage used by EQL for internal volume metadata.
To reproduce the issue, follow the steps in the procedure below.
This procedure assumes that the EQL array is provisioned, and that
appropriate configuration settings have been included in
``/etc/cinder/cinder.conf`` to connect to the EQL array.
Create a new volume. Note the ID and size of the volume. In the
following example, the ID and size are
@@ -25,82 +25,84 @@ following example, the ID and size are
.. code-block:: console
$ cinder create --display-name volume1 1
+-----------------------+-------------------------------------------+
| Property | Value |
+-----------------------+-------------------------------------------+
| attachments | [] |
| availability zone | nova |
| bootable | false |
| created_at | 2014-03-21T18:31:54.248775 |
| display_description | None |
| display_name | volume1 |
| id | 74cf9c04-4543-47ae-a937-a9b7c6c921e7 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source volid | None |
| status | creating |
| volume type | None |
+-----------------------+-------------------------------------------+
Verify the volume size on the EQL array by using its command-line
interface.
The actual size (``VolReserve``) is 1.01 GB. The EQL Group Manager
should also report a volume size of 1.01 GB:
.. code-block:: console
eql> volume select volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
eql (volume_volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7)> show
_______________________________ Volume Information ________________________________
Name: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
Size: 1GB
VolReserve: 1.01GB
VolReserveInUse: 0MB
ReplReserveInUse: 0MB
iSCSI Alias: volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-19f91850c-067000000b4532c1-volume-74cf9c04-4543-47ae-a937-a9b7c6c921e7
ActualMembers: 1
Snap-Warn: 10%
Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (1.01GB)
Permission: read-write
DesiredStatus: online
Status: online
Connections: 0
Snapshots: 0
Bind:
Type: not-replicated
ReplicationReserveSpace: 0MB
Create a new image from this volume:
.. code-block:: console
$ cinder upload-to-image --disk-format raw \
--container-format bare volume1 image_from_volume1
+---------------------+---------------------------------------+
| Property | Value |
+---------------------+---------------------------------------+
| container_format | bare |
| disk_format | raw |
| display_description | None |
| id | 74cf9c04-4543-47ae-a937-a9b7c6c921e7 |
| image_id | 3020a21d-ba37-4495-8899-07fc201161b9 |
| image_name | image_from_volume1 |
| size | 1 |
| status | uploading |
| updated_at | 2014-03-21T18:31:55.000000 |
| volume_type | None |
+---------------------+---------------------------------------+
When you uploaded the volume in the previous step, the Image service
reported the volume's size as ``1`` (GB). However, when using
:command:`glance image-list` to list the image, the displayed size is
1085276160 bytes, or roughly 1.01 GB:
+-----------------------+---------+-----------+--------------+--------------+
@@ -110,7 +112,7 @@ reported the volume's size as ``1`` (GB). However, when using
| image\_from\_volume1 | raw | bare | *1085276160* | active |
+-----------------------+---------+-----------+--------------+--------------+
|
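The two figures are consistent: dividing the byte count by 1024^3 recovers roughly 1.01 GB. A quick check of the arithmetic (``awk`` is used here only as a calculator):

```shell
# Express 1085276160 bytes in GiB (1024*1024*1024 bytes).
awk 'BEGIN { printf "%.2f\n", 1085276160 / (1024*1024*1024) }'
```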
Create a new volume using the previous image (``image_id 3020a21d-ba37-4495-8899-07fc201161b9`` in this example) as
@@ -120,10 +122,10 @@ Image service:
.. code-block:: console
$ cinder create --display-name volume2 \
--image-id 3020a21d-ba37-4495-8899-07fc201161b9 1
ERROR: Invalid input received: Size of specified image 2 is larger
than volume size 1. (HTTP 400) (Request-ID: req-4b9369c0-dec5-4e16-a114-c0cd16b5d210)
The attempt to create a new volume based on the size reported by the
``cinder`` tool will then fail.
@@ -138,56 +140,58 @@ volume-backed image should use a size of 2 GB:
.. code-block:: console
$ cinder create --display-name volume2 \
--image-id 3020a21d-ba37-4495-8899-07fc201161b9 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-03-21T19:25:31.564482 |
| display_description | None |
| display_name | volume2 |
| id | 64e8eb18-d23f-437b-bcac-b352afa6843a |
| image_id | 3020a21d-ba37-4495-8899-07fc201161b9 |
| metadata | [] |
| size | 2 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
.. note::
The dashboard suggests a suitable size when you create a new volume
based on a volume-backed image.
You can then check this new volume into the EQL array:
.. code-block:: console
eql> volume select volume-64e8eb18-d23f-437b-bcac-b352afa6843a
eql (volume_volume-61e8eb18-d23f-437b-bcac-b352afa6843a)> show
______________________________ Volume Information _______________________________
Name: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
Size: 2GB
VolReserve: 2.01GB
VolReserveInUse: 1.01GB
ReplReserveInUse: 0MB
iSCSI Alias: volume-64e8eb18-d23f-437b-bcac-b352afa6843a
iSCSI Name: iqn.2001-05.com.equallogic:0-8a0906-e3091850e-eae000000b7532c1-volume-64e8eb18-d23f-437b-bcac-b352afa6843a
ActualMembers: 1
Snap-Warn: 10%
Snap-Depletion: delete-oldest
Description:
Snap-Reserve: 100%
Snap-Reserve-Avail: 100% (2GB)
Permission: read-write
DesiredStatus: online
Status: online
Connections: 1
Snapshots: 0
Bind:
Type: not-replicated
ReplicationReserveSpace: 0MB


@@ -10,23 +10,25 @@ Failed to attach a volume after detaching the same volume.
Solution
~~~~~~~~
You must change the device name on the :command:`nova-attach` command. The VM
might not clean up after a :command:`nova-detach` command runs. This example
shows how the :command:`nova-attach` command fails when you use the ``vdb``,
``vdc``, or ``vdd`` device names:
.. code-block:: console
# ls -al /dev/disk/by-path/
total 0
drwxr-xr-x 2 root root 200 2012-08-29 17:33 .
drwxr-xr-x 5 root root 100 2012-08-29 17:33 ..
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0 -> ../../vda
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part1 -> ../../vda1
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part2 -> ../../vda2
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:04.0-virtio-pci-virtio0-part5 -> ../../vda5
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:06.0-virtio-pci-virtio2 -> ../../vdb
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:08.0-virtio-pci-virtio3 -> ../../vdc
lrwxrwxrwx 1 root root 9 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4 -> ../../vdd
lrwxrwxrwx 1 root root 10 2012-08-29 17:33 pci-0000:00:09.0-virtio-pci-virtio4-part1 -> ../../vdd1
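A minimal sketch of picking the next unused ``vdX`` name. The ``used`` list is hardcoded from the example listing above for illustration; on a real host you would derive it from ``/dev`` or the ``by-path`` listing:

```shell
# Hypothetical device list taken from the example listing above.
used="vda vdb vdc vdd"
# Walk candidate letters and print the first name not already in use.
for l in a b c d e f g; do
  case " $used " in
    *" vd$l "*) ;;               # already taken, keep looking
    *) echo "vd$l"; break ;;
  esac
done
```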
You might also have this problem after attaching and detaching the same
volume from the same VM with the same mount point multiple times. In


@@ -6,21 +6,25 @@ Problem
~~~~~~~
This warning and error occur if you do not have the required
``sysfsutils`` package installed on the compute node:
.. code-block:: console
WARNING nova.virt.libvirt.utils [req-1200f887-c82b-4e7c-a891-fac2e3735dbb\
admin admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin] systool\
is not installed
ERROR nova.compute.manager [req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin\
admin|req-1200f887-c82b-4e7c-a891-fac2e3735dbb admin admin]
[instance: df834b5a-8c3f-477a-be9b-47c97626555c|instance: df834b5a-8c3f-47\
7a-be9b-47c97626555c]
Failed to attach volume 13d5c633-903a-4764-a5a0-3336945b1db1 at /dev/vdk.
Solution
~~~~~~~~
Run the following command on the compute node to install the
``sysfsutils`` package:
.. code-block:: console
# apt-get install sysfsutils
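A hedged way to confirm whether the package is actually needed before installing (``systool`` is the binary that ``sysfsutils`` provides):

```shell
# Report whether systool (shipped by sysfsutils) is on the PATH.
if command -v systool >/dev/null 2>&1; then
  echo "systool present"
else
  echo "systool missing - install sysfsutils"
fi
```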


@@ -7,18 +7,20 @@ Problem
The compute node failed to connect to a volume in a Fibre Channel (FC) SAN
configuration. The WWN may not be zoned correctly in your FC SAN that
links the compute host to the storage array:
.. code-block:: console
ERROR nova.compute.manager [req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin\
demo|req-2ddd5297-e405-44ab-aed3-152cd2cfb8c2 admin demo] [instance: 60ebd\
6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3\
d5f3]
Failed to connect to volume 6f6a6a9c-dfcf-4c8d-b1a8-4445ff883200 while\
attaching at /dev/vdjTRACE nova.compute.manager [instance: 60ebd6c7-c1e3-4\
bf0-8ef0-f07aa4c3d5f3|instance: 60ebd6c7-c1e3-4bf0-8ef0-f07aa4c3d5f3]
Traceback (most recent call last):…f07aa4c3d5f3\] ClientException: The\
server has either erred or is incapable of performing the requested\
operation.(HTTP 500)(Request-ID: req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00)
Solution
~~~~~~~~


@@ -6,25 +6,25 @@ Most Block Storage errors are caused by incorrect volume configurations
that result in volume creation failures. To resolve these failures,
review these logs:
- ``cinder-api`` log (``/var/log/cinder/api.log``)
- ``cinder-volume`` log (``/var/log/cinder/volume.log``)
The ``cinder-api`` log is useful for determining if you have endpoint or
connectivity issues. If you send a request to create a volume and it
fails, review the ``cinder-api`` log to determine whether the request made
it to the Block Storage service. If the request is logged and you see no
errors or tracebacks, check the ``cinder-volume`` log for errors or
tracebacks.
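One way to follow a request across the two logs is to grep for its request ID. A self-contained sketch of the pattern (the log line below is fabricated to stand in for ``/var/log/cinder/api.log``; it is not real output):

```shell
# Fabricated sample line standing in for /var/log/cinder/api.log.
log=$(mktemp)
printf '2013-03-12 01:35:43 ERROR cinder.api [req-71e5132b-21aa-46ee-b3cc-19b5b4ab2f00] create failed\n' > "$log"
# Extract the request ID; on a real system, grep volume.log for the same ID.
grep -o 'req-[0-9a-f-]*' "$log"
rm -f "$log"
```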
.. note::
Create commands are listed in the ``cinder-api`` log.
These entries in the ``cinder.openstack.common.log`` file can be used to
assist in troubleshooting your Block Storage configuration.
.. code-block:: console
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
@@ -113,10 +113,10 @@ these suggested solutions.
be stored in a file on creation that can be queried in case of
restart of the ``tgt`` daemon. By default, Block Storage uses a
``state_path`` variable, which if installing with Yum or APT should
be set to ``/var/lib/cinder/``. The next part is the ``volumes_dir``
variable, which by default simply appends a ``volumes``
directory to the ``state_path``. The result is a file-tree
``/var/lib/cinder/volumes/``.
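Expressed as shell, with the default values named in the text (both paths are assumptions for a package-based install; check ``cinder.conf`` for overrides):

```shell
# Defaults when installing with Yum or APT (assumption; see cinder.conf).
state_path=/var/lib/cinder
volumes_dir=$state_path/volumes   # cinder appends "volumes" to state_path
echo "$volumes_dir"
```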
While the installer should handle all this, it can go wrong. If you have
trouble creating volumes and this directory does not exist you should
@@ -128,9 +128,9 @@ these suggested solutions.
Along with the ``volumes_dir`` option, the iSCSI target driver also
needs to be configured to look in the correct place for the persistent
files. This is a simple entry in the ``/etc/tgt/conf.d`` file that you
should have set when you installed OpenStack. If issues occur, verify
that you have a ``/etc/tgt/conf.d/cinder.conf`` file.
If the file is not present, create it with this command
@@ -140,30 +140,30 @@ these suggested solutions.
- No sign of attach call in the ``cinder-api`` log.
This is most likely going to be a minor adjustment to your ``nova.conf``
file. Make sure that your ``nova.conf`` has this entry:
.. code-block:: ini
volume_api_class=nova.volume.cinder.API
- Failed to create iscsi target error in the ``cinder-volume.log`` file.
.. code-block:: console
2013-03-12 01:35:43 1248 TRACE cinder.openstack.common.rpc.amqp \
ISCSITargetCreateFailed: \
Failed to create iscsi target for volume \
volume-137641b2-af72-4a2f-b243-65fdccd38780.
You might see this error in ``cinder-volume.log`` after trying to
create a volume that is 1 GB. To fix this issue:
Change contents of the ``/etc/tgt/targets.conf`` from
``include /etc/tgt/conf.d/*.conf`` to ``include /etc/tgt/conf.d/cinder_tgt.conf``,
as follows:
.. code-block:: ini
include /etc/tgt/conf.d/cinder_tgt.conf
include /etc/tgt/conf.d/cinder.conf


@@ -13,7 +13,7 @@ If the ``multipath-tools`` package is installed on the compute node,
it is used to perform the volume attachment.
The IDs in your message are unique to your system.
.. code-block:: console
WARNING nova.storage.linuxscsi [req-cac861e3-8b29-4143-8f1b-705d0084e571
admin admin|req-cac861e3-8b29-4143-8f1b-705d0084e571 admin admin]


@@ -11,7 +11,7 @@ When you attempt to create a VM, the error shows the VM is in the
Solution
~~~~~~~~
On the KVM host, run :command:`cat /proc/cpuinfo`. Make sure the ``vmx`` or
``svm`` flags are set.
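The check can be scripted; the sketch below tests a sample ``cpuinfo`` line so it is self-contained (on the host you would read ``/proc/cpuinfo`` itself):

```shell
# Sample flags line; on a KVM host, use: grep '^flags' /proc/cpuinfo
line='flags : fpu vme de pse tsc msr pae mce vmx'
case " $line " in
  *" vmx "*|*" svm "*) echo "hardware virtualization: supported" ;;
  *) echo "hardware virtualization: not supported" ;;
esac
```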
Follow the instructions in the `enabling KVM


@@ -10,16 +10,16 @@ OpenStack using a host name different from the system name that
OpenStack expects. This error could be displayed with the :term:`IQN`
if the host was exported using iSCSI.
.. code-block:: console
2013-04-19 04:02:02.336 2814 ERROR cinder.openstack.common.rpc.common [-] Returning exception Not found (HTTP 404)
NON_EXISTENT_HOST - HOST '10' was not found to caller.
Solution
~~~~~~~~
Host names constructed by the driver use just the local host name, not
the fully qualified domain name (FQDN) of the Compute host. For example,
if the FQDN was **myhost.example.com**, just **myhost** would be used as the
3PAR host name. IP addresses are not allowed as host names on the 3PAR
storage server.


@@ -9,7 +9,7 @@ This error occurs if the 3PAR host exists with the correct host name
that the OpenStack Block Storage drivers expect but the volume was
created in a different Domain.
.. code-block:: console
HTTPNotFound: Not found (HTTP 404) NON_EXISTENT_VLUN - VLUN 'osv-DqT7CE3mSrWi4gZJmHAP-Q' was not found.


@@ -9,7 +9,7 @@ Failed to attach volume to an instance, ``sg_scan`` file not found. This
warning and error occur when the ``sg3-utils`` package is not installed on
the compute node. The IDs in your message are unique to your system:
.. code-block:: console
ERROR nova.compute.manager [req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin|req-cf2679fd-dd9e-4909-807f-48fe9bda3642 admin admin]
[instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5|instance: 7d7c92e0-49fa-4a8e-87c7-73f22a9585d5]
@@ -21,8 +21,7 @@ the compute node. The IDs in your message are unique to your system:
Solution
~~~~~~~~
Run this command on the compute node to install the ``sg3-utils`` package:
.. code-block:: console