bb0906f4f3
This adds a new config option to control the maximum number of disk devices
allowed to attach to a single instance, which can be set per compute host.
The configured maximum is enforced when device names are generated during
server create, rebuild, evacuate, unshelve, live migrate, and attach volume.

When the maximum is exceeded during server create, rebuild, evacuate,
unshelve, or live migrate, the server will go into ERROR state and the
server fault will contain the reason. When the maximum is exceeded during an
attach volume request, the request fails fast in the API with a 403 error.

The configured maximum on the destination is not enforced before cold
migrate because the maximum is enforced in-place only (the destination is
not checked over RPC). The configured maximum is also not enforced on
shelved offloaded servers because they have no compute host, and the option
is implemented at the nova-compute level.

Part of blueprint conf-max-attach-volumes

Change-Id: Ia9cc1c250483c31f44cdbba4f6342ac8d7fbe92b
---
features:
  - |
    A new configuration option, ``[compute]/max_disk_devices_to_attach``,
    which defaults to ``-1`` (unlimited), has been added and can be used to
    configure the maximum number of disk devices allowed to attach to a
    single server, per compute host. Note that the number of disks supported
    by a server depends on the bus used. For example, the ``ide`` disk bus
    is limited to 4 attached devices.

    Usually, the disk bus is determined automatically from the device type
    or disk device, and the virtualization type. However, the disk bus can
    also be specified via a block device mapping or an image property. See
    the ``disk_bus`` field in
    https://docs.openstack.org/nova/latest/user/block-device-mapping.html
    for more information about specifying a disk bus in a block device
    mapping, and see
    https://docs.openstack.org/glance/latest/admin/useful-image-properties.html
    for more information about the ``hw_disk_bus`` image property.

    The configured maximum is enforced during server create, rebuild,
    evacuate, unshelve, live migrate, and attach volume operations. When the
    maximum is exceeded during server create, rebuild, evacuate, unshelve,
    or live migrate, the server will go into ``ERROR`` state and the server
    fault message will indicate the failure reason. When the maximum is
    exceeded during a server attach volume API operation, the request will
    fail with a ``403 HTTPForbidden`` error.
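As a sketch of how the option might be set, the value goes in the
``[compute]`` section of the compute host's Nova configuration file
(``/etc/nova/nova.conf`` is the typical path, and the cap of 20 is purely
an example value):

```ini
# Example /etc/nova/nova.conf fragment on a compute host.
[compute]
# -1 (the default) means unlimited; any positive value caps the number
# of disk devices that can be attached to a single server on this host.
max_disk_devices_to_attach = 20
```

The nova-compute service on the host reads this at startup, so a restart is
needed for a changed value to take effect.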
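For illustration, a disk bus can be requested per device in the
``block_device_mapping_v2`` portion of a server create request body; the
UUID and the ``scsi`` bus below are placeholder example values (see the
linked block-device-mapping documentation for the full field list):

```yaml
# Fragment of a POST /servers request body (example values only).
server:
  name: example-server
  block_device_mapping_v2:
    - boot_index: 0
      uuid: <volume-uuid>       # placeholder: the volume to boot from
      source_type: volume
      destination_type: volume
      disk_bus: scsi            # the bus requested for this device
```

Alternatively, ``openstack image set --property hw_disk_bus=scsi <IMAGE>``
sets a default bus for all disks of servers booted from that image, as
described in the Glance useful-image-properties documentation linked above.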
issues:
  - |
    Operators changing ``[compute]/max_disk_devices_to_attach`` on a compute
    service that is hosting servers should be aware that it could cause
    rebuilds to fail if the maximum is decreased below the number of devices
    already attached to servers. For example, if server A has 26 devices
    attached and an operator changes
    ``[compute]/max_disk_devices_to_attach`` to 20, a request to rebuild
    server A will fail and go into ``ERROR`` state because 26 devices are
    already attached and exceed the new configured maximum of 20.

    Operators setting ``[compute]/max_disk_devices_to_attach`` should also
    be aware that during a cold migration, the configured maximum is only
    enforced in-place and the destination is not checked before the move.
    This means that if an operator has set a maximum of 26 on compute host A
    and a maximum of 20 on compute host B, a cold migration of a server with
    26 attached devices from compute host A to compute host B will succeed.
    Then, once the server is on compute host B, a subsequent request to
    rebuild the server will fail and go into ``ERROR`` state because 26
    devices are already attached and exceed the configured maximum of 20 on
    compute host B.

    The configured maximum is not enforced on shelved offloaded servers, as
    they have no compute host.
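The in-place check described above amounts to comparing the number of
already-attached devices against the host's configured maximum, with ``-1``
meaning unlimited. A minimal sketch follows; this is not Nova's actual
code, and the function and exception names are invented for illustration:

```python
class TooManyDiskDevices(Exception):
    """Raised when attaching would exceed the configured maximum."""


def check_can_attach(attached_devices, max_disk_devices_to_attach):
    """Fail if attaching one more device would exceed the host's maximum.

    A maximum of -1 means unlimited, mirroring the option's default.
    """
    if max_disk_devices_to_attach == -1:
        return
    if attached_devices + 1 > max_disk_devices_to_attach:
        raise TooManyDiskDevices(
            f"{attached_devices} devices attached; the configured "
            f"maximum is {max_disk_devices_to_attach}")


check_can_attach(5, -1)    # unlimited: always allowed
check_can_attach(19, 20)   # attaching the 20th device: allowed
try:
    check_can_attach(20, 20)   # attaching the 21st device: rejected
except TooManyDiskDevices as exc:
    print(exc)
```

This also illustrates the rebuild scenario from the example: with 26
devices already attached and a maximum of 20, the check can never pass.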
upgrade:
  - |
    The new configuration option, ``[compute]/max_disk_devices_to_attach``,
    defaults to ``-1`` (unlimited). Users of the libvirt driver should be
    advised that the default limit for non-``ide`` disk buses has changed
    from 26 to unlimited upon upgrade to Stein. The ``ide`` disk bus
    continues to be limited to 4 attached devices per server.