This spec proposes to allow users to use an
``Aggregate``'s ``metadata`` to override the global config options
for weights, achieving more fine-grained control over resource
weights.
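For example, a weigher could resolve its multiplier by first
checking the host's aggregate metadata and only then falling back
to the global config option. A minimal sketch (hypothetical helper,
not the actual implementation)::

    def get_weight_multiplier(host_state, key, conf_value):
        # Prefer a per-aggregate override from the Aggregate's
        # metadata; fall back to the global config option.
        for aggregate in host_state.aggregates:
            if key in aggregate.metadata:
                return float(aggregate.metadata[key])
        return conf_value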
blueprint: per-aggregate-scheduling-weight
Change-Id: I6e15c6507d037ffe263a460441858ed454b02504
This resolves the TODO from Ocata change
I8871b628f0ab892830ceeede68db16948cb293c8
by adding a min=0.0 value to the soft affinity
weight multiplier configuration options.
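A minimal sketch of the constrained option definition, assuming the
usual oslo.config style (names and help text are illustrative)::

    from oslo_config import cfg

    soft_affinity_weight_multiplier = cfg.FloatOpt(
        'soft_affinity_weight_multiplier',
        default=1.0,
        min=0.0,  # negative multipliers are no longer accepted
        help='Multiplier used for weighing hosts for group '
             'soft-affinity.')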
It also removes the deprecated [DEFAULT] group
alias from Ocata change:
I3f48e52815e80c99612bcd10cb53331a8c995fc3
Change-Id: I79e191010adbc0ec0ed02c9d589106debbf90ea8
This patch is a follow-up for
I66a2adb3ff75da6e267536f25c2eda5925f2fa87.
Add links to videos recorded at the Rocky and Stein summits
in doc/source/user/cells.rst.
Change-Id: Idcc77cf2eee809f3ca99f952f0635213f3bb78eb
Fix broken links in doc/source/user/cells.rst.
In addition, fix the format of a console code block
in doc/source/admin/pci-passthrough.rst.
Change-Id: I66a2adb3ff75da6e267536f25c2eda5925f2fa87
Closes-Bug: #1808906
With change I11083aa3c78bd8b6201558561457f3d65007a177,
the code for the API Service Version upgrade check no
longer exists, so the upgrade check itself
is now meaningless.
Change-Id: I68b13002bc745c2c9ca7209b806f08c30272550e
With placement being extracted from nova, the
"Resource Providers" nova-status upgrade check no
longer works as intended since the placement data
will no longer be in the nova_api database. As a
result the check can fail on otherwise properly
deployed setups with placement extracted.
This check was originally intended to ease the upgrade
to Ocata when placement was required for nova to work,
as can be seen from the newton/ocata/pike references
in the code.
Note that one could argue that the check itself, as a concept,
is still useful for fresh installs to make sure everything
is deployed correctly and nova-compute is properly
reporting into placement. However, for it to be maintained
we would have to change it to no longer rely on the
nova_api database and instead use the placement REST API,
which, while possible, might not be worth the effort or
maintenance cost.
For simplicity and expediency, the check is removed
in this change.
Related mailing list discussion can be found here [1].
[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/000454.html
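Were the check to be reimplemented, it would need to query
placement over HTTP rather than the nova_api database. A rough
sketch of that approach (standalone, using requests; the endpoint
and token handling are assumptions)::

    import requests

    def count_resource_providers(placement_endpoint, token):
        # Ask the placement REST API for resource providers
        # instead of reading them from the nova_api database.
        resp = requests.get(
            placement_endpoint + '/resource_providers',
            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        return len(resp.json()['resource_providers'])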
Change-Id: I630a518d449a64160c81410245a22ed3d985fb01
Closes-Bug: #1808956
A recent thread on the mailing list [1] reminded me that we
don't have any documentation for the service user token feature
added back in Ocata under blueprint use-service-tokens.
This change adds a troubleshooting entry for when using service
user tokens would be useful, and links to it from two known
trouble spots: live migration timeouts and creating images.
[1] http://lists.openstack.org/pipermail/openstack-discuss/2018-December/001130.html
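For context, enabling the feature is a matter of configuring the
``[service_user]`` section in nova.conf; a minimal sketch, assuming
the standard keystoneauth options (all values illustrative)::

    [service_user]
    send_service_user_token = true
    auth_type = password
    auth_url = https://keystone.example.com/identity/v3
    username = nova
    user_domain_name = Default
    password = <service-user-password>
    project_name = service
    project_domain_name = Default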
Change-Id: I1dda889038ffe67d53ceb35049aa1f2a9da39ae8
Closes-Bug: #1809165
The placement api-ref had a link to user/placement.html in nova. That is
being fixed in Ibfe016f25a29b6810ea09c5d03a01dbf3c53371f but the link
is now loose in the internets, using the 'latest' prefix, so we want to
make sure that what is currently showing as a 404 goes somewhere useful,
in this case the top of the placement docs.
Change-Id: Ib75547ac655e614441dec5e1bcc7784c2b6a070c
- Move deprecated services to the end of the document
- Update incorrect information regarding nova-consoleauth
- Move configuration options that were specified for the wrong service
- Don't give the impression that the serial console is libvirt-only
Change-Id: Ie0fd987a1e5c130b8e31c84910814d5d051f2b31
Remove a lot of noise and make things more generic through the use
of variables rather than hardcoded values.
Change-Id: Ief498f902f24e5991cf463323db78729ae6f8d89
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
The Placement REST API Version History page was
created by splitting the Placement doc top page (index.html)
in change I66e0c7d18b253b0a5a8fdac65e30b5b3cef37db2,
so the link to the Placement REST API Version History
is updated in the nova docs.
Change-Id: I60701b09ade203b35d4d1281eb32b177543c064a
This patch is a follow-up
for I3e1ca88cac3a52a8b44e26f051a51a6db77a3231.
Add descriptions of microversions to the parameter section
of the API reference guideline.
Change-Id: I266d44cf96945445115b843aacbc29083e21bd8e
This change does a few things:
* Links live_migration_completion_timeout to the config
option guide.
* Links the force complete API reference to the feature support
  matrix to see which drivers support the operation (see the
  sketch after this list).
* Fixes the server status mentioned in the troubleshooting for
  the force complete API reference (a live migrating server's
  status is MIGRATING, not ACTIVE). The same text is copied to the
  abort live migration API reference troubleshooting for
  consistency (and since using the server status is more natural
  than the task_state).
* Links to the admin guide for troubleshooting live migration
timeouts.
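For reference, the force complete action discussed above is posted
against an in-progress migration; a hedged sketch using requests
(endpoint and token handling are assumptions)::

    import requests

    def force_complete(compute_endpoint, token,
                       server_id, migration_id):
        # POST the force_complete action to an in-progress live
        # migration (requires compute API microversion >= 2.22).
        resp = requests.post(
            '%s/servers/%s/migrations/%s/action'
            % (compute_endpoint, server_id, migration_id),
            headers={'X-Auth-Token': token,
                     'X-OpenStack-Nova-API-Version': '2.22'},
            json={'force_complete': None})
        resp.raise_for_status()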
Change-Id: I496d3f4b99e3d7e978c7ecb13ab3b67023fcb919
Closes-Bug: #1808579
Config option ``libvirt.live_migration_progress_timeout`` was
deprecated in Ocata and can now be removed.
This patch removes live_migration_progress_timeout and also removes
the related migration progress timeout logic.
Change-Id: Ife89a705892ad96de6d5f8e68b6e4b99063a7512
blueprint: live-migration-force-after-timeout
This patch removes the automatic post-copy trigger and adds a new
libvirt configuration option, 'live_migration_completion_action'.
This option determines what action is taken against a VM after
``live_migration_completion_timeout`` expires. It defaults to
'abort', meaning the live migration operation will be aborted once
the completion timeout expires. If it is set to 'force_complete',
the VM will either be paused or post-copy will be triggered,
depending on whether post-copy is enabled and available.
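A sketch of how the new option might be defined with oslo.config
(help text paraphrased)::

    from oslo_config import cfg

    live_migration_completion_action = cfg.StrOpt(
        'live_migration_completion_action',
        default='abort',
        choices=['abort', 'force_complete'],
        help='What to do when live_migration_completion_timeout '
             'expires: abort the migration, or force-complete it '
             'by pausing the VM or triggering post-copy.')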
Note that the progress-based post-copy triggering in the libvirt
driver will be removed in the next patch [1].
[1] Ife89a705892ad96de6d5f8e68b6e4b99063a7512
Change-Id: I0d286d12e588b431df3d94cf2e65d636bcdea2f8
blueprint: live-migration-force-after-timeout
Live migration is currently totally broken if a NUMA topology is
present. This affects everything that's been regrettably stuffed in with
NUMA topology including CPU pinning, hugepage support and emulator
thread support. Side effects can range from simple unexpected
performance hits (due to instances running on the same cores) to
complete failures (due to instance cores or huge pages being mapped to
CPUs/NUMA nodes that don't exist on the destination host).
Until such a time as we resolve these issues, we should alert users to
the fact that such issues exist. A workaround option is provided for
operators that _really_ need the broken behavior, but it's defaulted to
False to highlight the brokenness of this feature to unsuspecting
operators.
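The workaround toggle could look roughly like the following (the
option name here is an assumption, not necessarily the merged
one)::

    from oslo_config import cfg

    enable_numa_live_migration = cfg.BoolOpt(
        'enable_numa_live_migration',
        default=False,  # default off to highlight the brokenness
        help='Allow live migration of instances with a NUMA '
             'topology despite the known limitations.')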
Change-Id: I217fba9138132b107e9d62895d699d238392e761
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Related-bug: #1289064
All the clients using block device mapping use the attribute
destination_type, but the documentation refers to dest_type instead.
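For illustration, a block device mapping entry using the correct
attribute name (values are placeholders)::

    bdm_v2 = [{
        'source_type': 'image',
        'destination_type': 'volume',  # correct name, not 'dest_type'
        'uuid': '<image-uuid>',
        'boot_index': 0,
        'volume_size': 10,
    }]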
Change-Id: Iba6e698e826d1a1898fde5cc999592f5821e3ebc
Co-Authored-By: David Moreno Garcia <david.mogar@gmail.com>
Closes-Bug: #1808358
Cells v1 has been deprecated since Pike. CERN
has been running with cells v2 since Queens.
The cells v1 job used to be the only thing that
ran with nova-network, but we switched the job
to use neutron in Rocky:
I9de6b710baffdffcd1d7ab19897d5776ef27ae7e
The cells v1 job also suffers from intermittent
test failures, such as in the snapshot tests.
Given the deprecated nature of cells v1 we should
just move it to the experimental queue so that it
can be run on-demand if desired but does not gate
on all nova changes, thus further moving along its
eventual removal.
This change also updates the cells v1 status doc
and adds some documentation about the different
job queues that nova uses for integration testing.
Change-Id: I74985f1946fffd0ae4d38604696d0d4656b6bf4e
Closes-Bug: #1807407