Adds the rabbitmq_server_additional_erl_args variable, whose value is
appended to the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment variable
passed to the RabbitMQ server startup script.
This can be used to configure the Erlang VM schedulers.
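For example, to limit the Erlang VM to two schedulers (an illustrative
value, not a recommendation), one could set in globals.yml:

    rabbitmq_server_additional_erl_args: "+S 2:2"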
Docs attached.
Change-Id: Id683c8cc6dac61354ffd94f3b460335b42136ba2
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Related-bug: #1846467
Tacker requires configuration for storing CSAR VNF packages.
This patch adds it, as well as the relevant docs.
Only one Tacker Conductor is deployed by default due to the
lack of a shared filesystem.
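As a sketch, the resulting tacker.conf gains something along these lines
(option name per Tacker's config reference, path illustrative):

    [vnf_package]
    vnf_package_csar_path = /var/lib/tacker/vnfpackages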
Change-Id: Iad391f35105e79fa9319502256528990915df9b7
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Closes-Bug: #1845142
This also enables Placement when Zun is enabled, as Kolla Ansible
already does with Nova.
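A sketch of the resulting default (exact expression may differ):

    enable_placement: "{{ enable_nova | bool or enable_zun | bool }}"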
Change-Id: Id2a09f702e8503b49d2b9e73e06b2ce9f4d168a9
Closes-bug: #1840573
Add documentation about deploying nova with multiple cells.
Change-Id: I89ee276917e5b9170746e07b7f644c7593b03da1
Depends-On: https://review.opendev.org/#/c/675659/
Related: blueprint bp/support-nova-cells
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add address family (AF) config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
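A sketch of the intended usage in vars (variable names, port and URL are
illustrative):

    api_interface_address: "{{ 'api' | kolla_address }}"
    some_url: "http://{{ api_interface_address | put_address_in_context('url') }}:5000"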
Other changes:
globals.yml - mention just IP in comment
prechecks/port_checks (api_intf) - kolla_address handles validation
3x interface conditional (swift configs: replication/storage)
2x interface variable definition with hostname
(haproxy listens; api intf)
1x interface variable definition with hostname with bifrost exclusion
(baremetal pre-install /etc/hosts; api intf)
neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
basic multinode source CI job for IPv6
prechecks for rabbitmq and qdrouterd use proper NSS database now
MariaDB Galera Cluster WSREP SST mariabackup workaround
(socat and IPv6)
Ceph naming workaround in CI
TODO: probably needs documenting
RabbitMQ IPv6-only proto_dist
Ceph ms switch to IPv6 mode
Remove neutron-server ml2_type_vxlan/vxlan_group setting
as it is not used (let's avoid any confusion)
and could break setups without proper multicast routing
if it started working (also IPv4-only)
haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
ovs-dpdk grabs the IPv4 network address (with prefix length / subnet mask)
not supported, invalid by default because neutron_external has no address
No idea whether ovs-dpdk works at all at the moment.
ml2 for xenapi
Xen is not supported very well.
This would require working with XenAPI facts.
rp_filter setting
This would require meddling with ip6tables (there is no sysctl param).
By default nothing is dropped.
Unlikely we really need it.
ironic dnsmasq is configured IPv4-only
dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
One cannot use an IPv6 address to reference the image for Docker like we
currently do, see: https://github.com/moby/moby/issues/39033
(docker_registry; docker API 400 - invalid reference format)
workaround: use hostname/FQDN
RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
This is due to old RabbitMQ versions available in images.
IPv4 is preferred by default and may fail in the IPv6-only scenario.
This should be no problem in real life as IPv6-only is indeed IPv6-only.
Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
no longer be relevant as we supply all the necessary config.
See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Sphinx 1.8 introduced [1] the '--keep-going' argument which, as its name
suggests, keeps the build running when it encounters non-fatal errors.
This is exceptionally useful in avoiding a continuous edit-build loop
when undertaking large doc reworks where multiple errors may be
introduced.
[1] https://github.com/sphinx-doc/sphinx/commit/e3483e9b045
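Example invocation (combined with -W, which turns warnings into errors;
paths illustrative):

    sphinx-build -W --keep-going -b html doc/source doc/build/html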
Change-Id: I405812a0039274139e055c54ab7b451dc753c842
This is to avoid split-brain.
This change also adds relevant docs that sort out the
HA/quorum questions.
Change-Id: I9a8c2ec4dbbd0318beb488548b2cde8f4e487dc1
Closes-Bug: #1837761
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Adds a top-level guide for Nova, with links off to the various virt
driver guides.
Generalises the libvirt TLS guide into a libvirt guide, and adds info on
hardware virtualisation and qemu vs. kvm.
Adds information on configuring consoles.
Change-Id: I36beaaee313bdbc4bcf8cc15c41dda245a5a81ba
Add coordination backend configuration to designate.conf, which is
required in multinode environments. Fixes this warning from designate:
WARNING designate.coordination [-] No coordination backend configured,
assuming we are the only worker. Please configure a coordination backend
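A sketch of the resulting designate.conf section, assuming Redis as the
backend (URL illustrative):

    [coordination]
    backend_url = redis://192.0.2.10:6379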
Change-Id: I23c4d2de7e3f9368795c423000a4f9a6c3a431e2
Closes-Bug: #1843842
Related-Bug: #1840070
Sometimes, as cloud admins, we want to update only the code that is
running in a cloud, without needing to do anything else. Add an action
to kolla-ansible that allows us to do that.
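Example usage:

    kolla-ansible -i <inventory> deploy-containers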
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
The current tasks use a hardcoded list, deploying only the required files.
When using multiple custom policies, additional object-*.builder and
object*.gz files need to be deployed as well.
This adds a new, default-empty variable that can be overridden when needed.
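A sketch of overriding it (variable and file names below are hypothetical -
check the role defaults for the actual ones):

    # hypothetical names, for illustration only
    swift_extra_ring_files:
      - object-1.builder
      - object-1.ring.gz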
Change-Id: I29c8e349c7cc83e3a2e01ff702d235a0cd97340e
Closes-Bug: #1844752
To securely support live migration between compute nodes, we should enable
TLS with certificate authentication, instead of TCP with no authentication.
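A sketch of enabling this in globals.yml (verify the variable name against
the libvirt TLS docs):

    libvirt_tls: "yes"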
Implements: blueprint libvirt-tls
Change-Id: I22ea6233933c840b853fdcc8e03400b2bf577271
The main motivation here is to document a mechanism which can be
used to configure Nova cells on a per-cell basis without introducing
a myriad of additional locations to put config files. The
following changes are made:
- Remove the note about only ini files being supported because
merge_yaml is now used
- Expand on supported config file locations
- Add a section on using conditionals in the config file
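A sketch of such a conditional, e.g. in a nova.conf override (group name
and option are hypothetical):

    {% if inventory_hostname in groups['cell0001'] %}
    [DEFAULT]
    debug = True
    {% endif %}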
Partially Implements: blueprint support-nova-cells
Change-Id: I92599e501506fdacaf3adb94cc6fffcf6fea2af3
These filters can be used to capture a lot of the logic that we
currently have in 'when' statements, about which services are enabled
for a particular host.
In order to use these filters, it is necessary to install the
kolla_ansible python module, and not just the dependencies listed in
requirements.txt. The CI tests and the quickstart install-from-source
documentation have been updated accordingly.
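For example, from a kolla-ansible source checkout:

    pip install .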
Ansible is not currently in OpenStack global requirements, so for unit
tests we avoid a direct dependency on Ansible and provide fakes where
necessary.
Change-Id: Ib91cac3c28e2b5a834c9746b1d2236a309529556
This commit adds the necessary configuration to the Swift account,
container and object configuration files to enable the Swift recon
cli.
In order to give the object server on each Swift host access to the
recon files, a Docker volume is mounted into each container that
generates them. The volume is then mounted read-only into the object
server container. Note that multiple containers append to the same
file. This should not be a problem since Swift uses a lock when
appending.
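Once deployed, recon can be queried from the object server container,
for example (container name may differ):

    docker exec swift_object_server swift-recon --all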
Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929
Co-authored-by: Doug Szumski <doug@stackhpc.com>
After the integration with placement [1], we need to configure how
zun-compute is going to work with nova-compute.
* If zun-compute and nova-compute run on the same compute node,
we need to set 'host_shared_with_nova' as true so that Zun
will use the resource provider (compute node) created by nova.
In this mode, containers and VMs could claim allocations against
the same resource provider.
* If zun-compute runs on a node without nova-compute, no extra
configuration is needed. By default, each zun-compute will create
a resource provider in placement to represent the compute node
it manages.
[1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
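A sketch of the shared-node override, e.g. in /etc/kolla/config/zun.conf
(assuming the [compute] section - check Zun's config reference):

    [compute]
    host_shared_with_nova = true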
Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
Instead of changing the Docker daemon command line, let's change the
Docker config via the /etc/docker/daemon.json file, as it should be.
Custom Docker options can be set with the 'docker_custom_config' variable.
The old 'docker_custom_option' is still present but should be avoided.
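For example, in globals.yml (values illustrative):

    docker_custom_config:
      debug: true
      log-opts:
        max-file: "5"
        max-size: "50m"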
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Change-Id: I1215e04ec15b01c0b43bac8c0e81293f6724f278
ceph-ansible by default generates what we call nova.keyring as
openstack.keyring - adding a note so as not to confuse users.
Change-Id: I3992a037ab8e7947e35521b5c721a89bd954fdcd
This review is the first in a series of patches and it introduces
optional encryption for internal OpenStack endpoints, implementing part
of the add-ssl-internal-network spec.
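A sketch of enabling it in globals.yml (variable name may differ - check
the TLS guide):

    kolla_enable_tls_internal: "yes"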
Change-Id: I6589751626486279bf24725f22e71da8cd7f0a43