This is part of an effort to use environment variables for configuration of
paths on the local Ansible control host, and Ansible variables for
configuration of paths on remote hosts (seed, seed-hypervisor, overcloud). The
base_path variable is used to set sensible defaults for image_cache_path,
source_checkout_path and virtualenv_path.
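As a concrete sketch, the defaults could be overridden in kayobe
configuration as follows (the derived paths shown are assumptions based on
the pattern described, not confirmed defaults):
# Illustrative only: the derived defaults are assumptions.
base_path: /opt/kayobe
image_cache_path: "{{ base_path }}/images"
source_checkout_path: "{{ base_path }}/src"
virtualenv_path: "{{ base_path }}/venvs"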
Kolla (container images) and kolla-ansible (container deployment) are
separate concerns, and should be treated as such. Kolla-ansible-specific
configuration variables have been added to replace ones that were previously
shared between the two projects:
kolla_venv -> kolla_ansible_venv
kolla_ctl_install_type -> kolla_ansible_ctl_install_type
Also, we introduce specific variables for the source code repository
checkout paths, which were previously both based on
source_checkout_path:
kolla_source_path
kolla_ansible_source_path
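A hedged sketch of how these might be set in kayobe configuration (the
values shown are illustrative, not defaults):
# Illustrative values only.
kolla_source_path: "{{ source_checkout_path }}/kolla"
kolla_ansible_source_path: "{{ source_checkout_path }}/kolla-ansible"
kolla_ansible_venv: "{{ virtualenv_path }}/kolla-ansible"
kolla_ansible_ctl_install_type: source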
These changes help us to cleanly separate the configuration of paths on
the local (Ansible control) host, from those on the managed (target)
hosts. This is important because the local paths may be specific to the
environment in which the user is running kayobe, but the remote paths
are relatively fixed and specific to the cluster.
This command performs any changes necessary on the hosts to prepare the
control plane for an upgrade.
Currently this performs a workaround for issue #14, RabbitMQ upgrade failure.
We clear stale entries from /etc/hosts on the overcloud hosts and from the
rabbitmq containers, which allows the upgrade to complete successfully. The
source of the stale entries is currently unknown.
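As a sketch of the shape of such a workaround (hypothetical; the actual
tasks may differ), a stale entry could be removed with an Ansible task like:
# Hypothetical task; stale_ip is a placeholder variable.
- name: Remove stale entries from /etc/hosts
  lineinfile:
    path: /etc/hosts
    regexp: '^{{ stale_ip }}\s'
    state: absent
  become: True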
Currently we combine the switch interface descriptions in the
switch_interface_config variable with the
kolla_neutron_ml2_generic_switch_trunk_port_hosts variable to generate, for
each switch, a list of ports that should be added as trunk ports to all
networks.
This change allows switch interfaces to be given an 'ngs_trunk_port' boolean
field which can be used to exclude matching interfaces from the list. This
may be useful in cases where a host has multiple interfaces but only some
should be added as trunk ports.
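For example (switch interface names and descriptions are hypothetical):
# Hypothetical switch interface configuration.
switch_interface_config:
  xe-0/0/1:
    description: compute0
  xe-0/0/2:
    description: compute0-bmc
    ngs_trunk_port: False  # exclude this interface from the trunk port list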
User accounts are configured during the following commands:
kayobe seed hypervisor host configure
kayobe seed host configure
kayobe overcloud host configure
The users are defined by the following variables:
seed_hypervisor_users
seed_users
controller_users
monitoring_users
The format required is described in the singleplatform-eng.users role
on Galaxy.
Any additional control plane hosts not in the controllers or monitoring
groups should define a 'users' variable.
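For example, a controller user entry might look like the following (field
names follow the role's documented format; the values are illustrative):
controller_users:
  - username: alice
    name: Alice Example
    groups: ['wheel']
    ssh_key:
      - "ssh-rsa AAAAEXAMPLEKEY alice@example.com"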
Neutron network services are now mapped to hosts in the network group,
so it is these hosts' switch ports that should be added as trunk ports to
VLAN networks by the networking-generic-switch neutron ML2 mechanism driver,
rather than the controllers'.
Previously a URL based on the provisioning network IP was being advertised to
nodes during provisioning. The issue here is that the API server might not be
listening on the provisioning network. Instead we advertise the internal
network endpoint and assume that any routes required to reach it have been
created elsewhere.
When the SELinux Python module is not installed on the host (in this
instance the control host), Ansible sets the ansible_selinux fact to
False. Also, the item of the fact to check is 'status' rather than 'mode'.
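A minimal sketch of a correctly guarded task (the task shown is
hypothetical; the point is the condition):
- name: Apply SELinux configuration  # hypothetical task
  selinux:
    state: disabled
  when: ansible_selinux and ansible_selinux.status == 'enabled'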
These allow us to use different gateways for compute nodes on the inspection
and provisioning networks than are used by the control plane hosts that also
access these networks.
In environments without Swift we are currently unable to store hardware
introspection data. The inspection_store container runs an nginx server
exposing a restricted, Swift-like HTTP API via WebDAV, which supports
upload and retrieval of introspection data.
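Assuming the variable naming follows the pattern of other kayobe services
(both names and the port value here are assumptions, not confirmed):
inspection_store_enabled: True
inspection_store_port: 8080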
These are group-specific, and configured via the following variables:
controller_sysctl_parameters
monitoring_sysctl_parameters
seed_sysctl_parameters
seed_hypervisor_sysctl_parameters
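For example (the parameter and value are illustrative, and the format is
assumed to be a dict of sysctl names to values):
controller_sysctl_parameters:
  net.ipv4.ip_forward: 1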
This allows a deployer to customise their inventory at various levels, by
providing a custom inventory template for one or more of the sections of the
inventory.
* Top level groups define the roles of hosts, e.g. controller or compute.
* Components define groups of services, e.g. nova or ironic.
* Services define single containers, e.g. nova-compute or ironic-api.
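For example, a deployer might append a custom services section (the variable
name is assumed to follow this pattern, and the group is hypothetical):
kolla_overcloud_inventory_custom_services: |
  [my-service:children]
  controllers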
This allows for the full set of interfaces to be overridden by setting one
of these variables, rather than simply extending the default list via
<type>_extra_network_interfaces.
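For example, to fully override the list for controllers (assuming the
variables follow the <type>_network_interfaces pattern; the network names are
illustrative):
controller_network_interfaces:
  - "{{ internal_net_name }}"
  - "{{ storage_net_name }}"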
Previously, the external network carried both public API traffic and
neutron external network traffic. In some cases it is useful to separate
these networks. The public network now carries the public API traffic,
leaving the external network to carry neutron external network traffic
alone. For backwards compatibility, the public network defaults to the
external network.
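In configuration terms, the backwards-compatible default might be expressed
as (assuming the usual <network>_net_name variables):
# The public network defaults to the external network.
public_net_name: "{{ external_net_name }}"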
The CLI command is:
kayobe overcloud introspection data save [--output-dir <dir>] [--output-format <format>]
This command will save introspection data collected by the seed host's ironic
inspector service to the control host for analysis.
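For example (assuming JSON is one of the accepted formats):
kayobe overcloud introspection data save --output-dir ~/introspection-data --output-format json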
The MichaelRigart.interfaces role has now been updated to support more complex
network topologies, including VLAN subinterfaces of bridges, and bridges with
a bonded interface as a port.
Rather than specifying kernel command line arguments directly, configuration of
IPA introspection data collectors and benchmarks is now possible by extending
lists of collector (ipa_collect_extra) and benchmark (ipa_benchmark_extra)
names. LLDP collection is now controlled via a flag, ipa_collect_lldp.
Additional kernel arguments may be passed via ipa_kernel_options_extra.
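For example (collector, benchmark, and kernel option values here are
illustrative assumptions):
# Illustrative values only.
ipa_collect_extra:
  - logs
ipa_benchmark_extra:
  - mem
ipa_collect_lldp: True
ipa_kernel_options_extra:
  - "console=ttyS0"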
Overcloud deployment images can now be built via:
kayobe overcloud deployment image build
This should be done prior to running kayobe overcloud service deploy.
In order to build IPA images, the ipa_build_images variable should be
set to True. In this case, these images will be used by the overcloud's
ironic inspector service during hardware inspection, and by ironic
during provisioning.
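For example, in kayobe configuration:
ipa_build_images: True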
The CLI command is:
kayobe seed deployment image build
This command will build Ironic Python Agent (IPA) kernel and ramdisk images
using the Diskimage Builder (DIB) ironic-agent element. The built images will
be copied to the appropriate location in the bifrost_deploy container on the
seed.
This allows us to build a customised image with site- or hardware-specific
extensions.
When using delegate_to with an IP address, Ansible does not use the
corresponding host in the inventory, and so does not respect the ansible_user
variable of the delegate host. Here we revert to using the delegate host's
inventory hostname, and force Ansible to respect the ansible_host variable of
that host by setting the variable in the task explicitly.
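A sketch of the resulting pattern (the task and variables are hypothetical):
# Hypothetical task illustrating the delegation pattern.
- name: Run a command on the delegate host
  command: /bin/true
  delegate_to: "{{ delegate_hostname }}"  # inventory hostname, not an IP
  vars:
    ansible_host: "{{ hostvars[delegate_hostname].ansible_host | default(delegate_hostname) }}"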