Kolla image build configuration previously shared kolla-ansible's
kolla_config_path; it now uses kolla_build_config_path.
Also adds a variable, config_path, which sets a base path for
configuration on the remote hosts.
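For example, these might be set in kayobe configuration as follows (the
paths are illustrative, not the defaults):

    ---
    # Base path for configuration on the remote hosts.
    config_path: /opt/kayobe/etc

    # Path to kolla image build configuration.
    kolla_build_config_path: /opt/kayobe/etc/kolla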
Allow the physical network interface configuration to be limited to a
subset of interfaces, either by interface name or by switch interface
description. This is done via:
kayobe physical network configure --interface-limit interface1,interface2
or
kayobe physical network configure --interface-description-limit host1,host2
Fixes: #25
Modifies the default value of inspector_manage_firewall from False to
True. Management of the firewall by ironic inspector is important to
ensure that inspector's DHCP server does not make DHCP offers to nodes
during provisioning.
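Deployers relying on the previous behaviour can override the new default
in kayobe configuration; a minimal sketch (the file path is
illustrative):

    # etc/kayobe/inspector.yml
    ---
    # Whether ironic inspector manages the firewall.
    inspector_manage_firewall: False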
Kolla (container images) and kolla-ansible (container deployment) are
separate concerns, and should be treated as such. Configuration
variables have been added for kolla-ansible which were previously shared
between the two projects:
kolla_venv -> kolla_ansible_venv
kolla_ctl_install_type -> kolla_ansible_ctl_install_type
Also, we introduce specific variables for the source code repository
checkout paths, which were previously both based on
source_checkout_path:
kolla_source_path
kolla_ansible_source_path
These changes help us to cleanly separate the configuration of paths on
the local (Ansible control) host from those on the managed (target)
hosts. This is important because the local paths may be specific to the
environment in which the user is running kayobe, whereas the remote
paths are relatively fixed and specific to the cluster.
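As a sketch, the separated variables might be set as follows (all values
are illustrative rather than defaults):

    ---
    # Source checkout paths on the local (Ansible control) host.
    kolla_source_path: /opt/src/kolla
    kolla_ansible_source_path: /opt/src/kolla-ansible

    # Kolla-ansible virtualenv and installation type.
    kolla_ansible_venv: /opt/venvs/kolla-ansible
    kolla_ansible_ctl_install_type: source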
This command performs the changes necessary on the host to prepare the
control plane for an upgrade.
Currently this performs a workaround for issue #14, a RabbitMQ upgrade
failure.
We clear stale entries from /etc/hosts on the overcloud hosts and from the
rabbitmq containers, which allows the upgrade to complete successfully. The
source of the stale entries is currently unknown.
User accounts are configured during the following commands:
kayobe seed hypervisor host configure
kayobe seed host configure
kayobe overcloud host configure
The users are defined by the following variables:
seed_hypervisor_users
seed_users
controller_users
monitoring_users
The required format is described by the singleplatform-eng.users role on
Ansible Galaxy, and sketched below.
Any additional control plane hosts not in the controllers or monitoring
groups should define a 'users' variable.
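As a sketch, a seed_users entry following that role's format might look
like this (all field values are illustrative):

    ---
    seed_users:
      - username: alice
        name: Alice Example
        groups:
          - wheel
        ssh_key:
          - "ssh-rsa AAAA... alice@example.com"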
These allow compute nodes on the inspection and provisioning networks to
use different gateways from the control plane hosts that also access
these networks.
These are group-specific and are configured via the following variables
(see the sketch after this list):
controller_sysctl_parameters
monitoring_sysctl_parameters
seed_sysctl_parameters
seed_hypervisor_sysctl_parameters
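A minimal sketch, assuming the simple parameter/value mapping used by
common sysctl roles (the parameter and value are illustrative):

    ---
    controller_sysctl_parameters:
      net.ipv4.ip_forward: 1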
This allows a deployer to customise their inventory at various levels by
providing a custom inventory template for one or more sections of the
inventory.
* Top level groups define the roles of hosts, e.g. controller or compute.
* Components define groups of services, e.g. nova or ironic.
* Services define single containers, e.g. nova-compute or ironic-api.
Previously, the external network carried both public API traffic and
neutron external network traffic. In some cases it is useful to separate
these networks. The public network now carries the public API traffic,
leaving the external network to carry neutron external network traffic
alone. For backwards compatibility, the public network defaults to the
external network.
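A deployer wishing to separate the two can define a dedicated public
network; a minimal sketch, assuming kayobe's usual <network>_net_name
naming (the network name is illustrative):

    ---
    # Name of the network used for public API traffic. If unset, this
    # defaults to the external network for backwards compatibility.
    public_net_name: public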
To avoid a reboot when running the kayobe scripts, we disable SELinux
before running them. We do this using the provision reload plugin, which
allows us to perform a vagrant reload while provisioning the VM.
When doing vagrant halt then vagrant up, we want the system to keep
working. The easiest way to do this is to use a VirtualBox plugin to
install the guest tools, then use the tools to sync the /vagrant
directory, rather than falling back to rsync on every boot of the VM.
The rsync loses all writes since the last boot, forcing a full
reprovision.
The CLI command is:
kayobe overcloud introspection data save [--output-dir <dir>] [--output-format <format>]
This command will save introspection data collected by the seed host's ironic
inspector service to the control host for analysis.
Overcloud deployment images can now be built via:
kayobe overcloud deployment image build
This should be done prior to running kayobe overcloud service deploy.
In order to build IPA images, the ipa_build_images variable should be
set to True. In this case, these images will be used by the overcloud's
ironic inspector service during hardware inspection, and by ironic
during provisioning.
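A minimal sketch of enabling the IPA image build in kayobe configuration
(the file path is illustrative):

    # etc/kayobe/ipa.yml
    ---
    # Whether to build IPA images locally.
    ipa_build_images: True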
The CLI command is:
kayobe seed deployment image build
This command will build Ironic Python Agent (IPA) kernel and ramdisk images
using the Diskimage Builder (DIB) ironic-agent element. The built images will
be copied to the appropriate location in the bifrost_deploy container on the
seed.
This allows us to build a customised image with site- or
hardware-specific extensions.
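As an illustration only, such extensions might be supplied as extra
Diskimage Builder elements; the variable name below is hypothetical and
is shown to indicate the shape such configuration could take:

    ---
    # Hypothetical variable: additional DIB elements to include in the
    # IPA image build, e.g. for site-specific firmware or drivers.
    ipa_build_dib_elements_extra:
      - site-firmware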
A network may be assigned a physical network by defining a variable of
the form <network>_physical_network. Currently this is not used by
kayobe, but may be referenced in configuration, e.g. when setting
neutron_vlan_ranges.
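For example, for a network named example_net (all names and VLAN ranges
are illustrative):

    ---
    # Assign the network to physical network physnet1.
    example_net_physical_network: physnet1

    # The physical network may then be referenced elsewhere, e.g. using
    # neutron's <physnet>:<min_vlan>:<max_vlan> format.
    neutron_vlan_ranges: "physnet1:1000:2000"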