Adds these commands:
kayobe overcloud service configuration generate
kayobe overcloud service configuration save
The former generates configuration for kolla-ansible; the latter copies
kolla-ansible configuration from the overcloud hosts to the Ansible control
host.
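A typical sequence might look like the following (the flags shown here are
assumptions about the CLI, included for illustration only):

    # Generate kolla-ansible configuration for the overcloud hosts.
    kayobe overcloud service configuration generate --node-config-dir /etc/kolla
    # Copy the kolla-ansible configuration back to the control host.
    kayobe overcloud service configuration save --output-dir $PWD/kolla-config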
Kolla-ansible requires use of the docker python package, which is
incompatible with the legacy docker-py python package. We install the
former and remove the latter.
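On an affected host the fix amounts to something like the following (a
sketch; the actual change is implemented as Ansible tasks):

    pip uninstall -y docker-py
    pip install docker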
The restart handler task fails unless the image argument is passed to
the docker_container module. This shouldn't be necessary, as it should be
possible to identify the container by name alone.
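A minimal sketch of the affected pattern (the task and variable names here
are illustrative, not the actual handler):

    - name: Restart container
      docker_container:
        name: "{{ container_name }}"
        # Workaround: image should not be required to restart an existing
        # container identified by name, but the task fails without it.
        image: "{{ container_image }}"
        state: started
        restart: yes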
Kayobe has a dependency on ansible, which is currently capped at 2.2. Docker
has renamed the docker-py python module to docker, making some
backwards-incompatible API changes along the way. Kolla-ansible requires us to
use docker (not docker-py) on the target hosts, but this is not supported by
the docker_container and docker_image ansible modules that kayobe uses with
ansible 2.2. Upgrading to ansible 2.3 allows us to support the new docker
python package.
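In terms of dependency pins the change is roughly as follows (the version
specifiers are illustrative, not the exact requirements):

    ansible>=2.3,<2.4   # previously capped at 2.2
    docker              # replaces the legacy docker-py package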
For kolla, kolla-ansible and bifrost, we are now using a stackhpc/<release>
branch naming scheme rather than referencing tags in forked repos. It is
recommended to pin these branches to specific tags for repeatable builds.
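For example, tags could be pinned in kayobe configuration like this (the
variable names and values are assumptions based on the naming above):

    # Pin kolla and kolla-ansible to tags for repeatable builds.
    kolla_source_version: 5.0.0
    kolla_ansible_source_version: 5.0.0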
The Ironic Python Agent source version and tarball URLs have also been updated to
point to the pike release artifacts.
Finally, the kolla_openstack_release variable has been set to 5.0.0, matching
the second kolla-ansible pike release candidate on which the stackhpc/pike
branches are currently based.
This is part of an effort to use environment variables for configuration of
paths on the local ansible control host, and ansible variables for
configuration of paths on remote hosts (seed, seed-hypervisor, overcloud). The
base_path variable is used to set sensible defaults for image_cache_path,
source_checkout_path and virtualenv_path.
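A sketch of the resulting defaults (the subdirectory names are assumptions):

    base_path: /opt/kayobe
    image_cache_path: "{{ base_path }}/images"
    source_checkout_path: "{{ base_path }}/src"
    virtualenv_path: "{{ base_path }}/venvs"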
Kolla (container images) and kolla-ansible (container deployment) are
separate concerns, and should be treated as such. Configuration
variables have been added for kolla-ansible which were previously shared
between the two projects:
kolla_venv -> kolla_ansible_venv
kolla_ctl_install_type -> kolla_ansible_ctl_install_type
Also, we introduce specific variables for the source code repository
checkout paths, which were previously both based on
source_checkout_path:
kolla_source_path
kolla_ansible_source_path
These changes help us to cleanly separate the configuration of paths on
the local (Ansible control) host, from those on the managed (target)
hosts. This is important because the local paths may be specific to the
environment in which the user is running kayobe, but the remote paths
are relatively fixed and specific to the cluster.
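Illustrative defaults for the new variables (only the names come from this
change; the values are assumptions):

    kolla_ansible_venv: "{{ virtualenv_path }}/kolla-ansible"
    kolla_ansible_ctl_install_type: source
    kolla_source_path: "{{ source_checkout_path }}/kolla"
    kolla_ansible_source_path: "{{ source_checkout_path }}/kolla-ansible"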
This command performs the changes necessary on the host to prepare the control
plane for an upgrade.
Currently this means working around issue #14, a RabbitMQ upgrade failure.
We clear stale entries from /etc/hosts on the overcloud hosts and from the
rabbitmq containers, which allows the upgrade to complete successfully. The
source of the stale entries is currently unknown.
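A hypothetical sketch of the host-side cleanup (the real tasks may differ;
stale_hosts_regexp is an invented variable):

    - name: Remove stale entries from /etc/hosts
      become: true
      lineinfile:
        path: /etc/hosts
        regexp: "{{ stale_hosts_regexp }}"
        state: absent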
Prechecks may now be skipped by passing --skip-prechecks. This applies to the
following commands:
kayobe overcloud service deploy
kayobe overcloud service reconfigure
kayobe overcloud service upgrade
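For example:

    kayobe overcloud service deploy --skip-prechecks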
Currently we use the switch interface descriptions in the
switch_interface_config variable with the
kolla_neutron_ml2_generic_switch_trunk_port_hosts variable to generate a list
of ports for each switch that should be added as trunk ports to all networks.
This change allows switch interfaces to be given an 'ngs_trunk_port' boolean
field which can be used to exclude matching interfaces from the list. This
may be useful in cases where a host has multiple interfaces but only some
should be added as trunk ports.
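An illustrative entry (the interface name and description are assumptions):

    switch_interface_config:
      xe-1/0/1:
        description: compute-0
        # Exclude this interface from the generated trunk port list.
        ngs_trunk_port: false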
User accounts are configured during the following commands:
kayobe seed hypervisor host configure
kayobe seed host configure
kayobe overcloud host configure
The users are defined by the following variables:
seed_hypervisor_users
seed_users
controller_users
monitoring_users
The format required is described in the singleplatform-eng.users role
on Galaxy.
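An illustrative entry (the user details are assumptions; see the role
documentation for the full format):

    controller_users:
      - username: alice
        name: Alice Example
        groups:
          - wheel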
Any additional control plane hosts not in the controllers or monitoring
groups should define a 'users' variable.
Neutron network services are now mapped to hosts in the network group,
so it is these hosts that should be added as trunk ports to VLAN networks
by the networking-generic-switch neutron ML2 mechanism driver, rather
than the controllers.
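A sketch of the corresponding default (the exact expression is an assumption):

    kolla_neutron_ml2_generic_switch_trunk_port_hosts: "{{ groups['network'] }}"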
Previously a URL based on the provisioning network IP was being advertised to
nodes during provisioning. The issue here is that the API server might not be
listening on the provisioning network. Instead, we advertise the internal
network endpoint and assume that, if any routes are required to reach it, they
have been created elsewhere.
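A sketch of the idea using kayobe's net_ip filter (the variable being set
here is an assumption; 6385 is the conventional ironic API port):

    # Advertise the internal network API endpoint to nodes, rather than an
    # address on the provisioning network.
    ironic_api_url: "http://{{ internal_net_name | net_ip }}:6385"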
When the SELinux python module is not installed on the host (in this
instance, the control host), ansible sets the ansible_selinux fact to
False. Also, the correct item to check is status rather than mode.
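A sketch of the corrected check (the task shown is illustrative):

    - name: Ensure SELinux is disabled
      become: true
      selinux:
        state: disabled
      when:
        - ansible_selinux                       # False when the module is absent.
        - ansible_selinux.status != 'disabled'  # Check status, not mode.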
These variables allow compute nodes on the inspection and provisioning
networks to use different gateways from the control plane hosts that also
access these networks.