This change modifies the templates to dynamically define the VIPs
based on network_data.yaml. If a network is defined and marked
with "vip: true" in network_data.yaml, it will be included in the
overcloud.yaml which defines the deployment-level resources.
This should make it possible to create custom networks and
use them for services which use high-availability through VIPs.
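For illustration, a network_data.yaml entry of roughly this shape would
get a deployment-level VIP generated for it (the network name, subnet
and allocation pool below are made up; the vip flag is the relevant
part):

  - name: StorageNFS
    name_lower: storage_nfs
    vip: true
    ip_subnet: '172.17.5.0/24'
    allocation_pools: [{'start': '172.17.5.4', 'end': '172.17.5.250'}]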
The FQDN map for VIPs on isolated networks is now also produced
dynamically, to match overcloud.j2.yaml.
Partially-implements: blueprint composable-networks
There is logic in nova-base.yaml that depends on the default for
this parameter being '', and the nova-compute service only needs it
set to auto during upgrade. That will be done by  anyway, so it
doesn't matter what the default is. It's also not clear to me that
the nova-compute task is even needed now that we're post-Ocata, but
that's not a change I feel comfortable making.
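As a rough sketch of the kind of check that relies on the '' default
(the parameter name and exact wiring are assumptions, not taken from
this change), a Heat condition keyed on the empty string looks like:

  parameters:
    UpgradeLevelNovaCompute:
      type: string
      default: ''
  conditions:
    compute_upgrade_level_empty:
      equals: [{get_param: UpgradeLevelNovaCompute}, '']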
These are mostly the low-hanging fruit that only required a few
minor changes to fix. There are more that require a lot of changes
or might be more controversial; those will be done later.
The key_name default is ignored because the parameter is used in
some mutually exclusive environments where the default doesn't
need to be the same.
Docker refuses to start the container because config_files/src-ceph:ro
is mounted at both /etc/ceph and config-data/puppet-generated/ceph.
The mount to /var/lib/config-data/puppet-generated/ceph should have
been removed in commit ed0b77ff93.
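Reconstructed from the description above (the exact host paths are
assumptions), the conflict boils down to two volume entries mapping
onto the same src-ceph mount point inside the container, which Docker
rejects as a duplicate mount:

  volumes:
    - /etc/ceph:/var/lib/kolla/config_files/src-ceph:ro
    - /var/lib/config-data/puppet-generated/ceph:/var/lib/kolla/config_files/src-ceph:ro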
The cron containers need to run as root in order to create PID files.
Additionally, the keystone_cron container was misconfigured to
use /usr/bin/cron instead of the correct /usr/bin/crond.
Additionally, we have an issue where the Kolla keystone container has
hard-coded ARGS for the docker container, which causes -DFOREGROUND
(an Apache-specific argument) to get appended onto the kolla_start
command, thus causing crond to fail to start up correctly. This
works around the issue by overriding the command and calling
kolla_set_configs manually. Once we fix this in Kolla we can drop
this workaround.
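The workaround amounts to a container definition along these lines (a
sketch; the image reference is a placeholder and other fields are
omitted):

  keystone_cron:
    image: <keystone image>   # placeholder
    user: root                # root so crond can write its PID file
    restart: always
    # call kolla_set_configs ourselves, then run crond in the foreground
    command: ['/bin/bash', '-c', '/usr/local/bin/kolla_set_configs && /usr/bin/crond -n']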
With OvS 2.7, DPDK is initialized immediately after the dpdk-init
flag is set. DPDK requires the hugepages configuration to be present
in the kernel args, which only takes effect after a reboot. This
patch reboots the node after applying the kernel args. Once the node
is rebooted, DPDK will be enabled and the deployment continues.
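For example, the hugepages setup has to be passed as kernel args along
these lines before dpdk-init can succeed (KernelArgs is the
tripleo-heat-templates parameter for this; the role name and values
here are illustrative only):

  parameter_defaults:
    ComputeOvsDpdkParameters:
      KernelArgs: "default_hugepagesz=1GB hugepagesz=1G hugepages=16 iommu=pt intel_iommu=on"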
Running puppet apply with --logdest syslog results in all the output
being redirected to syslog. You get no error messages. In the case
where this fails, the subsequent debug task shows nothing useful as
there was no stdout/stderr.
Also pass --logdest console to docker-puppet's puppet apply so that
we get the output for the debug task.
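With both destinations passed, the invocation inside docker-puppet
ends up roughly like this (the manifest path and any remaining flags
are placeholders):

  puppet apply --logdest syslog --logdest console /etc/config.pp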
Signed-off-by: Bogdan Dobrelya <firstname.lastname@example.org>
networking-odl no longer supports the network-topology port
binding controller and instead now relies on a pseudo-agent binding
controller. This means that each OVS node must be configured with
host configuration in OVSDB about which VIF types, network types,
functions, etc. that this OVS node supports. The end result is that
this affects where nova and neutron will schedule instances.
- Modifies the default port binding controller to use pseudo-agent
- Adds the necessary per-role parameters to be able to configure host
config on a per-role basis, to allow for heterogeneous compute nodes
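The per-node host configuration lives in OVSDB external_ids;
conceptually it is set along these lines (the keys follow the
networking-odl pseudo-agent convention, the values are made up):

  ovs-vsctl set Open_vSwitch . \
    external_ids:odl_os_hostconfig_hostid="overcloud-novacompute-0"
  ovs-vsctl set Open_vSwitch . \
    external_ids:odl_os_hostconfig_config_odl_l2='{"allowed_network_types":
    ["vlan", "vxlan"], "bridge_mappings": {"datacentre": "br-ex"},
    "supported_vnic_types": [{"vnic_type": "normal", "vif_type": "ovs",
    "vif_details": {}}]}'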
Signed-off-by: Tim Rozet <email@example.com>
Presently the ovn-controller service (puppet/services/neutron-compute-plugin-ovn.yaml)
is started only on compute nodes. But for the cases where the controller nodes
provide the north/south traffic, we need the ovn-controller service running
on the controller nodes as well.
- Renames neutron-compute-plugin-ovn.yaml to ovn-controller.yaml, which makes
more sense, and sets the service name to 'ovn-controller'.
- Adds the service 'ovn-controller' to Controller and Compute roles.
- Adds the missing 'upgrade_tasks' section in ovn-dbs.yaml and ovn-controller.yaml
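With that, the roles carry the service along these lines (a trimmed
roles_data.yaml sketch; all other services omitted):

  - name: Controller
    ServicesDefault:
      - OS::TripleO::Services::OVNDBs
      - OS::TripleO::Services::OVNController
  - name: Compute
    ServicesDefault:
      - OS::TripleO::Services::OVNController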