It seems UpdateIdentifier is an overloaded parameter: it is used
both to trigger package updates in the minor update case and to
trigger the upgrade steps during a major upgrade. I'm not sure
it's appropriate to change either of the descriptions as a result,
so for the moment that is added to the exclusion list.
Change-Id: Ied36cf259f6a6e5c8cfa7a01722fb7fda6900976
Partial-Bug: 1700664
To be consistent with all other SoftwareDeployment's in
tripleo-heat-templates, this sets the name property on
the deployments where it was missing.
Change-Id: I8bc062d2af93acead240bd5e473ea385b2bf6cf2
A new role ComputeOvsDpdk has been added to avoid manual
roles_data creation, and the DPDK parameters have been cleaned
up in line with the refactored code.
Change-Id: I16dac69609c98194c2504ff067258fa14363d4f1
Checks for an existing /var/run/yum.pid and exits 1 with an error
message explaining why.
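A minimal sketch of the guard, assuming the path from this commit
(the error text is illustrative):
if [ -f /var/run/yum.pid ]; then
  echo "ERROR: yum appears to be running (/var/run/yum.pid exists); refusing to update" >&2
  exit 1
fi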
Change-Id: I374eeb4164a8007ae67fea2796eac109fffdef97
Closes-Bug: 1704131
To work around a yum bug with libnss we need to build the yum cache
before running the update. In fact we should have done this
regardless of the bug.
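A sketch of the intended ordering (exact yum invocations assumed):
yum makecache
yum -y update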
Change-Id: I5b2355fb8abe3c8d4b9ce9c62b9ffdba8c1e8d9d
Resolves: rhbz#1458841
Closes-Bug: #1703830
There is a Heat patch posted (via Depends-On) that resolves the issue
that caused this to be reverted. This reverts the revert, and we need
to make sure all the upgrade jobs pass before we merge this patch.
This reverts commit 69936229f4def703cd44ab164d8d1989c9fa37cb.
Closes-Bug: #1699463
implements blueprint disable-deployments
Change-Id: Iedf680fddfbfc020d301bec8837a0cb98d481eb5
The vhost sockets are created with qemu permissions, but ovs
runs with root permissions. In order to allow ovs to access the vhost
sockets, the ovs group permission is reduced from root to qemu. This is
a temporary workaround until ovs fixes the permission issue. The script
supports both the ovs 2.6 and ovs 2.7 versions.
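A hedged sketch of the intent only; the socket directory and the
group-change mechanism are assumptions (the real script differs
between ovs 2.6 and 2.7):
# Let processes in the qemu group reach the vhost-user sockets.
chgrp -R qemu /var/run/openvswitch
chmod -R g+rwx /var/run/openvswitch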
Change-Id: I172956390c19fc9824bf7590cd48bfcf6201191b
In order to deploy OpenDaylight with DPDK we need to copy the DPDK
config for OVS done in the neutron-ovs-dpdk service template, without
enabling OVS agent for compute nodes. To do this correctly, we should
inherit an openvswitch service, which is a common place to set OVS
configuration and parameters. Note: vswitch::dpdk config will be called
in prenetwork setup with ovs_dpdk_config.yaml so there is no need to
include that in the step config for neutron-ovs-dpdk-agent service or
opendaylight-ovs-dpdk.
Changes Include:
- Creates a common openvswitch service template, which in the future
will migrate to be its own service.
- Renames and fixes OVS DPDK configuration heat parameters in the
openvswitch template.
- neutron-ovs-dpdk-agent now inherits the common openvswitch template.
- Adds opendaylight-ovs-dpdk template which also inherits common ovs
template.
- Uses OVS DPDK config script to allow configuring OVS DPDK in
prenetwork config (before os-net-config runs). This has an issue
where hieradata is not present yet, so we have to redefine the heat
parameters and pass them via bash. In the future this should be
corrected.
- Adds opendaylight-dpdk environment file used to deploy an ODL + DPDK
deployment.
- Updates neutron-ovs-dpdk environment file.
Closes-Bug: 1656097
Partial-Bug: 1656096
Depends-On: I3227189691df85f265cf84bd4115d8d4c9f979f3
Change-Id: Ie80e38c2a9605d85cdf867a31b6888bfcae69e29
Signed-off-by: Tim Rozet <trozet@redhat.com>
DPDK has to be enabled in openvswitch at boot, before the network is
configured, because when the network uses DPDK ports OvS must already
be ready to handle DPDK. DPDK is enabled via PreNetworkConfig by
checking whether ServiceNames contains the DPDK service.
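A sketch of the check and the enable step; the service-name string and
the way ServiceNames reaches the script are assumptions:
if echo "$SERVICE_NAMES" | grep -q 'neutron_ovs_dpdk_agent'; then
  # other_config:dpdk-init=true is the standard OvS 2.6+ DPDK switch
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
  systemctl restart openvswitch
fi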
Implements: blueprint ovs-2-6-dpdk
Closes-Bug: #1654975
Depends-On: I83a540336c01a696780621fb2b39486a6abf0917
Change-Id: I7af4534d91e67c94ba559b78b9ac6a001e639db3
This reverts commit d6c0979eb3de79b8c3a79ea5798498f0241eb32d.
This seems to be causing issues in Heat in upgrades.
Change-Id: I379fb2133358ba9c3c989c98a2dd399ad064f706
Related-Bug: #1699463
Commit I46941e54a476c7cc8645cd1aff391c9c6c5434de added support for
blacklisting servers from triggered Heat deployments.
This commit adds that functionality to the remaining Deployments in
tripleo-heat-templates for the ExtraConfig interfaces.
Since we cannot (and should not) change the interface to ExtraConfig, Heat
conditions are used on the actual <role>ExtraConfigPre and
NodeExtraConfig resources instead of using the actions approach on
Deployments.
Change-Id: I38fdb50d1d966a6c3651980c52298317fa3bece4
The bootstrap_nodeid can have capital letters while the hostname may
not. In puppet we use downcase for this comparison, so let's follow a
similar pattern for scripts from THT.
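A minimal sketch of the case-insensitive comparison (variable names
are illustrative):
if [ "$(echo "$bootstrap_nodeid" | tr '[:upper:]' '[:lower:]')" = \
     "$(hostname | tr '[:upper:]' '[:lower:]')" ]; then
  echo "this is the bootstrap node"
fi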
Change-Id: I8a0bec4a6f3ed0b4f2289cbe7023344fb284edf7
Closes-Bug: #16998201
The existing host_config_and_reboot.role.j2.yaml was added in Ocata to
configure kernel args. It can be enhanced with the use of role-specific
parameters, which is what the current patch does. The earlier method is
deprecated and will be removed in the Q release.
Implements: blueprint ovs-2-6-dpdk
Change-Id: Ib864f065527167a49a0f60812d7ad4ad12c836d1
We need to ensure that the pacemaker cluster restarts
at the end of the deployment.
Due to the resource renaming, the postconfig resource
was no longer placed at the end of the
deployment, as *postpuppet used to be.
Closes-Bug: 1695904
Change-Id: Ic6978fcff591635223b354831cd6cbe0802316cf
With the composable undercloud installer, it's possible to disable
services. The extraconfig script assumes both neutron and nova are
installed and fails if they aren't.
This patch checks whether those services are available first.
Change-Id: Idcc2b9809fcfa92649a0a1f45175ce417dc0e608
In change I2aae4e2fdfec526c835f8967b54e1db3757bca17 we did the
following:
-pacemaker_status=$(systemctl is-active pacemaker || :)
+pacemaker_status=""
+if hiera -c /etc/puppet/hiera.yaml service_names | grep -q pacemaker;
then
+ pacemaker_status=$(systemctl is-active pacemaker)
+fi
We did that due to LP#1668266: we did not want systemctl is-active to
fail on non-pacemaker nodes. The problem with the above hiera check is
that it will also match on pacemaker_remote nodes.
We cannot piggyback on the pacemaker_enabled hiera key because that is
true on all nodes. So let's make the test check only for the pacemaker
service, without matching pacemaker_remote. Tested with:
1) Test on a controller node with pacemaker service enabled
[root@overcloud-controller-0 ~]# hiera -c /etc/puppet/hiera.yaml -a service_names |grep '\bpacemaker\b'
"pacemaker",
[root@overcloud-controller-0 ~]# echo $?
0
2) Test on a compute node without pacemaker:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
3) Test on a node with pacemaker_remote in the service_names key:
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker\b'
[root@overcloud-novacompute-0 puppet]# echo $?
1
[root@overcloud-novacompute-0 puppet]# hiera -c /etc/puppet/hiera.yaml service_names |grep '\bpacemaker_remote\b'
"pacemaker_remote"]
[root@overcloud-novacompute-0 puppet]# echo $?
0
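The resulting guard, as described above (a word-boundary match so
pacemaker_remote no longer matches):
pacemaker_status=""
if hiera -c /etc/puppet/hiera.yaml service_names | grep -q '\bpacemaker\b'; then
  pacemaker_status=$(systemctl is-active pacemaker)
fi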
Change-Id: I54c5756ba6dea791aef89a79bc0b538ba02ae48a
Closes-Bug: #1688214
This adds openstack-nova-migration on the compute nodes during the upgrade.
Closes-Bug: #1687081
Depends-on: Iab022bdfb655e3c52fecebf416e75c9e981072ab
Depends-on: I02dc8934521340f42ac44a7d16889f6d79620c33
Change-Id: I3db2a3188e538eeaef61769d38f0166545444cfe
To test this change we deployed a stock master with IPv6, which created a
number of IPv6 VIPs with a /64 netmask:
[root@overcloud-controller-0 ~]# pcs resource show ip-fd00.fd00.fd00.2000..18
Resource: ip-fd00.fd00.fd00.2000..18 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fd00:fd00:fd00:2000::18 cidr_netmask=64
Operations: start interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-start-interval-0s)
stop interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-stop-interval-0s)
monitor interval=10s timeout=20s (ip-fd00.fd00.fd00.2000..18-monitor-interval-10s)
Then we update the THT folder with this patch and upload the new scripts to the undercloud via:
openstack overcloud deploy --update-plan-only ....
Then we kick off the minor update workflow:
openstack overcloud update stack -i overcloud
Once the controller-0 node (the bootstrap node for pacemaker) has completed, we have the
correct VIP configuration:
[root@overcloud-controller-0 heat-config-script]# pcs resource show ip-fd00.fd00.fd00.2000..18
Resource: ip-fd00.fd00.fd00.2000..18 (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=fd00:fd00:fd00:2000::18 cidr_netmask=128 nic=vlan20 lvs_ipv6_addrlabel=true lvs_ipv6_addrlabel_value=99
Operations: start interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-start-interval-0s)
stop interval=0s timeout=20s (ip-fd00.fd00.fd00.2000..18-stop-interval-0s)
monitor interval=10s timeout=20s (ip-fd00.fd00.fd00.2000..18-monitor-interval-10s)
Also verified that running the script a second time does not alter the
(already fixed) VIPs.
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Change-Id: I765cd5c9b57134dff61f67ce726bf88af90f8090
We fixed pcs resource start/stop timeouts via
I587136d8d045d213875c657ea5a405074f80c8ad in Nov 2015 for Mitaka.
And there we stated:
This can be removed once updates from deployments made prior to
I6fc18f1ad876c5a25723710a3b20d8ec9519dcba are no longer supported.
We can now safely remove these updates as they are useless and cost time
anyway.
Change-Id: Ibad2b3eed0d08560d52d5ebe700746b61e5b8f51
In two places during the upgrade we manually trigger puppet.
There can be a problem when new puppet modules are added and their
corresponding symlinks in /etc/puppet/modules are not created during
the installation, as they are installed in
/usr/share/openstack-puppet/modules. To prevent the issue, TripleO sets
the modulepath in the templates.
We must use the same modulepath to make sure that we don't fail
because of a missing module in the manual puppet run.
This particularly happens when you upgrade from M->N->O, as the base
image in Mitaka doesn't have the proper symlinks and they are not
created during the installation of the package.
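A sketch of the manual run with an explicit modulepath (the manifest
passed via -e is illustrative):
puppet apply --modulepath=/etc/puppet/modules:/usr/share/openstack-puppet/modules \
  -e 'include ::tripleo::packages'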
Closes-Bug: #1684587
Change-Id: I79df6ea33f1c58e13309176a6de41b7572541fd6
Fetch the host public keys from each node, combine them all, and write them
to the system-wide ssh known hosts file. The alternative of disabling host
key verification is vulnerable to a MITM attack.
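The commit gathers the keys through Heat deployments; as a stand-in,
a sketch of the same idea using ssh-keyscan (the host list is
hypothetical):
for host in controller-0 compute-0; do
  ssh-keyscan -t rsa,ecdsa,ed25519 "$host"
done > /etc/ssh/ssh_known_hosts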
Change-Id: Ib572b5910720b1991812256e68c975f7fbe2239c
This uses the coalesce function to take null values into account;
otherwise these resources will fail validation.
Change-Id: Iaf4218dd731826f80b76ff8f7a902adc8c865be5
Closes-Bug: #1681332
This reverts commit b323f8a16035549d84cdec4718380bde3d23d6c3 and uses
the new logic in puppet-tripleo (see
Ifd6fa5b398d98e8998630ea0c9a2ce9867ceba2b), which does essentially the same.
Closes-Bug: 1665641
Change-Id: Ib5cb0578be2993af0a0b8675005d838640bdb139
Adds the ability to perform a yum update after performing the RHEL
registration.
Change-Id: Id84d156cd28413309981d5943242292a3a6fa807
Partial-Bug: #1640894
The current check tends to produce a false positive, causing unnecessary
service restarts. yum check-update exits with return code 100 if
updated packages are available.
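The exit-code behaviour the fix relies on, sketched:
yum -q check-update; rc=$?
# rc=100: updates available; rc=0: nothing to update; rc=1: an error occurred
if [ "$rc" -eq 100 ]; then
  echo "updates pending - restart affected services"
fi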
Change-Id: I8bd89f2b24bafc6c991382b9eb484cfa9a2f8968
In [1] we removed the previously used special-case upgrade code.
However, we have since discovered that for openvswitch 2.5.0-14
the special case is still required with an extra flag to prevent
the restart. This adds the upgrade code back into the minor
update and 'manual upgrade' scripts for compute/swift. The
review at If998704b3c4199bbae8a1d068c31a71763f5c8a2 is adding
this logic for the ansible upgrade steps.
Related-Bug: 1669714
[1] https://review.openstack.org/#/q/59e5f9597eb37f69045e470eb457b878728477d7
Change-Id: I3e5899e2d831b89745b2f37e61ff69dbf83ff595