aodh
barbican
ceilometer
cinder
clients
congress
container-image-prepare
database
deprecated
designate
ec2
etcd
glance
haproxy
heat
horizon
image-serve
ironic
iscsid
keepalived
kernel
keystone
login-defs
manila
memcached
messaging
metrics
mistral
multipathd
neutron
nova
octavia
podman
qdr
rabbitmq
sahara
securetty
selinux
snmp
sshd
swift
tacker
time
timesync
tripleo-firewall
tripleo-packages
tuned
zaqar
README.rst
TripleO Deployments
===================
This directory contains files that represent individual service deployments, orchestration tools, and the configuration tools used to deploy them.
Directory Structure
-------------------

Each logical grouping of services will have its own directory, for example 'timesync'. Within that directory live the related timesync services, which would, for example, configure timesync services on baremetal or via containers.
Filenaming conventions
----------------------

As a convention, each deployment service's filename reflects both the deployment engine (baremetal or containers) and the config tool used to deploy that service.

The convention is <service-name>-<engine>-<config management tool>.
Examples:

- deployment/aodh/aodh-api-container-puppet.yaml (containerized Aodh service configured with Puppet)
- deployment/aodh/aodh-api-container-ansible.yaml (containerized Aodh service configured with Ansible)
- deployment/timesync/chrony-baremetal-ansible.yaml (baremetal Chrony service configured with Ansible)
- deployment/timesync/chrony-baremetal-puppet.yaml (baremetal Chrony service configured with Puppet)
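The convention above can be decomposed mechanically. As a minimal illustration (this helper is hypothetical and not part of the repository), splitting the filename stem from the right keeps hyphenated service names such as ``aodh-api`` intact:

```python
import os


def parse_deployment_filename(path):
    """Split a deployment filename into (service, engine, config tool).

    Assumes the <service-name>-<engine>-<config management tool> convention;
    the service name itself may contain hyphens, so split from the right.
    """
    stem, ext = os.path.splitext(os.path.basename(path))
    if ext != ".yaml":
        raise ValueError("not a deployment template: %s" % path)
    service, engine, tool = stem.rsplit("-", 2)
    return service, engine, tool


# e.g. parse_deployment_filename("deployment/aodh/aodh-api-container-puppet.yaml")
# -> ("aodh-api", "container", "puppet")
```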