Files
kolla-ansible/ansible/roles/ironic/templates/ironic.conf.j2
Rafael Weingärtner 22a6223b1b Standardize the configuration of "oslo_messaging" section
After all of the discussions we had on
"https://review.opendev.org/#/c/670626/2", I studied all projects that
have an "oslo_messaging" section. Afterwards, I applied the same method
that is already used in the "oslo_messaging" section of Nova, Cinder,
and others. This guarantees that we have a consistent way to
enable/disable notifications across projects, based on components
(e.g. Ceilometer) being enabled or disabled. Here follows the list of
components and the respective changes I made.
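
For reference, here is a rough sketch of the common pattern, using
Ironic (whose template is shown below) as the example. The defaults
snippet is an approximation of what each role defines in its
defaults/main.yml; the exact variable names vary per role:

    # roles/<service>/defaults/main.yml (sketch, names approximate)
    ironic_notification_topics:
      - name: notifications
        enabled: "{{ enable_ceilometer | bool }}"
    ironic_enabled_notification_topics: "{{ ironic_notification_topics | selectattr('enabled', 'equalto', true) | list }}"

    # <service>.conf.j2
    [oslo_messaging_notifications]
    transport_url = {{ notify_transport_url }}
    {% if ironic_enabled_notification_topics %}
    driver = messagingv2
    topics = {{ ironic_enabled_notification_topics | map(attribute='name') | join(',') }}
    {% else %}
    driver = noop
    {% endif %}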

* Aodh:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Congress:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Cinder:
It was already properly configured.

* Octavia:
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Heat:
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Ceilometer:
Ceilometer publishes some messages to RabbitMQ. However, its default
driver is "messagingv2", and not '' (empty) as defined in Oslo; these
defaults are set in ceilometer/publisher/messaging.py. Therefore, we do
not need to do anything for the "oslo_messaging_notifications" section
in Ceilometer.

* Tacker:
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Neutron:
It was already properly configured.

* Nova
It was already properly configured. However, we found another issue
with its configuration: Kolla-ansible does not configure Nova
notifications as it should. If 'searchlight' is not installed
(enabled), 'notification_format' should be 'unversioned'. The default
is 'both', so Nova sends notifications to the 'versioned_notifications'
queue, but that queue has no consumer when 'searchlight' is disabled.
In our case, the queue accumulated 511k messages, and the huge amount
of "stuck" messages made the RabbitMQ cluster unstable.

https://bugzilla.redhat.com/show_bug.cgi?id=1478274
https://bugs.launchpad.net/ceilometer/+bug/1665449
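
A minimal sketch of the intended logic in nova.conf.j2, assuming the
option lives in Nova's [notifications] section and that
'enable_searchlight' is the usual kolla-ansible toggle:

    [notifications]
    {% if not enable_searchlight | bool %}
    # Nothing consumes the versioned_notifications queue in this case,
    # so only emit unversioned notifications.
    notification_format = unversioned
    {% else %}
    notification_format = both
    {% endif %}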

* Nova_hyperv:
I added the same configurations as in the Nova project.

* Vitrage
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Searchlight
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Ironic
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Glance
It was already properly configured.

* Trove
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Blazar
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Sahara
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Watcher
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Barbican
I created a mechanism similar to what we have in Cinder, Nova,
and others. I also added a configuration to the
'keystone_notifications' section. Barbican needs its own queue to
capture events from Keystone; otherwise, it would impact Ceilometer and
other systems that are connected to the default "notifications" queue.
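
A rough sketch of that addition to barbican.conf.j2 (the option names
under [keystone_notifications] come from Barbican's Keystone listener;
'barbican_notifications' is the dedicated topic mentioned above):

    [keystone_notifications]
    enable = True
    topic = barbican_notifications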

* Keystone
Keystone is the system that triggered this work with the discussions
that followed on https://review.opendev.org/#/c/670626/2. After a long
discussion, we agreed to apply the same approach that we have in Nova,
Cinder, and other systems to Keystone, and that is what we did.
Moreover, we introduced a new topic, "barbican_notifications", when
Barbican is enabled. We also removed the variable
"enable_cadf_notifications", as it is obsolete, and the default in
Keystone is CADF.
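
Using the same topic-list mechanism, Keystone's defaults could then
look roughly like this (a sketch, not the exact variable names), so
that keystone.conf.j2 renders
"topics = notifications,barbican_notifications" when both Ceilometer
and Barbican are enabled:

    keystone_notification_topics:
      - name: notifications
        enabled: "{{ enable_ceilometer | bool }}"
      - name: barbican_notifications
        enabled: "{{ enable_barbican | bool }}"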

* Mistral:
The driver was hardcoded to "noop". However, that does not seem like
good practice. Instead, I applied the same standard of enabling the
driver and pushing to the "notifications" queue when Ceilometer is
enabled.

* Cyborg:
I created a mechanism similar to what we have in AODH, Cinder, Nova,
and others.

* Murano
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Senlin
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Manila
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Zun
The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.

* Designate
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

* Magnum
It was already using a similar scheme; I just modified it slightly
to match what we have in all other components.

Closes-Bug: #1838985

Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
2019-08-15 13:18:16 -03:00


# NOTE(mgoddard): Ironic is changing the default value of [deploy]
# default_boot_option from 'netboot' to 'local'. If the option is not set,
# ironic will log a warning during the transition period. Even so,
# kolla-ansible should not set a value for this option as the warning is
# intended to inform operators of the impending change. The warning may be
# suppressed by the deployer by setting a value for the option.
[DEFAULT]
{% if not enable_keystone | bool %}
auth_strategy = noauth
{% endif %}
debug = {{ ironic_logging_debug }}
log_dir = /var/log/kolla/ironic
transport_url = {{ rpc_transport_url }}
{% if pin_release_version is defined %}
pin_release_version = {{ pin_release_version }}
{% endif %}
[oslo_messaging_notifications]
transport_url = {{ notify_transport_url }}
{% if ironic_enabled_notification_topics %}
driver = messagingv2
topics = {{ ironic_enabled_notification_topics | map(attribute='name') | join(',') }}
{% else %}
driver = noop
{% endif %}
{% if ironic_policy_file is defined %}
[oslo_policy]
policy_file = {{ ironic_policy_file }}
{% endif %}
{% if service_name == 'ironic-api' %}
[api]
host_ip = {{ api_interface_address }}
port = {{ ironic_api_listen_port }}
api_workers = {{ openstack_service_workers }}
{% endif %}
{% if service_name == 'ironic-conductor' %}
[conductor]
api_url = {{ internal_protocol }}://{{ ironic_internal_fqdn }}:{{ ironic_api_port }}
automated_clean = false
{% endif %}
[database]
connection = mysql+pymysql://{{ ironic_database_user }}:{{ ironic_database_password }}@{{ ironic_database_address }}/{{ ironic_database_name }}
max_retries = -1
{% if enable_keystone | bool %}
[keystone_authtoken]
www_authenticate_uri = {{ keystone_internal_url }}
auth_url = {{ keystone_admin_url }}
auth_type = password
project_domain_id = {{ default_project_domain_id }}
user_domain_id = {{ default_user_domain_id }}
project_name = service
username = {{ ironic_keystone_user }}
password = {{ ironic_keystone_password }}
memcache_security_strategy = ENCRYPT
memcache_secret_key = {{ memcache_secret_key }}
memcached_servers = {% for host in groups['memcached'] %}{{ hostvars[host]['ansible_' + hostvars[host]['api_interface']]['ipv4']['address'] }}:{{ memcached_port }}{% if not loop.last %},{% endif %}{% endfor %}
{% endif %}
{% if enable_cinder | bool %}
[cinder]
auth_url = {{ keystone_admin_url }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ ironic_keystone_user }}
password = {{ ironic_keystone_password }}
{% endif %}
{% if enable_glance | bool %}
[glance]
glance_api_servers = {{ internal_protocol }}://{{ glance_internal_fqdn }}:{{ glance_api_port }}
auth_url = {{ keystone_admin_url }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ ironic_keystone_user }}
password = {{ ironic_keystone_password }}
{% endif %}
{% if enable_neutron | bool %}
[neutron]
url = {{ internal_protocol }}://{{ neutron_internal_fqdn }}:{{ neutron_server_port }}
auth_url = {{ keystone_admin_url }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ ironic_keystone_user }}
password = {{ ironic_keystone_password }}
cleaning_network = {{ ironic_cleaning_network }}
{% endif %}
[inspector]
enabled = true
{% if enable_keystone | bool %}
auth_url = {{ keystone_admin_url }}
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = {{ ironic_keystone_user }}
password = {{ ironic_keystone_password }}
{% else %}
auth_type = none
{% endif %}
endpoint_override = {{ ironic_inspector_internal_endpoint }}
[agent]
deploy_logs_local_path = /var/log/kolla/ironic
deploy_logs_storage_backend = local
deploy_logs_collect = always
[pxe]
pxe_append_params = nofb nomodeset vga=normal console=tty0 console=ttyS0,{{ ironic_console_serial_speed }}
{% if enable_ironic_ipxe | bool %}
ipxe_enabled = True
pxe_bootfile_name = undionly.kpxe
uefi_pxe_bootfile_name = ipxe.efi
pxe_config_template = $pybasedir/drivers/modules/ipxe_config.template
uefi_pxe_config_template = $pybasedir/drivers/modules/ipxe_config.template
tftp_root = /httpboot
tftp_master_path = /httpboot/master_images
tftp_server = {{ api_interface_address }}
{% endif %}
{% if enable_ironic_ipxe | bool %}
[deploy]
http_url = {{ ironic_ipxe_url }}
{% endif %}
[oslo_middleware]
enable_proxy_headers_parsing = True
{% if not enable_neutron | bool %}
[dhcp]
dhcp_provider = none
{% endif %}