When set to true, the parameter configures cinder-volume to connect to
Etcd through the node's own local IP on the Etcd network instead of
through a VIP on that network.
This is useful when deploying cinder-volume in an active/active (A/A)
configuration at an edge site with the HCI roles. Because Etcd and
cinder-volume run on the same nodes (typically three nodes configured
identically), each node can connect directly to its local Etcd instance
without going through a VIP. Additionally, there is currently no VIP
management at the edge sites.
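For illustration only, enabling this for an edge stack could look like
the environment-file sketch below. The parameter name
CinderEtcdLocalConnect is an assumption based on this description; check
the cinder-volume service template for the exact name.

  # edge-cinder-etcd.yaml (hypothetical environment file)
  # Assumed parameter name; makes cinder-volume talk to the Etcd
  # instance on its own node rather than an Etcd VIP.
  parameter_defaults:
    CinderEtcdLocalConnect: true

Such an environment file would then be passed to the edge stack with the
usual -e option of 'openstack overcloud deploy'.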
Change-Id: I8a8825ecff9fc99b5de7390075470356397d85a2
implements: blueprint split-controlplane-templates
This directory contains files that represent individual service
deployments, orchestration tools, and the configuration tools used to
deploy them.
Directory Structure
Each logical grouping of services will have a directory. Example:
'timesync'. Within this directory the related timesync services exist,
for example to configure timesync services on baremetal or via
containers.
Filenaming conventions
As a convention, each deployment service's filename reflects both the
deployment engine (baremetal or containers) and the config tool used to
deploy that service.
The convention is <service-name>-<engine>-<config management tool>.
Examples:
deployment/aodh/aodh-api-container-puppet.yaml (containerized Aodh
service configured with Puppet)
deployment/aodh/aodh-api-container-ansible.yaml (containerized Aodh
service configured with Ansible)
deployment/timesync/chrony-baremetal-ansible.yaml (baremetal Chrony
service configured with Ansible)
deployment/timesync/chrony-baremetal-puppet.yaml (baremetal Chrony
service configured with Puppet)