23cdf4dd17
With this change a Heat resource is no longer used to create an undercloud neutron API port resource for the redis and ovn_dbs service virtual IPs. Instead, an external deploy task at step 0 in the individual service template uses the "tripleo_service_vip" ansible module to manage a neutron API port resource for each service.

The interfaces to control the IP address and service network (RedisVirtualFixedIPs, OVNDBsVirtualFixedIPs and ServiceNetMap) remain the same.

It is also possible to include the 'use_neutron' boolean in the FixedIPs parameter to instruct the ansible module not to create a neutron API resource and simply "echo" the ip_address given in the FixedIPs parameter. For example:

  RedisVirtualFixedIPs:
    - ip_address: 1.0.0.5
      use_neutron: false

Alternatively, the fixed IPs can be set using the 'ServiceVips' parameter, like this:

  ServiceVips:
    redis: 1.0.0.5
    ovn_dbs: 1.0.0.6

NOTE: If the neutron service is not available, the tripleo_service_vip ansible module will "echo" the IP provided in %service%VirtualFixedIPs.

Related: blueprint network-data-v2-ports
Depends-On: https://review.opendev.org/777307
Depends-On: https://review.opendev.org/779883
Change-Id: I4794418546363888e7a555a16b45b7a4417f1ef8
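For context, a rough sketch of what such a step-0 external deploy task could look like is given below. The task layout and the module argument names (stack_name, service_name, network, fixed_ips) are illustrative assumptions only, not a copy of the actual service templates nor the verified interface of the tripleo_service_vip module:

  # Illustrative sketch only: argument names are assumptions, not the
  # verified tripleo_service_vip module interface.
  external_deploy_tasks:
    - name: Manage the redis service virtual IP at step 0
      when: step|int == 0
      tripleo_service_vip:
        stack_name: {get_param: RootStackName}
        service_name: redis
        network: {get_param: [ServiceNetMap, RedisNetwork]}
        fixed_ips: {get_param: RedisVirtualFixedIPs}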
# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Distributed Compute Node
# description: |
#   Environment file for deploying a remote site of distributed compute nodes
#   (DCN) in a separate stack (multi-stack) deployment.
parameter_defaults:
  # Manage the network and related resources (subnets and segments) with
  # either create, update, or delete operations (depending on the stack
  # operation). Does not apply to ports which will always be managed as
  # needed. Defaults to true. For multi-stack use cases where the network
  # related resources have already been managed by a separate stack, this
  # parameter can be set to false.
  # Type: boolean
  ManageNetworks: False

  # The availability zone where new Nova compute nodes will be added. If
  # the zone does not already exist, it will be created. If left unset, it
  # will default to the value of the stack name.
  # Type: string
  NovaComputeAvailabilityZone: ''

  # Whether instances can attach cinder volumes from a different
  # availability zone.
  # Type: boolean
  NovaCrossAZAttach: False

  # Refuse to boot an instance if it would require downloading from glance
  # and uploading to ceph instead of a COW clone.
  # Type: boolean
  NovaDisableImageDownloadToRbd: True

resource_registry:
  OS::TripleO::Services::GlanceApiEdge: ../deployment/glance/glance-api-edge-container-puppet.yaml
  OS::TripleO::Services::HAproxyEdge: ../deployment/haproxy/haproxy-edge-container-puppet.yaml
  OS::TripleO::Services::NovaAZConfig: ../deployment/nova/nova-az-config.yaml
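As the file header notes, customizations should go into a copy of this environment rather than the original. A minimal sketch of such a copy is shown below; the availability zone name 'dcn-site-1' is a hypothetical placeholder, and the other values simply restate the defaults above:

  parameter_defaults:
    ManageNetworks: False
    # 'dcn-site-1' is a hypothetical example; if left unset the AZ name
    # defaults to the stack name.
    NovaComputeAvailabilityZone: 'dcn-site-1'
    NovaCrossAZAttach: False
    NovaDisableImageDownloadToRbd: True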