041251d473
Mark regular non-containerized services with FIXME, to be switched once
they are containerized. Do not yet mark external/backend/plugin/host-config
related puppet service templates with that FIXME. Mark the
puppet/services/ceph- related templates as TODO: switch them to
containerized ceph-ansible eventually, maybe.

Change-Id: Ib9fbad05eeb57dc641499fbf411cb5870da7a8e9
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>

# A Heat environment file that enables DVR in the overcloud.
# This works by configuring L3 and Metadata agents on the
# compute nodes.
resource_registry:
  # FIXME(bogdando): switch it, once it is containerized
  OS::TripleO::Services::ComputeNeutronL3Agent: ../puppet/services/neutron-l3-compute-dvr.yaml
  OS::TripleO::Services::ComputeNeutronMetadataAgent: ../docker/services/neutron-metadata.yaml

  # With DVR enabled, the Compute nodes also need the br-ex bridge to be
  # connected to a physical network.
  OS::TripleO::Compute::Net::SoftwareConfig: ../net-config-bridge.yaml
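  # Note: net-config-bridge.yaml, as shipped with these templates, is expected
  # to place the node's provisioning NIC on br-ex; deployments using network
  # isolation should point this mapping at a NIC config that connects br-ex
  # to the external physical network instead.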

  # DVR requires a port on the external network for each compute node.
  # This will usually match the one currently in use for
  # OS::TripleO::Controller::Ports::ExternalPort.
  # Please review your network configuration before deploying to ensure that
  # this is appropriate.
  OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/noop.yaml
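  # For example (illustration only; match whatever port template the
  # Controller role already uses for OS::TripleO::Controller::Ports::ExternalPort),
  # a later environment file could override this with something like:
  #   OS::TripleO::Compute::Ports::ExternalPort: ../network/ports/external.yaml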

parameter_defaults:

  # DVR requires that the L2 population feature is enabled
  NeutronMechanismDrivers: ['openvswitch', 'l2population']
  NeutronEnableL2Pop: 'True'

  # Setting NeutronEnableDVR enables distributed routing support in the
  # ML2 plugin and agents that support this feature
  NeutronEnableDVR: true

  # We also need to set the proper agent mode for the L3 agent. This will only
  # affect the agent on the controller node.
  NeutronL3AgentMode: 'dvr_snat'
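  # (The compute-side L3 agents are expected to run in 'dvr' mode, set by the
  # neutron-l3-compute-dvr service template mapped above, while 'dvr_snat'
  # here keeps centralized SNAT on the controller.)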

  # Enabling DVR deploys additional services to the compute nodes that will
  # consume memory through normal operation. The amount required is roughly
  # proportional to the number of Neutron routers that will be scheduled to
  # that host. It is necessary to reserve memory on the compute nodes to avoid
  # memory issues when creating instances that are connected to routed
  # networks. The current expected consumption is 50 MB per router in addition
  # to the base reserved amount. Deployers should refer to existing
  # documentation, release notes, etc. for additional information on estimating
  # an appropriate value. The value provided here is based on an estimate of 10
  # routers, is an example value *only*, and should be reviewed and modified
  # if necessary before deploying.
  NovaReservedHostMemory: 2560
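  # As a worked example (the 2048 MB base is an assumption for illustration,
  # not a value taken from this template): 2048 MB base + 10 routers * 50 MB
  # per router = 2548 MB, roughly the 2560 used above.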
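
A minimal usage sketch, under stated assumptions: the file above is assumed to
be shipped as an environment file in the tripleo-heat-templates tree (for
example environments/neutron-ovs-dvr.yaml), included at deploy time with -e to
openstack overcloud deploy, and followed by a hypothetical deployment-specific
override file such as dvr-overrides.yaml that replaces the example-only values
flagged in the comments above.

  # dvr-overrides.yaml: hypothetical override file, passed after the DVR
  # environment. Adjust the port template path and the memory sizing to the
  # actual deployment (resource_registry paths here resolve relative to this
  # file's own location).
  resource_registry:
    # Give each compute node a real external port instead of the noop default.
    OS::TripleO::Compute::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
  parameter_defaults:
    # Re-estimated for the expected number of routers per compute node.
    NovaReservedHostMemory: 2560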