tripleo-heat-templates/environments/dcn-hci.yaml
Alan Bishop 2d60799c49 Define a new CinderVolumeEdge service
CinderVolumeEdge is an optional service (defaults to OS::Heat::None)
that can be enabled on DCN/Edge nodes for edge sites that support
persistent block storage (i.e. cinder). The dcn-hci.yaml environment
file enables the service.
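
For reference, the base resource registry (e.g.
overcloud-resource-registry-puppet.j2.yaml) disables the service by
default with a mapping like the following (shown for illustration):

    OS::TripleO::Services::CinderVolumeEdge: OS::Heat::None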

The new service supports the following edge deployment models:
1. Edge site with no block storage
   - Deploy DistributedCompute nodes
   - Use dcn.yaml environment file (the CinderVolumeEdge service
     remains disabled)
2. Edge site with traditional HCI storage
   - Deploy DistributedComputeHCI nodes
   - Use dcn-hci.yaml env file to enable the CinderVolumeEdge service
   - Use ceph-ansible.yaml env file to deploy ceph for the RBD backend
3. Edge site with quasi-hyperconverged storage
   - Deploy DistributedCompute nodes
   - Use dcn-hci.yaml env file to enable the CinderVolumeEdge service
   - Use ceph-ansible-external.yaml env file so the RBD backend can
     access an external ceph cluster

This patch adds support for number 3, which is a new capability. Whereas
traditional HCI means both ceph and cinder services run on the compute
nodes, the new model is only quasi-hyperconverged: ceph runs externally,
while cinder (as well as glance) still runs on the compute nodes. A
sample deploy invocation for this model is sketched below.
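
The following sketch shows one possible invocation (the stack name,
roles file, and parameter file paths are illustrative, not part of this
change):

    openstack overcloud deploy --stack edge-site-1 \
      --templates /usr/share/openstack-tripleo-heat-templates \
      -r ~/roles/DistributedCompute.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-hci.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible-external.yaml \
      -e ~/edge-site-1/external-ceph-params.yaml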

Change-Id: I56b5792c1d53bb8659e440f598006e471894ff2e
2020-12-08 06:17:02 -08:00

# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Distributed Compute Node HCI
# description: |
#   Environment file for deploying a remote site of HCI distributed compute
#   nodes (DCN) in a separate stack (multi-stack) deployment. It should be
#   used in combination with environments/ceph-ansible/ceph-ansible.yaml.
parameter_defaults:
  # When running Cinder A/A, whether to connect to Etcd via the local IP
  # for the Etcd network. If set to true, the ip on the local node will
  # be used. If set to false, the VIP on the Etcd network will be used
  # instead. Defaults to false.
  # Type: boolean
  CinderEtcdLocalConnect: True
  # The Cinder service's storage availability zone.
  # Type: string
  CinderStorageAvailabilityZone: dcn
  # The cluster name used for deploying the cinder-volume service in an
  # active-active (A/A) configuration. This configuration requires the
  # Cinder backend drivers support A/A, and the cinder-volume service
  # not be managed by pacemaker. If these criteria are not met then the
  # cluster name must be left blank.
  # Type: string
  CinderVolumeCluster: dcn
  # List of enabled Image Import Methods. Valid values in the list are
  # 'glance-direct', 'web-download', or 'copy-image'
  # Type: comma_delimited_list
  GlanceEnabledImportMethods: web-download,copy-image
  # Manage the network and related resources (subnets and segments) with
  # either create, update, or delete operations (depending on the stack
  # operation). Does not apply to ports which will always be managed as
  # needed. Defaults to true. For multi-stack use cases where the
  # network related resources have already been managed by a separate
  # stack, this parameter can be set to false.
  # Type: boolean
  ManageNetworks: False
  # The availability zone where new Nova compute nodes will be added. If
  # the zone does not already exist, it will be created. If left unset,
  # it will default to the value of the stack name.
  # Type: string
  NovaComputeAvailabilityZone: ''
  # Whether instances can attach cinder volumes from a different
  # availability zone.
  # Type: boolean
  NovaCrossAZAttach: False
  # Refuse to boot an instance if it would require downloading from
  # glance and uploading to ceph instead of a COW clone.
  # Type: boolean
  NovaDisableImageDownloadToRbd: True
resource_registry:
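  # Note: the two VIP port mappings below are no-ops; at a DCN/Edge site
  # these VIPs are not created because the corresponding services run
  # only in the central control plane stack.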
  OS::TripleO::Network::Ports::OVNDBsVipPort: ../network/ports/noop.yaml
  OS::TripleO::Network::Ports::RedisVipPort: ../network/ports/noop.yaml
  OS::TripleO::Services::CinderVolumeEdge: ../deployment/cinder/cinder-volume-container-puppet.yaml
  OS::TripleO::Services::Etcd: ../deployment/etcd/etcd-container-puppet.yaml
  OS::TripleO::Services::GlanceApiEdge: ../deployment/glance/glance-api-edge-container-puppet.yaml
  OS::TripleO::Services::HAproxyEdge: ../deployment/haproxy/haproxy-edge-container-puppet.yaml
  OS::TripleO::Services::NovaAZConfig: ../deployment/nova/nova-az-config.yaml