Document DCN deployment

Documents DCN deployment using a centralized undercloud. This is the beginning
of the documentation. The TODOs will be resolved in follow-up patches.

Change-Id: Ia9a4a67fe1bdef377570544600977d6537c1f9bb
.. _distributed_compute_node:

Distributed Compute Node deployment
===================================

Introduction
------------
Additional groups of compute nodes can be deployed and integrated with an
existing deployment of a control plane stack. These compute nodes are deployed
in separate stacks from the main control plane (overcloud) stack, and they
consume some of the stack outputs from the overcloud stack to reuse as
configuration data.

Deploying these additional nodes in separate stacks provides for separation of
management between the control plane stack and the stacks for additional
compute nodes. The stacks can be managed, scaled, and updated separately.

Using separate stacks also creates smaller failure domains, as there are fewer
baremetal nodes in each individual stack. A failure of one baremetal node
affects only the single stack that contains it, so management operations to
address the failure are limited to that stack.

A routed spine and leaf networking layout can be used to deploy these
additional groups of compute nodes in a distributed manner. Not all nodes need
to be co-located at the same physical location or datacenter. See
:ref:`routed_spine_leaf_network` for more details.

Such an architecture is referred to as "Distributed Compute Node" or "DCN" for
short.

Deploying from a centralized undercloud
---------------------------------------

The main overcloud control plane stack should be deployed as needed for the
desired cloud architecture layout. This stack contains nodes running the
control plane and infrastructure services needed for the cloud. For the
purposes of this documentation, this stack is referred to as the overcloud
stack.

The overcloud stack may or may not contain compute nodes. It may be a user
requirement that compute services are available within the overcloud stack,
however it is not strictly required.

Undercloud configuration
^^^^^^^^^^^^^^^^^^^^^^^^
TODO

Saving configuration from the overcloud
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Once the overcloud has been deployed, data needs to be retrieved from the
overcloud Heat stack and deployment plan to pass as input values into the
separate DCN deployment.

Extract the needed data from the stack outputs:

.. code-block:: bash

   # EndpointMap: Cloud service to URL mapping
   openstack stack output show standalone EndpointMap --format json \
       | jq '{"parameter_defaults": {"EndpointMapOverride": .output_value}}' \
       > endpoint-map.json

   # AllNodesConfig: Node specific hieradata (hostnames, etc) set on all nodes
   openstack stack output show standalone AllNodesConfig --format json \
       | jq '{"parameter_defaults": {"AllNodesExtraMapData": .output_value}}' \
       > all-nodes-extra-map-data.json

   # GlobalConfig: Service specific hieradata set on all nodes
   openstack stack output show standalone GlobalConfig --format json \
       | jq '{"parameter_defaults": {"GlobalConfigExtraMapData": .output_value}}' \
       > global-config-extra-map-data.json

   # HostsEntry: Entries for /etc/hosts set on all nodes
   openstack stack output show standalone HostsEntry -f json \
       | jq -r '{"parameter_defaults": {"ExtraHostFileEntries": .output_value}}' \
       > extra-host-file-entries.json
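
Each of the commands above uses the same ``jq`` filter to nest a stack output
value under a parameter name. As a quick sanity check of the resulting file
shape, the filter can be run against a fabricated payload instead of a live
stack (the sample value below is an assumption for illustration, not real
stack output):

.. code-block:: bash

   # Fabricated stand-in for the output of:
   #   openstack stack output show standalone EndpointMap --format json
   sample='{"output_key": "EndpointMap", "output_value": {"NovaPublic": {"uri": "http://203.0.113.10:8774"}}}'

   # Same wrapping as above; the result is a valid parameter_defaults environment
   echo "$sample" \
       | jq '{"parameter_defaults": {"EndpointMapOverride": .output_value}}'

The resulting JSON can be passed directly as an environment file, since Heat
accepts JSON as well as YAML environments.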

The same passwords and secrets should be reused when deploying the additional
compute stacks. These values can be saved from the existing control plane stack
deployment with the following commands:

.. code-block:: bash

   openstack object save overcloud plan-environment.yaml
   python -c "import yaml; data=yaml.safe_load(open('plan-environment.yaml').read()); print(yaml.dump(dict(parameter_defaults=data['passwords'])))" > passwords.yaml

Use the passwords.yaml environment file generated by the previous command, or
reuse the environment file used to set the values for the control plane stack.

.. note::

   The `passwords.yaml` generated by the previous command contains sensitive
   security data such as passwords and TLS certificates that are used in the
   overcloud deployment.

   Care should be taken to keep the file as secured as possible.
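
For example, read access can be limited to the owning user. This is a minimal
measure that assumes a POSIX filesystem; stricter handling, such as encrypted
storage, may be appropriate depending on local policy:

.. code-block:: bash

   # Restrict passwords.yaml to its owner. "touch" only makes this sketch
   # runnable standalone; in the documented flow the file already exists.
   touch passwords.yaml
   chmod 600 passwords.yaml
   ls -l passwords.yaml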

Create an environment file for setting necessary oslo messaging configuration
overrides:

.. code-block:: yaml

   parameter_defaults:
     ComputeExtraConfig:
       oslo_messaging_notify_use_ssl: false
       oslo_messaging_rpc_use_ssl: false
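
The files generated in this section can then be passed as environments when
deploying the separate compute stack. The exact invocation depends on the rest
of the DCN configuration, which the sections below will cover; the stack name
``dcn0``, the environment file name ``oslo-messaging-overrides.yaml``, and the
overall command shape here are assumptions for illustration, not taken from
this document:

.. code-block:: bash

   # Hypothetical deployment of a separate DCN compute stack, consuming the
   # environment files generated above. Stack name and any additional
   # environments are placeholders.
   openstack overcloud deploy \
       --stack dcn0 \
       --templates \
       -e endpoint-map.json \
       -e all-nodes-extra-map-data.json \
       -e global-config-extra-map-data.json \
       -e extra-host-file-entries.json \
       -e passwords.yaml \
       -e oslo-messaging-overrides.yaml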

Reusing networks from the overcloud
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TODO

Spine and Leaf configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TODO

Standalone deployment
---------------------
TODO
@ -32,4 +32,5 @@ Documentation on how to enable and configure various features available in

   designate
   multiple_overclouds
   tuned
   distributed_compute_node
   deploy_openshift

@ -1,3 +1,5 @@

.. _routed_spine_leaf_network:

Deploying Overcloud with L3 routed networking
=============================================