Remove DeploymentSwiftDataMap related content for deployed server

Support for it was removed in train with [1]

[1] https://review.opendev.org/#/c/671981/

Change-Id: Iefdfb54e09c8e83a09e00bc97ddc7add9795a169
Rabi Mishra 2020-04-23 09:00:24 +05:30
parent afc55e5cc3
commit 4d35701abd
1 changed file with 4 additions and 306 deletions


@@ -214,37 +214,9 @@ composition nature of Heat and tripleo-heat-templates to substitute
the train release, these environment files are no longer required and they
have been removed from tripleo-heat-templates.
Optionally include an environment file called ``deployment-swift-data-map.yaml``
to set the ``DeploymentSwiftDataMap`` parameter::
# Append to deploy command
-e deployment-swift-data-map.yaml \
This environment sets the Swift container and object names for the deployment
metadata for each deployed server. This environment file must be written
entirely by the user. Example contents are as follows::
parameter_defaults:
DeploymentSwiftDataMap:
overcloud-controller-0:
container: overcloud-controller
object: 0
overcloud-controller-1:
container: overcloud-controller
object: 1
overcloud-controller-2:
container: overcloud-controller
object: 2
overcloud-novacompute-0:
container: overcloud-compute
object: 0
The ``DeploymentSwiftDataMap`` parameter's value is a dict. The keys are the
Heat-assigned names for each server resource. The value for each key is another
dict giving the Swift container and object name Heat should use for storing the
deployment data for that server resource. These values should match the
container and object names as described in the
:ref:`pre-configuring-metadata-agent-configuration` section.
.. note::
Starting in the train release, support for setting the ``DeploymentSwiftDataMap``
parameter and configuring deployed servers using Heat has been removed.
deployed-server with config-download
____________________________________
@@ -433,291 +405,17 @@ NIC configs could be further customized to not require these parameters.
When using network isolation, refer to the documentation on using fixed
IP addresses for further information at :ref:`predictable_ips`.
Configuring Deployed Servers to poll Heat
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. _pre-configuring-metadata-agent-configuration:
Pre-configuring metadata agent configuration
____________________________________________
Beginning with the Pike release of TripleO, the deployed servers' agents can be
configured to poll the Heat deployment data independently of creating the
overcloud stack.
This is accomplished using the ``DeploymentSwiftDataMap`` parameter as shown in
the previous section. Once the Swift container and object names are chosen for
each deployed server, create Swift temporary URLs that correspond to each
container/object, and configure each temporary URL in the agent configuration on
the respective deployed server.
For this example, the following ``DeploymentSwiftDataMap`` parameter value is
assumed::
parameter_defaults:
DeploymentSwiftDataMap:
overcloud-controller-0:
container: overcloud-controller
object: 0
overcloud-controller-1:
container: overcloud-controller
object: 1
overcloud-controller-2:
container: overcloud-controller
object: 2
overcloud-novacompute-0:
container: overcloud-compute
object: 0
Start by showing the Swift account and temporary URL key::
swift stat
Sample output looks like::
Account: AUTH_aa7784aae1ae41c38e6e01fd76caaa7c
Containers: 5
Objects: 706
Bytes: 3521748
Containers in policy "policy-0": 5
Objects in policy "policy-0": 706
Bytes in policy "policy-0": 3521748
Meta Temp-Url-Key: 25ad317c25bb89c62f5730f3b8cf8fca
X-Account-Project-Domain-Id: default
X-Openstack-Request-Id: txedaadba016cd474dac37f-00595ea5af
X-Timestamp: 1499288311.20888
X-Trans-Id: txedaadba016cd474dac37f-00595ea5af
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
Record the value of ``Account`` and ``Meta Temp-Url-Key`` from the output of
the above command.
If ``Meta Temp-Url-Key`` is not set, it can be set by running the following
command (choose a unique value for the key)::
swift post -m "Temp-URL-Key:b3968d0207b54ece87cccc06515a89d4"
Create temporary URLs for each Swift object specified in
``DeploymentSwiftDataMap``::
swift tempurl GET 600000000 /v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-controller/0 25ad317c25bb89c62f5730f3b8cf8fca
swift tempurl GET 600000000 /v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-controller/1 25ad317c25bb89c62f5730f3b8cf8fca
swift tempurl GET 600000000 /v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-controller/2 25ad317c25bb89c62f5730f3b8cf8fca
swift tempurl GET 600000000 /v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-compute/0 25ad317c25bb89c62f5730f3b8cf8fca
See ``swift tempurl --help`` for a detailed explanation of each argument.
The above commands output URL paths, which need to be joined with the Swift
public API endpoint to construct the full metadata URL. In a default TripleO
deployment, this value is ``http://192.168.24.1:8080``, but it is likely
different for any real deployment.
Joining the output from one of the above commands with the Swift public
endpoint results in a URL that looks like::
http://192.168.24.1:8080/v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-controller/0?temp_url_sig=92de8e4c66b77c54630dede8150b3ebcd46a1fca&temp_url_expires=700000000
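As a small sketch of that join (the ``SWIFT_PUBLIC_ENDPOINT`` and
``TEMPURL_PATH`` variable names are only illustrative; the account, container
and key values are the examples from above)::

    # Swift public API endpoint of the deployment
    SWIFT_PUBLIC_ENDPOINT=http://192.168.24.1:8080
    # Signed path returned by "swift tempurl" for overcloud-controller-0
    TEMPURL_PATH=$(swift tempurl GET 600000000 \
        /v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-controller/0 \
        25ad317c25bb89c62f5730f3b8cf8fca)
    # Full metadata URL for the agent configuration
    echo "${SWIFT_PUBLIC_ENDPOINT}${TEMPURL_PATH}"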
Once each URL is obtained, configure the agent on each deployed server with its
respective metadata URL (e.g., use the metadata URL for controller 0 on the
deployed server intended to be controller 0, etc.). Create the following file
(and the ``local-data`` directory if necessary). Both should be root-owned::
mkdir -p /var/lib/os-collect-config/local-data
/var/lib/os-collect-config/local-data/deployed-server.json
Example file contents::
{
"os-collect-config": {
"collectors": ["request", "local"],
"request": {
"metadata_url": "http://192.168.24.1:8080/v1/AUTH_aa7784aae1ae41c38e6e01fd76caaa7c/overcloud-controller/0?temp_url_sig=92de8e4c66b77c54630dede8150b3ebcd46a1fca&temp_url_expires=700000000"
}
}
}
The deployed server's agent is now configured.
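One way to confirm the agent has picked up the new data source is to watch its
logs (a quick check, assuming the service is managed by systemd as elsewhere in
this guide)::

    sudo journalctl -u os-collect-config -f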
Reading metadata configuration from Heat
________________________________________
If not using ``DeploymentSwiftDataMap``, the metadata configuration will have
to be read directly from Heat once the stack starts to create.
Upon executing the deployment command, Heat will begin creating the
``overcloud`` stack. The stack events are shown in the terminal as the stack
operation is in progress.
The resources corresponding to the deployed servers will enter
CREATE_IN_PROGRESS. At this point, the Heat stack will not continue as it is
waiting for signals from the servers. The agents on the deployed servers need
to be configured to poll Heat for their configuration.
This point in the Heat events output will look similar to::
2017-01-14 13:25:13Z [overcloud.Compute.0.NovaCompute]: CREATE_IN_PROGRESS state changed
2017-01-14 13:25:14Z [overcloud.Controller.0.Controller]: CREATE_IN_PROGRESS state changed
2017-01-14 13:25:14Z [overcloud.Controller.1.Controller]: CREATE_IN_PROGRESS state changed
2017-01-14 13:25:15Z [overcloud.Controller.2.Controller]: CREATE_IN_PROGRESS state changed
The example output above is from a deployment with 3 controllers and 1 compute.
As seen, these resources have entered the CREATE_IN_PROGRESS state.
To configure the agents on the deployed servers, the request metadata URL needs
to be read from the Heat resource metadata on the individual resources, and
configured in the ``/etc/os-collect-config.conf`` configuration file on the
corresponding deployed servers.
Manual configuration of Heat agents
"""""""""""""""""""""""""""""""""""
These steps can be used to manually configure the Heat agents
(``os-collect-config``) on the deployed servers.
Query Heat for the request metadata URL by first listing the nested
``deployed-server`` resources::
openstack stack resource list -n 5 overcloud | grep deployed-server
Example output::
| deployed-server | 895c08b8-f6f4-4564-b344-586603e7e970 | OS::Heat::DeployedServer | CREATE_COMPLETE | 2017-01-14T13:25:12Z | overcloud-Controller-pgeu4nxsuq6r-1-v4slfaduprak-Controller-ltxdxz2fin3d |
| deployed-server | 87cd8d81-8bbe-4c0b-9bd9-f5bcd1343265 | OS::Heat::DeployedServer | CREATE_COMPLETE | 2017-01-14T13:25:15Z | overcloud-Controller-pgeu4nxsuq6r-0-5uin56wp3ign-Controller-5wkislg4kiv5 |
| deployed-server | 3d387f61-dc6d-41f7-b3b8-5c9a0ab0ed7b | OS::Heat::DeployedServer | CREATE_COMPLETE | 2017-01-14T13:25:16Z | overcloud-Controller-pgeu4nxsuq6r-2-m6tgzatgnqrb-Controller-yczqaulovrla |
| deployed-server | cc230478-287e-4591-a905-bbfca6c89742 | OS::Heat::DeployedServer | CREATE_COMPLETE | 2017-01-14T13:25:13Z | overcloud-Compute-vllmnqf5d77h-0-kfm2xsdmtmr6-NovaCompute-67djxtyrwi6z |
Show the resource metadata for one of the resources. The last column in the
above output is the nested stack name and is used in the command below. The
command shows the resource metadata for the first controller (Controller.0)::
openstack stack resource metadata overcloud-Controller-pgeu4nxsuq6r-0-5uin56wp3ign-Controller-5wkislg4kiv5 deployed-server
The above command outputs a significant amount of JSON representing the
resource metadata. To see just the request ``metadata_url``, the command can be
piped to ``jq`` to show just the needed URL::
openstack stack resource metadata overcloud-Controller-pgeu4nxsuq6r-0-5uin56wp3ign-Controller-5wkislg4kiv5 deployed-server | jq -r '.["os-collect-config"].request.metadata_url'
Example output::
http://10.12.53.41:8080/v1/AUTH_cf85adf63bc04912854473ff2b08b5a2/ov-ntroller-5wkislg4kiv5-deployed-server-yc4lx2d43dmb/244744c2-4af1-4626-92c6-94b2f78e3791?temp_url_sig=6d33b16ee2ae166a306633f04376ee54f0451ae4&temp_url_expires=2147483586
Using the above URL, configure ``/etc/os-collect-config.conf`` on the deployed
server that is intended to be used as Controller 0. The full configuration
would be::
[DEFAULT]
collectors=request
command=os-refresh-config
polling_interval=30
[request]
metadata_url=http://10.12.53.41:8080/v1/AUTH_cf85adf63bc04912854473ff2b08b5a2/ov-ntroller-5wkislg4kiv5-deployed-server-yc4lx2d43dmb/244744c2-4af1-4626-92c6-94b2f78e3791?temp_url_sig=6d33b16ee2ae166a306633f04376ee54f0451ae4&temp_url_expires=2147483586
Once the configuration has been updated on the deployed server for Controller
0, restart the ``os-collect-config`` service::
sudo systemctl restart os-collect-config
Repeat the configuration for the other nodes in the Overcloud by querying Heat
for the request metadata URL and updating the ``os-collect-config`` configuration
on the respective deployed servers.
Once all the agents have been properly configured, they will begin polling for
the software deployments to apply locally from Heat, and the Heat stack will
continue creating. If the deployment is successful, the Heat stack will
eventually go to the ``CREATE_COMPLETE`` state.
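For example, one way to confirm the final state from the undercloud is::

    openstack stack show overcloud -c stack_status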
Automatic configuration of Heat agents
""""""""""""""""""""""""""""""""""""""
A script is included with ``tripleo-heat-templates`` that can be used to
automatically configure the Heat agent on the deployed servers instead of
relying on the above manual process.
The script requires that the environment variables needed to authenticate with
the Undercloud's keystone have been set in the current shell. These environment
variables can be set by sourcing the Undercloud's ``stackrc`` file.
The script also requires that the user executing the script can ssh as the same
user to each deployed server, and that the remote user account has password-less
sudo access.
The following shows an example of running the script::
export OVERCLOUD_ROLES="ControllerDeployedServer ComputeDeployedServer"
export ControllerDeployedServer_hosts="192.168.25.1 192.168.25.2 192.168.25.3"
export ComputeDeployedServer_hosts="192.168.25.4"
tripleo-heat-templates/deployed-server/scripts/get-occ-config.sh
As shown above, the script is further configured by the ``$OVERCLOUD_ROLES``
environment variable and the corresponding ``$<role-name>_hosts`` variables.
``$OVERCLOUD_ROLES`` is a space separated list of the role names used for the
Overcloud deployment. These role names correspond to the names of the roles in
the roles data file used during the deployment.
Each ``$<role-name>_hosts`` variable is a space separated **ordered** list of
the IP addresses of the deployed servers for that role.
For example, in the above command, 192.168.25.1 is the IP of Controller 0,
192.168.25.2 is the IP of Controller 1, etc.
.. Note:: The IP addresses for the hosts in the ``$<role-name>_hosts`` variable
must be **ordered** to avoid Heat agent configuration mismatch.
Start with the address for the node with the lowest node-index and
count from there. For example, when deployed server IP addresses are:
* overcloud-controller-0: 192.168.25.10
* overcloud-controller-1: 192.168.25.11
* overcloud-controller-2: 192.168.25.12
* overcloud-compute-0: 192.168.25.20
* overcloud-compute-1: 192.168.25.21
The variables must be set as follows.
(**The order of entries is critical!**)
For Controllers::
# controller-0 controller-1 controller-2
ControllerDeployedServer_hosts="192.168.25.10 192.168.25.11 192.168.25.12"
For Computes::
# compute-0 compute-1
ComputeDeployedServer_hosts="192.168.25.20 192.168.25.21"
The script takes care of querying Heat for each request metadata URL,
configuring the URL in the agent configuration file on each deployed server, and
restarting the agent service.
Once the script executes successfully, the deployed servers will start polling
Heat for software deployments and the stack will continue creating.
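Conceptually (this is only an illustrative sketch, not the actual contents of
``get-occ-config.sh``; the nested stack name lookup for each node is elided and
marked as a placeholder), the script performs the equivalent of the following
loop for every role and host::

    for role in $OVERCLOUD_ROLES; do
        hosts_var="${role}_hosts"
        index=0
        for host in ${!hosts_var}; do
            # Read the request metadata url for this node from Heat
            # ("<nested-stack-name>" is a placeholder for the nested stack of <role> node <index>)
            url=$(openstack stack resource metadata "<nested-stack-name>" deployed-server \
                | jq -r '.["os-collect-config"].request.metadata_url')
            # Write the agent configuration (as shown in the manual steps) and restart the agent
            printf '[DEFAULT]\ncollectors=request\ncommand=os-refresh-config\npolling_interval=30\n[request]\nmetadata_url=%s\n' "$url" \
                | ssh "$host" "sudo tee /etc/os-collect-config.conf > /dev/null && sudo systemctl restart os-collect-config"
            index=$((index + 1))
        done
    done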
Scaling the Overcloud
---------------------
Scaling Up
^^^^^^^^^^
When scaling up the Overcloud, the Heat agents on the servers being added to
the deployment need to be configured for the new nodes.
When scaling out compute nodes, the steps to be completed by the user are as
follows:
#. Prepare the new deployed server(s) as shown in `Deployed Server
Requirements`_.
#. Start the scale out command. See :doc:`../post_deployment/scale_roles` for reference.
#. Once Heat has created the new resources for the new deployed server(s),
query Heat for the request metadata URL for the new nodes, and configure the
remote agents as shown in `Manual configuration of Heat agents`_ (see also the
listing example after this list for identifying the new nodes). The manual
configuration of the agents should be used when scaling up because the
automated script method will reconfigure all nodes, not just the new nodes
being added to the deployment.
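The new nodes' ``deployed-server`` resources can be identified because they are
the ones still waiting for signals (CREATE_IN_PROGRESS) while the existing
nodes' resources are already complete. For example, reusing the listing command
from the manual configuration steps::

    openstack stack resource list -n 5 overcloud | grep deployed-server | grep CREATE_IN_PROGRESS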
Scaling Down
^^^^^^^^^^^^