Document DeployedServerPortMap deprecation

Adds documentation around the deprecation of DeployedServerPortMap and
also updates the deployed-server docs to remove the parts related to
configuring the nodes with Heat as that has not been used since Queens.

Change-Id: Ief2c8a3198a9ade9610a5d08b705a6e9617ab2f0
Signed-off-by: James Slagle <jslagle@redhat.com>
Depends-On: Ib59bb985fe15f612f93a33b1a688427e684654dd
James Slagle
2021-06-15 13:11:06 -04:00
parent e146694003
commit 7e65bbeec1


the purposes of assigning IP addresses to the port resources created by
tripleo-heat-templates.
Network L3 connectivity is still a requirement between the Undercloud and
Overcloud nodes. The undercloud will need to be able to connect over a routable
IP to the overcloud nodes for software configuration with Ansible.
Overcloud
_________
Configure the deployed servers that will be used as nodes in the overcloud with
L3 connectivity from the Undercloud as needed. The configuration could be done
via static or DHCP IP assignment.

Further networking configuration of Overcloud nodes is the same as in a typical
TripleO deployment, except for:

* Initial configuration of L3 connectivity from the undercloud to the
  overcloud.
* No requirement for dedicating a separate L2 network for provisioning
Testing Connectivity
____________________

Test connectivity from the undercloud to the overcloud nodes using SSH over the
configured IP address on the deployed servers. This should be the IP address
that is configured on ``--overcloud-ssh-network`` as passed to the ``openstack
overcloud deploy`` command. The key and user to use with the test should be the
same as used with ``--overcloud-ssh-key`` and ``--overcloud-ssh-user`` with the
deployment command.
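This check can be scripted before running the deployment. The following is a minimal sketch (not TripleO tooling) that verifies TCP reachability of the SSH port on each deployed server; the host IP addresses are illustrative placeholders, not values from this document.

```python
# Sketch: verify the SSH port on each deployed server is reachable from the
# undercloud before deploying. Host IPs below are hypothetical examples.
import socket


def ssh_reachable(host, port=22, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Replace with the IPs configured on --overcloud-ssh-network.
    for host in ("192.168.24.9", "192.168.24.8"):
        status = "ok" if ssh_reachable(host) else "UNREACHABLE"
        print(f"{host}: {status}")
```

This only confirms the TCP handshake; it does not validate the SSH key or user passed via ``--overcloud-ssh-key`` and ``--overcloud-ssh-user``.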
Package repositories
^^^^^^^^^^^^^^^^^^^^
other areas of TripleO, such as Undercloud installation. See
:doc:`../repositories` for the detailed steps on how to
enable the standard repositories for TripleO.
Deploying the Overcloud
-----------------------
meaningful parameters depending on the network architecture in use with
deployed servers. However, they still must be specified as they are required
parameters for the template interface.
.. admonition:: Victoria and prior releases

   The ``DeployedServerPortMap`` parameter can be used to assign fixed IPs
   from either the ctlplane network or the IP address range for the
   overcloud.

   If the deployed servers were preconfigured with IP addresses from the ctlplane
   network for the initial undercloud connectivity, then the same IP addresses can
   be reused during the overcloud deployment. Add the following to a new
   environment file and specify the environment file as part of the deployment
   command::

       resource_registry:
         OS::TripleO::DeployedServer::ControlPlanePort: ../deployed-server/deployed-neutron-port.yaml

       parameter_defaults:
         DeployedServerPortMap:
           controller0-ctlplane:
             fixed_ips:
               - ip_address: 192.168.24.9
             subnets:
               - cidr: 192.168.24.0/24
             network:
               tags:
                 - 192.168.24.0/24
           compute0-ctlplane:
             fixed_ips:
               - ip_address: 192.168.24.8
             subnets:
               - cidr: 192.168.24.0/24
             network:
               tags:
                 - 192.168.24.0/24

   The value of the ``DeployedServerPortMap`` variable is a map. The keys correspond
   to the ``<short hostname>-ctlplane`` of the deployed servers. Specify the IP
   addresses and subnet CIDR to be assigned under ``fixed_ips``.

   In the case where the ctlplane is not routable from the deployed
   servers, the virtual IPs on the ControlPlane, as well as the virtual IPs
   for services (Redis and OVNDBs) must be statically assigned.
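Because every entry follows the same ``<short hostname>-ctlplane`` key convention and port layout, the map can be generated rather than hand-typed. A minimal sketch, assuming hostnames and addresses that are illustrative placeholders:

```python
# Sketch: build a DeployedServerPortMap structure from a short-hostname -> IP
# mapping. The "<short hostname>-ctlplane" key convention and the
# fixed_ips/subnets/network layout follow the example above; the specific
# hostnames, IPs, and subnet here are illustrative only.
def build_port_map(hosts, cidr):
    """hosts: dict of short hostname -> ctlplane IP; cidr: ctlplane subnet."""
    return {
        f"{name}-ctlplane": {
            "fixed_ips": [{"ip_address": ip}],
            "subnets": [{"cidr": cidr}],
            "network": {"tags": [cidr]},
        }
        for name, ip in hosts.items()
    }


port_map = build_port_map(
    {"controller0": "192.168.24.9", "compute0": "192.168.24.8"},
    "192.168.24.0/24",
)
print(port_map["compute0-ctlplane"]["fixed_ips"])  # [{'ip_address': '192.168.24.8'}]
```

The resulting dictionary can be dumped to YAML under ``parameter_defaults`` in the environment file.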
Use ``DeployedServerPortMap`` to assign an IP address from any CIDR::
resource_registry:
- 192.168.100.0/24
Use ``DeployedServerPortMap`` to assign a ControlPlane Virtual IP address from
any CIDR, and the ``RedisVirtualFixedIPs`` and ``OVNDBsVirtualFixedIPs``
parameters to assign the ``RedisVip`` and ``OVNDBsVip``::

    resource_registry:
      OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
      OS::TripleO::Network::Ports::ControlPlaneVipPort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

    parameter_defaults:
      NeutronPublicInterface: eth1
      EC2MetadataIp: 192.168.100.1
      ControlPlaneDefaultRoute: 192.168.100.1

      # Set VIPs for redis and OVN
      RedisVirtualFixedIPs:
        - ip_address: 192.168.100.10
          use_neutron: false
      OVNDBsVirtualFixedIPs:
        - ip_address: 192.168.100.11
          use_neutron: false

      DeployedServerPortMap:
        control_virtual_ip:
          fixed_ips:
            - ip_address: 192.168.100.1
          subnets:
            - cidr: 192.168.100.0/24
          network:
            tags:
              - 192.168.100.0/24
        controller0-ctlplane:
          fixed_ips:
            - ip_address: 192.168.100.2
          subnets:
            - cidr: 192.168.100.0/24
          network:
            tags:
              - 192.168.100.0/24
        compute0-ctlplane:
          fixed_ips:
            - ip_address: 192.168.100.3
          subnets:
            - cidr: 192.168.100.0/24
          network:
            tags:
              - 192.168.100.0/24
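Since every statically assigned address in such an environment must belong to its subnet, a quick sanity check before deploying can catch a mistyped octet. A minimal sketch using the values from the example above (not TripleO code):

```python
# Sketch: verify that every fixed IP and VIP falls inside the ctlplane CIDR,
# catching typos before the deployment fails. Values mirror the example above.
import ipaddress


def in_cidr(ip, cidr):
    """Return True if the address is within the given network."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr)


cidr = "192.168.100.0/24"
addresses = {
    "control_virtual_ip": "192.168.100.1",
    "controller0-ctlplane": "192.168.100.2",
    "compute0-ctlplane": "192.168.100.3",
    "RedisVip": "192.168.100.10",
    "OVNDBsVip": "192.168.100.11",
}
bad = [name for name, ip in addresses.items() if not in_cidr(ip, cidr)]
assert not bad, f"addresses outside {cidr}: {bad}"
print("all addresses within", cidr)
```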
Beginning in Wallaby, the
``environments/deployed-server-deployed-neutron-ports.yaml`` environment, the
``deployed-neutron-port.yaml`` template, and the ``DeployedServerPortMap``
parameter are deprecated in favor of using ``NodePortMap``,
``ControlPlaneVipData``, and ``VipPortMap`` with the generated
``environments/deployed-ports.yaml`` environment from
``environments/deployed-ports.j2.yaml``.

The previous example with ``DeployedServerPortMap`` would be migrated to use
``NodePortMap``, ``ControlPlaneVipData``, and ``VipPortMap`` as follows. The
example is expanded to also show parameter values as they would be used when
using network isolation::
    parameter_defaults:
      NodePortMap:
        controller0:
          ctlplane:
            ip_address: 192.168.100.2
            ip_address_uri: 192.168.100.2
            ip_subnet: 192.168.100.0/24
          external:
            ip_address: 10.0.0.10
            ip_address_uri: 10.0.0.10
            ip_subnet: 10.0.0.10/24
          internal_api:
            ip_address: 172.16.2.10
            ip_address_uri: 172.16.2.10
            ip_subnet: 172.16.2.10/24
          management:
            ip_address: 192.168.1.10
            ip_address_uri: 192.168.1.10
            ip_subnet: 192.168.1.10/24
          storage:
            ip_address: 172.16.1.10
            ip_address_uri: 172.16.1.10
            ip_subnet: 172.16.1.10/24
          storage_mgmt:
            ip_address: 172.16.3.10
            ip_address_uri: 172.16.3.10
            ip_subnet: 172.16.3.10/24
          tenant:
            ip_address: 172.16.0.10
            ip_address_uri: 172.16.0.10
            ip_subnet: 172.16.0.10/24
        compute0:
          ctlplane:
            ip_address: 192.168.100.3
            ip_address_uri: 192.168.100.3
            ip_subnet: 192.168.100.0/24
          external:
            ip_address: 10.0.0.110
            ip_address_uri: 10.0.0.110
            ip_subnet: 10.0.0.110/24
          internal_api:
            ip_address: 172.16.2.110
            ip_address_uri: 172.16.2.110
            ip_subnet: 172.16.2.110/24
          management:
            ip_address: 192.168.1.110
            ip_address_uri: 192.168.1.110
            ip_subnet: 192.168.1.110/24
          storage:
            ip_address: 172.16.1.110
            ip_address_uri: 172.16.1.110
            ip_subnet: 172.16.1.110/24
          storage_mgmt:
            ip_address: 172.16.3.110
            ip_address_uri: 172.16.3.110
            ip_subnet: 172.16.3.110/24
          tenant:
            ip_address: 172.16.0.110
            ip_address_uri: 172.16.0.110
            ip_subnet: 172.16.0.110/24
      ControlPlaneVipData:
        fixed_ips:
          - ip_address: 192.168.100.1
        name: control_virtual_ip
        network:
          tags:
            - 192.168.100.0/24
        subnets:
          - ip_version: 4
      VipPortMap:
        external:
          ip_address: 10.0.0.100
          ip_address_uri: 10.0.0.100
          ip_subnet: 10.0.0.100/24
        internal_api:
          ip_address: 172.16.2.100
          ip_address_uri: 172.16.2.100
          ip_subnet: 172.16.2.100/24
        storage:
          ip_address: 172.16.1.100
          ip_address_uri: 172.16.1.100
          ip_subnet: 172.16.1.100/24
        storage_mgmt:
          ip_address: 172.16.3.100
          ip_address_uri: 172.16.3.100
          ip_subnet: 172.16.3.100/24
      RedisVirtualFixedIPs:
        - ip_address: 192.168.100.10
          use_neutron: false
      OVNDBsVirtualFixedIPs:
        - ip_address: 192.168.100.11
          use_neutron: false
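The mechanical part of this migration can be sketched in a few lines. The following is illustrative only (not TripleO tooling): it derives a ``NodePortMap`` ctlplane entry from an old ``DeployedServerPortMap`` entry by dropping the ``-ctlplane`` key suffix and flattening the ``fixed_ips``/``subnets`` lists into ``ip_address``/``ip_subnet`` fields; ``ip_address_uri`` is set equal to ``ip_address``, which is only valid for IPv4.

```python
# Sketch of the DeployedServerPortMap -> NodePortMap migration described
# above. Illustrative only; for IPv6 the ip_address_uri form would differ
# (bracketed), which this simple sketch does not handle.
def to_node_port_map(port_map):
    """Convert old-style port map entries to NodePortMap ctlplane entries."""
    node_map = {}
    for key, port in port_map.items():
        node = key.removesuffix("-ctlplane")  # controller0-ctlplane -> controller0
        ip = port["fixed_ips"][0]["ip_address"]
        node_map[node] = {
            "ctlplane": {
                "ip_address": ip,
                "ip_address_uri": ip,
                "ip_subnet": port["subnets"][0]["cidr"],
            }
        }
    return node_map


old = {
    "controller0-ctlplane": {
        "fixed_ips": [{"ip_address": "192.168.100.2"}],
        "subnets": [{"cidr": "192.168.100.0/24"}],
    }
}
print(to_node_port_map(old)["controller0"]["ctlplane"]["ip_address"])  # 192.168.100.2
```

The per-network entries (external, internal_api, and so on) still have to be supplied by hand, as the old format carried no equivalent data.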
The environment file ``environments/deployed-ports.yaml`` must then be included
with the deployment command.
The ``EC2MetadataIp`` and ``ControlPlaneDefaultRoute`` parameters are set to
the value of the control virtual IP address. These parameters are required to
be set by the sample NIC configs, and must be set to a pingable IP address in
order to pass the validations performed during deployment. Alternatively, the
NIC configs could be further customized to not require these parameters.
When using network isolation, refer to the documentation on using fixed
IP addresses for further information at :ref:`predictable_ips`.
Scaling the Overcloud
---------------------