=================================
Extend Capacity with Worker Nodes
=================================

.. start-aio-duplex-extend

This section describes the steps to extend capacity with worker nodes on a
|prod| All-in-one Duplex deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on worker nodes
--------------------------------

#. Power on the worker node servers.

   .. only:: starlingx

      .. tabs::

         .. group-tab:: Bare Metal

            .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
               :start-after: begin-install-sw-on-workers-power-on-dx
               :end-before: end-install-sw-on-workers-power-on-dx

         .. group-tab:: Virtual

            #. On the host, power on the worker-0 and worker-1 virtual
               servers. They will automatically attempt to network boot over
               the management network:

               .. code-block:: none

                  $ virsh start duplex-worker-0
                  $ virsh start duplex-worker-1

            #. Attach to the consoles of worker-0 and worker-1:

               .. code-block:: none

                  $ virsh console duplex-worker-0
                  $ virsh console duplex-worker-1

   .. only:: partner

      .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
         :start-after: begin-install-sw-on-workers-power-on-dx
         :end-before: end-install-sw-on-workers-power-on-dx

#. As the worker nodes boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   worker node hosts (hostname=None):

   ::

      ~(keystone_admin)$ system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | None         | None        | locked         | disabled    | offline      |
      | 4  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'worker':

   .. code-block:: bash

      ~(keystone_admin)$ system host-update 3 personality=worker hostname=worker-0
      ~(keystone_admin)$ system host-update 4 personality=worker hostname=worker-1

   This initiates the install of software on the worker nodes. This can take
   5-10 minutes, depending on the performance of the host machine.

   .. only:: starlingx

      .. note::

         A node with Edgeworker personality is also available. See
         :ref:`deploy-edgeworker-nodes` for details.

#. Wait for the install of software on the worker nodes to complete, for the
   worker nodes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

      ~(keystone_admin)$ system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | worker-0     | worker      | locked         | disabled    | online       |
      | 4  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+
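
   If you prefer not to re-run the command manually, a small loop can poll
   until both workers report online. This is a convenience sketch, not part
   of the standard procedure:

   .. code-block:: bash

      # Optional: poll until both worker-0 and worker-1 report 'online'
      while [ $(system host-list | grep -c -E 'worker-[01].*online') -lt 2 ]; do
         sleep 30
      done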

----------------------
Configure worker nodes
----------------------

#. The MGMT interfaces are partially set up by the network install procedure,
   which configures the port used for network install as the MGMT port and
   specifies the attached network of "mgmt".

   Complete the MGMT interface configuration of the worker nodes by
   specifying the attached network of "cluster-host".

   .. code-block:: bash

      for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done
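
   To confirm the assignments, you can optionally list the interface-network
   mappings on each worker; both mgmt and cluster-host should appear against
   mgmt0:

   .. code-block:: bash

      # Optional check: list interface-network assignments per worker
      for NODE in worker-0 worker-1; do
         system interface-network-list $NODE
      done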

.. only:: openstack

   *************************************
   OpenStack-specific host configuration
   *************************************

   .. important::

      **These steps are required only if the StarlingX OpenStack application
      (|prefix|-openstack) will be installed.**

   #. **For OpenStack only:** Assign OpenStack host labels to the worker
      nodes in support of installing the |prefix|-openstack manifest and
      helm-charts later.

      .. only:: starlingx

         .. tabs::

            .. group-tab:: Bare Metal

               .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
                  :start-after: begin-os-specific-host-config-labels-dx
                  :end-before: end-os-specific-host-config-labels-dx

            .. group-tab:: Virtual

               No additional steps are required.

      .. only:: partner

         .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
            :start-after: begin-os-specific-host-config-labels-dx
            :end-before: end-os-specific-host-config-labels-dx
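
      To confirm the labels, you can optionally list them per host. This
      check is a convenience sketch, not part of the standard procedure:

      .. code-block:: bash

         # Optional check: list the OpenStack host labels assigned above
         for NODE in worker-0 worker-1; do
            system host-label-list ${NODE}
         done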

   #. **For OpenStack only:** Configure the host settings for the vSwitch.

      .. only:: starlingx

         .. tabs::

            .. group-tab:: Bare Metal

               .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
                  :start-after: begin-os-specific-host-config-vswitch-dx
                  :end-before: end-os-specific-host-config-vswitch-dx

            .. group-tab:: Virtual

               No additional configuration is required for the OVS vSwitch in
               a virtual environment.

      .. only:: partner

         .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
            :start-after: begin-os-specific-host-config-vswitch-dx
            :end-before: end-os-specific-host-config-vswitch-dx

   #. **For OpenStack only:** Set up a disk partition for the nova-local
      volume group, needed for |prefix|-openstack nova ephemeral disks.

      .. code-block:: bash

         for NODE in worker-0 worker-1; do
           ~(keystone_admin)$ system host-lvg-add ${NODE} nova-local

           # Get the UUID of the DISK on which to create the PARTITION that
           # will be added to the nova-local volume group.
           # CEPH OSD disks can NOT be used.
           # For best performance, do NOT use the system/root disk; use a
           # separate physical disk.

           # List the host's disks and take note of the UUID of the disk to
           # be used.
           ~(keystone_admin)$ system host-disk-list ${NODE}
           # (If using the ROOT DISK, select the disk with the device_path
           # shown by 'system host-show ${NODE} | fgrep rootfs'.)

           # Create a new PARTITION on the selected disk, and take note of
           # the new partition's uuid in the response.
           # The size of the PARTITION needs to be large enough to hold the
           # aggregate size of all nova ephemeral disks of all VMs that you
           # want to be able to host on this host, but it is limited by the
           # size and space available on the physical disk chosen above.
           # The following example uses a small PARTITION size such that it
           # fits on the root disk, if that is what you chose above.
           # Additional PARTITION(s) from additional disks can be added
           # later if required.
           PARTITION_SIZE=30
           ~(keystone_admin)$ system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}

           # Add the new partition to the nova-local volume group.
           ~(keystone_admin)$ system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
           sleep 2
         done
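
      After the loop completes, you can optionally confirm that each new
      partition was added to the nova-local volume group:

      .. code-block:: bash

         # Optional check: the new physical volume should be listed under nova-local
         for NODE in worker-0 worker-1; do
            system host-pv-list ${NODE}
         done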

   #. **For OpenStack only:** Configure data interfaces for worker nodes.

      Data class interfaces are vSwitch interfaces used by vSwitch to provide
      |VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on
      the underlying assigned Data Network.

      .. important::

         A compute-labeled worker host **MUST** have at least one Data class
         interface.

      * Configure the data interfaces for worker nodes.

        .. only:: starlingx

           .. tabs::

              .. group-tab:: Bare Metal

                 .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
                    :start-after: begin-os-specific-host-config-data-dx
                    :end-before: end-os-specific-host-config-data-dx

              .. group-tab:: Virtual

                 .. code-block:: none

                    # Execute the following lines with
                    ~(keystone_admin)$ export NODE=worker-0
                    # and then repeat with
                    ~(keystone_admin)$ export NODE=worker-1

                    ~(keystone_admin)$ DATA0IF=eth1000
                    ~(keystone_admin)$ DATA1IF=eth1001
                    ~(keystone_admin)$ PHYSNET0='physnet0'
                    ~(keystone_admin)$ PHYSNET1='physnet1'
                    ~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
                    ~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
                    ~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
                    ~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
                    ~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
                    ~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
                    ~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
                    ~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
                    ~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
                    ~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
                    ~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
                    ~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
                    ~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
                    ~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan
                    ~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
                    ~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
                    ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
                    ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

        .. only:: partner

           .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
              :start-after: begin-os-specific-host-config-data-dx
              :end-before: end-os-specific-host-config-data-dx
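
      To verify the data interface configuration, you can optionally list the
      data network assignments on each worker:

      .. code-block:: bash

         # Optional check: data0/data1 should show their assigned data networks
         for NODE in worker-0 worker-1; do
            system interface-datanetwork-list ${NODE}
         done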

*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************

#. **Optionally**, configure pci-sriov interfaces for worker nodes.

   This step is **optional** for Kubernetes. Do this step if using |SRIOV|
   network attachments in hosted application containers.

   .. only:: openstack

      This step is **optional** for OpenStack. Do this step if using |SRIOV|
      vNICs in hosted application VMs. Note that pci-sriov interfaces can
      have the same Data Networks assigned to them as vSwitch data
      interfaces.

   * Configure the pci-sriov interfaces for worker nodes.

     .. code-block:: bash

        # Execute the following lines with
        ~(keystone_admin)$ export NODE=worker-0
        # and then repeat with
        ~(keystone_admin)$ export NODE=worker-1

        # List the inventoried host's ports and identify the ports to be used
        # as pci-sriov interfaces, based on the displayed Linux port name,
        # PCI address, and device type.
        ~(keystone_admin)$ system host-port-list ${NODE}

        # List the host's auto-configured ethernet interfaces, find the
        # interfaces corresponding to the ports identified in the previous
        # step, and take note of their UUIDs.
        ~(keystone_admin)$ system host-if-list -a ${NODE}

        # Modify the configuration of these interfaces, configuring them as
        # pci-sriov class interfaces with an MTU of 1500, named sriov#.
        ~(keystone_admin)$ system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
        ~(keystone_admin)$ system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>

        # If not already created, create the Data Networks that the
        # 'pci-sriov' interfaces will be connected to.
        ~(keystone_admin)$ DATANET0='datanet0'
        ~(keystone_admin)$ DATANET1='datanet1'
        ~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
        ~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan

        # Assign the Data Networks to the PCI-SRIOV interfaces.
        ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
        ~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

   * **For Kubernetes only:** To enable using |SRIOV| network attachments for
     the above interfaces in Kubernetes hosted application containers:

     .. only:: starlingx

        .. tabs::

           .. group-tab:: Bare Metal

              .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
                 :start-after: begin-os-specific-host-config-sriov-dx
                 :end-before: end-os-specific-host-config-sriov-dx

           .. group-tab:: Virtual

              Configure the Kubernetes |SRIOV| device plugin:

              .. code-block:: bash

                 for NODE in worker-0 worker-1; do
                    system host-label-assign $NODE sriovdp=enabled
                 done

     .. only:: partner

        .. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
           :start-after: begin-os-specific-host-config-sriov-dx
           :end-before: end-os-specific-host-config-sriov-dx
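
   To confirm the |SRIOV| interface configuration, you can optionally display
   each interface. This check is a convenience sketch, not part of the
   standard procedure:

   .. code-block:: bash

      # Optional check: 'class' should be pci-sriov for each sriov interface
      for NODE in worker-0 worker-1; do
         system host-if-show ${NODE} sriov0
         system host-if-show ${NODE} sriov1
      done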

-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

.. code-block:: bash

   for NODE in worker-0 worker-1; do
      system host-unlock $NODE
   done

The worker nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
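
As with the software install, a small loop can optionally watch for both
workers to reach unlocked/enabled/available. This is a convenience sketch,
not part of the standard procedure:

.. code-block:: bash

   # Optional: poll until both worker-0 and worker-1 report 'available'
   while [ $(system host-list | grep -c -E 'worker-[01].*available') -lt 2 ]; do
      sleep 30
   done
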
.. end-aio-duplex-extend