Lots of corrections / clarifications to install guides.

(for Greg W) - replaced abbrev substitutions inside **bold** with plain text, as
this combination does not expand. Also fixed one additional file that came up
in search. Some related margin adjustments ...
Initial content submit.
Fix cherrypick merge conflicts.

Signed-off-by: Greg Waines <greg.waines@windriver.com>
Change-Id: If8805359bf80b0f1359ef58c24493307310e7e28
Signed-off-by: Ron Stone <ronald.stone@windriver.com>
(cherry picked from commit 50bc21226b)
Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Greg Waines, 2021-05-03 11:01:51 -04:00 (committed by Ron Stone)
parent a15fd32a10
commit cdb61ee207
7 changed files with 286 additions and 100 deletions


@@ -90,6 +90,85 @@ Configure worker nodes
OpenStack-specific host configuration
*************************************
This step is optional for Kubernetes: Do this step if using |SRIOV| network
attachments in hosted application containers.
.. only:: starlingx
.. important::
This step is **required** for OpenStack.
* Configure the data interfaces
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# Configure the datanetworks in sysinv, prior to referencing them
# in the ``system host-if-modify`` command.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
for NODE in worker-0 worker-1; do
echo "Configuring interface for: $NODE"
set -ex
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
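To confirm the interface and data network assignments before moving on, the inventory can be listed for each worker (an optional check; output columns may vary by release):
::
for NODE in worker-0 worker-1; do
# Show data-class interfaces and their assigned data networks
system host-if-list ${NODE}
system interface-datanetwork-list ${NODE}
done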
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure |SRIOV| device plug in:
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
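To verify the resulting huge page configuration, the per-|NUMA|-node memory settings can be listed for each worker (optional check; adjust the page counts above to your application's requirements):
::
for NODE in worker-0 worker-1; do
# Display per-numa-node memory and huge page assignments
system host-memory-list ${NODE}
done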
.. only:: starlingx
*************************************
OpenStack-specific host configuration
*************************************
.. important::
**These steps are required only if the StarlingX OpenStack application
@@ -241,64 +320,80 @@ Optionally Configure PCI-SRIOV Interfaces
* Configure the pci-sriov interfaces for worker nodes.
::
#. **For OpenStack only:** Configure the host settings for the vSwitch.
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
**If using OVS-DPDK vswitch, run the following commands:**
# List the inventoried host's ports and identify the ports to be used as pci-sriov interfaces,
# based on the displayed linux port name, pci address and device type.
system host-port-list ${NODE}
The default recommendation for a worker node is to use a single core on each
|NUMA| node for the |OVS|-|DPDK| vSwitch. This should have been configured
automatically; if not, run the following command.
# List the host's auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in the previous step, and
# take note of their UUIDs
system host-if-list -a ${NODE}
::
# Modify the configuration of these interfaces,
# configuring them as pci-sriov class interfaces with an MTU of 1500 and names of the form sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>
for NODE in worker-0 worker-1; do
# Create Data Networks that the 'pci-sriov' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# assign 1 core on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 1 $NODE
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
# assign 1 core on processor/numa-node 1 on worker-node to vswitch
system host-cpu-modify -f vswitch -p1 1 $NODE
done
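Depending on the NIC and the planned workloads, the number of |SRIOV| virtual functions on each pci-sriov interface may also need to be set; the following is an illustrative example only, using the -N (number of VFs) option of ``system host-if-modify`` with an arbitrary VF count of 8:
::
# Example only: request 8 VFs on the sriov0 interface of the worker
system host-if-modify -m 1500 -n sriov0 -c pci-sriov -N 8 ${NODE} <sriov0-if-uuid>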
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:
* Configure the Kubernetes |SRIOV| device plugin.
::
::
for NODE in worker-0 worker-1; do
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
::
done
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
.. important::
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
done
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
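To confirm that the nova-local volume group and its physical volume were created on each worker, the local volume groups and physical volumes can be listed (optional check):
::
for NODE in worker-0 worker-1; do
system host-lvg-list ${NODE}
system host-pv-list ${NODE}
done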
-------------------


@@ -505,7 +505,7 @@ A persistent storage backend is required if your application requires |PVCs|.
For host-based Ceph:
#. Initialize with add ceph backend:
#. Initialize with add Ceph backend:
::


@@ -225,8 +225,7 @@ Configure controller-0
#. Configure the |OAM| interface of controller-0 and specify the
attached network as "oam".
Use the |OAM| port name that is applicable to your deployment environment,
for example eth0:
Use the |OAM| port name that is applicable to your deployment environment, for example eth0:
::


@@ -273,8 +273,8 @@ Configure worker nodes
.. important::
**These steps are required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
These steps are required only if the |org| OpenStack application
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
@@ -481,6 +481,102 @@ Optionally Configure PCI-SRIOV Interfaces
done
.. only:: starlingx
*************************************
OpenStack-specific host configuration
*************************************
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
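Optionally, confirm that the labels were applied before proceeding (example check only):
::
for NODE in worker-0 worker-1; do
system host-label-list $NODE
done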
#. **For OpenStack only:** Configure the host settings for the vSwitch.
**If using OVS-DPDK vswitch, run the following commands:**
The default recommendation for a worker node is to use a single core on each
|NUMA| node for the |OVS|-|DPDK| vSwitch. This should have been configured
automatically; if not, run the following command.
::
for NODE in worker-0 worker-1; do
# assign 1 core on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 1 $NODE
# assign 1 core on processor/numa-node 1 on worker-node to vswitch
system host-cpu-modify -f vswitch -p1 1 $NODE
done
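The vSwitch core assignment can be verified by listing the per-host CPU functions; one core per |NUMA| node should show the vswitch assigned function (optional check):
::
for NODE in worker-0 worker-1; do
system host-cpu-list $NODE
done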
When using |OVS|-|DPDK|, configure 1x 1G huge page for vSwitch memory on
each |NUMA| node where vswitch is running on this host, with the
following command:
::
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
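To confirm the vSwitch huge page reservation on a given |NUMA| node, the per-processor memory details can be displayed (optional check; worker-0 and processor 0 are used as an example):
::
system host-memory-show worker-0 0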
.. important::
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment for
this host with the command:
::
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
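After stx-openstack is installed, such |VMs| can be given huge pages by creating a flavor with the hw:mem_page_size=large property; the flavor name and sizes below are illustrative only:
::
openstack flavor create --ram 4096 --disk 20 --vcpus 2 hugepage-flavor
openstack flavor set --property hw:mem_page_size=large hugepage-flavor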
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
-------------------
Unlock worker nodes


@@ -30,16 +30,16 @@
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.
To deploy the default containerized |OVS|:
**To deploy the default containerized OVS:**
::
system modify --vswitch_type none
Do not run any vSwitch directly on the host, instead, use the containerized
This does not run any vSwitch directly on the host; instead, it uses the containerized
|OVS| defined in the helm charts of the stx-openstack manifest.
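The currently configured vSwitch type can be checked at any time; it is reported as the vswitch_type attribute in the ``system show`` output (informal check):
::
system show | grep vswitch_type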
To deploy |OVS|-|DPDK|, run the following command:
**To deploy OVS-DPDK, run the following command:**
::
@@ -53,32 +53,23 @@
When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with
the following command:
::
system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>
For example:
::
system host-memory-modify -f vswitch -1G 1 worker-0 0
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
.. important::
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with
the command:
|VMs| created in an |OVS|-|DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
::
Configure the huge pages for |VMs| in an |OVS|-|DPDK| environment with
the command:
system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>
::
For example:
::
system host-memory-modify worker-0 0 -1G 10
system host-memory-modify -f application -1G 10 worker-0 0
system host-memory-modify -f application -1G 10 worker-1 1
.. note::
@@ -86,24 +77,24 @@
locking and unlocking all compute-labeled worker nodes (and/or AIO
controllers) to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
::
export NODE=controller-0
echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
.. incl-config-controller-0-openstack-specific-aio-simplex-end:


@@ -244,7 +244,7 @@ OpenStack-specific host configuration
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.
To deploy the default containerized |OVS|:
To deploy the default containerized OVS:
::
@@ -333,7 +333,8 @@ Unlock controller-0 in order to bring it into service:
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
service. This can take 5-10 minutes, depending on the performance of the host
machine.
-------------------------------------------------
Install software on controller-1 and worker nodes
@@ -377,17 +378,18 @@ Install software on controller-1 and worker nodes
system host-update 3 personality=worker hostname=worker-0
Repeat for worker-1. Power on worker-1 and wait for the new host (hostname=None) to
be discovered by checking 'system host-list':
Repeat for worker-1. Power on worker-1 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 4 personality=worker hostname=worker-1
For rook storage, there is no storage personality. Some hosts with worker personality
providers storage service. Here we still named these worker host storage-x.
Repeat for storage-0 and storage-1. Power on storage-0, storage-1 and wait for the
new host (hostname=None) to be discovered by checking 'system host-list':
For rook storage, there is no storage personality. Some hosts with worker
personality provide storage services. Here we still name these worker hosts
storage-x. Repeat for storage-0 and storage-1. Power on storage-0, storage-1
and wait for the new hosts (hostname=None) to be discovered by checking
'system host-list':
::
@@ -426,8 +428,8 @@ Configure controller-1
.. incl-config-controller-1-start:
Configure the OAM and MGMT interfaces of controller-0 and specify the attached
networks. Use the OAM and MGMT port names, for example eth0, that are applicable
Configure the |OAM| and MGMT interfaces of controller-0 and specify the attached
networks. Use the |OAM| and MGMT port names, for example eth0, that are applicable
to your deployment environment.
(Note that the MGMT interface is partially set up automatically by the network
@@ -529,8 +531,8 @@ Configure worker nodes
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both NUMA nodes:
* If planning on running |DPDK| in containers on this host, configure the number
of 1G Huge pages required on both |NUMA| nodes:
::
@@ -623,8 +625,9 @@ Unlock worker nodes in order to bring them into service:
system host-unlock $NODE
done
The worker nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
-----------------------
Configure storage nodes
@@ -632,7 +635,8 @@ Configure storage nodes
#. Assign the cluster-host network to the MGMT interface for the storage nodes.
Note that the MGMT interfaces are partially set up by the network install procedure.
Note that the MGMT interfaces are partially set up by the network install
procedure.
::
@@ -660,7 +664,8 @@ Unlock storage nodes in order to bring them into service:
done
The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the host machine.
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
-------------------------------------------------
Install Rook application manifest and helm-charts


@@ -66,7 +66,7 @@ Deletes subcloud group details from the database.
[--update_apply_type UPDATE_APPLY_TYPE]
[--max_parallel_subclouds MAX_PARALLEL_SUBCLOUDS]
For example,
For example:
.. code-block:: none