r5 install changes from master

Applying directly to r5 instead of cherry-picking, as the commit
contains r6 changes.
Add missing include file and vendor strings.
Add missing labels.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Icf429001faa34b1414f4b1bb8e68a15090b5921d
Author: Ron Stone <ronald.stone@windriver.com>
Date: 2021-09-01 10:51:22 -04:00
parent 980eb029d0
commit e6b58a2180
11 changed files with 607 additions and 292 deletions


@@ -0,0 +1,5 @@
.. ref1-begin
.. ref1-end
.. ref2-begin
.. ref2-end
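
These begin/end labels are what the install guides' include directives key on; for illustration, the duplex guide later in this commit pulls in only the text between the ref1 markers with a fragment like the following:

.. include:: /_includes/aio_duplex_install_kubernetes.rest
:start-after: ref1-begin
:end-before: ref1-end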


@@ -0,0 +1,2 @@
.. ref1-begin
.. ref1-end


@@ -0,0 +1,31 @@
::
cd ~
cat <<EOF > localhost.yml
system_mode: duplex
dns_servers:
- 8.8.8.8
- 8.8.4.4
external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>
admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
# OPTIONALLY provide a ROOT CA certificate and key for k8s root ca,
# if not specified, one will be auto-generated,
# see Kubernetes Root CA Certificate in Security Guide for details.
k8s_root_ca_cert: < your_root_ca_cert.pem >
k8s_root_ca_key: < your_root_ca_key.pem >
apiserver_cert_sans:
- < your_hostname_for_oam_floating.your_domain >
EOF
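
For context, once ``$HOME/localhost.yml`` is in place it is consumed by the Ansible bootstrap playbook. A minimal sketch, assuming the standard StarlingX playbook location on the controller:

cd ~
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml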


@@ -0,0 +1,29 @@
::
cd ~
cat <<EOF > localhost.yml
system_mode: simplex
dns_servers:
- 8.8.8.8
- 8.8.4.4
external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
# OPTIONALLY provide a ROOT CA certificate and key for k8s root ca,
# if not specified, one will be auto-generated,
# see Kubernetes Root CA Certificate in Security Guide for details.
k8s_root_ca_cert: < your_root_ca_cert.pem >
k8s_root_ca_key: < your_root_ca_key.pem >
apiserver_cert_sans:
- < your_hostname_for_oam_floating.your_domain >
EOF
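
If you choose to supply your own Kubernetes root CA rather than letting one be auto-generated, the following is one illustrative way to create a self-signed CA certificate and key with openssl; the file names and subject are placeholders, and the Security Guide remains the authoritative reference:

openssl req -x509 -sha256 -days 3650 -newkey rsa:4096 -nodes \
-keyout your_root_ca_key.pem -out your_root_ca_cert.pem \
-subj "/CN=kubernetes-root-ca"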


@@ -0,0 +1,2 @@
.. ref1-begin
.. ref1-end


@@ -71,3 +71,8 @@
..
.. |installer-image-name| replace:: bootimage
.. |OVS-DPDK| replace:: |OVS|-|DPDK|
.. |ovs-dpdk| replace:: ovs-dpdk
.. |vswitch-label| replace:: openvswitch=enabled


@@ -39,7 +39,8 @@ Bootstrap system on controller-0
--------------------------------

#. Login using the username / password of "sysadmin" / "sysadmin".

When logging in for the first time, you will be forced to change the
password.

::

@@ -107,31 +108,11 @@ Bootstrap system on controller-0

#. Create a minimal user configuration override file.

To use this method, create your override file at ``$HOME/localhost.yml``
and provide the minimum required parameters for the deployment
configuration as shown in the example below. Use the OAM IP SUBNET and IP
ADDRESSing applicable to your deployment environment.

.. include:: /_includes/min-bootstrap-overrides-non-simplex.rest

.. only:: starlingx
@@ -148,7 +129,7 @@ Bootstrap system on controller-0
:start-after: docker-reg-begin
:end-before: docker-reg-end

.. code-block:: yaml

docker_registries:
quay.io:
@@ -187,7 +168,7 @@ Bootstrap system on controller-0
:start-after: firewall-begin
:end-before: firewall-end

.. code-block:: bash

# Add these lines to configure Docker to use a proxy server
docker_http_proxy: http://my.proxy.com:1080
@@ -222,44 +203,49 @@ Configure controller-0
#. Configure the |OAM| interface of controller-0 and specify the
attached network as "oam".

The following example configures the |OAM| interface on a physical untagged
ethernet port. Use the |OAM| port name that is applicable to your
deployment environment, for example eth0:

.. code-block:: bash

OAM_IF=<OAM-PORT>
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam

To configure a vlan or aggregated ethernet interface, see :ref:`Node Interfaces <node-interfaces-index>`.

#. Configure the MGMT interface of controller-0 and specify the attached
networks of both "mgmt" and "cluster-host".

The following example configures the MGMT interface on a physical untagged
ethernet port. Use the MGMT port name that is applicable to your deployment
environment, for example eth1:

.. code-block:: bash

MGMT_IF=<MGMT-PORT>

# De-provision loopback interface and
# remove mgmt and cluster-host networks from loopback interface
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
system interface-network-remove ${UUID}
done

# Configure management interface and assign mgmt and cluster-host networks to it
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure |NTP| servers for network time synchronization:

::

system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
<ptp-server-config-index>`.

.. only:: openstack

*************************************
@@ -269,84 +255,115 @@ Configure controller-0
.. important::

**These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prefix|-openstack manifest and helm-charts later.

.. only:: starlingx

.. parsed-literal::

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 |vswitch-label|
system host-label-assign controller-0 sriov=enabled

.. only:: partner

.. include:: /_includes/aio_duplex_install_kubernetes.rest
:start-after: ref1-begin
:end-before: ref1-end

#. **For OpenStack only:** Due to the additional openstack services running
on the |AIO| controller platform cores, a minimum of 4 platform cores are
required, 6 platform cores are recommended.

Increase the number of platform cores with the following commands:

.. code-block::

# assign 6 cores on processor/numa-node 0 on controller-0 to platform
system host-cpu-modify -f platform -p0 6 controller-0
#. **For OpenStack only:** Configure the system setting for the vSwitch.

.. only:: starlingx

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

If you require better performance, |OVS-DPDK| (|OVS| with the Data
Plane Development Kit, which is supported only on bare metal hardware)
should be used:

* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch
function.

To deploy the default containerized |OVS|:

::

system modify --vswitch_type none

This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of |prefix|-openstack
manifest.

To deploy |OVS-DPDK|, run the following command:

.. parsed-literal::

system modify --vswitch_type |ovs-dpdk|

Default recommendation for an |AIO|-controller is to use a single
core for |OVS-DPDK| vswitch.

.. code-block:: bash

# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-0

Once vswitch_type is set to |OVS-DPDK|, any subsequent nodes created will
default to automatically assigning 1 vSwitch core for |AIO| controllers
and 2 vSwitch cores (both on numa-node 0; physical NICs are typically on
first numa-node) for compute-labeled worker nodes.

When using |OVS-DPDK|, configure 1G huge page for vSwitch memory on each
|NUMA| node on the host. It is recommended to configure 1x 1G huge page
(-1G 1) for vSwitch memory on each |NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block::

# assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 0

# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 1
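
As noted above, if your application |VMs| require 2M huge pages instead of 1G, the vSwitch memory on each |NUMA| node would be configured with the -2M option; an equivalent sketch:

# assign 500x 2M huge pages on processor/numa-node 0 on controller-0 to vswitch
system host-memory-modify -f vswitch -2M 500 controller-0 0

# assign 500x 2M huge pages on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -2M 500 controller-0 1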
.. important::

|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
``hw:mem_page_size=large``

Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
this host, assuming 1G huge page size is being used on this host, with
the following commands:

.. code-block:: bash

# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 0

@@ -354,43 +371,58 @@ Configure controller-0

# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 1
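
For reference, the flavor property called out in the note above is set with the standard OpenStack client; a minimal sketch, assuming a flavor named m1.small already exists:

openstack flavor set m1.small --property hw:mem_page_size=large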
.. note::

After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

export NODE=controller-0

# Create nova-local local volume group
system host-lvg-add ${NODE} nova-local

# Get UUID of DISK to create PARTITION to be added to nova-local local volume group
# CEPH OSD Disks can NOT be used
# For best performance, do NOT use system/root disk, use a separate physical disk.

# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# system host-show ${NODE} | grep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
# but is limited by the size and space available on the physical disk you chose above.
# The following example uses a small PARTITION size such that you can fit it on the
# root disk, if that is what you chose above.
# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-0.

Data class interfaces are vswitch interfaces used by vswitch to provide
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.

.. important::

A compute-labeled All-in-one controller host **MUST** have at least
one Data class interface.

* Configure the data interfaces for controller-0.

.. code-block:: bash

export NODE=controller-0
@@ -411,8 +443,6 @@ Configure controller-0
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'

# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
@@ -436,7 +466,7 @@ Optionally Configure PCI-SRIOV Interfaces
* Configure the pci-sriov interfaces for controller-0.

.. code-block:: bash

export NODE=controller-0
@@ -454,7 +484,8 @@ Optionally Configure PCI-SRIOV Interfaces
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

# If not already created, create Data Networks that the 'pci-sriov'
# interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
@@ -465,8 +496,8 @@ Optionally Configure PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes Only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:

* Configure the Kubernetes |SRIOV| device plugin.

@@ -478,7 +509,7 @@ Optionally Configure PCI-SRIOV Interfaces

containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.

.. code-block:: bash

# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
@@ -505,7 +536,7 @@ A persistent storage backend is required if your application requires |PVCs|.
For host-based Ceph:

#. Initialize with add ceph backend:

::

@@ -525,27 +556,23 @@ For host-based Ceph:

# List OSD storage devices
system host-stor-list controller-0

# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>

.. only:: starlingx

For Rook container-based Ceph:

#. Initialize with add ceph-rook backend:

::

system storage-backend-add ceph-rook --confirmed

#. Assign Rook host labels to controller-0 in support of installing the
rook-ceph-apps manifest/helm-charts later:

::

system host-label-assign controller-0 ceph-mon-placement=enabled
system host-label-assign controller-0 ceph-mgr-placement=enabled
-------------------

@@ -556,6 +583,44 @@ Unlock controller-0

:start-after: incl-unlock-controller-0-aio-simplex-start:
:end-before: incl-unlock-controller-0-aio-simplex-end:
.. only:: openstack
* **For OpenStack only:** Due to the additional openstack services
containers running on the controller host, the size of the docker
filesystem needs to be increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
system host-fs-list controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
# 60G, you will need to add a new disk partition to cgts-vg.
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk )
# Get device path of ROOT DISK
system host-show controller-0 | grep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partitions uuid in response
# Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
# Increase docker filesystem to 60G
system host-fs-modify controller-0 docker=60
-------------------------------------
Install software on controller-1 node
-------------------------------------

@@ -608,8 +673,9 @@ Configure controller-1

#. Configure the |OAM| interface of controller-1 and specify the
attached network of "oam".

The following example configures the |OAM| interface on a physical untagged
ethernet port. Use the |OAM| port name that is applicable to your
deployment environment, for example eth0:

::
@@ -617,6 +683,9 @@ Configure controller-1
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam

To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.

#. The MGMT interface is partially set up by the network install procedure;
configuring the port used for network install as the MGMT port and
specifying the attached network of "mgmt".
@@ -636,53 +705,84 @@ Configure controller-1
.. important::

These steps are required only if the |prod-os| application
(|prefix|-openstack) will be installed.

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the |prefix|-openstack manifest and helm-charts later.

.. only:: starlingx

::

system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 |vswitch-label|
system host-label-assign controller-1 sriov=enabled

.. only:: partner

.. include:: /_includes/aio_duplex_install_kubernetes.rest
:start-after: ref2-begin
:end-before: ref2-end

#. **For OpenStack only:** Due to the additional openstack services running
on the |AIO| controller platform cores, a minimum of 4 platform cores are
required, 6 platform cores are recommended.

Increase the number of platform cores with the following commands:

.. code-block::

# assign 6 cores on processor/numa-node 0 on controller-1 to platform
system host-cpu-modify -f platform -p0 6 controller-1
#. **For OpenStack only:** Configure the host settings for the vSwitch.

If using |OVS-DPDK| vswitch, run the following commands:

Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch. This should have been automatically configured;
if not, run the following command.

.. code-block:: bash

# assign 1 core on processor/numa-node 0 on controller-1 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-1

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block:: bash

# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 0

# Assign 1x 1G huge page on processor/numa-node 1 on controller-1 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-1 1
.. important::

|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large

Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
this host, assuming 1G huge page size is being used on this host, with
the following commands:

.. code-block:: bash

# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
system host-memory-modify -f application -1G 10 controller-1 0
@@ -692,23 +792,37 @@ Configure controller-1
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

export NODE=controller-1

# Create nova-local local volume group
system host-lvg-add ${NODE} nova-local
# Get UUID of DISK to create PARTITION to be added to nova-local local volume group
# CEPH OSD Disks can NOT be used
# For best performance, do NOT use system/root disk, use a separate physical disk.
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# system host-show ${NODE} | grep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
# but is limited by the size and space available on the physical disk you chose above.
# The following example uses a small PARTITION size such that you can fit it on the
# root disk, if that is what you chose above.
# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-1.

@@ -722,7 +836,7 @@ Configure controller-1

* Configure the data interfaces for controller-1.

.. code-block:: bash

export NODE=controller-1
@@ -743,8 +857,6 @@ Configure controller-1
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'

# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
@@ -768,7 +880,7 @@ Optionally Configure PCI-SRIOV Interfaces
* Configure the pci-sriov interfaces for controller-1.

.. code-block:: bash

export NODE=controller-1
@@ -786,23 +898,22 @@ Optionally Configure PCI-SRIOV Interfaces
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

# If not already created, create Data Networks that the 'pci-sriov' interfaces
# will be connected to
DATANET0='datanet0'
DATANET1='datanet1'

# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* **For Kubernetes only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:

* Configure the Kubernetes |SRIOV| device plugin.

.. code-block:: bash

system host-label-assign controller-1 sriovdp=enabled

@@ -810,7 +921,7 @@ Optionally Configure PCI-SRIOV Interfaces

containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.

.. code-block:: bash

# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
system host-memory-modify -f application controller-1 0 -1G 10
@@ -827,7 +938,7 @@ For host-based Ceph:
#. Add an |OSD| on controller-1 for host-based Ceph:

.. code-block:: bash

# List hosts disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.

@@ -839,10 +950,6 @@ For host-based Ceph:

# List OSD storage devices
system host-stor-list controller-1

# Add disk as an OSD storage
system host-stor-add controller-1 osd <disk-uuid>

.. only:: starlingx

For Rook container-based Ceph:

@@ -850,7 +957,7 @@ For host-based Ceph:

#. Assign Rook host labels to controller-1 in support of installing the
rook-ceph-apps manifest/helm-charts later:

.. code-block:: bash

system host-label-assign controller-1 ceph-mon-placement=enabled
system host-label-assign controller-1 ceph-mgr-placement=enabled
@@ -862,7 +969,7 @@ Unlock controller-1
Unlock controller-1 in order to bring it into service:

.. code-block:: bash

system host-unlock controller-1
@@ -870,6 +977,44 @@ Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
.. only:: openstack
* **For OpenStack only:** Due to the additional openstack services containers
running on the controller host, the size of the docker filesystem needs to be
increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
system host-fs-list controller-1
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-1
# if existing docker fs size + cgts-vg available space is less than
# 60G, you will need to add a new disk partition to cgts-vg.
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk )
# Get device path of ROOT DISK
system host-show controller-1 | grep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-1
# Create new PARTITION on ROOT DISK, and take note of new partitions uuid in response
# Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol controller-1 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group
system host-pv-add controller-1 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
# Increase docker filesystem to 60G
system host-fs-modify controller-1 docker=60
.. only:: starlingx

-----------------------------------------------------------------------------------------------

@@ -897,13 +1042,14 @@ machine.

#. Configure Rook to use /dev/sdb on controller-0 and controller-1 as a ceph
|OSD|.

.. code-block:: bash

$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
$ system host-disk-wipe -s --confirm controller-1 /dev/sdb

values.yaml for rook-ceph-apps.

.. code-block:: yaml

cluster:
storage:
@@ -957,6 +1103,7 @@ machine.
.. include:: ../kubernetes_install_next.txt

.. only:: partner

.. include:: /_includes/72hr-to-license.rest


@@ -109,33 +109,16 @@ Bootstrap system on controller-0
To use this method, create your override file at ``$HOME/localhost.yml``
and provide the minimum required parameters for the deployment
configuration as shown in the example below. Use the |OAM| IP SUBNET and
IP ADDRESSing applicable to your deployment environment.

.. include:: /_includes/min-bootstrap-overrides-simplex.rest

.. only:: starlingx

In either of the above options, the bootstrap playbook's default
values will pull all container images required for the |prod-p| from
Docker hub.

If you have set up a private Docker registry to use for bootstrapping
then you will need to add the following lines in $HOME/localhost.yml:
@@ -220,9 +203,10 @@ The newly installed controller needs to be configured.
source /etc/platform/openrc

#. Configure the |OAM| interface of controller-0 and specify the attached
network as "oam". The following example configures the |OAM| interface on
a physical untagged ethernet port. Use the |OAM| port name that is
applicable to your deployment environment, for example eth0:

::

@@ -230,12 +214,17 @@ The newly installed controller needs to be configured.

system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam

To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.

#. Configure |NTP| servers for network time synchronization:

::

system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
<ptp-server-config-index>`.
.. only:: openstack

*************************************
@@ -247,75 +236,113 @@ The newly installed controller needs to be configured.
.. important::

**These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prefix|-openstack manifest and helm-charts later.

.. only:: starlingx

.. parsed-literal::

system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 |vswitch-label|
system host-label-assign controller-0 sriov=enabled

.. only:: partner

.. include:: /_includes/aio_simplex_install_kubernetes.rest
:start-after: ref1-begin
:end-before: ref1-end

#. **For OpenStack only:** Due to the additional openstack services running
on the |AIO| controller platform cores, a minimum of 4 platform cores are
required, 6 platform cores are recommended.

Increase the number of platform cores with the following commands:

.. code-block::

# Assign 6 cores on processor/numa-node 0 on controller-0 to platform
system host-cpu-modify -f platform -p0 6 controller-0
#. **For OpenStack only:** Configure the system setting for the vSwitch.

.. only:: starlingx

StarlingX has |OVS| (kernel-based) vSwitch configured as default:

* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.

If you require better performance, |OVS-DPDK| (|OVS| with the Data
Plane Development Kit, which is supported only on bare metal hardware)
should be used:

* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch
function.

To deploy the default containerized |OVS|:

::

system modify --vswitch_type none

This does not run any vSwitch directly on the host, instead, it uses
the containerized |OVS| defined in the helm charts of
|prefix|-openstack manifest.

To deploy |OVS-DPDK|, run the following command:

.. parsed-literal::

system modify --vswitch_type |ovs-dpdk|

Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch.

.. code-block:: bash

# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-0

When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.

However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.

.. code-block::

# Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 0

# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 1
.. important::

|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large

Configure the huge pages for VMs in an |OVS-DPDK| environment on this
host, assuming 1G huge page size is being used on this host, with the
following commands:

.. code-block:: bash

# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 0
@@ -329,25 +356,38 @@ The newly installed controller needs to be configured.
locking and unlocking controller-0 to apply the change.

#. **For OpenStack only:** Set up disk partition for nova-local volume
group, which is needed for |prefix|-openstack nova ephemeral disks.

.. code-block:: bash

export NODE=controller-0

# Create nova-local local volume group
system host-lvg-add ${NODE} nova-local
# Get UUID of DISK to create PARTITION to be added to nova-local local volume group
# CEPH OSD Disks can NOT be used
# For best performance, do NOT use system/root disk, use a separate physical disk.
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# system host-show ${NODE} | fgrep rootfs )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of
# all nova ephemeral disks of all VMs that you want to be able to host on this host,
# but is limited by the size and space available on the physical disk you chose above.
# The following example uses a small PARTITION size such that you can fit it on the
# root disk, if that is what you chose above.
# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
#. **For OpenStack only:** Configure data interfaces for controller-0.

Data class interfaces are vswitch interfaces used by vswitch to provide
VM virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the

@@ -355,11 +395,12 @@ The newly installed controller needs to be configured.

.. important::

A compute-labeled |AIO|-controller host **MUST** have at least one
Data class interface.

* Configure the data interfaces for controller-0.

.. code-block:: bash

export NODE=controller-0
@@ -406,7 +447,7 @@ Optionally Configure PCI-SRIOV Interfaces
* Configure the pci-sriov interfaces for controller-0.

.. code-block:: bash

export NODE=controller-0
@@ -424,7 +465,8 @@ Optionally Configure PCI-SRIOV Interfaces
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid>

# If not already created, create Data Networks that the 'pci-sriov' interfaces will
# be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
@@ -435,8 +477,8 @@ Optionally Configure PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}

* **For Kubernetes Only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:

* Configure the Kubernetes |SRIOV| device plugin.

@@ -448,7 +490,7 @@ Optionally Configure PCI-SRIOV Interfaces

containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.

.. code-block:: bash

# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10
@@ -538,6 +580,44 @@ machine.
.. incl-unlock-controller-0-aio-simplex-end:
.. only:: openstack
* **For OpenStack only:** Due to the additional openstack services
containers running on the controller host, the size of the docker filesystem
needs to be increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
system host-fs-list controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
# 60G, you will need to add a new disk partition to cgts-vg.
# Assuming you have unused space on ROOT DISK, add partition to ROOT DISK.
# ( if not use another unused disk )
# Get device path of ROOT DISK
system host-show controller-0 --nowrap | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Create new PARTITION on ROOT DISK, and take note of new partitions uuid in response
# Use a partition size such that you'll be able to increase docker fs size from 30G to 60G
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol controller-0 <root-disk-uuid> ${PARTITION_SIZE}
# Add new partition to cgts-vg local volume group
system host-pv-add controller-0 cgts-vg <NEW_PARTITION_UUID>
sleep 2 # wait for partition to be added
# Increase docker filesystem to 60G
system host-fs-modify controller-0 docker=60
.. only:: starlingx

-----------------------------------------------------------------------------------------------

@@ -567,7 +647,8 @@ machine.

system host-disk-wipe -s --confirm controller-0 /dev/sdb

values.yaml for rook-ceph-apps.

.. code-block:: yaml

cluster:
storage:


@@ -1,3 +1,5 @@
.. _rook_storage_install_kubernetes:
=====================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Rook Storage
=====================================================================
@@ -244,7 +246,7 @@ OpenStack-specific host configuration
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch function.

To deploy the default containerized |OVS|:

::
@@ -261,12 +263,11 @@ OpenStack-specific host configuration
system host-cpu-modify -f vswitch -p0 1 controller-0
Once vswitch_type is set to |OVS|-|DPDK|, any subsequent nodes created will
default to automatically assigning 1 vSwitch core for |AIO| controllers and
2 vSwitch cores for compute-labeled worker nodes.
When using |OVS|-|DPDK|, configure vSwitch memory per |NUMA| node with the
following command:
:: ::
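A sketch of the allocation, assuming 1x 1G huge page for vSwitch memory on processor 0 of controller-0 (adjust the host, node, and count to your environment):
.. code-block:: bash
# Allocate 1x 1G huge page for vSwitch memory on processor/numa-node 0
system host-memory-modify -f vswitch controller-0 0 -1G 1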
@@ -403,9 +404,9 @@ Install software on controller-1 and worker nodes
A node with Edgeworker personality is also available. See
:ref:`deploy-edgeworker-nodes` for details.
#. Wait for the software installation on controller-1, worker-0, and worker-1
to complete, for all servers to reboot, and for all to show as
locked/disabled/online in 'system host-list'.
:: ::
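One simple way to monitor progress, assuming ``watch`` is available on the active controller; a sketch:
.. code-block:: bash
# Re-check every 30 seconds until all hosts report locked/disabled/online
watch -n 30 "system host-list"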
@@ -428,9 +429,9 @@ Configure controller-1
.. incl-config-controller-1-start:
Configure the |OAM| and MGMT interfaces of controller-1 and specify the
attached networks. Use the |OAM| and MGMT port names, for example eth0, that
are applicable to your deployment environment.
(Note that the MGMT interface is partially set up automatically by the network
install procedure.)
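A hedged sketch of these assignments, assuming ``<OAM-PORT>`` is the applicable |OAM| port name on controller-1 and that ``mgmt0`` was created by the network install:
.. code-block:: bash
OAM_IF=<OAM-PORT>
# Configure the OAM interface as a platform interface and attach it to the oam network
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
# Attach the partially configured MGMT interface to the cluster-host network
system interface-network-assign controller-1 mgmt0 cluster-host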
@@ -518,12 +519,12 @@ Configure worker nodes
This step is **required** for OpenStack.
This step is optional for Kubernetes: Do this step if using |SRIOV|
network attachments in hosted application containers.
For Kubernetes |SRIOV| network attachments:
* Configure the |SRIOV| device plugin:
:: ::
@@ -531,10 +532,10 @@ Configure worker nodes
system host-label-assign ${NODE} sriovdp=enabled
done
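To double-check the assignment, the labels can be listed per node; a minimal sketch:
.. code-block:: bash
# Confirm the sriovdp=enabled label is present on each worker
for NODE in worker-0 worker-1; do
system host-label-list $NODE
done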
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100
@@ -543,7 +544,7 @@ Configure worker nodes
For both Kubernetes and OpenStack:
.. code-block:: bash
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
@@ -589,18 +590,27 @@ OpenStack-specific host configuration
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the stx-openstack manifest and helm-charts later.
.. only:: starlingx
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
.. only:: partner
.. include:: /_includes/rook_storage_install_kubernetes.rest
:start-after: ref1-begin
:end-before: ref1-end
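After the workers are unlocked and join Kubernetes, these host labels surface as node labels; a hedged spot-check:
.. code-block:: bash
# After unlock, verify the OpenStack labels appear on the Kubernetes nodes
kubectl get nodes --show-labels | grep openstack-compute-node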
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
.. code-block:: bash
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
@@ -619,7 +629,7 @@ Unlock worker nodes
Unlock worker nodes in order to bring them into service:
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-unlock $NODE
@@ -638,7 +648,7 @@ Configure storage nodes
Note that the MGMT interfaces are partially set up by the network install
procedure.
.. code-block:: bash
for NODE in storage-0 storage-1; do
system interface-network-assign $NODE mgmt0 cluster-host
@@ -657,15 +667,14 @@ Unlock storage nodes
Unlock storage nodes in order to bring them into service:
.. code-block:: bash
for STORAGE in storage-0 storage-1; do
system host-unlock $STORAGE
done
The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the host machine.
-------------------------------------------------
Install Rook application manifest and helm-charts
@@ -720,7 +729,7 @@ On host storage-0 and storage-1:
system application-apply rook-ceph-apps
#. Wait for the |OSD| pods to be ready.
:: ::
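A sketch for watching the |OSD| pods come up, assuming the Rook pods run in the ``kube-system`` namespace (the namespace may differ in your deployment):
.. code-block:: bash
# Check the rook-ceph OSD pods until they report Running
kubectl get pods -n kube-system | grep rook-ceph-osd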

View File

@@ -103,6 +103,8 @@ Host memory provisioning
host_memory_provisioning/allocating-host-memory-using-horizon
host_memory_provisioning/allocating-host-memory-using-the-cli
.. _node-interfaces-index:
---------------
Node interfaces
---------------

View File

@@ -35,6 +35,8 @@ NTP Server Configuration
configuring-ntp-servers-and-services-using-the-cli
resynchronizing-a-host-to-the-ntp-server
.. _ptp-server-config-index:
------------------------
PTP Server Configuration
------------------------