diff --git a/doc/source/deploy_install_guides/current/bare_metal/aio_duplex.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex.rst
new file mode 100644
index 000000000..d4dbd4e81
--- /dev/null
+++ b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex.rst
@@ -0,0 +1,26 @@
+==============================================
+Bare metal All-in-one Duplex Installation R2.0
+==============================================
+
+--------
+Overview
+--------
+
+.. include:: ../desc_aio_duplex.txt
+
+The bare metal AIO-DX deployment configuration may be extended with up to four
+worker/compute nodes (not shown in the diagram). Installation instructions for
+these additional nodes are described in :doc:`aio_duplex_extend`.
+
+.. include:: ../ipv6_note.txt
+
+------------
+Installation
+------------
+
+.. toctree::
+   :maxdepth: 2
+
+   aio_duplex_hardware
+   aio_duplex_install_kubernetes
+   aio_duplex_extend
\ No newline at end of file
diff --git a/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_extend.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_extend.rst
new file mode 100644
index 000000000..456c2d09e
--- /dev/null
+++ b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_extend.rst
@@ -0,0 +1,196 @@
+================================================
+Extend Capacity with Worker and/or Compute Nodes
+================================================
+
+This section describes the steps to extend capacity with worker and/or compute
+nodes on a **StarlingX R2.0 bare metal All-in-one Duplex** deployment
+configuration.
+
+.. contents::
+   :local:
+   :depth: 1
+
+---------------------------------
+Install software on compute nodes
+---------------------------------
+
+#. Power on the compute servers and force them to network boot with the
+   appropriate BIOS boot options for your particular server.
+
+#. As the compute servers boot, a message appears on their console instructing
+   you to configure the personality of the node.
+
+#. On the console of controller-0, list hosts to see the newly discovered compute
+   hosts (hostname=None):
+
+   ::
+
+      system host-list
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | id | hostname     | personality | administrative | operational | availability |
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
+      | 3  | None         | None        | locked         | disabled    | offline      |
+      | 4  | None         | None        | locked         | disabled    | offline      |
+      +----+--------------+-------------+----------------+-------------+--------------+
+
+#. Using the host id, set the personality of these hosts to 'worker' and assign
+   their hostnames:
+
+   ::
+
+      system host-update 3 personality=worker hostname=compute-0
+      system host-update 4 personality=worker hostname=compute-1
+
+   This initiates the software installation on the compute nodes.
+   It can take 5-10 minutes, depending on the performance of the host machine.
+
+#. Wait for the software installation on the compute nodes to complete, for the
+   nodes to reboot, and for both to show as locked/disabled/online in
+   'system host-list', as shown in the example output below.
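+
+   If you prefer not to re-run the command by hand, this wait can be scripted.
+   The following is a minimal sketch only, assuming the platform ``system`` CLI
+   is available and admin credentials have been sourced (for example via
+   ``source /etc/platform/openrc``); it simply polls until both compute nodes
+   report online:
+
+   ::
+
+      for COMPUTE in compute-0 compute-1; do
+         echo "Waiting for $COMPUTE to come online ..."
+         # Re-check 'system host-list' every 30 seconds until this node
+         # reports an availability of 'online'
+         until system host-list | grep -w "$COMPUTE" | grep -qw online; do
+            sleep 30
+         done
+      done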
+
+   ::
+
+      system host-list
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | id | hostname     | personality | administrative | operational | availability |
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
+      | 3  | compute-0    | compute     | locked         | disabled    | online       |
+      | 4  | compute-1    | compute     | locked         | disabled    | online       |
+      +----+--------------+-------------+----------------+-------------+--------------+
+
+-----------------------
+Configure compute nodes
+-----------------------
+
+#. Assign the cluster-host network to the MGMT interface for the compute nodes:
+
+   (Note that the MGMT interfaces are partially set up automatically by the
+   network install procedure.)
+
+   ::
+
+      for COMPUTE in compute-0 compute-1; do
+         system interface-network-assign $COMPUTE mgmt0 cluster-host
+      done
+
+#. Configure data interfaces for the compute nodes. Use the DATA port names, for
+   example eth0, that are applicable to your deployment environment.
+
+   .. important::
+
+      This step is **required** for OpenStack.
+
+      This step is optional for Kubernetes: Do this step if using SRIOV network
+      attachments in hosted application containers.
+
+   For Kubernetes SRIOV network attachments:
+
+   * Configure the SRIOV device plugin:
+
+     ::
+
+        for COMPUTE in compute-0 compute-1; do
+           system host-label-assign ${COMPUTE} sriovdp=enabled
+        done
+
+   * If planning on running DPDK in containers on these hosts, configure the number
+     of 1G Huge pages required on both NUMA nodes:
+
+     ::
+
+        for COMPUTE in compute-0 compute-1; do
+           system host-memory-modify ${COMPUTE} 0 -1G 100
+           system host-memory-modify ${COMPUTE} 1 -1G 100
+        done
+
+   For both Kubernetes and OpenStack:
+
+   ::
+
+      DATA0IF=
+      DATA1IF=
+      PHYSNET0='physnet0'
+      PHYSNET1='physnet1'
+      SPL=/tmp/tmp-system-port-list
+      SPIL=/tmp/tmp-system-host-if-list
+
+      # Configure the datanetworks in sysinv, prior to referencing them
+      # in the 'system interface-datanetwork-assign' commands below.
+      system datanetwork-add ${PHYSNET0} vlan
+      system datanetwork-add ${PHYSNET1} vlan
+
+      for COMPUTE in compute-0 compute-1; do
+         echo "Configuring interface for: $COMPUTE"
+         set -ex
+         system host-port-list ${COMPUTE} --nowrap > ${SPL}
+         system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
+         DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
+         DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
+         DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
+         DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
+         DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
+         DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
+         DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
+         DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
+         system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
+         system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
+         system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
+         system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
+         set +ex
+      done
+
+*************************************
+OpenStack-specific host configuration
+*************************************
+
+.. important::
+
+   **This step is required only if the StarlingX OpenStack application
+   (stx-openstack) will be installed.**
+
+#. 
**For OpenStack only:** Assign OpenStack host labels to the compute nodes in + support of installing the stx-openstack manifest and helm-charts later. + + :: + + for NODE in compute-0 compute-1; do + system host-label-assign $NODE openstack-compute-node=enabled + system host-label-assign $NODE openvswitch=enabled + system host-label-assign $NODE sriov=enabled + done + +#. **For OpenStack only:** Setup disk partition for nova-local volume group, + needed for stx-openstack nova ephemeral disks. + + :: + + for COMPUTE in compute-0 compute-1; do + echo "Configuring Nova local for: $COMPUTE" + ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') + ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') + PARTITION_SIZE=10 + NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) + NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') + system host-lvg-add ${COMPUTE} nova-local + system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} + done + + for COMPUTE in compute-0 compute-1; do + echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." + while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done + done + +-------------------- +Unlock compute nodes +-------------------- + +Unlock compute nodes in order to bring them into service: + +:: + + for COMPUTE in compute-0 compute-1; do + system host-unlock $COMPUTE + done + +The compute nodes will reboot to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host +machine. + diff --git a/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_hardware.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_hardware.rst new file mode 100644 index 000000000..1fc6ab451 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_hardware.rst @@ -0,0 +1,58 @@ +===================== +Hardware Requirements +===================== + +This section describes the hardware requirements and server preparation for a +**StarlingX R2.0 bare metal All-in-one Duplex** deployment configuration. + +.. contents:: + :local: + :depth: 1 + +----------------------------- +Minimum hardware requirements +----------------------------- + +The recommended minimum hardware requirements for bare metal servers for various +host types are: + ++-------------------------+-----------------------------------------------------------+ +| Minimum Requirement | All-in-one Controller Node | ++=========================+===========================================================+ +| Number of servers | 2 | ++-------------------------+-----------------------------------------------------------+ +| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | +| | 8 cores/socket | +| | | +| | or | +| | | +| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | +| | (low-power/low-cost option) | ++-------------------------+-----------------------------------------------------------+ +| Minimum memory | 64 GB | ++-------------------------+-----------------------------------------------------------+ +| Primary disk | 500 GB SDD or NVMe | ++-------------------------+-----------------------------------------------------------+ +| Additional disks | - 1 or more 500 GB (min. 
10K RPM) for Ceph OSD | +| | - Recommended, but not required: 1 or more SSDs or NVMe | +| | drives for Ceph journals (min. 1024 MiB per OSD journal)| +| | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)| +| | for VM local ephemeral storage | ++-------------------------+-----------------------------------------------------------+ +| Minimum network ports | - Mgmt/Cluster: 1x10GE | +| | - OAM: 1x1GE | +| | - Data: 1 or more x 10GE | ++-------------------------+-----------------------------------------------------------+ +| BIOS settings | - Hyper-Threading technology enabled | +| | - Virtualization technology enabled | +| | - VT for directed I/O enabled | +| | - CPU power and performance policy set to performance | +| | - CPU C state control disabled | +| | - Plug & play BMC detection disabled | ++-------------------------+-----------------------------------------------------------+ + +-------------------------- +Prepare bare metal servers +-------------------------- + +.. include:: prep_servers.txt \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_install_kubernetes.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_install_kubernetes.rst new file mode 100644 index 000000000..cb2d4509e --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/aio_duplex_install_kubernetes.rst @@ -0,0 +1,328 @@ +================================================= +Install StarlingX Kubernetes on Bare Metal AIO-DX +================================================= + +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 bare metal All-in-one Duplex** deployment configuration. + +.. contents:: + :local: + :depth: 1 + +--------------------- +Create a bootable USB +--------------------- + +Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to +create a bootable USB with the StarlingX ISO on your system. + +-------------------------------- +Install software on controller-0 +-------------------------------- + +.. include:: aio_simplex_install_kubernetes.rst + :start-after: incl-install-software-controller-0-aio-simplex-start: + :end-before: incl-install-software-controller-0-aio-simplex-end: + +-------------------------------- +Bootstrap system on controller-0 +-------------------------------- + +#. Login using the username / password of "sysadmin" / "sysadmin". + When logging in for the first time, you will be forced to change the password. + + :: + + Login: sysadmin + Password: + Changing password for sysadmin. + (current) UNIX Password: sysadmin + New Password: + (repeat) New Password: + +#. External connectivity is required to run the Ansible bootstrap playbook. The + StarlingX boot image will DHCP out all interfaces so the server may have + obtained an IP address and have external IP connectivity if a DHCP server is + present in your environment. Verify this using the :command:`ip addr` and + :command:`ping 8.8.8.8` commands. + + Otherwise, manually configure an IP address and default IP route. Use the + PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your + deployment environment. + + :: + + sudo ip address add / dev + sudo ip link set up dev + sudo ip route add default via dev + ping 8.8.8.8 + +#. Specify user configuration overrides for the Ansible bootstrap playbook. + + Ansible is used to bootstrap StarlingX on controller-0. 
Key files for Ansible + configuration are: + + ``/etc/ansible/hosts`` + The default Ansible inventory file. Contains a single host: localhost. + + ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml`` + The Ansible bootstrap playbook. + + ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml`` + The default configuration values for the bootstrap playbook. + + sysadmin home directory ($HOME) + The default location where Ansible looks for and imports user + configuration override files for hosts. For example: ``$HOME/.yml``. + + Specify the user configuration override file for the Ansible bootstrap + playbook using one of the following methods: + + * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit + the configurable values as desired (use the commented instructions in + the file). + + or + + * Create the minimal user configuration override file as shown in the + example below, using the OAM IP SUBNET and IP ADDRESSing applicable to your + deployment environment: + + :: + + cd ~ + cat < localhost.yml + system_mode: duplex + + dns_servers: + - 8.8.8.8 + - 8.8.4.4 + + external_oam_subnet: / + external_oam_gateway_address: + external_oam_floating_address: + external_oam_node_0_address: + external_oam_node_1_address: + + admin_username: admin + admin_password: + ansible_become_pass: + EOF + + Additional :doc:`ansible_bootstrap_configs` are available for advanced use cases. + +#. Run the Ansible bootstrap playbook: + + :: + + ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml + + Wait for Ansible bootstrap playbook to complete. + This can take 5-10 minutes, depending on the performance of the host machine. + +---------------------- +Configure controller-0 +---------------------- + +.. include:: aio_simplex_install_kubernetes.rst + :start-after: incl-config-controller-0-aio-simplex-start: + :end-before: incl-config-controller-0-aio-simplex-end: + +------------------- +Unlock controller-0 +------------------- + +.. include:: aio_simplex_install_kubernetes.rst + :start-after: incl-unlock-controller-0-aio-simplex-start: + :end-before: incl-unlock-controller-0-aio-simplex-end: + +------------------------------------- +Install software on controller-1 node +------------------------------------- + +#. Power on the controller-1 server and force it to network boot with the + appropriate BIOS boot options for your particular server. + +#. As controller-1 boots, a message appears on its console instructing you to + configure the personality of the node. + +#. On the console of controller-0, list hosts to see newly discovered controller-1 + host (hostname=None): + + :: + + system host-list + +----+--------------+-------------+----------------+-------------+--------------+ + | id | hostname | personality | administrative | operational | availability | + +----+--------------+-------------+----------------+-------------+--------------+ + | 1 | controller-0 | controller | unlocked | enabled | available | + | 2 | None | None | locked | disabled | offline | + +----+--------------+-------------+----------------+-------------+--------------+ + +#. Using the host id, set the personality of this host to 'controller': + + :: + + system host-update 2 personality=controller + +#. Wait for the software installation on controller-1 to complete, for controller-1 to + reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'. + + This can take 5-10 minutes, depending on the performance of the host machine. 
+ + :: + + system host-list + +----+--------------+-------------+----------------+-------------+--------------+ + | id | hostname | personality | administrative | operational | availability | + +----+--------------+-------------+----------------+-------------+--------------+ + | 1 | controller-0 | controller | unlocked | enabled | available | + | 2 | controller-1 | controller | locked | disabled | online | + +----+--------------+-------------+----------------+-------------+--------------+ + +---------------------- +Configure controller-1 +---------------------- + +#. Configure the OAM and MGMT interfaces of controller-1 and specify the + attached networks. Use the OAM and MGMT port names, for example eth0, that are + applicable to your deployment environment: + + (Note that the MGMT interface is partially set up automatically by the network + install procedure.) + + :: + + OAM_IF= + MGMT_IF= + system host-if-modify controller-1 $OAM_IF -c platform + system interface-network-assign controller-1 $OAM_IF oam + system interface-network-assign controller-1 $MGMT_IF cluster-host + +#. Configure data interfaces for controller-1. Use the DATA port names, for example + eth0, applicable to your deployment environment. + + .. important:: + + This step is **required** for OpenStack. + + This step is optional for Kubernetes: Do this step if using SRIOV network + attachments in hosted application containers. + + For Kubernetes SRIOV network attachments: + + * Configure the SRIOV device plugin: + + :: + + system host-label-assign controller-1 sriovdp=enabled + + * If planning on running DPDK in containers on this host, configure the number + of 1G Huge pages required on both NUMA nodes: + + :: + + system host-memory-modify controller-1 0 -1G 100 + system host-memory-modify controller-1 1 -1G 100 + + + For both Kubernetes and OpenStack: + + :: + + DATA0IF= + DATA1IF= + export COMPUTE=controller-1 + PHYSNET0='physnet0' + PHYSNET1='physnet1' + SPL=/tmp/tmp-system-port-list + SPIL=/tmp/tmp-system-host-if-list + system host-port-list ${COMPUTE} --nowrap > ${SPL} + system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} + DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') + DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') + DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') + DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') + DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') + DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') + DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') + DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') + + system datanetwork-add ${PHYSNET0} vlan + system datanetwork-add ${PHYSNET1} vlan + + system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} + system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} + system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} + system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} + +#. Add an OSD on controller-1 for ceph: + + :: + + echo ">>> Add OSDs to primary tier" + system host-disk-list controller-1 + system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {} + system host-stor-list controller-1 + +************************************* +OpenStack-specific host configuration +************************************* + +.. 
important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + +#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in + support of installing the stx-openstack manifest and helm-charts later. + + :: + + system host-label-assign controller-1 openstack-control-plane=enabled + system host-label-assign controller-1 openstack-compute-node=enabled + system host-label-assign controller-1 openvswitch=enabled + system host-label-assign controller-1 sriov=enabled + +#. **For OpenStack only:** Set up disk partition for nova-local volume group, + which is needed for stx-openstack nova ephemeral disks. + + :: + + export COMPUTE=controller-1 + + echo ">>> Getting root disk info" + ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') + ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') + echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID" + + echo ">>>> Configuring nova-local" + NOVA_SIZE=34 + NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE}) + NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') + system host-lvg-add ${COMPUTE} nova-local + system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} + sleep 2 + + echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." + while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done + +------------------- +Unlock controller-1 +------------------- + +Unlock controller-1 in order to bring it into service: + +:: + + system host-unlock controller-1 + +Controller-1 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host +machine. + +---------- +Next steps +---------- + +.. include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/bare_metal/aio_simplex.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_simplex.rst new file mode 100644 index 000000000..822f47ecb --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/aio_simplex.rst @@ -0,0 +1,21 @@ +=============================================== +Bare metal All-in-one Simplex Installation R2.0 +=============================================== + +-------- +Overview +-------- + +.. include:: ../desc_aio_simplex.txt + +.. include:: ../ipv6_note.txt + +------------ +Installation +------------ + +.. toctree:: + :maxdepth: 2 + + aio_simplex_hardware + aio_simplex_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/bare_metal/aio_simplex_hardware.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_simplex_hardware.rst new file mode 100644 index 000000000..d01fa99c4 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/aio_simplex_hardware.rst @@ -0,0 +1,58 @@ +===================== +Hardware Requirements +===================== + +This section describes the hardware requirements and server preparation for a +**StarlingX R2.0 bare metal All-in-one Simplex** deployment configuration. + +.. 
contents:: + :local: + :depth: 1 + +----------------------------- +Minimum hardware requirements +----------------------------- + +The recommended minimum hardware requirements for bare metal servers for various +host types are: + ++-------------------------+-----------------------------------------------------------+ +| Minimum Requirement | All-in-one Controller Node | ++=========================+===========================================================+ +| Number of servers | 1 | ++-------------------------+-----------------------------------------------------------+ +| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | +| | 8 cores/socket | +| | | +| | or | +| | | +| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | +| | (low-power/low-cost option) | ++-------------------------+-----------------------------------------------------------+ +| Minimum memory | 64 GB | ++-------------------------+-----------------------------------------------------------+ +| Primary disk | 500 GB SDD or NVMe | ++-------------------------+-----------------------------------------------------------+ +| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD | +| | - Recommended, but not required: 1 or more SSDs or NVMe | +| | drives for Ceph journals (min. 1024 MiB per OSD | +| | journal) | +| | - For OpenStack, recommend 1 or more 500 GB (min. 10K | +| | RPM) for VM local ephemeral storage | ++-------------------------+-----------------------------------------------------------+ +| Minimum network ports | - OAM: 1x1GE | +| | - Data: 1 or more x 10GE | ++-------------------------+-----------------------------------------------------------+ +| BIOS settings | - Hyper-Threading technology enabled | +| | - Virtualization technology enabled | +| | - VT for directed I/O enabled | +| | - CPU power and performance policy set to performance | +| | - CPU C state control disabled | +| | - Plug & play BMC detection disabled | ++-------------------------+-----------------------------------------------------------+ + +-------------------------- +Prepare bare metal servers +-------------------------- + +.. include:: prep_servers.txt \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/bare_metal_aio_simplex.rst b/doc/source/deploy_install_guides/current/bare_metal/aio_simplex_install_kubernetes.rst similarity index 62% rename from doc/source/deploy_install_guides/current/bare_metal_aio_simplex.rst rename to doc/source/deploy_install_guides/current/bare_metal/aio_simplex_install_kubernetes.rst index a838482c4..d533b85c1 100644 --- a/doc/source/deploy_install_guides/current/bare_metal_aio_simplex.rst +++ b/doc/source/deploy_install_guides/current/bare_metal/aio_simplex_install_kubernetes.rst @@ -1,111 +1,26 @@ -================================== -Bare metal All-in-one Simplex R2.0 -================================== +================================================= +Install StarlingX Kubernetes on Bare Metal AIO-SX +================================================= + +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 bare metal All-in-one Simplex** deployment configuration. .. contents:: :local: :depth: 1 ------------ -Description ------------ - -.. include:: virtual_aio_simplex.rst - :start-after: incl-aio-simplex-intro-start: - :end-before: incl-aio-simplex-intro-end: - -.. 
include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - --------------------- -Hardware requirements +Create a bootable USB --------------------- -The recommended minimum requirements for bare metal servers for various host -types are: - -+-------------------------+-----------------------------------------------------------+ -| Minimum Requirement | All-in-one Controller Node | -+=========================+===========================================================+ -| Number of servers | 1 | -+-------------------------+-----------------------------------------------------------+ -| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | -| | 8 cores/socket | -| | | -| | or | -| | | -| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | -| | (low-power/low-cost option) | -+-------------------------+-----------------------------------------------------------+ -| Minimum memory | 64 GB | -+-------------------------+-----------------------------------------------------------+ -| Primary disk | 500 GB SDD or NVMe | -+-------------------------+-----------------------------------------------------------+ -| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD | -| | - Recommended, but not required: 1 or more SSDs or NVMe | -| | drives for Ceph journals (min. 1024 MiB per OSD | -| | journal) | -| | - For OpenStack, recommend 1 or more 500 GB (min. 10K | -| | RPM) for VM local ephemeral storage | -+-------------------------+-----------------------------------------------------------+ -| Minimum network ports | - OAM: 1x1GE | -| | - Data: 1 or more x 10GE | -+-------------------------+-----------------------------------------------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+-------------------------+-----------------------------------------------------------+ - ---------------------- -Preparing the servers ---------------------- - -.. incl-prepare-servers-start: - -Prior to starting the StarlingX installation, the bare metal servers must be in the -following condition: - -* Physically installed - -* Cabled for power - -* Cabled for networking - - * Far-end switch ports should be properly configured to realize the networking - shown in Figure 1. - -* All disks wiped - - * Ensures that servers will boot from either the network or USB storage (if present) - -* Powered off - -.. incl-prepare-servers-end: - --------------------- -StarlingX Kubernetes --------------------- - -******************************* -Installing StarlingX Kubernetes -******************************* - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Create a bootable USB with the StarlingX ISO -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to -create a bootable USB on your system. +create a bootable USB with the StarlingX ISO on your system. -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- -.. incl-install-software-controller-0-aio-start: +.. incl-install-software-controller-0-aio-simplex-start: #. Insert the bootable USB into a bootable USB port on the host you are configuring as controller-0. 
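
   As a reference for the bootable USB referred to above, one common way to write
   the StarlingX ISO to a USB stick from a Linux workstation is with ``dd``. This
   is only a sketch; the bootable USB guide linked earlier is the authoritative
   procedure, and the ISO file name and ``/dev/sdX`` below are placeholders for
   your environment:

   ::

      # Identify the USB device first; everything on it will be overwritten
      lsblk

      # Write the ISO to the USB device, then flush buffers before removing it
      sudo dd if=starlingx.iso of=/dev/sdX bs=4M status=progress
      sync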
@@ -125,11 +40,11 @@ Install software on controller-0 #. Wait for non-interactive install of software to complete and server to reboot. This can take 5-10 minutes, depending on the performance of the server. -.. incl-install-software-controller-0-aio-end: +.. incl-install-software-controller-0-aio-simplex-end: -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- #. Login using the username / password of "sysadmin" / "sysadmin". When logging in for the first time, you will be forced to change the password. @@ -210,9 +125,7 @@ Bootstrap system on controller-0 ansible_become_pass: EOF - Additional Ansible bootstrap configurations for advanced use cases are available: - - * :ref:`IPv6 ` + Additional :doc:`ansible_bootstrap_configs` are available for advanced use cases. #. Run the Ansible bootstrap playbook: @@ -223,11 +136,11 @@ Bootstrap system on controller-0 Wait for Ansible bootstrap playbook to complete. This can take 5-10 minutes, depending on the performance of the host machine. -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- -.. incl-config-controller-0-start: +.. incl-config-controller-0-aio-simplex-start: #. Acquire admin credentials: @@ -325,9 +238,9 @@ Configure controller-0 system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {} system host-stor-list controller-0 -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* .. important:: @@ -410,11 +323,13 @@ OpenStack-specific host configuration echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done -.. incl-config-controller-0-end: +.. incl-config-controller-0-aio-simplex-end: -^^^^^^^^^^^^^^^^^^^ +------------------- Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ +------------------- + +.. incl-unlock-controller-0-aio-simplex-start: Unlock controller-0 in order to bring it into service: @@ -422,43 +337,13 @@ Unlock controller-0 in order to bring it into service: system host-unlock controller-0 -Controller-0 will reboot in order to apply configuration change and come into +Controller-0 will reboot in order to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine. -When it completes, your Kubernetes cluster is up and running. +.. incl-unlock-controller-0-aio-simplex-end: -*************************** -Access StarlingX Kubernetes -*************************** +---------- +Next steps +---------- -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. 
include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: \ No newline at end of file +.. include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/ansible_bootstrap_configs.rst b/doc/source/deploy_install_guides/current/bare_metal/ansible_bootstrap_configs.rst similarity index 85% rename from doc/source/deploy_install_guides/current/ansible_bootstrap_configs.rst rename to doc/source/deploy_install_guides/current/bare_metal/ansible_bootstrap_configs.rst index 6a3e993d8..381fb78a1 100644 --- a/doc/source/deploy_install_guides/current/ansible_bootstrap_configs.rst +++ b/doc/source/deploy_install_guides/current/bare_metal/ansible_bootstrap_configs.rst @@ -1,6 +1,10 @@ -=========================================== -Additional Ansible bootstrap configurations -=========================================== +================================ +Ansible Bootstrap Configurations +================================ + +.. contents:: + :local: + :depth: 1 .. _ansible_bootstrap_ipv6: diff --git a/doc/source/deploy_install_guides/current/bare_metal/controller_storage.rst b/doc/source/deploy_install_guides/current/bare_metal/controller_storage.rst new file mode 100644 index 000000000..27ee145e7 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/controller_storage.rst @@ -0,0 +1,22 @@ +============================================================= +Bare metal Standard with Controller Storage Installation R2.0 +============================================================= + +-------- +Overview +-------- + +.. include:: ../desc_controller_storage.txt + +.. include:: ../ipv6_note.txt + + +------------ +Installation +------------ + +.. toctree:: + :maxdepth: 2 + + controller_storage_hardware + controller_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/bare_metal/controller_storage_hardware.rst b/doc/source/deploy_install_guides/current/bare_metal/controller_storage_hardware.rst new file mode 100644 index 000000000..0736fe749 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/controller_storage_hardware.rst @@ -0,0 +1,55 @@ +===================== +Hardware Requirements +===================== + +This section describes the hardware requirements and server preparation for a +**StarlingX R2.0 bare metal Standard with Controller Storage** deployment +configuration. + +.. 
contents:: + :local: + :depth: 1 + +----------------------------- +Minimum hardware requirements +----------------------------- + +The recommended minimum hardware requirements for bare metal servers for various +host types are: + ++-------------------------+-----------------------------+-----------------------------+ +| Minimum Requirement | Controller Node | Compute Node | ++=========================+=============================+=============================+ +| Number of servers | 2 | 2-10 | ++-------------------------+-----------------------------+-----------------------------+ +| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | +| | 8 cores/socket | ++-------------------------+-----------------------------+-----------------------------+ +| Minimum memory | 64 GB | 32 GB | ++-------------------------+-----------------------------+-----------------------------+ +| Primary disk | 500 GB SDD or NVMe | 120 GB (Minimum 10k RPM) | ++-------------------------+-----------------------------+-----------------------------+ +| Additional disks | - 1 or more 500 GB (min. | - For OpenStack, recommend | +| | 10K RPM) for Ceph OSD | 1 or more 500 GB (min. | +| | - Recommended, but not | 10K RPM) for VM local | +| | required: 1 or more SSDs | ephemeral storage | +| | or NVMe drives for Ceph | | +| | journals (min. 1024 MiB | | +| | per OSD journal) | | ++-------------------------+-----------------------------+-----------------------------+ +| Minimum network ports | - Mgmt/Cluster: 1x10GE | - Mgmt/Cluster: 1x10GE | +| | - OAM: 1x1GE | - Data: 1 or more x 10GE | ++-------------------------+-----------------------------+-----------------------------+ +| BIOS settings | - Hyper-Threading technology enabled | +| | - Virtualization technology enabled | +| | - VT for directed I/O enabled | +| | - CPU power and performance policy set to performance | +| | - CPU C state control disabled | +| | - Plug & play BMC detection disabled | ++-------------------------+-----------------------------+-----------------------------+ + +-------------------------- +Prepare bare metal servers +-------------------------- + +.. include:: prep_servers.txt \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/bare_metal_controller_storage.rst b/doc/source/deploy_install_guides/current/bare_metal/controller_storage_install_kubernetes.rst similarity index 74% rename from doc/source/deploy_install_guides/current/bare_metal_controller_storage.rst rename to doc/source/deploy_install_guides/current/bare_metal/controller_storage_install_kubernetes.rst index e5f520ba6..97118cda9 100644 --- a/doc/source/deploy_install_guides/current/bare_metal_controller_storage.rst +++ b/doc/source/deploy_install_guides/current/bare_metal/controller_storage_install_kubernetes.rst @@ -1,89 +1,25 @@ -.. _bm_standard_controller_r2: +=========================================================================== +Install StarlingX Kubernetes on Bare Metal Standard with Controller Storage +=========================================================================== -================================================ -Bare metal Standard with Controller Storage R2.0 -================================================ +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 bare metal Standard with Controller Storage** deployment +configuration. .. contents:: :local: :depth: 1 ------------ -Description ------------ - -.. 
include:: virtual_controller_storage.rst - :start-after: incl-controller-storage-intro-start: - :end-before: incl-controller-storage-intro-end: - -.. include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - --------------------- -Hardware requirements +Create a bootable USB --------------------- -The recommended minimum requirements for bare metal servers for various host -types are: - -+-------------------------+-----------------------------+-----------------------------+ -| Minimum Requirement | Controller Node | Compute Node | -+=========================+=============================+=============================+ -| Number of servers | 2 | 2-10 | -+-------------------------+-----------------------------+-----------------------------+ -| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | -| | 8 cores/socket | -+-------------------------+-----------------------------+-----------------------------+ -| Minimum memory | 64 GB | 32 GB | -+-------------------------+-----------------------------+-----------------------------+ -| Primary disk | 500 GB SDD or NVMe | 120 GB (Minimum 10k RPM) | -+-------------------------+-----------------------------+-----------------------------+ -| Additional disks | - 1 or more 500 GB (min. | - For OpenStack, recommend | -| | 10K RPM) for Ceph OSD | 1 or more 500 GB (min. | -| | - Recommended, but not | 10K RPM) for VM local | -| | required: 1 or more SSDs | ephemeral storage | -| | or NVMe drives for Ceph | | -| | journals (min. 1024 MiB | | -| | per OSD journal) | | -+-------------------------+-----------------------------+-----------------------------+ -| Minimum network ports | - Mgmt/Cluster: 1x10GE | - Mgmt/Cluster: 1x10GE | -| | - OAM: 1x1GE | - Data: 1 or more x 10GE | -+-------------------------+-----------------------------+-----------------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+-------------------------+-----------------------------+-----------------------------+ - ---------------- -Prepare Servers ---------------- - -.. include:: bare_metal_aio_simplex.rst - :start-after: incl-prepare-servers-start: - :end-before: incl-prepare-servers-end: - --------------------- -StarlingX Kubernetes --------------------- - -******************************* -Installing StarlingX Kubernetes -******************************* - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Create a bootable USB with the StarlingX ISO -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to -create a bootable USB on your system. +create a bootable USB with the StarlingX ISO on your system. -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- .. incl-install-software-controller-0-standard-start: @@ -107,9 +43,9 @@ Install software on controller-0 .. incl-install-software-controller-0-standard-end: -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- .. 
incl-bootstrap-sys-controller-0-standard-start: @@ -194,9 +130,7 @@ Bootstrap system on controller-0 ansible_become_pass: EOF - Additional Ansible bootstrap configurations for advanced use cases are available: - - * :ref:`IPv6 ` + Additional :doc:`ansible_bootstrap_configs` are available for advanced use cases. #. Run the Ansible bootstrap playbook: @@ -210,9 +144,9 @@ Bootstrap system on controller-0 .. incl-bootstrap-sys-controller-0-standard-end: -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- .. incl-config-controller-0-storage-start: @@ -247,9 +181,9 @@ Configure controller-0 system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* .. important:: @@ -308,17 +242,22 @@ OpenStack-specific host configuration .. incl-config-controller-0-storage-end: -^^^^^^^^^^^^^^^^^^^ +------------------- Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ +------------------- -.. include:: bare_metal_aio_duplex.rst - :start-after: incl-unlock-controller-0-start: - :end-before: incl-unlock-controller-0-end: +Unlock controller-0 in order to bring it into service: -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +:: + + system host-unlock controller-0 + +Controller-0 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +-------------------------------------------------- Install software on controller-1 and compute nodes -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------------------------- #. Power on the controller-1 server and force it to network boot with the appropriate BIOS boot options for your particular server. @@ -383,9 +322,9 @@ Install software on controller-1 and compute nodes | 4 | compute-1 | compute | locked | disabled | online | +----+--------------+-------------+----------------+-------------+--------------+ -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- Configure controller-1 -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- .. incl-config-controller-1-start: @@ -404,9 +343,9 @@ install procedure.) system interface-network-assign controller-1 $OAM_IF oam system interface-network-assign controller-1 $MGMT_IF cluster-host -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* .. important:: @@ -422,9 +361,9 @@ of installing the stx-openstack manifest and helm-charts later. .. incl-config-controller-1-end: -^^^^^^^^^^^^^^^^^^^ +------------------- Unlock controller-1 -^^^^^^^^^^^^^^^^^^^ +------------------- .. incl-unlock-controller-1-start: @@ -440,9 +379,9 @@ machine. .. incl-unlock-controller-1-end: -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- Configure compute nodes -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- #. Add the third Ceph monitor to compute-0: @@ -545,9 +484,9 @@ Configure compute nodes set +ex done -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* .. 
important:: @@ -586,9 +525,9 @@ OpenStack-specific host configuration while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done done -^^^^^^^^^^^^^^^^^^^^ +-------------------- Unlock compute nodes -^^^^^^^^^^^^^^^^^^^^ +-------------------- Unlock compute nodes in order to bring them into service: @@ -601,9 +540,9 @@ Unlock compute nodes in order to bring them into service: The compute nodes will reboot in order to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine. -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +---------------------------- Add Ceph OSDs to controllers -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +---------------------------- #. Add OSDs to controller-0: @@ -635,40 +574,8 @@ Add Ceph OSDs to controllers system host-stor-list $HOST -Your Kubernetes cluster is up and running. +---------- +Next steps +---------- -*************************** -Access StarlingX Kubernetes -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: +.. include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage.rst b/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage.rst new file mode 100644 index 000000000..cac7ce244 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage.rst @@ -0,0 +1,23 @@ +.. _bm_standard_dedicated_r2: + +============================================================ +Bare metal Standard with Dedicated Storage Installation R2.0 +============================================================ + +-------- +Overview +-------- + +.. include:: ../desc_dedicated_storage.txt + +.. include:: ../ipv6_note.txt + +------------ +Installation +------------ + +.. toctree:: + :maxdepth: 2 + + dedicated_storage_hardware + dedicated_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage_hardware.rst b/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage_hardware.rst new file mode 100644 index 000000000..eddec0124 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage_hardware.rst @@ -0,0 +1,60 @@ +===================== +Hardware Requirements +===================== + +This section describes the hardware requirements and server preparation for a +**StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment +configuration. + +.. 
contents:: + :local: + :depth: 1 + +----------------------------- +Minimum hardware requirements +----------------------------- + +The recommended minimum hardware requirements for bare metal servers for various +host types are: + ++---------------------+-----------------------+-----------------------+-----------------------+ +| Minimum Requirement | Controller Node | Storage Node | Compute Node | ++=====================+=======================+=======================+=======================+ +| Number of servers | 2 | 2-9 | 2-100 | ++---------------------+-----------------------+-----------------------+-----------------------+ +| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket | +| class | | ++---------------------+-----------------------+-----------------------+-----------------------+ +| Minimum memory | 64 GB | 64 GB | 32 GB | ++---------------------+-----------------------+-----------------------+-----------------------+ +| Primary disk | 500 GB SDD or NVM | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) | ++---------------------+-----------------------+-----------------------+-----------------------+ +| Additional disks | None | - 1 or more 500 GB | - For OpenStack, | +| | | (min.10K RPM) for | recommend 1 or more | +| | | Ceph OSD | 500 GB (min. 10K | +| | | - Recommended, but | RPM) for VM | +| | | not required: 1 or | ephemeral storage | +| | | more SSDs or NVMe | | +| | | drives for Ceph | | +| | | journals (min. 1024 | | +| | | MiB per OSD | | +| | | journal) | | ++---------------------+-----------------------+-----------------------+-----------------------+ +| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: | +| ports | 1x10GE | 1x10GE | 1x10GE | +| | - OAM: 1x1GE | | - Data: 1 or more | +| | | | x 10GE | ++---------------------+-----------------------+-----------------------+-----------------------+ +| BIOS settings | - Hyper-Threading technology enabled | +| | - Virtualization technology enabled | +| | - VT for directed I/O enabled | +| | - CPU power and performance policy set to performance | +| | - CPU C state control disabled | +| | - Plug & play BMC detection disabled | ++---------------------+-----------------------+-----------------------+-----------------------+ + +-------------------------- +Prepare bare metal servers +-------------------------- + +.. include:: prep_servers.txt \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/bare_metal_dedicated_storage.rst b/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage_install_kubernetes.rst similarity index 61% rename from doc/source/deploy_install_guides/current/bare_metal_dedicated_storage.rst rename to doc/source/deploy_install_guides/current/bare_metal/dedicated_storage_install_kubernetes.rst index 4bb9810ac..ce5c2cda6 100644 --- a/doc/source/deploy_install_guides/current/bare_metal_dedicated_storage.rst +++ b/doc/source/deploy_install_guides/current/bare_metal/dedicated_storage_install_kubernetes.rst @@ -1,126 +1,62 @@ -.. 
_bm_standard_dedicated_r2: +========================================================================== +Install StarlingX Kubernetes on Bare Metal Standard with Dedicated Storage +========================================================================== -=============================================== -Bare metal Standard with Dedicated Storage R2.0 -=============================================== +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment +configuration. .. contents:: :local: :depth: 1 ------------ -Description ------------ - -.. include:: virtual_dedicated_storage.rst - :start-after: incl-dedicated-storage-intro-start: - :end-before: incl-dedicated-storage-intro-end: - -.. include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - ---------------------- -Hardware requirements ---------------------- - -The recommended minimum requirements for bare metal servers for various host -types are: - -+---------------------+-----------------------+-----------------------+-----------------------+ -| Minimum Requirement | Controller Node | Storage Node | Compute Node | -+=====================+=======================+=======================+=======================+ -| Number of servers | 2 | 2-9 | 2-100 | -+---------------------+-----------------------+-----------------------+-----------------------+ -| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket | -| class | | -+---------------------+-----------------------+-----------------------+-----------------------+ -| Minimum memory | 64 GB | 64 GB | 32 GB | -+---------------------+-----------------------+-----------------------+-----------------------+ -| Primary disk | 500 GB SDD or NVM | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) | -+---------------------+-----------------------+-----------------------+-----------------------+ -| Additional disks | None | - 1 or more 500 GB | - For OpenStack, | -| | | (min.10K RPM) for | recommend 1 or more | -| | | Ceph OSD | 500 GB (min. 10K | -| | | - Recommended, but | RPM) for VM | -| | | not required: 1 or | ephemeral storage | -| | | more SSDs or NVMe | | -| | | drives for Ceph | | -| | | journals (min. 1024 | | -| | | MiB per OSD | | -| | | journal) | | -+---------------------+-----------------------+-----------------------+-----------------------+ -| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: | -| ports | 1x10GE | 1x10GE | 1x10GE | -| | - OAM: 1x1GE | | - Data: 1 or more | -| | | | x 10GE | -+---------------------+-----------------------+-----------------------+-----------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+---------------------+-----------------------+-----------------------+-----------------------+ - ---------------- -Prepare Servers ---------------- - -.. 
include:: bare_metal_aio_simplex.rst - :start-after: incl-prepare-servers-start: - :end-before: incl-prepare-servers-end: - --------------------- -StarlingX Kubernetes --------------------- - -******************************* -Installing StarlingX Kubernetes -******************************* - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------------------- Create a bootable USB with the StarlingX ISO -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------------------- Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to create a bootable USB on your system. -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- -.. include:: bare_metal_controller_storage.rst +.. include:: controller_storage_install_kubernetes.rst :start-after: incl-install-software-controller-0-standard-start: :end-before: incl-install-software-controller-0-standard-end: -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- -.. include:: bare_metal_controller_storage.rst +.. include:: controller_storage_install_kubernetes.rst :start-after: incl-bootstrap-sys-controller-0-standard-start: :end-before: incl-bootstrap-sys-controller-0-standard-end: -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- -.. include:: bare_metal_controller_storage.rst +.. include:: controller_storage_install_kubernetes.rst :start-after: incl-config-controller-0-storage-start: :end-before: incl-config-controller-0-storage-end: -^^^^^^^^^^^^^^^^^^^ +------------------- Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ +------------------- -.. include:: bare_metal_aio_duplex.rst - :start-after: incl-unlock-controller-0-start: - :end-before: incl-unlock-controller-0-end: +Unlock controller-0 in order to bring it into service: -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-1, storage nodes and compute nodes -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +:: + + system host-unlock controller-0 + +Controller-0 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +------------------------------------------------------------------ +Install software on controller-1, storage nodes, and compute nodes +------------------------------------------------------------------ #. Power on the controller-1 server and force it to network boot with the appropriate BIOS boot options for your particular server. @@ -209,25 +145,25 @@ Install software on controller-1, storage nodes and compute nodes | 6 | compute-1 | compute | locked | disabled | online | +----+--------------+-------------+----------------+-------------+--------------+ -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- Configure controller-1 -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- -.. include:: bare_metal_controller_storage.rst +.. include:: controller_storage_install_kubernetes.rst :start-after: incl-config-controller-1-start: :end-before: incl-config-controller-1-end: -^^^^^^^^^^^^^^^^^^^ +------------------- Unlock controller-1 -^^^^^^^^^^^^^^^^^^^ +------------------- -.. include:: bare_metal_controller_storage.rst +.. 
include:: controller_storage_install_kubernetes.rst :start-after: incl-unlock-controller-1-start: :end-before: incl-unlock-controller-1-end: -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- Configure storage nodes -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- #. Assign the cluster-host network to the MGMT interface for the storage nodes: @@ -270,9 +206,9 @@ Configure storage nodes system host-stor-list $HOST -^^^^^^^^^^^^^^^^^^^^ +-------------------- Unlock storage nodes -^^^^^^^^^^^^^^^^^^^^ +-------------------- Unlock storage nodes in order to bring them into service: @@ -286,9 +222,9 @@ The storage nodes will reboot in order to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine. -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- Configure compute nodes -^^^^^^^^^^^^^^^^^^^^^^^ +----------------------- #. Assign the cluster-host network to the MGMT interface for the compute nodes: @@ -367,9 +303,9 @@ Configure compute nodes set +ex done -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* .. important:: @@ -408,9 +344,9 @@ OpenStack-specific host configuration while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done done -^^^^^^^^^^^^^^^^^^^^ +-------------------- Unlock compute nodes -^^^^^^^^^^^^^^^^^^^^ +-------------------- Unlock compute nodes in order to bring them into service: @@ -424,40 +360,8 @@ The compute nodes will reboot in order to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine. -Your Kubernetes cluster is up and running. +---------- +Next steps +---------- -*************************** -Access StarlingX Kubernetes -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: +.. 
include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/bare_metal_ironic.rst b/doc/source/deploy_install_guides/current/bare_metal/ironic.rst similarity index 98% rename from doc/source/deploy_install_guides/current/bare_metal_ironic.rst rename to doc/source/deploy_install_guides/current/bare_metal/ironic.rst index 4f6af716d..3acd42279 100644 --- a/doc/source/deploy_install_guides/current/bare_metal_ironic.rst +++ b/doc/source/deploy_install_guides/current/bare_metal/ironic.rst @@ -6,9 +6,9 @@ Bare metal Standard with Ironic R2.0 :local: :depth: 1 ------------- -Introduction ------------- +-------- +Overview +-------- Ironic is an OpenStack project that provisions bare metal machines. For information about the Ironic project, see @@ -18,7 +18,7 @@ End user applications can be deployed on bare metal servers (instead of virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or more bare metal servers. -.. figure:: figures/starlingx-deployment-options-ironic.png +.. figure:: ../figures/starlingx-deployment-options-ironic.png :scale: 90% :alt: Standard with Ironic deployment configuration @@ -54,9 +54,9 @@ Installation options StarlingX currently supports only a bare metal installation of Ironic with a standard configuration, either: -* :ref:`Bare metal Standard with Controller Storage R2.0 ` +* :doc:`controller_storage` -* :ref:`Bare metal Standard with Dedicated Storage R2.0 ` +* :doc:`dedicated_storage` This guide assumes that you have a standard deployment installed and configured diff --git a/doc/source/deploy_install_guides/current/bare_metal/prep_servers.txt b/doc/source/deploy_install_guides/current/bare_metal/prep_servers.txt new file mode 100644 index 000000000..61a686201 --- /dev/null +++ b/doc/source/deploy_install_guides/current/bare_metal/prep_servers.txt @@ -0,0 +1,17 @@ +Prior to starting the StarlingX installation, the bare metal servers must be in +the following condition: + +* Physically installed + +* Cabled for power + +* Cabled for networking + + * Far-end switch ports should be properly configured to realize the networking + shown in Figure 1. + +* All disks wiped + + * Ensures that servers will boot from either the network or USB storage (if present) + +* Powered off \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/bare_metal_aio_duplex.rst b/doc/source/deploy_install_guides/current/bare_metal_aio_duplex.rst deleted file mode 100644 index d082f5486..000000000 --- a/doc/source/deploy_install_guides/current/bare_metal_aio_duplex.rst +++ /dev/null @@ -1,631 +0,0 @@ -================================= -Bare metal All-in-one Duplex R2.0 -================================= - -.. contents:: - :local: - :depth: 1 - ------------ -Description ------------ - -.. include:: virtual_aio_duplex.rst - :start-after: incl-aio-duplex-intro-start: - :end-before: incl-aio-duplex-intro-end: - -The bare metal AIO-DX deployment configuration may be extended with up to four -worker/compute nodes (not shown in the diagram). Installation instructions for -these additional nodes are described in `Extending capacity with worker / compute nodes`_. - -.. 
include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - ---------------------- -Hardware requirements ---------------------- - -The recommended minimum requirements for bare metal servers for various host -types are: - -+-------------------------+-----------------------------------------------------------+ -| Minimum Requirement | All-in-one Controller Node | -+=========================+===========================================================+ -| Number of servers | 2 | -+-------------------------+-----------------------------------------------------------+ -| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) | -| | 8 cores/socket | -| | | -| | or | -| | | -| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores | -| | (low-power/low-cost option) | -+-------------------------+-----------------------------------------------------------+ -| Minimum memory | 64 GB | -+-------------------------+-----------------------------------------------------------+ -| Primary disk | 500 GB SDD or NVMe | -+-------------------------+-----------------------------------------------------------+ -| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD | -| | - Recommended, but not required: 1 or more SSDs or NVMe | -| | drives for Ceph journals (min. 1024 MiB per OSD journal)| -| | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)| -| | for VM local ephemeral storage | -+-------------------------+-----------------------------------------------------------+ -| Minimum network ports | - Mgmt/Cluster: 1x10GE | -| | - OAM: 1x1GE | -| | - Data: 1 or more x 10GE | -+-------------------------+-----------------------------------------------------------+ -| BIOS settings | - Hyper-Threading technology enabled | -| | - Virtualization technology enabled | -| | - VT for directed I/O enabled | -| | - CPU power and performance policy set to performance | -| | - CPU C state control disabled | -| | - Plug & play BMC detection disabled | -+-------------------------+-----------------------------------------------------------+ - ---------------- -Prepare Servers ---------------- - -.. include:: bare_metal_aio_simplex.rst - :start-after: incl-prepare-servers-start: - :end-before: incl-prepare-servers-end: - --------------------- -StarlingX Kubernetes --------------------- - -******************************* -Installing StarlingX Kubernetes -******************************* - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Create a bootable USB with the StarlingX ISO -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to -create a bootable USB on your system. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. include:: bare_metal_aio_simplex.rst - :start-after: incl-install-software-controller-0-aio-start: - :end-before: incl-install-software-controller-0-aio-end: - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -#. Login using the username / password of "sysadmin" / "sysadmin". - When logging in for the first time, you will be forced to change the password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. External connectivity is required to run the Ansible bootstrap playbook. 
The - StarlingX boot image will DHCP out all interfaces so the server may have - obtained an IP address and have external IP connectivity if a DHCP server is - present in your environment. Verify this using the :command:`ip addr` and - :command:`ping 8.8.8.8` commands. - - Otherwise, manually configure an IP address and default IP route. Use the - PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your - deployment environment. - - :: - - sudo ip address add / dev - sudo ip link set up dev - sudo ip route add default via dev - ping 8.8.8.8 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible - configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml`` - The default configuration values for the bootstrap playbook. - - sysadmin home directory ($HOME) - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: ``$HOME/.yml``. - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit - the configurable values as desired (use the commented instructions in - the file). - - or - - * Create the minimal user configuration override file as shown in the - example below, using the OAM IP SUBNET and IP ADDRESSing applicable to your - deployment environment: - - :: - - cd ~ - cat < localhost.yml - system_mode: duplex - - dns_servers: - - 8.8.8.8 - - 8.8.4.4 - - external_oam_subnet: / - external_oam_gateway_address: - external_oam_floating_address: - external_oam_node_0_address: - external_oam_node_1_address: - - admin_username: admin - admin_password: - ansible_become_pass: - EOF - - Additional Ansible bootstrap configurations for advanced use cases are available: - - * :ref:`IPv6 ` - -#. Run the Ansible bootstrap playbook: - - :: - - ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. - This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ - -.. include:: bare_metal_aio_simplex.rst - :start-after: incl-config-controller-0-start: - :end-before: incl-config-controller-0-end: - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ - -.. incl-unlock-controller-0-start: - -Unlock controller-0 in order to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -.. incl-unlock-controller-0-end: - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-1 node -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -#. Power on the controller-1 server and force it to network boot with the - appropriate BIOS boot options for your particular server. - -#. As controller-1 boots, a message appears on its console instructing you to - configure the personality of the node. - -#. 
On the console of controller-0, list hosts to see newly discovered controller-1 - host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 2 personality=controller - -#. Wait for the software installation on controller-1 to complete, for controller-1 to - reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'. - - This can take 5-10 minutes, depending on the performance of the host machine. - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-1 -^^^^^^^^^^^^^^^^^^^^^^ - -#. Configure the OAM and MGMT interfaces of controller-1 and specify the - attached networks. Use the OAM and MGMT port names, for example eth0, that are - applicable to your deployment environment: - - (Note that the MGMT interface is partially set up automatically by the network - install procedure.) - - :: - - OAM_IF= - MGMT_IF= - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - system interface-network-assign controller-1 $MGMT_IF cluster-host - -#. Configure data interfaces for controller-1. Use the DATA port names, for example - eth0, applicable to your deployment environment. - - .. important:: - - This step is **required** for OpenStack. - - This step is optional for Kubernetes: Do this step if using SRIOV network - attachments in hosted application containers. 
- - For Kubernetes SRIOV network attachments: - - * Configure the SRIOV device plugin: - - :: - - system host-label-assign controller-1 sriovdp=enabled - - * If planning on running DPDK in containers on this host, configure the number - of 1G Huge pages required on both NUMA nodes: - - :: - - system host-memory-modify controller-1 0 -1G 100 - system host-memory-modify controller-1 1 -1G 100 - - - For both Kubernetes and OpenStack: - - :: - - DATA0IF= - DATA1IF= - export COMPUTE=controller-1 - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - system host-port-list ${COMPUTE} --nowrap > ${SPL} - system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - - system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} - system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} - -#. Add an OSD on controller-1 for ceph: - - :: - - echo ">>> Add OSDs to primary tier" - system host-disk-list controller-1 - system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {} - system host-stor-list controller-1 - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in - support of installing the stx-openstack manifest and helm-charts later. - - :: - - system host-label-assign controller-1 openstack-control-plane=enabled - system host-label-assign controller-1 openstack-compute-node=enabled - system host-label-assign controller-1 openvswitch=enabled - system host-label-assign controller-1 sriov=enabled - -#. **For OpenStack only:** Set up disk partition for nova-local volume group, - which is needed for stx-openstack nova ephemeral disks. - - :: - - export COMPUTE=controller-1 - - echo ">>> Getting root disk info" - ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID" - - echo ">>>> Configuring nova-local" - NOVA_SIZE=34 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${COMPUTE} nova-local - system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} - sleep 2 - - echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." 
- while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-1 -^^^^^^^^^^^^^^^^^^^ - -Unlock controller-1 in order to bring it into service: - -:: - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - -When it completes, your Kubernetes cluster is up and running. - -*************************** -Access StarlingX Kubernetes -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: - ----------------------------------------------- -Extending capacity with worker / compute nodes ----------------------------------------------- - -********************************* -Install software on compute nodes -********************************* - -#. Power on the compute servers and force them to network boot with the - appropriate BIOS boot options for your particular server. - -#. As the compute servers boot, a message appears on their console instructing - you to configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered compute - hosts (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-0 | controller | unlocked | enabled | available | - | 3 | None | None | locked | disabled | offline | - | 4 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 3 personality=worker hostname=compute-0 - system host-update 4 personality=worker hostname=compute-1 - - This initiates the install of software on compute nodes. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. Wait for the install of software on the computes to complete, the computes to - reboot and to both show as locked/disabled/online in 'system host-list'. 
- - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | unlocked | enabled | available | - | 3 | compute-0 | compute | locked | disabled | online | - | 4 | compute-1 | compute | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - -*********************** -Configure compute nodes -*********************** - -#. Assign the cluster-host network to the MGMT interface for the compute nodes: - - (Note that the MGMT interfaces are partially set up automatically by the - network install procedure.) - - :: - - for COMPUTE in compute-0 compute-1; do - system interface-network-assign $COMPUTE mgmt0 cluster-host - done - -#. Configure data interfaces for compute nodes. Use the DATA port names, for - example eth0, that are applicable to your deployment environment. - - .. important:: - - This step is **required** for OpenStack. - - This step is optional for Kubernetes: Do this step if using SRIOV network - attachments in hosted application containers. - - For Kubernetes SRIOV network attachments: - - * Configure SRIOV device plug in: - - :: - - system host-label-assign controller-1 sriovdp=enabled - - * If planning on running DPDK in containers on this host, configure the number - of 1G Huge pages required on both NUMA nodes: - - :: - - system host-memory-modify controller-1 0 -1G 100 - system host-memory-modify controller-1 1 -1G 100 - - For both Kubernetes and OpenStack: - - :: - - DATA0IF= - DATA1IF= - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - - # configure the datanetworks in sysinv, prior to referencing it - # in the ``system host-if-modify`` command'. - system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - for COMPUTE in compute-0 compute-1; do - echo "Configuring interface for: $COMPUTE" - set -ex - system host-port-list ${COMPUTE} --nowrap > ${SPL} - system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} - system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} - set +ex - done - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -OpenStack-specific host configuration -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. 
**For OpenStack only:** Assign OpenStack host labels to the compute nodes in - support of installing the stx-openstack manifest and helm-charts later. - - :: - - for NODE in compute-0 compute-1; do - system host-label-assign $NODE openstack-compute-node=enabled - system host-label-assign $NODE openvswitch=enabled - system host-label-assign $NODE sriov=enabled - done - -#. **For OpenStack only:** Setup disk partition for nova-local volume group, - needed for stx-openstack nova ephemeral disks. - - :: - - for COMPUTE in compute-0 compute-1; do - echo "Configuring Nova local for: $COMPUTE" - ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - PARTITION_SIZE=10 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${COMPUTE} nova-local - system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} - done - - for COMPUTE in compute-0 compute-1; do - echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." - while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done - done - -******************** -Unlock compute nodes -******************** - -Unlock compute nodes in order to bring them into service: - -:: - - for COMPUTE in compute-0 compute-1; do - system host-unlock $COMPUTE - done - -The compute nodes will reboot to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host -machine. - diff --git a/doc/source/deploy_install_guides/current/desc_aio_duplex.txt b/doc/source/deploy_install_guides/current/desc_aio_duplex.txt new file mode 100644 index 000000000..952f5836c --- /dev/null +++ b/doc/source/deploy_install_guides/current/desc_aio_duplex.txt @@ -0,0 +1,23 @@ +The All-in-one Duplex (AIO-DX) deployment option provides a pair of high +availability (HA) servers with each server providing all three cloud functions +(controller, compute, and storage). + +An AIO-DX configuration provides the following benefits: + +* Only a small amount of cloud processing and storage power is required +* Application consolidation using multiple virtual machines on a single pair of + physical servers +* High availability (HA) services run on the controller function across two + physical servers in either active/active or active/standby mode +* A storage back end solution using a two-node CEPH deployment across two servers +* Virtual machines scheduled on both compute functions +* Protection against overall server hardware fault, where + + * All controller HA services go active on the remaining healthy server + * All virtual machines are recovered on the remaining healthy server + +.. 
figure:: ../figures/starlingx-deployment-options-duplex.png + :scale: 50% + :alt: All-in-one Duplex deployment configuration + + *Figure 1: All-in-one Duplex deployment configuration* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/desc_aio_simplex.txt b/doc/source/deploy_install_guides/current/desc_aio_simplex.txt new file mode 100644 index 000000000..b3bd342ef --- /dev/null +++ b/doc/source/deploy_install_guides/current/desc_aio_simplex.txt @@ -0,0 +1,18 @@ +The All-in-one Simplex (AIO-SX) deployment option provides all three cloud +functions (controller, compute, and storage) on a single server with the +following benefits: + +* Requires only a small amount of cloud processing and storage power +* Application consolidation using multiple virtual machines on a single + physical server +* A storage backend solution using a single-node CEPH deployment + +.. figure:: ../figures/starlingx-deployment-options-simplex.png + :scale: 50% + :alt: All-in-one Simplex deployment configuration + + *Figure 1: All-in-one Simplex deployment configuration* + +An AIO-SX deployment gives no protection against overall server hardware fault. +Hardware component protection can be enabled with, for example, a hardware RAID +or 2x Port LAG in the deployment. \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/desc_controller_storage.txt b/doc/source/deploy_install_guides/current/desc_controller_storage.txt new file mode 100644 index 000000000..cf53c35ad --- /dev/null +++ b/doc/source/deploy_install_guides/current/desc_controller_storage.txt @@ -0,0 +1,22 @@ +The Standard with Controller Storage deployment option provides two high +availability (HA) controller nodes and a pool of up to 10 compute nodes. + +A Standard with Controller Storage configuration provides the following benefits: + +* A pool of up to 10 compute nodes +* High availability (HA) services run across the controller nodes in either + active/active or active/standby mode +* A storage back end solution using a two-node CEPH deployment across two + controller servers +* Protection against overall controller and compute node failure, where + + * On overall controller node failure, all controller HA services go active on + the remaining healthy controller node + * On overall compute node failure, virtual machines and containers are + recovered on the remaining healthy compute nodes + +.. figure:: ../figures/starlingx-deployment-options-controller-storage.png + :scale: 50% + :alt: Standard with Controller Storage deployment configuration + + *Figure 1: Standard with Controller Storage deployment configuration* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/desc_dedicated_storage.txt b/doc/source/deploy_install_guides/current/desc_dedicated_storage.txt new file mode 100644 index 000000000..ef41856fd --- /dev/null +++ b/doc/source/deploy_install_guides/current/desc_dedicated_storage.txt @@ -0,0 +1,17 @@ +The Standard with Dedicated Storage deployment option is a standard installation +with independent controller, compute, and storage nodes.
+ +A Standard with Dedicated Storage configuration provides the following benefits: + +* A pool of up to 100 compute nodes +* A 2x node high availability (HA) controller cluster with HA services running + across the controller nodes in either active/active or active/standby mode +* A storage back end solution using a two-to-9x node HA CEPH storage cluster + that supports a replication factor of two or three +* Up to four groups of 2x storage nodes, or up to three groups of 3x storage nodes + +.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png + :scale: 50% + :alt: Standard with Dedicated Storage deployment configuration + + *Figure 1: Standard with Dedicated Storage deployment configuration* \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/index.rst b/doc/source/deploy_install_guides/current/index.rst new file mode 100644 index 000000000..4d650bd98 --- /dev/null +++ b/doc/source/deploy_install_guides/current/index.rst @@ -0,0 +1,51 @@ +=========================== +StarlingX R2.0 Installation +=========================== + +StarlingX provides a pre-defined set of standard +:doc:`deployment configurations `. Most deployment options may +be installed in a virtual environment or on bare metal. + +----------------------------------------------------- +Install StarlingX Kubernetes in a virtual environment +----------------------------------------------------- + +.. toctree:: + :maxdepth: 2 + + virtual/aio_simplex + virtual/aio_duplex + virtual/controller_storage + virtual/dedicated_storage + +------------------------------------------ +Install StarlingX Kubernetes on bare metal +------------------------------------------ + +.. toctree:: + :maxdepth: 2 + + bare_metal/aio_simplex + bare_metal/aio_duplex + bare_metal/controller_storage + bare_metal/dedicated_storage + bare_metal/ironic + bare_metal/ansible_bootstrap_configs + +----------------- +Access Kubernetes +----------------- + +.. toctree:: + :maxdepth: 2 + + kubernetes_access + +------------------- +StarlingX OpenStack +------------------- + +.. toctree:: + :maxdepth: 2 + + openstack/index diff --git a/doc/source/deploy_install_guides/current/ipv6_note.txt b/doc/source/deploy_install_guides/current/ipv6_note.txt new file mode 100644 index 000000000..4187fd7f6 --- /dev/null +++ b/doc/source/deploy_install_guides/current/ipv6_note.txt @@ -0,0 +1,10 @@ +.. note:: + + By default, StarlingX uses IPv4. To use StarlingX with IPv6: + + * The entire infrastructure and cluster configuration must be IPv6, with the + exception of the PXE boot network. + + * Not all external servers are reachable via IPv6 addresses (for example + Docker registries). Depending on your infrastructure, it may be necessary + to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6. 
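As an illustration only: the following minimal sketch shows what the OAM- and DNS-related bootstrap overrides in ``$HOME/localhost.yml`` might look like for an IPv6 duplex or standard deployment. It reuses the override keys shown in the bootstrap examples in these guides; the ``2001:db8::/64`` documentation prefix and the DNS server addresses are placeholders, so substitute values from your environment and consult the IPv6 Ansible bootstrap configuration reference for the authoritative set of options.

::

  # Fragment of $HOME/localhost.yml (IPv6 example values only)
  dns_servers:
    - 2001:4860:4860::8888
    - 2001:4860:4860::8844

  external_oam_subnet: 2001:db8::/64
  external_oam_gateway_address: 2001:db8::1
  external_oam_floating_address: 2001:db8::2
  external_oam_node_0_address: 2001:db8::3
  external_oam_node_1_address: 2001:db8::4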
\ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/access_starlingx_kubernetes.rst b/doc/source/deploy_install_guides/current/kubernetes_access.rst similarity index 90% rename from doc/source/deploy_install_guides/current/access_starlingx_kubernetes.rst rename to doc/source/deploy_install_guides/current/kubernetes_access.rst index 8ed29bf4c..7768578e4 100644 --- a/doc/source/deploy_install_guides/current/access_starlingx_kubernetes.rst +++ b/doc/source/deploy_install_guides/current/kubernetes_access.rst @@ -1,6 +1,9 @@ -=========================== -Access StarlingX Kubernetes -=========================== +================================ +Access StarlingX Kubernetes R2.0 +================================ + +Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX +Kubernetes and hosted containerized applications. .. contents:: :local: diff --git a/doc/source/deploy_install_guides/current/kubernetes_install_next.txt b/doc/source/deploy_install_guides/current/kubernetes_install_next.txt new file mode 100644 index 000000000..877c4a9a6 --- /dev/null +++ b/doc/source/deploy_install_guides/current/kubernetes_install_next.txt @@ -0,0 +1,7 @@ +Your Kubernetes cluster is now up and running. + +For instructions on how to access StarlingX Kubernetes see +:doc:`../kubernetes_access`. + +For instructions on how to install and access StarlingX OpenStack see +:doc:`../openstack/index`. \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/access_starlingx_openstack.rst b/doc/source/deploy_install_guides/current/openstack/access.rst similarity index 87% rename from doc/source/deploy_install_guides/current/access_starlingx_openstack.rst rename to doc/source/deploy_install_guides/current/openstack/access.rst index 3dddbb95c..93d73a1f2 100644 --- a/doc/source/deploy_install_guides/current/access_starlingx_openstack.rst +++ b/doc/source/deploy_install_guides/current/openstack/access.rst @@ -2,6 +2,9 @@ Access StarlingX OpenStack ========================== +Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX +OpenStack and hosted virtualized applications. + .. contents:: :local: :depth: 1 diff --git a/doc/source/deploy_install_guides/current/openstack/index.rst b/doc/source/deploy_install_guides/current/openstack/index.rst new file mode 100644 index 000000000..9b429da9d --- /dev/null +++ b/doc/source/deploy_install_guides/current/openstack/index.rst @@ -0,0 +1,16 @@ +=================== +StarlingX OpenStack +=================== + +This section describes the steps to install and access StarlingX OpenStack. +Other than the OpenStack-specific configurations required in the underlying +StarlingX Kubernetes infrastructure (described in the installation steps for +StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX +is independent of deployment configuration. + +.. 
toctree:: + :maxdepth: 2 + + install + access + uninstall_delete \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/install_openstack.rst b/doc/source/deploy_install_guides/current/openstack/install.rst similarity index 77% rename from doc/source/deploy_install_guides/current/install_openstack.rst rename to doc/source/deploy_install_guides/current/openstack/install.rst index eeda581ae..7f66478f4 100644 --- a/doc/source/deploy_install_guides/current/install_openstack.rst +++ b/doc/source/deploy_install_guides/current/openstack/install.rst @@ -1,12 +1,8 @@ -================= -Install OpenStack -================= +=========================== +Install StarlingX OpenStack +=========================== -.. contents:: - :local: - :depth: 1 - -These installation instructions assume that you have completed the following +These instructions assume that you have completed the following OpenStack-specific configuration tasks that are required by the underlying StarlingX Kubernetes platform: @@ -60,12 +56,10 @@ Install application manifest and helm-charts watch -n 5 system application-list - When it completes, your OpenStack cloud is up and running. +---------- +Next steps +---------- --------------------------- -Access StarlingX OpenStack --------------------------- +Your OpenStack cloud is now up and running. -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: +See :doc:`access` for details on how to access StarlingX OpenStack. diff --git a/doc/source/deploy_install_guides/current/uninstall_delete_openstack.rst b/doc/source/deploy_install_guides/current/openstack/uninstall_delete.rst similarity index 84% rename from doc/source/deploy_install_guides/current/uninstall_delete_openstack.rst rename to doc/source/deploy_install_guides/current/openstack/uninstall_delete.rst index e6e3506a3..9578ceb4e 100644 --- a/doc/source/deploy_install_guides/current/uninstall_delete_openstack.rst +++ b/doc/source/deploy_install_guides/current/openstack/uninstall_delete.rst @@ -1,9 +1,9 @@ -=================== -Uninstall OpenStack -=================== +============================= +Uninstall StarlingX OpenStack +============================= This section provides additional commands for uninstalling and deleting the -OpenStack application. +StarlingX OpenStack application. .. warning:: diff --git a/doc/source/deploy_install_guides/current/virtual/aio_duplex.rst b/doc/source/deploy_install_guides/current/virtual/aio_duplex.rst new file mode 100644 index 000000000..5add4c3ee --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/aio_duplex.rst @@ -0,0 +1,21 @@ +=========================================== +Virtual All-in-one Duplex Installation R2.0 +=========================================== + +-------- +Overview +-------- + +.. include:: ../desc_aio_duplex.txt + +.. include:: ../ipv6_note.txt + +------------ +Installation +------------ + +.. 
toctree:: + :maxdepth: 2 + + aio_duplex_environ + aio_duplex_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/virtual/aio_duplex_environ.rst b/doc/source/deploy_install_guides/current/virtual/aio_duplex_environ.rst new file mode 100644 index 000000000..1a08c405d --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/aio_duplex_environ.rst @@ -0,0 +1,52 @@ +============================ +Prepare Host and Environment +============================ + +This section describes how to prepare the physical host and virtual environment +for a **StarlingX R2.0 virtual All-in-one Duplex** deployment configuration. + +.. contents:: + :local: + :depth: 1 + +------------------------------------ +Physical host requirements and setup +------------------------------------ + +.. include:: physical_host_req.txt + +--------------------------------------- +Prepare virtual environment and servers +--------------------------------------- + +The following steps explain how to prepare the virtual environment and servers +on a physical host for a StarlingX R2.0 virtual All-in-one Duplex deployment +configuration. + +#. Prepare virtual environment. + + Set up the virtual platform networks for virtual deployment: + + :: + + bash setup_network.sh + +#. Prepare virtual servers. + + Create the XML definitions for the virtual servers required by this + configuration option. This will create the XML virtual server definition for: + + * duplex-controller-0 + * duplex-controller-1 + + The following command will start/virtually power on: + + * The 'duplex-controller-0' virtual server + * The X-based graphical virt-manager application + + :: + + bash setup_configuration.sh -c duplex -i ./bootimage.iso + + If there is no X-server present, then errors are returned. + diff --git a/doc/source/deploy_install_guides/current/virtual_aio_duplex.rst b/doc/source/deploy_install_guides/current/virtual/aio_duplex_install_kubernetes.rst similarity index 68% rename from doc/source/deploy_install_guides/current/virtual_aio_duplex.rst rename to doc/source/deploy_install_guides/current/virtual/aio_duplex_install_kubernetes.rst index 309813dc1..5e1d44e19 100644 --- a/doc/source/deploy_install_guides/current/virtual_aio_duplex.rst +++ b/doc/source/deploy_install_guides/current/virtual/aio_duplex_install_kubernetes.rst @@ -1,577 +1,425 @@ -============================== -Virtual All-in-one Duplex R2.0 -============================== - -.. contents:: - :local: - :depth: 1 - ------------ -Description ------------ - -.. incl-aio-duplex-intro-start: - -The All-in-one Duplex (AIO-DX) deployment option provides a pair of high -availability (HA) servers with each server providing all three cloud functions -(controller, compute, and storage). - -An AIO-DX configuration provides the following benefits: - -* Only a small amount of cloud processing and storage power is required -* Application consolidation using multiple virtual machines on a single pair of - physical servers -* High availability (HA) services run on the controller function across two - physical servers in either active/active or active/standby mode -* A storage back end solution using a two-node CEPH deployment across two servers -* Virtual machines scheduled on both compute functions -* Protection against overall server hardware fault, where - - * All controller HA services go active on the remaining healthy server - * All virtual machines are recovered on the remaining healthy server - -.. 
figure:: figures/starlingx-deployment-options-duplex.png - :scale: 50% - :alt: All-in-one Duplex deployment configuration - - *Figure 1: All-in-one Duplex deployment configuration* - -.. incl-aio-duplex-intro-end: - -.. include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - ------------------------------------- -Physical host requirements and setup ------------------------------------- - -.. include:: virtual_aio_simplex.rst - :start-after: incl-virt-physical-host-req-start: - :end-before: incl-virt-physical-host-req-end: - ---------------------------------------- -Prepare virtual environment and servers ---------------------------------------- - -On the host, prepare the virtual environment and virtual servers. - -#. Set up virtual platform networks for virtual deployment: - - :: - - bash setup_network.sh - -#. Create the XML definitions for the virtual servers required by this - configuration option. This creates the XML virtual server definition for: - - * duplex-controller-0 - * duplex-controller-1 - - The following command will start/virtually power on: - - * the 'duplex-controller-0' virtual server - * the X-based graphical virt-manager application - - If there is no X-server present, then errors are returned. - - :: - - bash setup_configuration.sh -c duplex -i ./bootimage.iso - --------------------- -StarlingX Kubernetes --------------------- - -***************************************** -Install the StarlingX Kubernetes platform -***************************************** - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -In the last step of "Prepare the virtual environment and virtual servers" the -controller-0 virtual server 'duplex-controller-0' was started by the -:command:`setup_configuration.sh` command. - -On the host, attach to the console of virtual controller-0 and select the appropriate -installer menu options to start the non-interactive install of -StarlingX software on controller-0. - -.. note:: - - When entering the console, it is very easy to miss the first installer menu - selection. Use ESC to navigate to previous menus, to ensure you are at the - first installer menu. - -:: - - virsh console duplex-controller-0 - -Make the following menu selections in the installer: - -#. First menu: Select 'All-in-one Controller Configuration' -#. Second menu: Select 'Serial Console' -#. Third menu: Select 'Standard Security Profile' - -Wait for the non-interactive install of software to complete and for the server -to reboot. This can take 5-10 minutes, depending on the performance of the host -machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Log in using the username / password of "sysadmin" / "sysadmin". - When logging in for the first time, you will be forced to change the password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. External connectivity is required to run the Ansible bootstrap playbook. - - :: - - export CONTROLLER0_OAM_CIDR=10.10.10.3/24 - export DEFAULT_OAM_GATEWAY=10.10.10.1 - sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1 - sudo ip link set up dev enp7s1 - sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. 
- - Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible - configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml`` - The default configuration values for the bootstrap playbook. - - sysadmin home directory ($HOME) - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: ``$HOME/.yml``. - - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit - the configurable values as desired (use the commented instructions in - the file). - - or - - * Create the minimal user configuration override file as shown in the example - below: - - :: - - cd ~ - cat < localhost.yml - system_mode: duplex - - dns_servers: - - 8.8.8.8 - - 8.8.4.4 - - external_oam_subnet: 10.10.10.0/24 - external_oam_gateway_address: 10.10.10.1 - external_oam_floating_address: 10.10.10.2 - external_oam_node_0_address: 10.10.10.3 - external_oam_node_1_address: 10.10.10.4 - - admin_username: admin - admin_password: - ansible_become_pass: - EOF - - Additional Ansible bootstrap configurations for advanced use cases are available: - - * :ref:`IPv6 ` - -#. Run the Ansible bootstrap playbook: - - :: - - ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. - This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the OAM and MGMT interfaces of controller-0 and specify the - attached networks: - - :: - - OAM_IF=enp7s1 - MGMT_IF=enp7s2 - system host-if-modify controller-0 lo -c none - IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') - for UUID in $IFNET_UUIDS; do - system interface-network-remove ${UUID} - done - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - system host-if-modify controller-0 $MGMT_IF -c platform - system interface-network-assign controller-0 $MGMT_IF mgmt - system interface-network-assign controller-0 $MGMT_IF cluster-host - -#. Configure NTP Servers for network time synchronization: - - .. note:: - - In a virtual environment, this can sometimes cause Ceph clock skew alarms. - Also, the virtual instances clock is synchronized with the host clock, - so it is not absolutely required to configure NTP in this step. - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - -#. Configure data interfaces for controller-0. - - .. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - - 1G Huge Pages are not supported in the virtual environment and there is no - virtual NIC supporting SRIOV. For that reason, data interfaces are not - applicable in the virtual environment for the Kubernetes-only scenario. 
- - For OpenStack only: - - :: - - DATA0IF=eth1000 - DATA1IF=eth1001 - export COMPUTE=controller-0 - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - system host-port-list ${COMPUTE} --nowrap > ${SPL} - system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - - system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} - system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} - -#. Add an OSD on controller-0 for ceph: - - :: - - system host-disk-list controller-0 - system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {} - system host-stor-list controller-0 - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the stx-openstack manifest/helm-charts later. - - :: - - system host-label-assign controller-0 openstack-control-plane=enabled - system host-label-assign controller-0 openstack-compute-node=enabled - system host-label-assign controller-0 openvswitch=enabled - system host-label-assign controller-0 sriov=enabled - -#. **For OpenStack only:** A vSwitch is required. - - The default vSwitch is containerized OVS that is packaged with the - stx-openstack manifest/helm-charts. StarlingX provides the option to use - OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT - supported, only OVS is supported. Therefore, simply use the default OVS - vSwitch here. - -#. **For OpenStack Only:** Set up disk partition for nova-local volume group, - which is needed for stx-openstack nova ephemeral disks. - - :: - - export COMPUTE=controller-0 - - echo ">>> Getting root disk info" - ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID" - - echo ">>>> Configuring nova-local" - NOVA_SIZE=34 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${COMPUTE} nova-local - system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} - sleep 2 - - echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." 
- while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ - -Unlock virtual controller-0 to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-1 node -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'. It will - automatically attempt to network boot over the management network: - - :: - - virsh start duplex-controller-1 - -#. Attach to the console of virtual controller-1: - - :: - - virsh console duplex-controller-1 - - As controller-1 VM boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On the console of virtual controller-0, list hosts to see the newly discovered - controller-1 host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. On virtual controller-0, using the host id, set the personality of this host - to 'controller': - - :: - - system host-update 2 personality=controller - -#. Wait for the software installation on controller-1 to complete, controller-1 to - reboot, and controller-1 to show as locked/disabled/online in 'system host-list'. - This can take 5-10 minutes, depending on the performance of the host machine. - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-1 -^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Configure the OAM and MGMT interfaces of controller-1 and specify the - attached networks. Note that the MGMT interface is partially set up - automatically by the network install procedure. - - :: - - OAM_IF=enp7s1 - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - system interface-network-assign controller-1 mgmt0 cluster-host - -#. Configure data interfaces for controller-1. - - .. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - - 1G Huge Pages are not supported in the virtual environment and there is no - virtual NIC supporting SRIOV. For that reason, data interfaces are not - applicable in the virtual environment for the Kubernetes-only scenario. 
- - For OpenStack only: - - :: - - DATA0IF=eth1000 - DATA1IF=eth1001 - export COMPUTE=controller-1 - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - system host-port-list ${COMPUTE} --nowrap > ${SPL} - system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - - system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} - system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} - -#. Add an OSD on controller-1 for ceph: - - :: - - echo ">>> Add OSDs to primary tier" - system host-disk-list controller-1 - system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {} - system host-stor-list controller-1 - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in - support of installing the stx-openstack manifest/helm-charts later: - - :: - - system host-label-assign controller-1 openstack-control-plane=enabled - system host-label-assign controller-1 openstack-compute-node=enabled - system host-label-assign controller-1 openvswitch=enabled - system host-label-assign controller-1 sriov=enabled - -#. **For OpenStack only:** Set up disk partition for nova-local volume group, - which is needed for stx-openstack nova ephemeral disks: - - :: - - export COMPUTE=controller-1 - - echo ">>> Getting root disk info" - ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID" - - echo ">>>> Configuring nova-local" - NOVA_SIZE=34 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${COMPUTE} nova-local - system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-1 -^^^^^^^^^^^^^^^^^^^ - -Unlock virtual controller-1 in order to bring it into service: - -:: - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -When it completes, your Kubernetes cluster is up and running. 
- -*************************** -Access StarlingX Kubernetes -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: +============================================== +Install StarlingX Kubernetes on Virtual AIO-DX +============================================== + +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 virtual All-in-one Duplex** deployment configuration. + +.. contents:: + :local: + :depth: 1 + +-------------------------------- +Install software on controller-0 +-------------------------------- + +In the last step of "Prepare virtual environment and servers" the +controller-0 virtual server 'duplex-controller-0' was started by the +:command:`setup_configuration.sh` command. + +On the host, attach to the console of virtual controller-0 and select the appropriate +installer menu options to start the non-interactive install of +StarlingX software on controller-0. + +.. note:: + + When entering the console, it is very easy to miss the first installer menu + selection. Use ESC to navigate to previous menus, to ensure you are at the + first installer menu. + +:: + + virsh console duplex-controller-0 + +Make the following menu selections in the installer: + +#. First menu: Select 'All-in-one Controller Configuration' +#. Second menu: Select 'Serial Console' +#. Third menu: Select 'Standard Security Profile' + +Wait for the non-interactive install of software to complete and for the server +to reboot. This can take 5-10 minutes, depending on the performance of the host +machine. + +-------------------------------- +Bootstrap system on controller-0 +-------------------------------- + +On virtual controller-0: + +#. Log in using the username / password of "sysadmin" / "sysadmin". + When logging in for the first time, you will be forced to change the password. + + :: + + Login: sysadmin + Password: + Changing password for sysadmin. + (current) UNIX Password: sysadmin + New Password: + (repeat) New Password: + +#. External connectivity is required to run the Ansible bootstrap playbook. + + :: + + export CONTROLLER0_OAM_CIDR=10.10.10.3/24 + export DEFAULT_OAM_GATEWAY=10.10.10.1 + sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1 + sudo ip link set up dev enp7s1 + sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1 + +#. Specify user configuration overrides for the Ansible bootstrap playbook. + + Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible + configuration are: + + ``/etc/ansible/hosts`` + The default Ansible inventory file. Contains a single host: localhost. 
+ + ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml`` + The Ansible bootstrap playbook. + + ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml`` + The default configuration values for the bootstrap playbook. + + sysadmin home directory ($HOME) + The default location where Ansible looks for and imports user + configuration override files for hosts. For example: ``$HOME/.yml``. + + + Specify the user configuration override file for the Ansible bootstrap + playbook using one of the following methods: + + * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit + the configurable values as desired (use the commented instructions in + the file). + + or + + * Create the minimal user configuration override file as shown in the example + below: + + :: + + cd ~ + cat < localhost.yml + system_mode: duplex + + dns_servers: + - 8.8.8.8 + - 8.8.4.4 + + external_oam_subnet: 10.10.10.0/24 + external_oam_gateway_address: 10.10.10.1 + external_oam_floating_address: 10.10.10.2 + external_oam_node_0_address: 10.10.10.3 + external_oam_node_1_address: 10.10.10.4 + + admin_username: admin + admin_password: + ansible_become_pass: + EOF + + Additional Ansible bootstrap configurations for advanced use cases are available: + + * :ref:`IPv6 ` + +#. Run the Ansible bootstrap playbook: + + :: + + ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml + + Wait for Ansible bootstrap playbook to complete. + This can take 5-10 minutes, depending on the performance of the host machine. + +---------------------- +Configure controller-0 +---------------------- + +On virtual controller-0: + +#. Acquire admin credentials: + + :: + + source /etc/platform/openrc + +#. Configure the OAM and MGMT interfaces of controller-0 and specify the + attached networks: + + :: + + OAM_IF=enp7s1 + MGMT_IF=enp7s2 + system host-if-modify controller-0 lo -c none + IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') + for UUID in $IFNET_UUIDS; do + system interface-network-remove ${UUID} + done + system host-if-modify controller-0 $OAM_IF -c platform + system interface-network-assign controller-0 $OAM_IF oam + system host-if-modify controller-0 $MGMT_IF -c platform + system interface-network-assign controller-0 $MGMT_IF mgmt + system interface-network-assign controller-0 $MGMT_IF cluster-host + +#. Configure NTP Servers for network time synchronization: + + .. note:: + + In a virtual environment, this can sometimes cause Ceph clock skew alarms. + Also, the virtual instances clock is synchronized with the host clock, + so it is not absolutely required to configure NTP in this step. + + :: + + system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org + +#. Configure data interfaces for controller-0. + + .. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + + 1G Huge Pages are not supported in the virtual environment and there is no + virtual NIC supporting SRIOV. For that reason, data interfaces are not + applicable in the virtual environment for the Kubernetes-only scenario. 
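+
+   If you do configure data interfaces (the OpenStack case below), the
+   commands parse the output of ``system host-port-list`` and
+   ``system host-if-list`` to look up each data interface UUID by its port
+   name. As an optional spot-check (a sketch only, using the eth1000/eth1001
+   port names assumed in this virtual environment), you can review that
+   output first and confirm the data ports are present:
+
+   ::
+
+      # Review the port and interface tables parsed by the commands below;
+      # the data ports should appear as eth1000 and eth1001.
+      system host-port-list controller-0 --nowrap
+      system host-if-list -a controller-0 --nowrap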
+ + For OpenStack only: + + :: + + DATA0IF=eth1000 + DATA1IF=eth1001 + export COMPUTE=controller-0 + PHYSNET0='physnet0' + PHYSNET1='physnet1' + SPL=/tmp/tmp-system-port-list + SPIL=/tmp/tmp-system-host-if-list + system host-port-list ${COMPUTE} --nowrap > ${SPL} + system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} + DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') + DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') + DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') + DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') + DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') + DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') + DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') + DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') + + system datanetwork-add ${PHYSNET0} vlan + system datanetwork-add ${PHYSNET1} vlan + + system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} + system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} + system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} + system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} + +#. Add an OSD on controller-0 for ceph: + + :: + + system host-disk-list controller-0 + system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {} + system host-stor-list controller-0 + +************************************* +OpenStack-specific host configuration +************************************* + +.. include:: aio_simplex_install_kubernetes.rst + :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start: + :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end: + +------------------- +Unlock controller-0 +------------------- + +Unlock virtual controller-0 to bring it into service: + +:: + + system host-unlock controller-0 + +Controller-0 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +------------------------------------- +Install software on controller-1 node +------------------------------------- + +#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'. It will + automatically attempt to network boot over the management network: + + :: + + virsh start duplex-controller-1 + +#. Attach to the console of virtual controller-1: + + :: + + virsh console duplex-controller-1 + + As controller-1 VM boots, a message appears on its console instructing you to + configure the personality of the node. + +#. On the console of virtual controller-0, list hosts to see the newly discovered + controller-1 host (hostname=None): + + :: + + system host-list + +----+--------------+-------------+----------------+-------------+--------------+ + | id | hostname | personality | administrative | operational | availability | + +----+--------------+-------------+----------------+-------------+--------------+ + | 1 | controller-0 | controller | unlocked | enabled | available | + | 2 | None | None | locked | disabled | offline | + +----+--------------+-------------+----------------+-------------+--------------+ + +#. On virtual controller-0, using the host id, set the personality of this host + to 'controller': + + :: + + system host-update 2 personality=controller + +#. 
Wait for the software installation on controller-1 to complete, controller-1 to + reboot, and controller-1 to show as locked/disabled/online in 'system host-list'. + This can take 5-10 minutes, depending on the performance of the host machine. + + :: + + system host-list + +----+--------------+-------------+----------------+-------------+--------------+ + | id | hostname | personality | administrative | operational | availability | + +----+--------------+-------------+----------------+-------------+--------------+ + | 1 | controller-0 | controller | unlocked | enabled | available | + | 2 | controller-1 | controller | locked | disabled | online | + +----+--------------+-------------+----------------+-------------+--------------+ + +---------------------- +Configure controller-1 +---------------------- + +On virtual controller-0: + +#. Configure the OAM and MGMT interfaces of controller-1 and specify the + attached networks. Note that the MGMT interface is partially set up + automatically by the network install procedure. + + :: + + OAM_IF=enp7s1 + system host-if-modify controller-1 $OAM_IF -c platform + system interface-network-assign controller-1 $OAM_IF oam + system interface-network-assign controller-1 mgmt0 cluster-host + +#. Configure data interfaces for controller-1. + + .. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + + 1G Huge Pages are not supported in the virtual environment and there is no + virtual NIC supporting SRIOV. For that reason, data interfaces are not + applicable in the virtual environment for the Kubernetes-only scenario. + + For OpenStack only: + + :: + + DATA0IF=eth1000 + DATA1IF=eth1001 + export COMPUTE=controller-1 + PHYSNET0='physnet0' + PHYSNET1='physnet1' + SPL=/tmp/tmp-system-port-list + SPIL=/tmp/tmp-system-host-if-list + system host-port-list ${COMPUTE} --nowrap > ${SPL} + system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} + DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') + DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') + DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') + DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') + DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') + DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') + DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') + DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') + + system datanetwork-add ${PHYSNET0} vlan + system datanetwork-add ${PHYSNET1} vlan + + system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} + system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} + system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} + system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} + +#. Add an OSD on controller-1 for ceph: + + :: + + echo ">>> Add OSDs to primary tier" + system host-disk-list controller-1 + system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {} + system host-stor-list controller-1 + +************************************* +OpenStack-specific host configuration +************************************* + +.. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + +#. 
**For OpenStack only:** Assign OpenStack host labels to controller-1 in + support of installing the stx-openstack manifest/helm-charts later: + + :: + + system host-label-assign controller-1 openstack-control-plane=enabled + system host-label-assign controller-1 openstack-compute-node=enabled + system host-label-assign controller-1 openvswitch=enabled + system host-label-assign controller-1 sriov=enabled + +#. **For OpenStack only:** Set up disk partition for nova-local volume group, + which is needed for stx-openstack nova ephemeral disks: + + :: + + export COMPUTE=controller-1 + + echo ">>> Getting root disk info" + ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') + ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') + echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID" + + echo ">>>> Configuring nova-local" + NOVA_SIZE=34 + NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE}) + NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') + system host-lvg-add ${COMPUTE} nova-local + system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} + +------------------- +Unlock controller-1 +------------------- + +Unlock virtual controller-1 in order to bring it into service: + +:: + + system host-unlock controller-1 + +Controller-1 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +---------- +Next steps +---------- + +.. include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/virtual/aio_simplex.rst b/doc/source/deploy_install_guides/current/virtual/aio_simplex.rst new file mode 100644 index 000000000..bcbb14f16 --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/aio_simplex.rst @@ -0,0 +1,21 @@ +============================================ +Virtual All-in-one Simplex Installation R2.0 +============================================ + +-------- +Overview +-------- + +.. include:: ../desc_aio_simplex.txt + +.. include:: ../ipv6_note.txt + +------------ +Installation +------------ + +.. toctree:: + :maxdepth: 2 + + aio_simplex_environ + aio_simplex_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/virtual/aio_simplex_environ.rst b/doc/source/deploy_install_guides/current/virtual/aio_simplex_environ.rst new file mode 100644 index 000000000..2aae37500 --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/aio_simplex_environ.rst @@ -0,0 +1,50 @@ +============================ +Prepare Host and Environment +============================ + +This section describes how to prepare the physical host and virtual environment +for a **StarlingX R2.0 virtual All-in-one Simplex** deployment configuration. + +.. contents:: + :local: + :depth: 1 + +------------------------------------ +Physical host requirements and setup +------------------------------------ + +.. include:: physical_host_req.txt + +--------------------------------------- +Prepare virtual environment and servers +--------------------------------------- + +The following steps explain how to prepare the virtual environment and servers +on a physical host for a StarlingX R2.0 virtual All-in-one Simplex deployment +configuration. + +#. Prepare virtual environment. + + Set up the virtual platform networks for virtual deployment: + + :: + + bash setup_network.sh + +#. 
Prepare virtual servers. + + Create the XML definitions for the virtual servers required by this + configuration option. This will create the XML virtual server definition for: + + * simplex-controller-0 + + The following command will start/virtually power on: + + * The 'simplex-controller-0' virtual server + * The X-based graphical virt-manager application + + :: + + bash setup_configuration.sh -c simplex -i ./bootimage.iso + + If there is no X-server present, then errors will occur. diff --git a/doc/source/deploy_install_guides/current/virtual_aio_simplex.rst b/doc/source/deploy_install_guides/current/virtual/aio_simplex_install_kubernetes.rst similarity index 54% rename from doc/source/deploy_install_guides/current/virtual_aio_simplex.rst rename to doc/source/deploy_install_guides/current/virtual/aio_simplex_install_kubernetes.rst index 312584608..fb24dcf20 100644 --- a/doc/source/deploy_install_guides/current/virtual_aio_simplex.rst +++ b/doc/source/deploy_install_guides/current/virtual/aio_simplex_install_kubernetes.rst @@ -1,185 +1,19 @@ -=============================== -Virtual All-in-one Simplex R2.0 -=============================== +============================================== +Install StarlingX Kubernetes on Virtual AIO-SX +============================================== + +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 virtual All-in-one Simplex** deployment configuration. .. contents:: :local: :depth: 1 ------------ -Description ------------ - -.. incl-aio-simplex-intro-start: - -The All-in-one Simplex (AIO-SX) deployment option provides all three cloud -functions (controller, compute, and storage) on a single server. - -An AIO-SX configuration provides the following benefits: - -* Only a small amount of cloud processing and storage power is required -* Application consolidation using multiple virtual machines on a single pair of - physical servers -* A storage backend solution using a single-node CEPH deployment - -An AIO-SX deployment provides no protection against overall server hardware -fault, as protection is either not required or provided at a higher level. -Hardware component protection can be enable with, for example, a hardware RAID -or 2x Port LAG in the deployment. - - -.. figure:: figures/starlingx-deployment-options-simplex.png - :scale: 50% - :alt: All-in-one Simplex deployment configuration - - *Figure 1: All-in-one Simplex deployment configuration* - -.. incl-aio-simplex-intro-end: - -.. incl-ipv6-note-start: - -.. note:: - - By default, StarlingX uses IPv4. To use StarlingX with IPv6: - - * The entire infrastructure and cluster configuration must be IPv6, with the - exception of the PXE boot network. - - * Not all external servers are reachable via IPv6 addresses (e.g. Docker - registries). Depending on your infrastructure, it may be necessary to deploy - a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6. - -.. incl-ipv6-note-end: - ------------------------------------- -Physical host requirements and setup ------------------------------------- - -.. 
incl-virt-physical-host-req-start: - -This section describes: - -* System requirements for the workstation hosting the virtual machine(s) where - StarlingX will be deployed - -* Host setup - -********************* -Hardware requirements -********************* - -The host system should have at least: - -* **Processor:** x86_64 only supported architecture with BIOS enabled hardware - virtualization extensions - -* **Cores:** 8 - -* **Memory:** 32GB RAM - -* **Hard Disk:** 500GB HDD - -* **Network:** One network adapter with active Internet connection - -********************* -Software requirements -********************* - -The host system should have at least: - -* A workstation computer with Ubuntu 16.04 LTS 64-bit - -All other required packages will be installed by scripts in the StarlingX tools repository. - -********** -Host setup -********** - -Set up the host with the following steps: - -#. Update OS: - - :: - - apt-get update - -#. Clone the StarlingX tools repository: - - :: - - apt-get install -y git - cd $HOME - git clone https://opendev.org/starlingx/tools - -#. Install required packages: - - :: - - cd $HOME/tools/deployment/libvirt/ - bash install_packages.sh - apt install -y apparmor-profiles - apt-get install -y ufw - ufw disable - ufw status - - - .. note:: - - On Ubuntu 16.04, if apparmor-profile modules were installed as shown in - the example above, you must reboot the server to fully install the - apparmor-profile modules. - - -#. Get the StarlingX ISO. This can be from a private StarlingX build or from the public Cengn - StarlingX build off 'master' branch, as shown below: - - :: - - wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso - -.. incl-virt-physical-host-req-end: - ---------------------------------------- -Prepare virtual environment and servers ---------------------------------------- - -On the host, prepare the virtual environment and virtual servers. - -#. Set up virtual platform networks for virtual deployment: - - :: - - bash setup_network.sh - -#. Create the XML definitions for the virtual servers required by this - configuration option. This creates the XML virtual server definition for: - - * simplex-controller-0 - - The following command will start/virtually power on: - - * the 'simplex-controller-0' virtual server - * the X-based graphical virt-manager application - - If there is no X-server present, then errors will occur. - - :: - - bash setup_configuration.sh -c simplex -i ./bootimage.iso - --------------------- -StarlingX Kubernetes --------------------- - -***************************************** -Install the StarlingX Kubernetes platform -***************************************** - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- -In the last step of `Prepare virtual environment and servers`_, the +In the last step of "Prepare virtual environment and servers", the controller-0 virtual server 'simplex-controller-0' was started by the :command:`setup_configuration.sh` command. @@ -207,9 +41,9 @@ Wait for the non-interactive install of software to complete and for the server to reboot. This can take 5-10 minutes, depending on the performance of the host machine. 
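+
+If you detach from the console while waiting (the usual libvirt escape
+sequence is ``Ctrl+]``; this is generic virsh behavior rather than a
+StarlingX-specific step), you can reattach at any time to monitor progress:
+
+::
+
+  # Confirm the domain is still running, then reattach to its console.
+  virsh list --all
+  virsh console simplex-controller-0
+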
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +-------------------------------- On virtual controller-0: @@ -300,9 +134,9 @@ On virtual controller-0: Wait for Ansible bootstrap playbook to complete. This can take 5-10 minutes, depending on the performance of the host machine. -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ +---------------------- On virtual controller-0: @@ -381,9 +215,11 @@ On virtual controller-0: system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {} system host-stor-list controller-0 -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +************************************* + +.. incl-config-controller-0-openstack-specific-aio-simplex-start: .. important:: @@ -400,7 +236,7 @@ OpenStack-specific host configuration system host-label-assign controller-0 openvswitch=enabled system host-label-assign controller-0 sriov=enabled -#. **For OpenStack only**: A vSwitch is required. +#. **For OpenStack only:** A vSwitch is required. The default vSwitch is containerized OVS that is packaged with the stx-openstack manifest/helm-charts. StarlingX provides the option to use @@ -431,9 +267,11 @@ OpenStack-specific host configuration echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready." while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done -^^^^^^^^^^^^^^^^^^^ +.. incl-config-controller-0-openstack-specific-aio-simplex-end: + +------------------- Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ +------------------- Unlock virtual controller-0 to bring it into service: @@ -444,60 +282,8 @@ Unlock virtual controller-0 to bring it into service: Controller-0 will reboot to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine. -When it completes, your Kubernetes cluster is up and running. +---------- +Next steps +---------- -*************************** -Access StarlingX Kubernetes -*************************** - -.. incl-access-starlingx-kubernetes-start: - -Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX -Kubernetes and hosted containerized applications. Refer to details on accessing -the StarlingX Kubernetes cluster in the -:doc:`Access StarlingX Kubernetes guide `. - -.. incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. incl-install-starlingx-openstack-start: - -Other than the OpenStack-specific configurations required in the underlying -StarlingX/Kubernetes infrastructure (described in the installation steps for the -StarlingX Kubernetes platform above), the installation of containerized OpenStack -for StarlingX is independent of deployment configuration. Refer to the -:doc:`Install OpenStack guide ` -for installation instructions. - -.. incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. incl-access-starlingx-openstack-start: - -Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX -OpenStack and hosted virtualized applications. 
Refer to details on accessing -StarlingX OpenStack in the -:doc:`Access StarlingX OpenStack guide `. - -.. incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. incl-uninstall-starlingx-openstack-start: - -Refer to the :doc:`Uninstall OpenStack guide ` for -instructions on how to uninstall and delete the OpenStack application. - -.. incl-uninstall-starlingx-openstack-end: \ No newline at end of file +.. include:: ../kubernetes_install_next.txt \ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/virtual/controller_storage.rst b/doc/source/deploy_install_guides/current/virtual/controller_storage.rst new file mode 100644 index 000000000..50687780e --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/controller_storage.rst @@ -0,0 +1,21 @@ +========================================================== +Virtual Standard with Controller Storage Installation R2.0 +========================================================== + +-------- +Overview +-------- + +.. include:: ../desc_controller_storage.txt + +.. include:: ../ipv6_note.txt + +------------ +Installation +------------ + +.. toctree:: + :maxdepth: 2 + + controller_storage_environ + controller_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/virtual/controller_storage_environ.rst b/doc/source/deploy_install_guides/current/virtual/controller_storage_environ.rst new file mode 100644 index 000000000..da54b110c --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/controller_storage_environ.rst @@ -0,0 +1,54 @@ +============================ +Prepare Host and Environment +============================ + +This section describes how to prepare the physical host and virtual environment +for a **StarlingX R2.0 virtual Standard with Controller Storage** deployment +configuration. + +.. contents:: + :local: + :depth: 1 + +------------------------------------ +Physical host requirements and setup +------------------------------------ + +.. include:: physical_host_req.txt + +--------------------------------------- +Prepare virtual environment and servers +--------------------------------------- + +The following steps explain how to prepare the virtual environment and servers +on a physical host for a StarlingX R2.0 virtual Standard with Controller Storage +deployment configuration. + +#. Prepare virtual environment. + + Set up virtual platform networks for virtual deployment: + + :: + + bash setup_network.sh + +#. Prepare virtual servers. + + Create the XML definitions for the virtual servers required by this + configuration option. This will create the XML virtual server definition for: + + * controllerstorage-controller-0 + * controllerstorage-controller-1 + * controllerstorage-worker-0 + * controllerstorage-worker-1 + + The following command will start/virtually power on: + + * The 'controllerstorage-controller-0' virtual server + * The X-based graphical virt-manager application + + :: + + bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso + + If there is no X-server present, then errors are returned. 
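+
+   As an optional check (a sketch only; it assumes the libvirt defaults used
+   by the setup scripts), confirm that the virtual servers were defined and
+   that 'controllerstorage-controller-0' is the only one running at this
+   point:
+
+   ::
+
+      # All four controllerstorage-* domains should be listed; only
+      # controller-0 has been started so far.
+      virsh list --all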
\ No newline at end of file diff --git a/doc/source/deploy_install_guides/current/virtual_controller_storage.rst b/doc/source/deploy_install_guides/current/virtual/controller_storage_install_kubernetes.rst similarity index 78% rename from doc/source/deploy_install_guides/current/virtual_controller_storage.rst rename to doc/source/deploy_install_guides/current/virtual/controller_storage_install_kubernetes.rst index 6b5b0c0d9..433a1b257 100644 --- a/doc/source/deploy_install_guides/current/virtual_controller_storage.rst +++ b/doc/source/deploy_install_guides/current/virtual/controller_storage_install_kubernetes.rst @@ -1,641 +1,550 @@ -============================================= -Virtual Standard with Controller Storage R2.0 -============================================= - -.. contents:: - :local: - :depth: 1 - ------------ -Description ------------ - -.. incl-controller-storage-intro-start: - -The Standard with Controller Storage deployment option provides two high -availability (HA) controller nodes and a pool of up to 10 compute nodes. - -A Standard with Controller Storage configuration provides the following benefits: - -* A pool of up to 10 compute nodes -* High availability (HA) services run across the controller nodes in either - active/active or active/standby mode -* A storage back end solution using a two-node CEPH deployment across two - controller servers -* Protection against overall controller and compute node failure, where - - * On overall controller node failure, all controller HA services go active on - the remaining healthy controller node - * On overall compute node failure, virtual machines and containers are - recovered on the remaining healthy compute nodes - -.. figure:: figures/starlingx-deployment-options-controller-storage.png - :scale: 50% - :alt: Standard with Controller Storage deployment configuration - - *Figure 1: Standard with Controller Storage deployment configuration* - -.. incl-controller-storage-intro-end: - -.. include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - ------------------------------------- -Physical host requirements and setup ------------------------------------- - -.. include:: virtual_aio_simplex.rst - :start-after: incl-virt-physical-host-req-start: - :end-before: incl-virt-physical-host-req-end: - ---------------------------------------- -Prepare virtual environment and servers ---------------------------------------- - -On the host, prepare the virtual environment and virtual servers. - -#. Set up virtual platform networks for virtual deployment: - - :: - - bash setup_network.sh - -#. Create the XML definitions for the virtual servers required by this - configuration option. This creates the XML virtual server definition for: - - * controllerstorage-controller-0 - * controllerstorage-controller-1 - * controllerstorage-worker-0 - * controllerstorage-worker-1 - - The following command will start/virtually power on: - - * the 'controllerstorage-controller-0' virtual server - * the X-based graphical virt-manager application - - If there is no X-server present, then errors are returned. 
- - :: - - bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso - --------------------- -StarlingX Kubernetes --------------------- - -******************************* -Installing StarlingX Kubernetes -******************************* - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -In the last step of "Prepare the virtual environment and virtual servers" the -controller-0 virtual server 'controllerstorage-controller-0' was started by the -:command:`setup_configuration.sh` command. - -On the host, attach to the console of virtual controller-0 and select the appropriate -installer menu options to start the non-interactive install of -StarlingX software on controller-0. - -.. note:: - - When entering the console, it is very easy to miss the first installer menu - selection. Use ESC to navigate to previous menus, to ensure you are at the - first installer menu. - -:: - - virsh console controllerstorage-controller-0 - -Make the following menu selections in the installer: - -#. First menu: Select 'Standard Controller Configuration' -#. Second menu: Select 'Serial Console' -#. Third menu: Select 'Standard Security Profile' - -Wait for the non-interactive install of software to complete and for the server -to reboot. This can take 5-10 minutes depending on the performance of the host -machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Log in using the username / password of "sysadmin" / "sysadmin". - When logging in for the first time, you will be forced to change the password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. External connectivity is required to run the Ansible bootstrap playbook: - - :: - - export CONTROLLER0_OAM_CIDR=10.10.10.3/24 - export DEFAULT_OAM_GATEWAY=10.10.10.1 - sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1 - sudo ip link set up dev enp7s1 - sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible - configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml`` - The default configuration values for the bootstrap playbook. - - sysadmin home directory ($HOME) - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: ``$HOME/.yml``. - - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit - the configurable values as desired (use the commented instructions in - the file). 
- - or - - * Create the minimal user configuration override file as shown in the example - below: - - :: - - cd ~ - cat < localhost.yml - system_mode: duplex - - dns_servers: - - 8.8.8.8 - - 8.8.4.4 - - external_oam_subnet: 10.10.10.0/24 - external_oam_gateway_address: 10.10.10.1 - external_oam_floating_address: 10.10.10.2 - external_oam_node_0_address: 10.10.10.3 - external_oam_node_1_address: 10.10.10.4 - - admin_username: admin - admin_password: - ansible_become_pass: - EOF - - Additional Ansible bootstrap configurations for advanced use cases are available: - - * :ref:`IPv6 ` - -#. Run the Ansible bootstrap playbook: - - :: - - ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. - This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the OAM and MGMT interfaces of controller-0 and specify the - attached networks: - - :: - - OAM_IF=enp7s1 - MGMT_IF=enp7s2 - system host-if-modify controller-0 lo -c none - IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') - for UUID in $IFNET_UUIDS; do - system interface-network-remove ${UUID} - done - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - system host-if-modify controller-0 $MGMT_IF -c platform - system interface-network-assign controller-0 $MGMT_IF mgmt - system interface-network-assign controller-0 $MGMT_IF cluster-host - -#. Configure NTP Servers for network time synchronization: - - .. note:: - - In a virtual environment, this can sometimes cause Ceph clock skew alarms. - Also, the virtual instances clock is synchronized with the host clock, - so it is not absolutely required to configure NTP here. - - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the stx-openstack manifest/helm-charts later. - - :: - - system host-label-assign controller-0 openstack-control-plane=enabled - -#. **For OpenStack only:** A vSwitch is required. - - The default vSwitch is containerized OVS that is packaged with the - stx-openstack manifest/helm-charts. StarlingX provides the option to use - OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT - supported, only OVS is supported. Therefore, simply use the default OVS - vSwitch here. - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ - -Unlock virtual controller-0 in order to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-1 and compute nodes -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -#. On the host, power on the controller-1 virtual server, - 'controllerstorage-controller-1'. 
It will automatically attempt to network - boot over the management network: - - :: - - virsh start controllerstorage-controller-1 - -#. Attach to the console of virtual controller-1: - - :: - - virsh console controllerstorage-controller-1 - - As controller-1 VM boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On console of virtual controller-0, list hosts to see the newly discovered - controller-1 host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. On virtual controller-0, using the host id, set the personality of this host - to 'controller': - - :: - - system host-update 2 personality=controller - - This initiates the install of software on controller-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. While waiting on the previous step to complete, start up and set the personality - for 'controllerstorage-worker-0' and 'controllerstorage-worker-1'. Set the - personality to 'worker' and assign a unique hostname for each. - - For example, start 'controllerstorage-worker-0' from the host: - - :: - - virsh start controllerstorage-worker-0 - - Wait for new host (hostname=None) to be discovered by checking - ‘system host-list’ on virtual controller-0: - - :: - - system host-update 3 personality=worker hostname=compute-0 - - Repeat for 'controllerstorage-worker-1'. On the host: - - :: - - virsh start controllerstorage-worker-1 - - And wait for new host (hostname=None) to be discovered by checking - ‘system host-list’ on virtual controller-0: - - :: - - system host-update 4 personality=worker hostname=compute-1 - -#. Wait for the software installation on controller-1, compute-0, and compute-1 to - complete, for all virtual servers to reboot, and for all to show as - locked/disabled/online in 'system host-list'. - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - | 3 | compute-0 | compute | locked | disabled | online | - | 4 | compute-1 | compute | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-1 -^^^^^^^^^^^^^^^^^^^^^^ - -Configure the OAM and MGMT interfaces of virtual controller-0 and specify the -attached networks. Note that the MGMT interface is partially set up by the -network install procedure. - -:: - - OAM_IF=enp7s1 - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - system interface-network-assign controller-1 mgmt0 cluster-host - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. 
important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -**For OpenStack only:** Assign OpenStack host labels to controller-1 in support -of installing the stx-openstack manifest/helm-charts later: - -:: - - system host-label-assign controller-1 openstack-control-plane=enabled - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-1 -^^^^^^^^^^^^^^^^^^^ - -Unlock virtual controller-1 in order to bring it into service: - -:: - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^ -Configure compute nodes -^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Add the third Ceph monitor to compute-0: - - (The first two Ceph monitors are automatically assigned to controller-0 and - controller-1.) - - :: - - system ceph-mon-add compute-0 - -#. Wait for the compute node monitor to complete configuration: - - :: - - system ceph-mon-list - +--------------------------------------+-------+--------------+------------+------+ - | uuid | ceph_ | hostname | state | task | - | | mon_g | | | | - | | ib | | | | - +--------------------------------------+-------+--------------+------------+------+ - | 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None | - | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None | - | f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | compute-0 | configured | None | - +--------------------------------------+-------+--------------+------------+------+ - -#. Assign the cluster-host network to the MGMT interface for the compute nodes. - - Note that the MGMT interfaces are partially set up automatically by the - network install procedure. - - :: - - for COMPUTE in compute-0 compute-1; do - system interface-network-assign $COMPUTE mgmt0 cluster-host - done - -#. Configure data interfaces for compute nodes. - - .. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - - 1G Huge Pages are not supported in the virtual environment and there is no - virtual NIC supporting SRIOV. For that reason, data interfaces are not - applicable in the virtual environment for the Kubernetes-only scenario. - - For OpenStack only: - - :: - - DATA0IF=eth1000 - DATA1IF=eth1001 - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - - # configure the datanetworks in sysinv, prior to referencing it - # in the ``system host-if-modify`` command'. 
- system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - for COMPUTE in compute-0 compute-1; do - echo "Configuring interface for: $COMPUTE" - set -ex - system host-port-list ${COMPUTE} --nowrap > ${SPL} - system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} - system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} - set +ex - done - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in - support of installing the stx-openstack manifest/helm-charts later: - - :: - - for NODE in compute-0 compute-1; do - system host-label-assign $NODE openstack-compute-node=enabled - system host-label-assign $NODE openvswitch=enabled - system host-label-assign $NODE sriov=enabled - done - -#. **For OpenStack only:** Set up disk partition for nova-local volume group, - which is needed for stx-openstack nova ephemeral disks: - - :: - - for COMPUTE in compute-0 compute-1; do - echo "Configuring Nova local for: $COMPUTE" - ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - PARTITION_SIZE=10 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${COMPUTE} nova-local - system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} - done - -^^^^^^^^^^^^^^^^^^^^ -Unlock compute nodes -^^^^^^^^^^^^^^^^^^^^ - -Unlock virtual compute nodes to bring them into service: - -:: - - for COMPUTE in compute-0 compute-1; do - system host-unlock $COMPUTE - done - -The compute nodes will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Add Ceph OSDs to controllers -^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Add OSDs to controller-0: - - :: - - HOST=controller-0 - DISKS=$(system host-disk-list ${HOST}) - TIERS=$(system storage-tier-list ceph_cluster) - OSDs="/dev/sdb" - for OSD in $OSDs; do - system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') - while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? 
-ne 0 ]; then break; fi; sleep 1; done - done - - system host-stor-list $HOST - -#. Add OSDs to controller-1: - - :: - - HOST=controller-1 - DISKS=$(system host-disk-list ${HOST}) - TIERS=$(system storage-tier-list ceph_cluster) - OSDs="/dev/sdb" - for OSD in $OSDs; do - system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') - while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done - done - - system host-stor-list $HOST - -Your Kubernetes cluster is now up and running. - -*************************** -Access StarlingX Kubernetes -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: \ No newline at end of file +======================================================================== +Install StarlingX Kubernetes on Virtual Standard with Controller Storage +======================================================================== + +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 virtual Standard with Controller Storage** deployment +configuration. + +.. contents:: + :local: + :depth: 1 + +-------------------------------- +Install software on controller-0 +-------------------------------- + +In the last step of "Prepare the virtual environment and virtual servers" the +controller-0 virtual server 'controllerstorage-controller-0' was started by the +:command:`setup_configuration.sh` command. + +On the host, attach to the console of virtual controller-0 and select the appropriate +installer menu options to start the non-interactive install of +StarlingX software on controller-0. + +.. note:: + + When entering the console, it is very easy to miss the first installer menu + selection. Use ESC to navigate to previous menus, to ensure you are at the + first installer menu. + +:: + + virsh console controllerstorage-controller-0 + +Make the following menu selections in the installer: + +#. First menu: Select 'Standard Controller Configuration' +#. Second menu: Select 'Serial Console' +#. Third menu: Select 'Standard Security Profile' + +Wait for the non-interactive install of software to complete and for the server +to reboot. This can take 5-10 minutes depending on the performance of the host +machine. + +-------------------------------- +Bootstrap system on controller-0 +-------------------------------- + +.. incl-bootstrap-controller-0-virt-controller-storage-start: + +On virtual controller-0: + +#. Log in using the username / password of "sysadmin" / "sysadmin". 
+   When logging in for the first time, you will be forced to change the password.
+
+   ::
+
+      Login: sysadmin
+      Password:
+      Changing password for sysadmin.
+      (current) UNIX Password: sysadmin
+      New Password:
+      (repeat) New Password:
+
+#. External connectivity is required to run the Ansible bootstrap playbook:
+
+   ::
+
+      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
+      export DEFAULT_OAM_GATEWAY=10.10.10.1
+      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
+      sudo ip link set up dev enp7s1
+      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
+
+#. Specify user configuration overrides for the Ansible bootstrap playbook.
+
+   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
+   configuration are:
+
+   ``/etc/ansible/hosts``
+     The default Ansible inventory file. Contains a single host: localhost.
+
+   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
+     The Ansible bootstrap playbook.
+
+   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
+     The default configuration values for the bootstrap playbook.
+
+   sysadmin home directory ($HOME)
+     The default location where Ansible looks for and imports user
+     configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
+
+
+   Specify the user configuration override file for the Ansible bootstrap
+   playbook using one of the following methods:
+
+   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
+     the configurable values as desired (use the commented instructions in
+     the file).
+
+     or
+
+   * Create the minimal user configuration override file as shown in the example
+     below:
+
+     ::
+
+       cd ~
+       cat <<EOF > localhost.yml
+       system_mode: duplex
+
+       dns_servers:
+         - 8.8.8.8
+         - 8.8.4.4
+
+       external_oam_subnet: 10.10.10.0/24
+       external_oam_gateway_address: 10.10.10.1
+       external_oam_floating_address: 10.10.10.2
+       external_oam_node_0_address: 10.10.10.3
+       external_oam_node_1_address: 10.10.10.4
+
+       admin_username: admin
+       admin_password:
+       ansible_become_pass:
+       EOF
+
+   Additional Ansible bootstrap configurations for advanced use cases are available:
+
+   * :ref:`IPv6 `
+
+#. Run the Ansible bootstrap playbook:
+
+   ::
+
+      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml
+
+   Wait for Ansible bootstrap playbook to complete.
+   This can take 5-10 minutes, depending on the performance of the host machine.
+
+.. incl-bootstrap-controller-0-virt-controller-storage-end:
+
+----------------------
+Configure controller-0
+----------------------
+
+.. incl-config-controller-0-virt-controller-storage-start:
+
+On virtual controller-0:
+
+#. Acquire admin credentials:
+
+   ::
+
+      source /etc/platform/openrc
+
+#. Configure the OAM and MGMT interfaces of controller-0 and specify the
+   attached networks:
+
+   ::
+
+      OAM_IF=enp7s1
+      MGMT_IF=enp7s2
+      system host-if-modify controller-0 lo -c none
+      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
+      for UUID in $IFNET_UUIDS; do
+         system interface-network-remove ${UUID}
+      done
+      system host-if-modify controller-0 $OAM_IF -c platform
+      system interface-network-assign controller-0 $OAM_IF oam
+      system host-if-modify controller-0 $MGMT_IF -c platform
+      system interface-network-assign controller-0 $MGMT_IF mgmt
+      system interface-network-assign controller-0 $MGMT_IF cluster-host
+
+#. Configure NTP Servers for network time synchronization:
+
+   .. note::
+
+      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
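+
+      If a clock skew alarm is raised, it can be reviewed with the fault
+      management CLI once it appears in the active alarm list; this is an
+      optional check only, shown here as a convenience:
+
+      ::
+
+         fm alarm-list
+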
+ Also, the virtual instance clock is synchronized with the host clock, + so it is not absolutely required to configure NTP here. + + :: + + system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org + +************************************* +OpenStack-specific host configuration +************************************* + +.. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + +#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in + support of installing the stx-openstack manifest/helm-charts later: + + :: + + system host-label-assign controller-0 openstack-control-plane=enabled + +#. **For OpenStack only:** A vSwitch is required. + + The default vSwitch is containerized OVS that is packaged with the + stx-openstack manifest/helm-charts. StarlingX provides the option to use + OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT + supported, only OVS is supported. Therefore, simply use the default OVS + vSwitch here. + +.. incl-config-controller-0-virt-controller-storage-end: + +------------------- +Unlock controller-0 +------------------- + +Unlock virtual controller-0 in order to bring it into service: + +:: + + system host-unlock controller-0 + +Controller-0 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +-------------------------------------------------- +Install software on controller-1 and compute nodes +-------------------------------------------------- + +#. On the host, power on the controller-1 virtual server, + 'controllerstorage-controller-1'. It will automatically attempt to network + boot over the management network: + + :: + + virsh start controllerstorage-controller-1 + +#. Attach to the console of virtual controller-1: + + :: + + virsh console controllerstorage-controller-1 + + As controller-1 VM boots, a message appears on its console instructing you to + configure the personality of the node. + +#. On console of virtual controller-0, list hosts to see the newly discovered + controller-1 host (hostname=None): + + :: + + system host-list + +----+--------------+-------------+----------------+-------------+--------------+ + | id | hostname | personality | administrative | operational | availability | + +----+--------------+-------------+----------------+-------------+--------------+ + | 1 | controller-0 | controller | unlocked | enabled | available | + | 2 | None | None | locked | disabled | offline | + +----+--------------+-------------+----------------+-------------+--------------+ + +#. On virtual controller-0, using the host id, set the personality of this host + to 'controller': + + :: + + system host-update 2 personality=controller + + This initiates the install of software on controller-1. + This can take 5-10 minutes, depending on the performance of the host machine. + +#. While waiting on the previous step to complete, start up and set the personality + for 'controllerstorage-worker-0' and 'controllerstorage-worker-1'. Set the + personality to 'worker' and assign a unique hostname for each. + + For example, start 'controllerstorage-worker-0' from the host: + + :: + + virsh start controllerstorage-worker-0 + + Wait for new host (hostname=None) to be discovered by checking + ‘system host-list’ on virtual controller-0: + + :: + + system host-update 3 personality=worker hostname=compute-0 + + Repeat for 'controllerstorage-worker-1'. 
On the host:
+
+   ::
+
+      virsh start controllerstorage-worker-1
+
+   And wait for new host (hostname=None) to be discovered by checking
+   ‘system host-list’ on virtual controller-0:
+
+   ::
+
+      system host-update 4 personality=worker hostname=compute-1
+
+#. Wait for the software installation on controller-1, compute-0, and compute-1 to
+   complete, for all virtual servers to reboot, and for all to show as
+   locked/disabled/online in 'system host-list'.
+
+   ::
+
+      system host-list
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | id | hostname     | personality | administrative | operational | availability |
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+      | 2  | controller-1 | controller  | locked         | disabled    | online       |
+      | 3  | compute-0    | compute     | locked         | disabled    | online       |
+      | 4  | compute-1    | compute     | locked         | disabled    | online       |
+      +----+--------------+-------------+----------------+-------------+--------------+
+
+----------------------
+Configure controller-1
+----------------------
+
+.. incl-config-controller-1-virt-controller-storage-start:
+
+Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
+attached networks. Note that the MGMT interface is partially set up by the
+network install procedure.
+
+::
+
+   OAM_IF=enp7s1
+   system host-if-modify controller-1 $OAM_IF -c platform
+   system interface-network-assign controller-1 $OAM_IF oam
+   system interface-network-assign controller-1 mgmt0 cluster-host
+
+*************************************
+OpenStack-specific host configuration
+*************************************
+
+.. important::
+
+   **This step is required only if the StarlingX OpenStack application
+   (stx-openstack) will be installed.**
+
+**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
+of installing the stx-openstack manifest/helm-charts later:
+
+::
+
+   system host-label-assign controller-1 openstack-control-plane=enabled
+
+.. incl-config-controller-1-virt-controller-storage-end:
+
+-------------------
+Unlock controller-1
+-------------------
+
+.. incl-unlock-controller-1-virt-controller-storage-start:
+
+Unlock virtual controller-1 in order to bring it into service:
+
+::
+
+   system host-unlock controller-1
+
+Controller-1 will reboot in order to apply configuration changes and come into
+service. This can take 5-10 minutes, depending on the performance of the host machine.
+
+.. incl-unlock-controller-1-virt-controller-storage-end:
+
+-----------------------
+Configure compute nodes
+-----------------------
+
+On virtual controller-0:
+
+#. Add the third Ceph monitor to compute-0:
+
+   (The first two Ceph monitors are automatically assigned to controller-0 and
+   controller-1.)
+
+   ::
+
+      system ceph-mon-add compute-0
+
+#. 
Wait for the compute node monitor to complete configuration: + + :: + + system ceph-mon-list + +--------------------------------------+-------+--------------+------------+------+ + | uuid | ceph_ | hostname | state | task | + | | mon_g | | | | + | | ib | | | | + +--------------------------------------+-------+--------------+------------+------+ + | 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None | + | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None | + | f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | compute-0 | configured | None | + +--------------------------------------+-------+--------------+------------+------+ + +#. Assign the cluster-host network to the MGMT interface for the compute nodes. + + Note that the MGMT interfaces are partially set up automatically by the + network install procedure. + + :: + + for COMPUTE in compute-0 compute-1; do + system interface-network-assign $COMPUTE mgmt0 cluster-host + done + +#. Configure data interfaces for compute nodes. + + .. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + + 1G Huge Pages are not supported in the virtual environment and there is no + virtual NIC supporting SRIOV. For that reason, data interfaces are not + applicable in the virtual environment for the Kubernetes-only scenario. + + For OpenStack only: + + :: + + DATA0IF=eth1000 + DATA1IF=eth1001 + PHYSNET0='physnet0' + PHYSNET1='physnet1' + SPL=/tmp/tmp-system-port-list + SPIL=/tmp/tmp-system-host-if-list + + # configure the datanetworks in sysinv, prior to referencing it + # in the ``system host-if-modify`` command'. + system datanetwork-add ${PHYSNET0} vlan + system datanetwork-add ${PHYSNET1} vlan + + for COMPUTE in compute-0 compute-1; do + echo "Configuring interface for: $COMPUTE" + set -ex + system host-port-list ${COMPUTE} --nowrap > ${SPL} + system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} + DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') + DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') + DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') + DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') + DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') + DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') + DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') + DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') + system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} + system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} + system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} + system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} + set +ex + done + +************************************* +OpenStack-specific host configuration +************************************* + +.. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + +#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in + support of installing the stx-openstack manifest/helm-charts later: + + :: + + for NODE in compute-0 compute-1; do + system host-label-assign $NODE openstack-compute-node=enabled + system host-label-assign $NODE openvswitch=enabled + system host-label-assign $NODE sriov=enabled + done + +#. 
**For OpenStack only:** Set up disk partition for nova-local volume group, + which is needed for stx-openstack nova ephemeral disks: + + :: + + for COMPUTE in compute-0 compute-1; do + echo "Configuring Nova local for: $COMPUTE" + ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') + ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') + PARTITION_SIZE=10 + NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) + NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') + system host-lvg-add ${COMPUTE} nova-local + system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} + done + +-------------------- +Unlock compute nodes +-------------------- + +.. incl-unlock-compute-nodes-virt-controller-storage-start: + +Unlock virtual compute nodes to bring them into service: + +:: + + for COMPUTE in compute-0 compute-1; do + system host-unlock $COMPUTE + done + +The compute nodes will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +.. incl-unlock-compute-nodes-virt-controller-storage-end: + +---------------------------- +Add Ceph OSDs to controllers +---------------------------- + +On virtual controller-0: + +#. Add OSDs to controller-0: + + :: + + HOST=controller-0 + DISKS=$(system host-disk-list ${HOST}) + TIERS=$(system storage-tier-list ceph_cluster) + OSDs="/dev/sdb" + for OSD in $OSDs; do + system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') + while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done + done + + system host-stor-list $HOST + +#. Add OSDs to controller-1: + + :: + + HOST=controller-1 + DISKS=$(system host-disk-list ${HOST}) + TIERS=$(system storage-tier-list ceph_cluster) + OSDs="/dev/sdb" + for OSD in $OSDs; do + system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') + while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done + done + + system host-stor-list $HOST + +---------- +Next steps +---------- + +.. include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/virtual/dedicated_storage.rst b/doc/source/deploy_install_guides/current/virtual/dedicated_storage.rst new file mode 100644 index 000000000..36e4c554c --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/dedicated_storage.rst @@ -0,0 +1,21 @@ +========================================================= +Virtual Standard with Dedicated Storage Installation R2.0 +========================================================= + +-------- +Overview +-------- + +.. include:: ../desc_dedicated_storage.txt + +.. include:: ../ipv6_note.txt + +------------ +Installation +------------ + +.. 
toctree:: + :maxdepth: 2 + + dedicated_storage_environ + dedicated_storage_install_kubernetes diff --git a/doc/source/deploy_install_guides/current/virtual/dedicated_storage_environ.rst b/doc/source/deploy_install_guides/current/virtual/dedicated_storage_environ.rst new file mode 100644 index 000000000..99fadb88c --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/dedicated_storage_environ.rst @@ -0,0 +1,56 @@ +============================ +Prepare Host and Environment +============================ + +This section describes how to prepare the physical host and virtual environment +for a **StarlingX R2.0 virtual Standard with Dedicated Storage** deployment +configuration. + +.. contents:: + :local: + :depth: 1 + +------------------------------------ +Physical host requirements and setup +------------------------------------ + +.. include:: physical_host_req.txt + +----------------------------------------- +Preparing virtual environment and servers +----------------------------------------- + +The following steps explain how to prepare the virtual environment and servers +on a physical host for a StarlingX R2.0 virtual Standard with Dedicated Storage +deployment configuration. + +#. Prepare virtual environment. + + Set up virtual platform networks for virtual deployment: + + :: + + bash setup_network.sh + +#. Prepare virtual servers. + + Create the XML definitions for the virtual servers required by this + configuration option. This will create the XML virtual server definition for: + + * dedicatedstorage-controller-0 + * dedicatedstorage-controller-1 + * dedicatedstorage-storage-0 + * dedicatedstorage-storage-1 + * dedicatedstorage-worker-0 + * dedicatedstorage-worker-1 + + The following command will start/virtually power on: + + * The 'dedicatedstorage-controller-0' virtual server + * The X-based graphical virt-manager application + + :: + + bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso + + If there is no X-server present, then errors are returned. diff --git a/doc/source/deploy_install_guides/current/virtual_dedicated_storage.rst b/doc/source/deploy_install_guides/current/virtual/dedicated_storage_install_kubernetes.rst similarity index 52% rename from doc/source/deploy_install_guides/current/virtual_dedicated_storage.rst rename to doc/source/deploy_install_guides/current/virtual/dedicated_storage_install_kubernetes.rst index 63e141988..e0a749e55 100644 --- a/doc/source/deploy_install_guides/current/virtual_dedicated_storage.rst +++ b/doc/source/deploy_install_guides/current/virtual/dedicated_storage_install_kubernetes.rst @@ -1,674 +1,390 @@ -============================================ -Virtual Standard with Dedicated Storage R2.0 -============================================ - -.. contents:: - :local: - :depth: 1 - ------------ -Description ------------ - -.. incl-dedicated-storage-intro-start: - -The Standard with Dedicated Storage deployment option is a standard installation -with independent controller, compute, and storage nodes. - -A Standard with Dedicated Storage configuration provides the following benefits: - -* A pool of up to 100 compute nodes -* A 2x node high availability (HA) controller cluster with HA services running - across the controller nodes in either active/active or active/standby mode -* A storage back end solution using a two-to-9x node HA CEPH storage cluster - that supports a replication factor of two or three -* Up to four groups of 2x storage nodes, or up to three groups of 3x storage nodes - -.. 
figure:: figures/starlingx-deployment-options-dedicated-storage.png - :scale: 50% - :alt: Standard with Dedicated Storage deployment configuration - - *Figure 1: Standard with Dedicated Storage deployment configuration* - -.. incl-dedicated-storage-intro-end: - -.. include:: virtual_aio_simplex.rst - :start-after: incl-ipv6-note-start: - :end-before: incl-ipv6-note-end: - ------------------------------------- -Physical host requirements and setup ------------------------------------- - -.. include:: virtual_aio_simplex.rst - :start-after: incl-virt-physical-host-req-start: - :end-before: incl-virt-physical-host-req-end: - ---------------------------------------- -Prepare virtual environment and servers ---------------------------------------- - -On the host, prepare the virtual environment and virtual servers. - -#. Set up virtual platform networks for virtual deployment: - - :: - - bash setup_network.sh - -#. Create the XML definitions for the virtual servers required by this - configuration option. This creates the XML virtual server definition for: - - * dedicatedstorage-controller-0 - * dedicatedstorage-controller-1 - * dedicatedstorage-storage-0 - * dedicatedstorage-storage-1 - * dedicatedstorage-worker-0 - * dedicatedstorage-worker-1 - - The following command will start/virtually power on: - - * the 'dedicatedstorage-controller-0' virtual server - * the X-based graphical virt-manager application - - If there is no X-server present, then errors are returned. - - :: - - bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso - --------------------- -StarlingX Kubernetes --------------------- - -******************************* -Installing StarlingX Kubernetes -******************************* - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -In the last step of "Prepare the virtual environment and virtual servers" the -controller-0 virtual server 'dedicatedstorage-controller-0' was started by the -:command:`setup_configuration.sh` command. - -On the host, attach to the console of virtual controller-0 and select the appropriate -installer menu options to start the non-interactive install of -StarlingX software on controller-0. - -.. note:: - - When entering the console, it is very easy to miss the first installer menu - selection. Use ESC to navigate to previous menus, to ensure you are at the - first installer menu. - -:: - - virsh console dedicatedstorage-controller-0 - -Make the following menu selections in the installer: - -#. First menu: Select 'Standard Controller Configuration' -#. Second menu: Select 'Serial Console' -#. Third menu: Select 'Standard Security Profile' - -Wait for the non-interactive install of software to complete and for the server -to reboot. This can take 5-10 minutes depending on the performance of the host -machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Bootstrap system on controller-0 -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Log in using the username / password of "sysadmin" / "sysadmin". - When logging in for the first time, you will be forced to change the password. - - :: - - Login: sysadmin - Password: - Changing password for sysadmin. - (current) UNIX Password: sysadmin - New Password: - (repeat) New Password: - -#. 
External connectivity is required to run the Ansible bootstrap playbook: - - :: - - export CONTROLLER0_OAM_CIDR=10.10.10.3/24 - export DEFAULT_OAM_GATEWAY=10.10.10.1 - sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1 - sudo ip link set up dev enp7s1 - sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1 - -#. Specify user configuration overrides for the Ansible bootstrap playbook. - - Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible - configuration are: - - ``/etc/ansible/hosts`` - The default Ansible inventory file. Contains a single host: localhost. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml`` - The Ansible bootstrap playbook. - - ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml`` - The default configuration values for the bootstrap playbook. - - sysadmin home directory ($HOME) - The default location where Ansible looks for and imports user - configuration override files for hosts. For example: ``$HOME/.yml``. - - - Specify the user configuration override file for the Ansible bootstrap - playbook using one of the following methods: - - * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit - the configurable values as desired (use the commented instructions in - the file). - - or - - * Create the minimal user configuration override file as shown in the example - below: - - :: - - cd ~ - cat < localhost.yml - system_mode: duplex - - dns_servers: - - 8.8.8.8 - - 8.8.4.4 - - external_oam_subnet: 10.10.10.0/24 - external_oam_gateway_address: 10.10.10.1 - external_oam_floating_address: 10.10.10.2 - external_oam_node_0_address: 10.10.10.3 - external_oam_node_1_address: 10.10.10.4 - - admin_username: admin - admin_password: - ansible_become_pass: - EOF - - Additional Ansible bootstrap configurations for advanced use cases are available: - - * :ref:`IPv6 ` - -#. Run the Ansible bootstrap playbook: - - :: - - ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml - - Wait for Ansible bootstrap playbook to complete. - This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-0 -^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Acquire admin credentials: - - :: - - source /etc/platform/openrc - -#. Configure the OAM and MGMT interfaces of controller-0 and specify the - attached networks: - - :: - - OAM_IF=enp7s1 - MGMT_IF=enp7s2 - system host-if-modify controller-0 lo -c none - IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') - for UUID in $IFNET_UUIDS; do - system interface-network-remove ${UUID} - done - system host-if-modify controller-0 $OAM_IF -c platform - system interface-network-assign controller-0 $OAM_IF oam - system host-if-modify controller-0 $MGMT_IF -c platform - system interface-network-assign controller-0 $MGMT_IF mgmt - system interface-network-assign controller-0 $MGMT_IF cluster-host - -#. Configure NTP Servers for network time synchronization: - - .. note:: - - In a virtual environment, this can sometimes cause Ceph clock skew alarms. - Also, the virtual instance clock is synchronized with the host clock, - so it is not absolutely required to configure NTP here. - - :: - - system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. 
important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in - support of installing the stx-openstack manifest/helm-charts later. - - :: - - system host-label-assign controller-0 openstack-control-plane=enabled - -#. **For OpenStack only:** A vSwitch is required. - - The default vSwitch is containerized OVS that is packaged with the - stx-openstack manifest/helm-charts. StarlingX provides the option to use - OVS-DPDK on the host, however, in the virtual environment OVS-DPDK is NOT - supported, only OVS is supported. Therefore, simply use the default OVS - vSwitch here. - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-0 -^^^^^^^^^^^^^^^^^^^ - -Unlock virtual controller-0 in order to bring it into service: - -:: - - system host-unlock controller-0 - -Controller-0 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -Install software on controller-1, storage nodes, and compute nodes -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -#. On the host, power on the controller-1 virtual server, - 'dedicatedstorage-controller-1'. It will automatically attempt to network - boot over the management network: - - :: - - virsh start dedicatedstorage-controller-1 - -#. Attach to the console of virtual controller-1: - - :: - - virsh console dedicatedstorage-controller-1 - -#. As controller-1 VM boots, a message appears on its console instructing you to - configure the personality of the node. - -#. On the console of controller-0, list hosts to see newly discovered - controller-1 host (hostname=None): - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | None | None | locked | disabled | offline | - +----+--------------+-------------+----------------+-------------+--------------+ - -#. Using the host id, set the personality of this host to 'controller': - - :: - - system host-update 2 personality=controller - - This initiates software installation on controller-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. While waiting on the previous step to complete, start up and set the personality - for 'dedicatedstorage-storage-0' and 'dedicatedstorage-storage-1'. Set the - personality to 'storage' and assign a unique hostname for each. - - For example, start 'dedicatedstorage-storage-0' from the host: - - :: - - virsh start dedicatedstorage-storage-0 - - Wait for new host (hostname=None) to be discovered by checking - ‘system host-list’ on virtual controller-0: - - :: - - system host-update 3 personality=storage - - Repeat for 'dedicatedstorage-storage-1'. On the host: - - :: - - virsh start dedicatedstorage-storage-1 - - And wait for new host (hostname=None) to be discovered by checking - ‘system host-list’ on virtual controller-0: - - :: - - system host-update 4 personality=storage - - This initiates software installation on storage-0 and storage-1. - This can take 5-10 minutes, depending on the performance of the host machine. - -#. 
While waiting on the previous step to complete, start up and set the personality - for 'dedicatedstorage-worker-0' and 'dedicatedstorage-worker-1'. Set the - personality to 'worker' and assign a unique hostname for each. - - For example, start 'dedicatedstorage-worker-0' from the host: - - :: - - virsh start dedicatedstorage-worker-0 - - Wait for new host (hostname=None) to be discovered by checking - ‘system host-list’ on virtual controller-0: - - :: - - system host-update 5 personality=worker hostname=compute-0 - - Repeat for 'dedicatedstorage-worker-1'. On the host: - - :: - - virsh start dedicatedstorage-worker-1 - - And wait for new host (hostname=None) to be discovered by checking - ‘system host-list’ on virtual controller-0: - - :: - - ssystem host-update 6 personality=worker hostname=compute-1 - - This initiates software installation on compute-0 and compute-1. - -#. Wait for the software installation on controller-1, storage-0, storage-1, - compute-0, and compute-1 to complete, for all virtual servers to reboot, and for all - to show as locked/disabled/online in 'system host-list'. - - :: - - system host-list - +----+--------------+-------------+----------------+-------------+--------------+ - | id | hostname | personality | administrative | operational | availability | - +----+--------------+-------------+----------------+-------------+--------------+ - | 1 | controller-0 | controller | unlocked | enabled | available | - | 2 | controller-1 | controller | locked | disabled | online | - | 3 | storage-0 | storage | locked | disabled | online | - | 4 | storage-1 | storage | locked | disabled | online | - | 5 | compute-0 | compute | locked | disabled | online | - | 6 | compute-1 | compute | locked | disabled | online | - +----+--------------+-------------+----------------+-------------+--------------+ - -^^^^^^^^^^^^^^^^^^^^^^ -Configure controller-1 -^^^^^^^^^^^^^^^^^^^^^^ - -Configure the OAM and MGMT interfaces of virtual controller-0 and specify the -attached networks. Note that the MGMT interface is partially set up by the -network install procedure. - -:: - - OAM_IF=enp7s1 - system host-if-modify controller-1 $OAM_IF -c platform - system interface-network-assign controller-1 $OAM_IF oam - system interface-network-assign controller-1 mgmt0 cluster-host - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -**For OpenStack only:** Assign OpenStack host labels to controller-1 in support -of installing the stx-openstack manifest/helm-charts later. - -:: - - system host-label-assign controller-1 openstack-control-plane=enabled - -^^^^^^^^^^^^^^^^^^^ -Unlock controller-1 -^^^^^^^^^^^^^^^^^^^ - -Unlock virtual controller-1 in order to bring it into service: - -:: - - system host-unlock controller-1 - -Controller-1 will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^ -Configure storage nodes -^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Assign the cluster-host network to the MGMT interface for the storage nodes. - - Note that the MGMT interfaces are partially set up by the network install procedure. - - :: - - for COMPUTE in storage-0 storage-1; do - system interface-network-assign $COMPUTE mgmt0 cluster-host - done - -#. 
Add OSDs to storage-0: - - :: - - HOST=storage-0 - DISKS=$(system host-disk-list ${HOST}) - TIERS=$(system storage-tier-list ceph_cluster) - OSDs="/dev/sdb" - for OSD in $OSDs; do - system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') - done - - system host-stor-list $HOST - -#. Add OSDs to storage-1: - - :: - - HOST=storage-1 - DISKS=$(system host-disk-list ${HOST}) - TIERS=$(system storage-tier-list ceph_cluster) - OSDs="/dev/sdb" - for OSD in $OSDs; do - system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') - done - - system host-stor-list $HOST - -^^^^^^^^^^^^^^^^^^^^ -Unlock storage nodes -^^^^^^^^^^^^^^^^^^^^ - -Unlock virtual storage nodes in order to bring them into service: - -:: - - for STORAGE in storage-0 storage-1; do - system host-unlock $STORAGE - done - -The storage nodes will reboot in order to apply configuration changes and come -into service. This can take 5-10 minutes, depending on the performance of the host machine. - -^^^^^^^^^^^^^^^^^^^^^^^ -Configure compute nodes -^^^^^^^^^^^^^^^^^^^^^^^ - -On virtual controller-0: - -#. Assign the cluster-host network to the MGMT interface for the compute nodes. - - Note that the MGMT interfaces are partially set up automatically by the - network install procedure. - - :: - - for COMPUTE in compute-0 compute-1; do - system interface-network-assign $COMPUTE mgmt0 cluster-host - done - -#. Configure data interfaces for compute nodes. - - .. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - - 1G Huge Pages are not supported in the virtual environment and there is no - virtual NIC supporting SRIOV. For that reason, data interfaces are not - applicable in the virtual environment for the Kubernetes-only scenario. - - For OpenStack only: - - :: - - DATA0IF=eth1000 - DATA1IF=eth1001 - PHYSNET0='physnet0' - PHYSNET1='physnet1' - SPL=/tmp/tmp-system-port-list - SPIL=/tmp/tmp-system-host-if-list - - Configure the datanetworks in sysinv, prior to referencing it in the :command:`system host-if-modify` command. 
- - :: - - system datanetwork-add ${PHYSNET0} vlan - system datanetwork-add ${PHYSNET1} vlan - - for COMPUTE in compute-0 compute-1; do - echo "Configuring interface for: $COMPUTE" - set -ex - system host-port-list ${COMPUTE} --nowrap > ${SPL} - system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} - DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') - DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') - DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') - DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') - DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') - DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') - DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') - DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') - system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} - system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} - system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} - system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} - set +ex - done - -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -OpenStack-specific host configuration -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. important:: - - **This step is required only if the StarlingX OpenStack application - (stx-openstack) will be installed.** - -#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in - support of installing the stx-openstack manifest/helm-charts later. - - :: - - for NODE in compute-0 compute-1; do - system host-label-assign $NODE openstack-compute-node=enabled - system host-label-assign $NODE openvswitch=enabled - system host-label-assign $NODE sriov=enabled - done - -#. **For OpenStack only:** Set up disk partition for nova-local volume group, - which is needed for stx-openstack nova ephemeral disks. - - :: - - for COMPUTE in compute-0 compute-1; do - echo "Configuring Nova local for: $COMPUTE" - ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') - ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') - PARTITION_SIZE=10 - NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) - NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') - system host-lvg-add ${COMPUTE} nova-local - system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} - done - -^^^^^^^^^^^^^^^^^^^^ -Unlock compute nodes -^^^^^^^^^^^^^^^^^^^^ - -Unlock virtual compute nodes in order to bring them into service: - -:: - - for COMPUTE in compute-0 compute-1; do - system host-unlock $COMPUTE - done - -The compute nodes will reboot in order to apply configuration changes and come into -service. This can take 5-10 minutes, depending on the performance of the host machine. - -Your Kubernetes cluster is up and running. - -*************************** -Access StarlingX Kubernetes -*************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-kubernetes-start: - :end-before: incl-access-starlingx-kubernetes-end: - -------------------- -StarlingX OpenStack -------------------- - -*************************** -Install StarlingX OpenStack -*************************** - -.. 
include:: virtual_aio_simplex.rst - :start-after: incl-install-starlingx-openstack-start: - :end-before: incl-install-starlingx-openstack-end: - -************************** -Access StarlingX OpenStack -************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-access-starlingx-openstack-start: - :end-before: incl-access-starlingx-openstack-end: - -***************************** -Uninstall StarlingX OpenStack -***************************** - -.. include:: virtual_aio_simplex.rst - :start-after: incl-uninstall-starlingx-openstack-start: - :end-before: incl-uninstall-starlingx-openstack-end: +======================================================================= +Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage +======================================================================= + +This section describes the steps to install the StarlingX Kubernetes platform +on a **StarlingX R2.0 virtual Standard with Dedicated Storage** deployment +configuration. + +.. contents:: + :local: + :depth: 1 + +-------------------------------- +Install software on controller-0 +-------------------------------- + +In the last step of "Prepare the virtual environment and virtual servers" the +controller-0 virtual server 'dedicatedstorage-controller-0' was started by the +:command:`setup_configuration.sh` command. + +On the host, attach to the console of virtual controller-0 and select the appropriate +installer menu options to start the non-interactive install of +StarlingX software on controller-0. + +.. note:: + + When entering the console, it is very easy to miss the first installer menu + selection. Use ESC to navigate to previous menus, to ensure you are at the + first installer menu. + +:: + + virsh console dedicatedstorage-controller-0 + +Make the following menu selections in the installer: + +#. First menu: Select 'Standard Controller Configuration' +#. Second menu: Select 'Serial Console' +#. Third menu: Select 'Standard Security Profile' + +Wait for the non-interactive install of software to complete and for the server +to reboot. This can take 5-10 minutes depending on the performance of the host +machine. + +-------------------------------- +Bootstrap system on controller-0 +-------------------------------- + +.. include:: controller_storage_install_kubernetes.rst + :start-after: incl-bootstrap-controller-0-virt-controller-storage-start: + :end-before: incl-bootstrap-controller-0-virt-controller-storage-end: + +---------------------- +Configure controller-0 +---------------------- + +.. include:: controller_storage_install_kubernetes.rst + :start-after: incl-config-controller-0-virt-controller-storage-start: + :end-before: incl-config-controller-0-virt-controller-storage-end: + +------------------- +Unlock controller-0 +------------------- + +Unlock virtual controller-0 in order to bring it into service: + +:: + + system host-unlock controller-0 + +Controller-0 will reboot in order to apply configuration changes and come into +service. This can take 5-10 minutes, depending on the performance of the host machine. + +------------------------------------------------------------------ +Install software on controller-1, storage nodes, and compute nodes +------------------------------------------------------------------ + +#. On the host, power on the controller-1 virtual server, + 'dedicatedstorage-controller-1'. It will automatically attempt to network + boot over the management network: + + :: + + virsh start dedicatedstorage-controller-1 + +#. 
Attach to the console of virtual controller-1:
+
+   ::
+
+      virsh console dedicatedstorage-controller-1
+
+#. As controller-1 VM boots, a message appears on its console instructing you to
+   configure the personality of the node.
+
+#. On the console of controller-0, list hosts to see newly discovered
+   controller-1 host (hostname=None):
+
+   ::
+
+      system host-list
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | id | hostname     | personality | administrative | operational | availability |
+      +----+--------------+-------------+----------------+-------------+--------------+
+      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+      | 2  | None         | None        | locked         | disabled    | offline      |
+      +----+--------------+-------------+----------------+-------------+--------------+
+
+#. Using the host id, set the personality of this host to 'controller':
+
+   ::
+
+      system host-update 2 personality=controller
+
+   This initiates software installation on controller-1.
+   This can take 5-10 minutes, depending on the performance of the host machine.
+
+#. While waiting on the previous step to complete, start up and set the personality
+   for 'dedicatedstorage-storage-0' and 'dedicatedstorage-storage-1'. Set the
+   personality to 'storage' and assign a unique hostname for each.
+
+   For example, start 'dedicatedstorage-storage-0' from the host:
+
+   ::
+
+      virsh start dedicatedstorage-storage-0
+
+   Wait for new host (hostname=None) to be discovered by checking
+   ‘system host-list’ on virtual controller-0:
+
+   ::
+
+      system host-update 3 personality=storage
+
+   Repeat for 'dedicatedstorage-storage-1'. On the host:
+
+   ::
+
+      virsh start dedicatedstorage-storage-1
+
+   And wait for new host (hostname=None) to be discovered by checking
+   ‘system host-list’ on virtual controller-0:
+
+   ::
+
+      system host-update 4 personality=storage
+
+   This initiates software installation on storage-0 and storage-1.
+   This can take 5-10 minutes, depending on the performance of the host machine.
+
+#. While waiting on the previous step to complete, start up and set the personality
+   for 'dedicatedstorage-worker-0' and 'dedicatedstorage-worker-1'. Set the
+   personality to 'worker' and assign a unique hostname for each.
+
+   For example, start 'dedicatedstorage-worker-0' from the host:
+
+   ::
+
+      virsh start dedicatedstorage-worker-0
+
+   Wait for new host (hostname=None) to be discovered by checking
+   ‘system host-list’ on virtual controller-0:
+
+   ::
+
+      system host-update 5 personality=worker hostname=compute-0
+
+   Repeat for 'dedicatedstorage-worker-1'. On the host:
+
+   ::
+
+      virsh start dedicatedstorage-worker-1
+
+   And wait for new host (hostname=None) to be discovered by checking
+   ‘system host-list’ on virtual controller-0:
+
+   ::
+
+      system host-update 6 personality=worker hostname=compute-1
+
+   This initiates software installation on compute-0 and compute-1.
+
+#. Wait for the software installation on controller-1, storage-0, storage-1,
+   compute-0, and compute-1 to complete, for all virtual servers to reboot, and for all
+   to show as locked/disabled/online in 'system host-list'.
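+
+   While waiting, an optional convenience is to poll for this state from the
+   console of virtual controller-0 instead of re-running 'system host-list' by
+   hand. The loop below is only a sketch that mirrors the polling pattern used
+   elsewhere in this guide; the host names are the ones assigned in the
+   previous steps.
+
+   ::
+
+      # poll until every newly installed host reports availability 'online'
+      for HOST in controller-1 storage-0 storage-1 compute-0 compute-1; do
+         while ! system host-list | grep ${HOST} | grep -q online; do
+            sleep 30
+         done
+      done
+
+   The expected 'system host-list' output at this point is: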
+ + :: + + system host-list + +----+--------------+-------------+----------------+-------------+--------------+ + | id | hostname | personality | administrative | operational | availability | + +----+--------------+-------------+----------------+-------------+--------------+ + | 1 | controller-0 | controller | unlocked | enabled | available | + | 2 | controller-1 | controller | locked | disabled | online | + | 3 | storage-0 | storage | locked | disabled | online | + | 4 | storage-1 | storage | locked | disabled | online | + | 5 | compute-0 | compute | locked | disabled | online | + | 6 | compute-1 | compute | locked | disabled | online | + +----+--------------+-------------+----------------+-------------+--------------+ + +---------------------- +Configure controller-1 +---------------------- + +.. include:: controller_storage_install_kubernetes.rst + :start-after: incl-config-controller-1-virt-controller-storage-start: + :end-before: incl-config-controller-1-virt-controller-storage-end: + +------------------- +Unlock controller-1 +------------------- + +.. include:: controller_storage_install_kubernetes.rst + :start-after: incl-unlock-controller-1-virt-controller-storage-start: + :end-before: incl-unlock-controller-1-virt-controller-storage-end: + +----------------------- +Configure storage nodes +----------------------- + +On virtual controller-0: + +#. Assign the cluster-host network to the MGMT interface for the storage nodes. + + Note that the MGMT interfaces are partially set up by the network install procedure. + + :: + + for COMPUTE in storage-0 storage-1; do + system interface-network-assign $COMPUTE mgmt0 cluster-host + done + +#. Add OSDs to storage-0: + + :: + + HOST=storage-0 + DISKS=$(system host-disk-list ${HOST}) + TIERS=$(system storage-tier-list ceph_cluster) + OSDs="/dev/sdb" + for OSD in $OSDs; do + system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') + done + + system host-stor-list $HOST + +#. Add OSDs to storage-1: + + :: + + HOST=storage-1 + DISKS=$(system host-disk-list ${HOST}) + TIERS=$(system storage-tier-list ceph_cluster) + OSDs="/dev/sdb" + for OSD in $OSDs; do + system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}') + done + + system host-stor-list $HOST + +-------------------- +Unlock storage nodes +-------------------- + +Unlock virtual storage nodes in order to bring them into service: + +:: + + for STORAGE in storage-0 storage-1; do + system host-unlock $STORAGE + done + +The storage nodes will reboot in order to apply configuration changes and come +into service. This can take 5-10 minutes, depending on the performance of the host machine. + +----------------------- +Configure compute nodes +----------------------- + +On virtual controller-0: + +#. Assign the cluster-host network to the MGMT interface for the compute nodes. + + Note that the MGMT interfaces are partially set up automatically by the + network install procedure. + + :: + + for COMPUTE in compute-0 compute-1; do + system interface-network-assign $COMPUTE mgmt0 cluster-host + done + +#. Configure data interfaces for compute nodes. + + .. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + + 1G Huge Pages are not supported in the virtual environment and there is no + virtual NIC supporting SRIOV. 
For that reason, data interfaces are not + applicable in the virtual environment for the Kubernetes-only scenario. + + For OpenStack only: + + :: + + DATA0IF=eth1000 + DATA1IF=eth1001 + PHYSNET0='physnet0' + PHYSNET1='physnet1' + SPL=/tmp/tmp-system-port-list + SPIL=/tmp/tmp-system-host-if-list + + Configure the datanetworks in sysinv, prior to referencing it in the + :command:`system host-if-modify` command. + + :: + + system datanetwork-add ${PHYSNET0} vlan + system datanetwork-add ${PHYSNET1} vlan + + for COMPUTE in compute-0 compute-1; do + echo "Configuring interface for: $COMPUTE" + set -ex + system host-port-list ${COMPUTE} --nowrap > ${SPL} + system host-if-list -a ${COMPUTE} --nowrap > ${SPIL} + DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}') + DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}') + DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}') + DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}') + DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}') + DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}') + DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}') + DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}') + system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID} + system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID} + system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0} + system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1} + set +ex + done + +************************************* +OpenStack-specific host configuration +************************************* + +.. important:: + + **This step is required only if the StarlingX OpenStack application + (stx-openstack) will be installed.** + +#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in + support of installing the stx-openstack manifest/helm-charts later: + + :: + + for NODE in compute-0 compute-1; do + system host-label-assign $NODE openstack-compute-node=enabled + system host-label-assign $NODE openvswitch=enabled + system host-label-assign $NODE sriov=enabled + done + +#. **For OpenStack only:** Set up disk partition for nova-local volume group, + which is needed for stx-openstack nova ephemeral disks: + + :: + + for COMPUTE in compute-0 compute-1; do + echo "Configuring Nova local for: $COMPUTE" + ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}') + ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}') + PARTITION_SIZE=10 + NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE}) + NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}') + system host-lvg-add ${COMPUTE} nova-local + system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID} + done + +-------------------- +Unlock compute nodes +-------------------- + +.. include:: controller_storage_install_kubernetes.rst + :start-after: incl-unlock-compute-nodes-virt-controller-storage-start: + :end-before: incl-unlock-compute-nodes-virt-controller-storage-end: + +---------- +Next steps +---------- + +.. 
include:: ../kubernetes_install_next.txt diff --git a/doc/source/deploy_install_guides/current/virtual/physical_host_req.txt b/doc/source/deploy_install_guides/current/virtual/physical_host_req.txt new file mode 100644 index 000000000..ce3f47d56 --- /dev/null +++ b/doc/source/deploy_install_guides/current/virtual/physical_host_req.txt @@ -0,0 +1,75 @@ +The following sections describe system requirements and host setup for a +workstation hosting virtual machine(s) where StarlingX will be deployed. + +********************* +Hardware requirements +********************* + +The host system should have at least: + +* **Processor:** x86_64 only supported architecture with BIOS enabled hardware + virtualization extensions + +* **Cores:** 8 + +* **Memory:** 32GB RAM + +* **Hard Disk:** 500GB HDD + +* **Network:** One network adapter with active Internet connection + +********************* +Software requirements +********************* + +The host system should have at least: + +* A workstation computer with Ubuntu 16.04 LTS 64-bit + +All other required packages will be installed by scripts in the StarlingX tools repository. + +********** +Host setup +********** + +Set up the host with the following steps: + +#. Update OS: + + :: + + apt-get update + +#. Clone the StarlingX tools repository: + + :: + + apt-get install -y git + cd $HOME + git clone https://opendev.org/starlingx/tools + +#. Install required packages: + + :: + + cd $HOME/tools/deployment/libvirt/ + bash install_packages.sh + apt install -y apparmor-profiles + apt-get install -y ufw + ufw disable + ufw status + + + .. note:: + + On Ubuntu 16.04, if apparmor-profile modules were installed as shown in + the example above, you must reboot the server to fully install the + apparmor-profile modules. + + +#. Get the StarlingX ISO. This can be from a private StarlingX build or from the public Cengn + StarlingX build off 'master' branch, as shown below: + + :: + + wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso diff --git a/doc/source/deploy_install_guides/index.rst b/doc/source/deploy_install_guides/index.rst index 2bb225e01..597c6e3cb 100755 --- a/doc/source/deploy_install_guides/index.rst +++ b/doc/source/deploy_install_guides/index.rst @@ -12,40 +12,10 @@ Latest release (stable) StarlingX R2.0 is the latest officially released version of StarlingX. -************************* -R2.0 virtual installation -************************* - .. toctree:: :maxdepth: 1 - current/virtual_aio_simplex - current/virtual_aio_duplex - current/virtual_controller_storage - current/virtual_dedicated_storage - -**************************** -R2.0 bare metal installation -**************************** - -.. toctree:: - :maxdepth: 1 - - current/bare_metal_aio_simplex - current/bare_metal_aio_duplex - current/bare_metal_controller_storage - current/bare_metal_dedicated_storage - current/bare_metal_ironic - -.. toctree:: - :maxdepth: 1 - :hidden: - - current/access_starlingx_kubernetes - current/access_starlingx_openstack - current/install_openstack - current/uninstall_delete_openstack - current/ansible_bootstrap_configs + current/index --------------------- Upcoming R3.0 release