Improve readability of install guides R2

- Organize major sections of guides into pages
- Set up includes for shared content between bare metal guides, virtual guides
- Set up txt files to reuse common deployment definitions
- Add in necessary intro sentences for consistency between pages
- Add 'Next steps' section after Kubernetes install
- Add in index pages where needed for better readability/grouping of content

Change-Id: I77e0e545eb284afb7c8802c95db1b4d598bd9940
Signed-off-by: Kristal Dale <kristal.dale@intel.com>
This commit is contained in:
Kristal Dale
2019-09-20 12:12:47 -07:00
parent d0229cdc99
commit 66b226f3f6
43 changed files with 2954 additions and 3257 deletions


@@ -0,0 +1,26 @@
==============================================
Bare metal All-in-one Duplex Installation R2.0
==============================================
--------
Overview
--------
.. include:: ../desc_aio_duplex.txt
The bare metal AIO-DX deployment configuration may be extended with up to four
worker/compute nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
aio_duplex_hardware
aio_duplex_install_kubernetes
aio_duplex_extend


@@ -0,0 +1,196 @@
================================================
Extend Capacity with Worker and/or Compute Nodes
================================================
This section describes the steps to extend capacity with worker and/or compute
nodes on a **StarlingX R2.0 bare metal All-in-one Duplex** deployment
configuration.
.. contents::
:local:
:depth: 1
---------------------------------
Install software on compute nodes
---------------------------------
#. Power on the compute servers and force them to network boot with the
appropriate BIOS boot options for your particular server.
#. As the compute servers boot, a message appears on their console instructing
you to configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered compute
hosts (hostname=None):
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | None | None | locked | disabled | offline |
| 4 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
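The newly discovered hosts can also be picked out of the table programmatically. A small sketch (assumes the table layout shown above; the saved output file path is illustrative):

```shell
# Save 'system host-list' output to a file, then print the ids of hosts
# that are still unconfigured (hostname column shows "None").
cat > /tmp/host-list.txt <<'EOF'
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | None         | None        | locked         | disabled    | offline      |
| 4  | None         | None        | locked         | disabled    | offline      |
+----+--------------+-------------+----------------+-------------+--------------+
EOF
# Split on '|': field 2 is the id, field 3 is the hostname.
awk -F'|' '$3 ~ /None/ {gsub(/ /, "", $2); print $2}' /tmp/host-list.txt
# prints: 3 then 4, one per line
```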
#. Using the host id, set the personality of these hosts to 'worker':
::
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1
This initiates the software installation on the compute nodes.
This can take 5-10 minutes, depending on the performance of the host machine.
#. Wait for the software installation on the compute nodes to complete, for the
nodes to reboot, and for both to show as locked/disabled/online in 'system host-list'.
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | worker | locked | disabled | online |
| 4 | compute-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
-----------------------
Configure compute nodes
-----------------------
#. Assign the cluster-host network to the MGMT interface for the compute nodes:
(Note that the MGMT interfaces are partially set up automatically by the
network install procedure.)
::
for COMPUTE in compute-0 compute-1; do
system interface-network-assign $COMPUTE mgmt0 cluster-host
done
#. Configure data interfaces for compute nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
.. important::
This step is **required** for OpenStack.
This step is optional for Kubernetes: Do this step if using SRIOV network
attachments in hosted application containers.
For Kubernetes SRIOV network attachments:
* Configure the SRIOV device plugin:
::
for COMPUTE in compute-0 compute-1; do
system host-label-assign ${COMPUTE} sriovdp=enabled
done
* If planning on running DPDK in containers on these hosts, configure the number
of 1G Huge pages required on both NUMA nodes:
::
for COMPUTE in compute-0 compute-1; do
system host-memory-modify ${COMPUTE} 0 -1G 100
system host-memory-modify ${COMPUTE} 1 -1G 100
done
For both Kubernetes and OpenStack:
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# configure the datanetworks in sysinv, prior to referencing them
# in the ``system host-if-modify`` command.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
for COMPUTE in compute-0 compute-1; do
echo "Configuring interface for: $COMPUTE"
set -ex
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
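The awk expressions above select the interface UUID (column 2) from the row whose ports column (the 12th whitespace-separated token in this table layout) contains the port name. Shown standalone on a made-up row (real ``system host-if-list`` output has more columns, but the matching works the same way):

```shell
# Illustrative host-if-list row: uuid | name | class | type | vlan | ports
LINE="| abc-123 | data0 | data | ethernet | None | [u'enp24s0f0'] |"
# Token 12 is the ports cell here; print the uuid (token 2) when it matches.
echo "$LINE" | awk -v PORT=enp24s0f0 '($12 ~ PORT) {print $2}'
# prints: abc-123
```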
*************************************
OpenStack-specific host configuration
*************************************
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
support of installing the stx-openstack manifest and helm-charts later.
::
for NODE in compute-0 compute-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Set up the disk partition for the nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
::
for COMPUTE in compute-0 compute-1; do
echo "Configuring Nova local for: $COMPUTE"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
echo ">>> Wait for partition $NOVA_PARTITION_UUID on $COMPUTE to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
done
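The ``grep -ow`` in the loop above isolates the ``| uuid | <value> |`` cell from the table that ``system host-disk-partition-add`` prints, and awk then extracts the value. In isolation (the UUID value is illustrative):

```shell
# Sample fragment of partition-add output containing the uuid cell.
OUT="| uuid | f53437c6-77e3-4185-9745-c1a27cbcbba6 |"
# -o prints only the matched cell; awk field 4 is the uuid value.
echo "$OUT" | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}'
# prints: f53437c6-77e3-4185-9745-c1a27cbcbba6
```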
--------------------
Unlock compute nodes
--------------------
Unlock compute nodes in order to bring them into service:
::
for COMPUTE in compute-0 compute-1; do
system host-unlock $COMPUTE
done
The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
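Once the compute nodes are back in service, every host in 'system host-list' should read unlocked/enabled/available. A quick check against saved output (a sketch; assumes the table layout shown earlier):

```shell
# Sample saved 'system host-list' rows (illustrative).
cat > /tmp/host-list.txt <<'EOF'
| 1  | controller-0 | controller | unlocked | enabled | available |
| 2  | controller-1 | controller | unlocked | enabled | available |
| 3  | compute-0    | worker     | unlocked | enabled | available |
| 4  | compute-1    | worker     | unlocked | enabled | available |
EOF
# Count data rows that are NOT unlocked/enabled/available; 0 means all in service.
awk -F'|' 'NF > 6 && !($5 ~ /unlocked/ && $6 ~ /enabled/ && $7 ~ /available/)' \
    /tmp/host-list.txt | wc -l
# prints: 0
```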


@@ -0,0 +1,58 @@
=====================
Hardware Requirements
=====================
This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal All-in-one Duplex** deployment configuration.
.. contents::
:local:
:depth: 1
-----------------------------
Minimum hardware requirements
-----------------------------
The recommended minimum hardware requirements for the bare metal servers in
this deployment configuration are:
+-------------------------+-----------------------------------------------------------+
| Minimum Requirement | All-in-one Controller Node |
+=========================+===========================================================+
| Number of servers | 2 |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
| | 8 cores/socket |
| | |
| | or |
| | |
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
| | (low-power/low-cost option) |
+-------------------------+-----------------------------------------------------------+
| Minimum memory | 64 GB |
+-------------------------+-----------------------------------------------------------+
| Primary disk | 500 GB SSD or NVMe |
+-------------------------+-----------------------------------------------------------+
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
| | - Recommended, but not required: 1 or more SSDs or NVMe |
| | drives for Ceph journals (min. 1024 MiB per OSD journal)|
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
| | for VM local ephemeral storage |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports | - Mgmt/Cluster: 1x10GE |
| | - OAM: 1x1GE |
| | - Data: 1 or more x 10GE |
+-------------------------+-----------------------------------------------------------+
| BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled |
| | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance |
| | - CPU C state control disabled |
| | - Plug & play BMC detection disabled |
+-------------------------+-----------------------------------------------------------+
--------------------------
Prepare bare metal servers
--------------------------
.. include:: prep_servers.txt


@@ -0,0 +1,328 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-DX
=================================================
This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal All-in-one Duplex** deployment configuration.
.. contents::
:local:
:depth: 1
---------------------
Create a bootable USB
---------------------
Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.
--------------------------------
Install software on controller-0
--------------------------------
.. include:: aio_simplex_install_kubernetes.rst
:start-after: incl-install-software-controller-0-aio-simplex-start:
:end-before: incl-install-software-controller-0-aio-simplex-end:
--------------------------------
Bootstrap system on controller-0
--------------------------------
#. Log in using the username / password of "sysadmin" / "sysadmin".
When logging in for the first time, you will be forced to change the password.
::
Login: sysadmin
Password:
Changing password for sysadmin.
(current) UNIX Password: sysadmin
New Password:
(repeat) New Password:
#. External connectivity is required to run the Ansible bootstrap playbook. The
StarlingX boot image attempts DHCP on all interfaces, so the server may already
have obtained an IP address and external IP connectivity if a DHCP server is
present in your environment. Verify this using the :command:`ip addr` and
:command:`ping 8.8.8.8` commands.
Otherwise, manually configure an IP address and default IP route. Use the
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
deployment environment.
::
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
#. Specify user configuration overrides for the Ansible bootstrap playbook.
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
configuration are:
``/etc/ansible/hosts``
The default Ansible inventory file. Contains a single host: localhost.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
The Ansible bootstrap playbook.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
The default configuration values for the bootstrap playbook.
sysadmin home directory ($HOME)
The default location where Ansible looks for and imports user
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
the configurable values as desired (use the commented instructions in
the file).
or
* Create the minimal user configuration override file as shown in the
example below, using the OAM IP subnet and IP addresses applicable to your
deployment environment:
::
cd ~
cat <<EOF > localhost.yml
system_mode: duplex
dns_servers:
- 8.8.8.8
- 8.8.4.4
external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
EOF
Additional :doc:`ansible_bootstrap_configs` are available for advanced use cases.
#. Run the Ansible bootstrap playbook:
::
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml
Wait for the Ansible bootstrap playbook to complete.
This can take 5-10 minutes, depending on the performance of the host machine.
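The override keys used earlier can be sanity-checked with a quick grep before or after running the playbook (a sketch; the key list below is illustrative, not exhaustive, and assumes the minimal ``localhost.yml`` shown above):

```shell
# Report any expected override key missing from $HOME/localhost.yml.
for KEY in system_mode dns_servers external_oam_subnet \
           external_oam_gateway_address external_oam_floating_address; do
    grep -q "^${KEY}:" "$HOME/localhost.yml" || echo "missing: ${KEY}"
done
```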
----------------------
Configure controller-0
----------------------
.. include:: aio_simplex_install_kubernetes.rst
:start-after: incl-config-controller-0-aio-simplex-start:
:end-before: incl-config-controller-0-aio-simplex-end:
-------------------
Unlock controller-0
-------------------
.. include:: aio_simplex_install_kubernetes.rst
:start-after: incl-unlock-controller-0-aio-simplex-start:
:end-before: incl-unlock-controller-0-aio-simplex-end:
-------------------------------------
Install software on controller-1 node
-------------------------------------
#. Power on the controller-1 server and force it to network boot with the
appropriate BIOS boot options for your particular server.
#. As controller-1 boots, a message appears on its console instructing you to
configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered controller-1
host (hostname=None):
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
::
system host-update 2 personality=controller
#. Wait for the software installation on controller-1 to complete, for controller-1 to
reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'.
This can take 5-10 minutes, depending on the performance of the host machine.
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
----------------------
#. Configure the OAM and MGMT interfaces of controller-1 and specify the
attached networks. Use the OAM and MGMT port names, for example eth0, that are
applicable to your deployment environment:
(Note that the MGMT interface is partially set up automatically by the network
install procedure.)
::
OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 $MGMT_IF cluster-host
#. Configure data interfaces for controller-1. Use the DATA port names, for example
eth0, applicable to your deployment environment.
.. important::
This step is **required** for OpenStack.
This step is optional for Kubernetes: Do this step if using SRIOV network
attachments in hosted application containers.
For Kubernetes SRIOV network attachments:
* Configure the SRIOV device plugin:
::
system host-label-assign controller-1 sriovdp=enabled
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both NUMA nodes:
::
system host-memory-modify controller-1 0 -1G 100
system host-memory-modify controller-1 1 -1G 100
For both Kubernetes and OpenStack:
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
export COMPUTE=controller-1
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
#. Add an OSD on controller-1 for Ceph:
::
echo ">>> Add OSDs to primary tier"
system host-disk-list controller-1
system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
system host-stor-list controller-1
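The xargs pipeline above reads the disk UUID (column 2) for the ``/dev/sdb`` row and feeds it to ``system host-stor-add``. The selection step on its own (UUIDs and trailing columns are illustrative):

```shell
# Sample rows of 'system host-disk-list' output (illustrative).
cat > /tmp/disk-list.txt <<'EOF'
| 9dc39901-7a87-4b4c-b2e1-9b2b01e1a2f3 | /dev/sda | ... |
| 2f24b2a4-3c55-4bb7-a9c8-3b7f01d8b6aa | /dev/sdb | ... |
EOF
# Print the uuid (whitespace token 2) of the /dev/sdb row.
awk '/\/dev\/sdb/{print $2}' /tmp/disk-list.txt
# prints: 2f24b2a4-3c55-4bb7-a9c8-3b7f01d8b6aa
```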
*************************************
OpenStack-specific host configuration
*************************************
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the stx-openstack manifest and helm-charts later.
::
system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 openvswitch=enabled
system host-label-assign controller-1 sriov=enabled
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
export COMPUTE=controller-1
echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
-------------------
Unlock controller-1
-------------------
Unlock controller-1 in order to bring it into service:
::
system host-unlock controller-1
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
----------
Next steps
----------
.. include:: ../kubernetes_install_next.txt


@@ -0,0 +1,21 @@
===============================================
Bare metal All-in-one Simplex Installation R2.0
===============================================
--------
Overview
--------
.. include:: ../desc_aio_simplex.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
aio_simplex_hardware
aio_simplex_install_kubernetes


@@ -0,0 +1,58 @@
=====================
Hardware Requirements
=====================
This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal All-in-one Simplex** deployment configuration.
.. contents::
:local:
:depth: 1
-----------------------------
Minimum hardware requirements
-----------------------------
The recommended minimum hardware requirements for the bare metal servers in
this deployment configuration are:
+-------------------------+-----------------------------------------------------------+
| Minimum Requirement | All-in-one Controller Node |
+=========================+===========================================================+
| Number of servers | 1 |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
| | 8 cores/socket |
| | |
| | or |
| | |
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
| | (low-power/low-cost option) |
+-------------------------+-----------------------------------------------------------+
| Minimum memory | 64 GB |
+-------------------------+-----------------------------------------------------------+
| Primary disk | 500 GB SSD or NVMe |
+-------------------------+-----------------------------------------------------------+
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
| | - Recommended, but not required: 1 or more SSDs or NVMe |
| | drives for Ceph journals (min. 1024 MiB per OSD |
| | journal) |
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K |
| | RPM) for VM local ephemeral storage |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports | - OAM: 1x1GE |
| | - Data: 1 or more x 10GE |
+-------------------------+-----------------------------------------------------------+
| BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled |
| | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance |
| | - CPU C state control disabled |
| | - Plug & play BMC detection disabled |
+-------------------------+-----------------------------------------------------------+
--------------------------
Prepare bare metal servers
--------------------------
.. include:: prep_servers.txt


@@ -1,111 +1,26 @@
==================================
Bare metal All-in-one Simplex R2.0
==================================
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-SX
=================================================
This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal All-in-one Simplex** deployment configuration.
.. contents::
:local:
:depth: 1
-----------
Description
-----------
.. include:: virtual_aio_simplex.rst
:start-after: incl-aio-simplex-intro-start:
:end-before: incl-aio-simplex-intro-end:
.. include:: virtual_aio_simplex.rst
:start-after: incl-ipv6-note-start:
:end-before: incl-ipv6-note-end:
---------------------
Hardware requirements
Create a bootable USB
---------------------
The recommended minimum requirements for bare metal servers for various host
types are:
+-------------------------+-----------------------------------------------------------+
| Minimum Requirement | All-in-one Controller Node |
+=========================+===========================================================+
| Number of servers | 1 |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
| | 8 cores/socket |
| | |
| | or |
| | |
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
| | (low-power/low-cost option) |
+-------------------------+-----------------------------------------------------------+
| Minimum memory | 64 GB |
+-------------------------+-----------------------------------------------------------+
| Primary disk | 500 GB SDD or NVMe |
+-------------------------+-----------------------------------------------------------+
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
| | - Recommended, but not required: 1 or more SSDs or NVMe |
| | drives for Ceph journals (min. 1024 MiB per OSD |
| | journal) |
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K |
| | RPM) for VM local ephemeral storage |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports | - OAM: 1x1GE |
| | - Data: 1 or more x 10GE |
+-------------------------+-----------------------------------------------------------+
| BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled |
| | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance |
| | - CPU C state control disabled |
| | - Plug & play BMC detection disabled |
+-------------------------+-----------------------------------------------------------+
---------------------
Preparing the servers
---------------------
.. incl-prepare-servers-start:
Prior to starting the StarlingX installation, the bare metal servers must be in the
following condition:
* Physically installed
* Cabled for power
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in Figure 1.
* All disks wiped
* Ensures that servers will boot from either the network or USB storage (if present)
* Powered off
.. incl-prepare-servers-end:
--------------------
StarlingX Kubernetes
--------------------
*******************************
Installing StarlingX Kubernetes
*******************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create a bootable USB with the StarlingX ISO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB on your system.
create a bootable USB with the StarlingX ISO on your system.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------------
Install software on controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------------
.. incl-install-software-controller-0-aio-start:
.. incl-install-software-controller-0-aio-simplex-start:
#. Insert the bootable USB into a bootable USB port on the host you are
configuring as controller-0.
@@ -125,11 +40,11 @@ Install software on controller-0
#. Wait for non-interactive install of software to complete and server to reboot.
This can take 5-10 minutes, depending on the performance of the server.
.. incl-install-software-controller-0-aio-end:
.. incl-install-software-controller-0-aio-simplex-end:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------------
Bootstrap system on controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
--------------------------------
#. Login using the username / password of "sysadmin" / "sysadmin".
When logging in for the first time, you will be forced to change the password.
@@ -210,9 +125,7 @@ Bootstrap system on controller-0
ansible_become_pass: <sysadmin-password>
EOF
Additional Ansible bootstrap configurations for advanced use cases are available:
* :ref:`IPv6 <ansible_bootstrap_ipv6>`
Additional :doc:`ansible_bootstrap_configs` are available for advanced use cases.
#. Run the Ansible bootstrap playbook:
@@ -223,11 +136,11 @@ Bootstrap system on controller-0
Wait for Ansible bootstrap playbook to complete.
This can take 5-10 minutes, depending on the performance of the host machine.
^^^^^^^^^^^^^^^^^^^^^^
----------------------
Configure controller-0
^^^^^^^^^^^^^^^^^^^^^^
----------------------
.. incl-config-controller-0-start:
.. incl-config-controller-0-aio-simplex-start:
#. Acquire admin credentials:
@@ -325,9 +238,9 @@ Configure controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*************************************
OpenStack-specific host configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*************************************
.. important::
@@ -410,11 +323,13 @@ OpenStack-specific host configuration
echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
.. incl-config-controller-0-end:
.. incl-config-controller-0-aio-simplex-end:
^^^^^^^^^^^^^^^^^^^
-------------------
Unlock controller-0
^^^^^^^^^^^^^^^^^^^
-------------------
.. incl-unlock-controller-0-aio-simplex-start:
Unlock controller-0 in order to bring it into service:
@@ -422,43 +337,13 @@ Unlock controller-0 in order to bring it into service:
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration change and come into
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
When it completes, your Kubernetes cluster is up and running.
.. incl-unlock-controller-0-aio-simplex-end:
----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt


@@ -1,6 +1,10 @@
================================
Ansible Bootstrap Configurations
================================
.. contents::
:local:
:depth: 1
.. _ansible_bootstrap_ipv6:


@@ -0,0 +1,22 @@
=============================================================
Bare metal Standard with Controller Storage Installation R2.0
=============================================================
--------
Overview
--------
.. include:: ../desc_controller_storage.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
controller_storage_hardware
controller_storage_install_kubernetes


@@ -0,0 +1,55 @@
=====================
Hardware Requirements
=====================
This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal Standard with Controller Storage** deployment
configuration.
.. contents::
:local:
:depth: 1
-----------------------------
Minimum hardware requirements
-----------------------------
The recommended minimum hardware requirements for bare metal servers for various
host types are:
+-------------------------+-----------------------------+-----------------------------+
| Minimum Requirement | Controller Node | Compute Node |
+=========================+=============================+=============================+
| Number of servers | 2 | 2-10 |
+-------------------------+-----------------------------+-----------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
| | 8 cores/socket |
+-------------------------+-----------------------------+-----------------------------+
| Minimum memory | 64 GB | 32 GB |
+-------------------------+-----------------------------+-----------------------------+
| Primary disk            | 500 GB SSD or NVMe          | 120 GB (Minimum 10k RPM)    |
+-------------------------+-----------------------------+-----------------------------+
| Additional disks | - 1 or more 500 GB (min. | - For OpenStack, recommend |
| | 10K RPM) for Ceph OSD | 1 or more 500 GB (min. |
| | - Recommended, but not | 10K RPM) for VM local |
| | required: 1 or more SSDs | ephemeral storage |
| | or NVMe drives for Ceph | |
| | journals (min. 1024 MiB | |
| | per OSD journal) | |
+-------------------------+-----------------------------+-----------------------------+
| Minimum network ports | - Mgmt/Cluster: 1x10GE | - Mgmt/Cluster: 1x10GE |
| | - OAM: 1x1GE | - Data: 1 or more x 10GE |
+-------------------------+-----------------------------+-----------------------------+
| BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled |
| | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance |
| | - CPU C state control disabled |
| | - Plug & play BMC detection disabled |
+-------------------------+-----------------------------+-----------------------------+
--------------------------
Prepare bare metal servers
--------------------------
.. include:: prep_servers.txt


@@ -1,89 +1,25 @@
.. _bm_standard_controller_r2:
================================================
Bare metal Standard with Controller Storage R2.0
================================================
This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal Standard with Controller Storage** deployment
configuration.
.. contents::
:local:
:depth: 1
---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.
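Although the linked guide is authoritative, the core of the bootable-USB step is a raw byte-for-byte copy of the ISO onto the USB device (e.g. ``/dev/sdX``) with ``dd``. A safe sketch, using scratch files in place of the real ISO and device:

```shell
# Illustration only: scratch files stand in for the ISO and the USB device,
# so this can be run anywhere. On real hardware, point "of=" at the device
# node (e.g. /dev/sdX) and run dd with sudo.
ISO=$(mktemp)
USB=$(mktemp)                          # stand-in for the real /dev/sdX
head -c 262144 /dev/urandom > "$ISO"   # fake 256 KiB "ISO" payload
dd if="$ISO" of="$USB" bs=64k 2>/dev/null
sync
if cmp -s "$ISO" "$USB"; then RESULT=ok; else RESULT=fail; fi
echo "image written: $RESULT"
rm -f "$ISO" "$USB"
```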
--------------------------------
Install software on controller-0
--------------------------------
.. incl-install-software-controller-0-standard-start:
@@ -107,9 +43,9 @@ Install software on controller-0
.. incl-install-software-controller-0-standard-end:
--------------------------------
Bootstrap system on controller-0
--------------------------------
.. incl-bootstrap-sys-controller-0-standard-start:
@@ -194,9 +130,7 @@ Bootstrap system on controller-0
ansible_become_pass: <sysadmin-password>
EOF
Additional :doc:`ansible_bootstrap_configs` are available for advanced use cases.
#. Run the Ansible bootstrap playbook:
@@ -210,9 +144,9 @@ Bootstrap system on controller-0
.. incl-bootstrap-sys-controller-0-standard-end:
----------------------
Configure controller-0
----------------------
.. incl-config-controller-0-storage-start:
@@ -247,9 +181,9 @@ Configure controller-0
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
*************************************
OpenStack-specific host configuration
*************************************
.. important::
@@ -308,17 +242,22 @@ OpenStack-specific host configuration
.. incl-config-controller-0-storage-end:
-------------------
Unlock controller-0
-------------------
Unlock controller-0 in order to bring it into service:

::

    system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------
#. Power on the controller-1 server and force it to network boot with the
appropriate BIOS boot options for your particular server.
@@ -383,9 +322,9 @@ Install software on controller-1 and compute nodes
| 4 | compute-1 | compute | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
----------------------
.. incl-config-controller-1-start:
@@ -404,9 +343,9 @@ install procedure.)
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 $MGMT_IF cluster-host
*************************************
OpenStack-specific host configuration
*************************************
.. important::
@@ -422,9 +361,9 @@ of installing the stx-openstack manifest and helm-charts later.
.. incl-config-controller-1-end:
-------------------
Unlock controller-1
-------------------
.. incl-unlock-controller-1-start:
@@ -440,9 +379,9 @@ machine.
.. incl-unlock-controller-1-end:
-----------------------
Configure compute nodes
-----------------------
#. Add the third Ceph monitor to compute-0:
@@ -545,9 +484,9 @@ Configure compute nodes
set +ex
done
*************************************
OpenStack-specific host configuration
*************************************
.. important::
@@ -586,9 +525,9 @@ OpenStack-specific host configuration
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
done
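The partition-polling one-liner above is dense; an equivalent expanded form follows, as a sketch only. The StarlingX ``system`` CLI is stubbed here (it always reports ``Ready``) so the loop logic can be exercised outside a real controller:

```shell
# Same logic as the one-liner: poll until the nova-local partition on the
# compute host reports Ready, then stop.
NOVA_PARTITION_UUID=demo-uuid        # placeholder value for illustration
COMPUTE=compute-0
system() { echo "nova-local $NOVA_PARTITION_UUID Ready"; }  # stub; drop on a real controller

while true; do
    if system host-disk-partition-list "$COMPUTE" --nowrap \
            | grep "$NOVA_PARTITION_UUID" | grep -q Ready; then
        break
    fi
    sleep 1
done
STATUS="partition $NOVA_PARTITION_UUID is Ready"
echo "$STATUS"
```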
--------------------
Unlock compute nodes
--------------------
Unlock compute nodes in order to bring them into service:
@@ -601,9 +540,9 @@ Unlock compute nodes in order to bring them into service:
The compute nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
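The elided unlock commands typically loop over the compute hosts; a sketch follows, with the StarlingX ``system`` CLI stubbed so the loop can be run anywhere (host names are the defaults used elsewhere in this guide):

```shell
# Sketch only: unlock each compute node in turn.
system() { echo "system $*"; }       # stub; drop on a real controller
UNLOCK_LOG=""
for COMPUTE in compute-0 compute-1; do
    UNLOCK_LOG="$UNLOCK_LOG$(system host-unlock "$COMPUTE"); "
done
echo "$UNLOCK_LOG"
```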
----------------------------
Add Ceph OSDs to controllers
----------------------------
#. Add OSDs to controller-0:
@@ -635,40 +574,8 @@ Add Ceph OSDs to controllers
system host-stor-list $HOST
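The elided commands follow the usual pattern of listing a host's disks and adding each eligible disk as a Ceph OSD. A sketch, with the ``system`` CLI stubbed for illustration and a placeholder disk UUID (on a real controller, the UUID comes from ``system host-disk-list``):

```shell
# Sketch only: add one disk on controller-0 as a Ceph OSD.
system() { echo "system $*"; }       # stub; drop on a real controller
HOST=controller-0
DISK_UUID=placeholder-disk-uuid      # hypothetical; obtain from host-disk-list
system host-disk-list "$HOST"        # identify candidate disks
ADD_CMD=$(system host-stor-add "$HOST" "$DISK_UUID")
echo "$ADD_CMD"
system host-stor-list "$HOST"        # confirm the OSD was added
```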
Your Kubernetes cluster is up and running.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt


@@ -0,0 +1,23 @@
.. _bm_standard_dedicated_r2:
============================================================
Bare metal Standard with Dedicated Storage Installation R2.0
============================================================
--------
Overview
--------
.. include:: ../desc_dedicated_storage.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
dedicated_storage_hardware
dedicated_storage_install_kubernetes


@@ -0,0 +1,60 @@
=====================
Hardware Requirements
=====================
This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment
configuration.
.. contents::
:local:
:depth: 1
-----------------------------
Minimum hardware requirements
-----------------------------
The recommended minimum hardware requirements for bare metal servers for various
host types are:
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum Requirement | Controller Node | Storage Node | Compute Node |
+=====================+=======================+=======================+=======================+
| Number of servers | 2 | 2-9 | 2-100 |
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum processor | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket |
| class | |
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum memory | 64 GB | 64 GB | 32 GB |
+---------------------+-----------------------+-----------------------+-----------------------+
| Primary disk        | 500 GB SSD or NVMe    | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
+---------------------+-----------------------+-----------------------+-----------------------+
| Additional disks | None | - 1 or more 500 GB | - For OpenStack, |
| | | (min.10K RPM) for | recommend 1 or more |
| | | Ceph OSD | 500 GB (min. 10K |
| | | - Recommended, but | RPM) for VM |
| | | not required: 1 or | ephemeral storage |
| | | more SSDs or NVMe | |
| | | drives for Ceph | |
| | | journals (min. 1024 | |
| | | MiB per OSD | |
| | | journal) | |
+---------------------+-----------------------+-----------------------+-----------------------+
| Minimum network | - Mgmt/Cluster: | - Mgmt/Cluster: | - Mgmt/Cluster: |
| ports | 1x10GE | 1x10GE | 1x10GE |
| | - OAM: 1x1GE | | - Data: 1 or more |
| | | | x 10GE |
+---------------------+-----------------------+-----------------------+-----------------------+
| BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled |
| | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance |
| | - CPU C state control disabled |
| | - Plug & play BMC detection disabled |
+---------------------+-----------------------+-----------------------+-----------------------+
--------------------------
Prepare bare metal servers
--------------------------
.. include:: prep_servers.txt


@@ -1,126 +1,62 @@
.. _bm_standard_dedicated_r2:
===============================================
Bare metal Standard with Dedicated Storage R2.0
===============================================
This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment
configuration.
.. contents::
:local:
:depth: 1
--------------------------------------------
Create a bootable USB with the StarlingX ISO
--------------------------------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB on your system.
--------------------------------
Install software on controller-0
--------------------------------
.. include:: controller_storage_install_kubernetes.rst
:start-after: incl-install-software-controller-0-standard-start:
:end-before: incl-install-software-controller-0-standard-end:
--------------------------------
Bootstrap system on controller-0
--------------------------------
.. include:: controller_storage_install_kubernetes.rst
:start-after: incl-bootstrap-sys-controller-0-standard-start:
:end-before: incl-bootstrap-sys-controller-0-standard-end:
----------------------
Configure controller-0
----------------------
.. include:: controller_storage_install_kubernetes.rst
:start-after: incl-config-controller-0-storage-start:
:end-before: incl-config-controller-0-storage-end:
-------------------
Unlock controller-0
-------------------
Unlock controller-0 in order to bring it into service:

::

    system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
------------------------------------------------------------------
#. Power on the controller-1 server and force it to network boot with the
appropriate BIOS boot options for your particular server.
@@ -209,25 +145,25 @@ Install software on controller-1, storage nodes and compute nodes
| 6 | compute-1 | compute | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
----------------------
.. include:: controller_storage_install_kubernetes.rst
:start-after: incl-config-controller-1-start:
:end-before: incl-config-controller-1-end:
-------------------
Unlock controller-1
-------------------
.. include:: controller_storage_install_kubernetes.rst
:start-after: incl-unlock-controller-1-start:
:end-before: incl-unlock-controller-1-end:
-----------------------
Configure storage nodes
-----------------------
#. Assign the cluster-host network to the MGMT interface for the storage nodes:
@@ -270,9 +206,9 @@ Configure storage nodes
system host-stor-list $HOST
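The network-assignment step above typically loops over the storage hosts; a sketch follows, with the ``system`` CLI stubbed so the loop can be run anywhere. The interface name ``mgmt0`` is an assumption for illustration:

```shell
# Sketch only: attach the cluster-host network to each storage node's
# management interface.
system() { echo "system $*"; }       # stub; drop on a real controller
ASSIGN_LOG=""
for NODE in storage-0 storage-1; do
    ASSIGN_LOG="$ASSIGN_LOG$(system interface-network-assign "$NODE" mgmt0 cluster-host); "
done
echo "$ASSIGN_LOG"
```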
--------------------
Unlock storage nodes
--------------------
Unlock storage nodes in order to bring them into service:
@@ -286,9 +222,9 @@ The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
-----------------------
Configure compute nodes
-----------------------
#. Assign the cluster-host network to the MGMT interface for the compute nodes:
@@ -367,9 +303,9 @@ Configure compute nodes
set +ex
done
*************************************
OpenStack-specific host configuration
*************************************
.. important::
@@ -408,9 +344,9 @@ OpenStack-specific host configuration
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
done
--------------------
Unlock compute nodes
--------------------
Unlock compute nodes in order to bring them into service:
@@ -424,40 +360,8 @@ The compute nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.
Your Kubernetes cluster is up and running.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

View File

@@ -6,9 +6,9 @@ Bare metal Standard with Ironic R2.0
:local:
:depth: 1
--------
Overview
--------
Ironic is an OpenStack project that provisions bare metal machines. For
information about the Ironic project, see
@@ -18,7 +18,7 @@ End user applications can be deployed on bare metal servers (instead of
virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or
more bare metal servers.
.. figure:: ../figures/starlingx-deployment-options-ironic.png
:scale: 90%
:alt: Standard with Ironic deployment configuration
@@ -54,9 +54,9 @@ Installation options
StarlingX currently supports only a bare metal installation of Ironic with a
standard configuration, either:
* :doc:`controller_storage`
* :doc:`dedicated_storage`
This guide assumes that you have a standard deployment installed and configured


@@ -0,0 +1,17 @@
Prior to starting the StarlingX installation, the bare metal servers must be in
the following condition:
* Physically installed
* Cabled for power
* Cabled for networking
* Far-end switch ports should be properly configured to realize the networking
shown in Figure 1.
* All disks wiped

  * This ensures that servers will boot from either the network or USB storage (if present)
* Powered off


@@ -1,631 +0,0 @@
=================================
Bare metal All-in-one Duplex R2.0
=================================
.. contents::
:local:
:depth: 1
-----------
Description
-----------
.. include:: virtual_aio_duplex.rst
:start-after: incl-aio-duplex-intro-start:
:end-before: incl-aio-duplex-intro-end:
The bare metal AIO-DX deployment configuration may be extended with up to four
worker/compute nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in `Extending capacity with worker / compute nodes`_.
.. include:: virtual_aio_simplex.rst
:start-after: incl-ipv6-note-start:
:end-before: incl-ipv6-note-end:
---------------------
Hardware requirements
---------------------
The recommended minimum requirements for bare metal servers for various host
types are:
+-------------------------+-----------------------------------------------------------+
| Minimum Requirement | All-in-one Controller Node |
+=========================+===========================================================+
| Number of servers | 2 |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) |
| | 8 cores/socket |
| | |
| | or |
| | |
| | - Single-CPU Intel® Xeon® D-15xx family, 8 cores |
| | (low-power/low-cost option) |
+-------------------------+-----------------------------------------------------------+
| Minimum memory | 64 GB |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe                                        |
+-------------------------+-----------------------------------------------------------+
| Additional disks | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD |
| | - Recommended, but not required: 1 or more SSDs or NVMe |
| | drives for Ceph journals (min. 1024 MiB per OSD journal)|
| | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
| | for VM local ephemeral storage |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports | - Mgmt/Cluster: 1x10GE |
| | - OAM: 1x1GE |
| | - Data: 1 or more x 10GE |
+-------------------------+-----------------------------------------------------------+
| BIOS settings | - Hyper-Threading technology enabled |
| | - Virtualization technology enabled |
| | - VT for directed I/O enabled |
| | - CPU power and performance policy set to performance |
| | - CPU C state control disabled |
| | - Plug & play BMC detection disabled |
+-------------------------+-----------------------------------------------------------+
---------------
Prepare Servers
---------------
.. include:: bare_metal_aio_simplex.rst
:start-after: incl-prepare-servers-start:
:end-before: incl-prepare-servers-end:
--------------------
StarlingX Kubernetes
--------------------
*******************************
Installing StarlingX Kubernetes
*******************************
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Create a bootable USB with the StarlingX ISO
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB on your system.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install software on controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: bare_metal_aio_simplex.rst
:start-after: incl-install-software-controller-0-aio-start:
:end-before: incl-install-software-controller-0-aio-end:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Bootstrap system on controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Log in using the username / password of "sysadmin" / "sysadmin".
When logging in for the first time, you will be forced to change the password.
::
Login: sysadmin
Password:
Changing password for sysadmin.
(current) UNIX Password: sysadmin
New Password:
(repeat) New Password:
#. External connectivity is required to run the Ansible bootstrap playbook. The
StarlingX boot image will DHCP out all interfaces so the server may have
obtained an IP address and have external IP connectivity if a DHCP server is
present in your environment. Verify this using the :command:`ip addr` and
:command:`ping 8.8.8.8` commands.
Otherwise, manually configure an IP address and default IP route. Use the
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
deployment environment.
::
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
#. Specify user configuration overrides for the Ansible bootstrap playbook.
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
configuration are:
``/etc/ansible/hosts``
The default Ansible inventory file. Contains a single host: localhost.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
The Ansible bootstrap playbook.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
The default configuration values for the bootstrap playbook.
sysadmin home directory ($HOME)
The default location where Ansible looks for and imports user
configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
* Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
the configurable values as desired (use the commented instructions in
the file).
or
* Create the minimal user configuration override file as shown in the
example below, using the OAM IP SUBNET and IP ADDRESSing applicable to your
deployment environment:
::
cd ~
cat <<EOF > localhost.yml
system_mode: duplex
dns_servers:
- 8.8.8.8
- 8.8.4.4
external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
EOF
Additional Ansible bootstrap configurations for advanced use cases are available:
* :ref:`IPv6 <ansible_bootstrap_ipv6>`
#. Run the Ansible bootstrap playbook:
::
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml
Wait for Ansible bootstrap playbook to complete.
This can take 5-10 minutes, depending on the performance of the host machine.
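Before running the playbook, it can save a failed bootstrap attempt to sanity-check the override file for the keys used above. The helper below is an illustrative sketch only; it is not part of StarlingX, and the key list is just the example set from this guide:

```shell
# Illustrative pre-flight check (not part of StarlingX): report any of the
# example override keys that are missing from a bootstrap overrides file.
check_overrides() {
    file=$1
    missing=0
    for key in system_mode dns_servers external_oam_subnet \
               external_oam_gateway_address external_oam_floating_address; do
        grep -q "^${key}:" "$file" || { echo "missing key: $key"; missing=1; }
    done
    return $missing
}

# Demo against a sample file that omits the floating address:
cat > /tmp/sample-overrides.yml <<'EOF'
system_mode: duplex
dns_servers:
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
EOF
check_overrides /tmp/sample-overrides.yml || echo "fix the file before bootstrapping"
```

On a real system the check would run against ``$HOME/localhost.yml`` before invoking :command:`ansible-playbook`.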
^^^^^^^^^^^^^^^^^^^^^^
Configure controller-0
^^^^^^^^^^^^^^^^^^^^^^
.. include:: bare_metal_aio_simplex.rst
:start-after: incl-config-controller-0-start:
:end-before: incl-config-controller-0-end:
^^^^^^^^^^^^^^^^^^^
Unlock controller-0
^^^^^^^^^^^^^^^^^^^
.. incl-unlock-controller-0-start:
Unlock controller-0 in order to bring it into service:
::
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
.. incl-unlock-controller-0-end:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Install software on controller-1 node
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#. Power on the controller-1 server and force it to network boot with the
appropriate BIOS boot options for your particular server.
#. As controller-1 boots, a message appears on its console instructing you to
configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered controller-1
host (hostname=None):
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
::
system host-update 2 personality=controller
#. Wait for the software installation on controller-1 to complete, for controller-1 to
reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'.
This can take 5-10 minutes, depending on the performance of the host machine.
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
^^^^^^^^^^^^^^^^^^^^^^
Configure controller-1
^^^^^^^^^^^^^^^^^^^^^^
#. Configure the OAM and MGMT interfaces of controller-1 and specify the
attached networks. Use the OAM and MGMT port names, for example eth0, that are
applicable to your deployment environment:
(Note that the MGMT interface is partially set up automatically by the network
install procedure.)
::
OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 $MGMT_IF cluster-host
#. Configure data interfaces for controller-1. Use the DATA port names, for example
eth0, applicable to your deployment environment.
.. important::
This step is **required** for OpenStack.
This step is optional for Kubernetes: Do this step if using SRIOV network
attachments in hosted application containers.
For Kubernetes SRIOV network attachments:
* Configure the SRIOV device plugin:
::
system host-label-assign controller-1 sriovdp=enabled
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both NUMA nodes:
::
system host-memory-modify controller-1 0 -1G 100
system host-memory-modify controller-1 1 -1G 100
For both Kubernetes and OpenStack:
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
export COMPUTE=controller-1
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
#. Add an OSD on controller-1 for ceph:
::
echo ">>> Add OSDs to primary tier"
system host-disk-list controller-1
system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
system host-stor-list controller-1
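The one-liner above filters the host's disk list for /dev/sdb and pipes the matching UUID into :command:`system host-stor-add` via xargs. As a rough illustration of that extraction against mocked output (the table rows below are invented; a live system produces them via :command:`system host-disk-list`):

```shell
# Mocked 'system host-disk-list' output; rows invented for illustration.
cat > /tmp/mock-disk-list <<'EOF'
| aaaa-1111 | /dev/sda | 0 | HDD | 292968 |
| bbbb-2222 | /dev/sdb | 1 | HDD | 292968 |
EOF

# awk keeps only the /dev/sdb row and prints its UUID (field 2 = first cell);
# xargs -i substitutes the UUID where {} appears. 'echo' stands in for the
# real 'system host-stor-add controller-1' command here.
awk '/\/dev\/sdb/{print $2}' /tmp/mock-disk-list | \
    xargs -i echo "would run: system host-stor-add controller-1 {}"
```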
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack-specific host configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the stx-openstack manifest and helm-charts later.
::
system host-label-assign controller-1 openstack-control-plane=enabled
system host-label-assign controller-1 openstack-compute-node=enabled
system host-label-assign controller-1 openvswitch=enabled
system host-label-assign controller-1 sriov=enabled
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
export COMPUTE=controller-1
echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
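The while-loop above polls indefinitely; if the partition never reaches the Ready state it spins forever. A bounded variant (an illustrative sketch, not a StarlingX command) fails fast instead:

```shell
# Illustrative retry helper with a timeout: run a command repeatedly until its
# output contains a pattern, or give up after a fixed number of attempts.
wait_for() {
    pattern=$1; attempts=$2; shift 2
    i=0
    while [ $i -lt $attempts ]; do
        "$@" | grep -q "$pattern" && return 0
        i=$((i + 1))
        sleep 1
    done
    echo "timed out waiting for: $pattern" >&2
    return 1
}

# Against the real CLI the call would look like (hypothetical):
#   wait_for Ready 300 system host-disk-partition-list $COMPUTE --nowrap
wait_for Ready 3 echo "state: Ready" && echo "partition is ready"
```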
^^^^^^^^^^^^^^^^^^^
Unlock controller-1
^^^^^^^^^^^^^^^^^^^
Unlock controller-1 in order to bring it into service:
::
system host-unlock controller-1
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
When it completes, your Kubernetes cluster is up and running.
***************************
Access StarlingX Kubernetes
***************************
.. include:: virtual_aio_simplex.rst
:start-after: incl-access-starlingx-kubernetes-start:
:end-before: incl-access-starlingx-kubernetes-end:
-------------------
StarlingX OpenStack
-------------------
***************************
Install StarlingX OpenStack
***************************
.. include:: virtual_aio_simplex.rst
:start-after: incl-install-starlingx-openstack-start:
:end-before: incl-install-starlingx-openstack-end:
**************************
Access StarlingX OpenStack
**************************
.. include:: virtual_aio_simplex.rst
:start-after: incl-access-starlingx-openstack-start:
:end-before: incl-access-starlingx-openstack-end:
*****************************
Uninstall StarlingX OpenStack
*****************************
.. include:: virtual_aio_simplex.rst
:start-after: incl-uninstall-starlingx-openstack-start:
:end-before: incl-uninstall-starlingx-openstack-end:
----------------------------------------------
Extending capacity with worker / compute nodes
----------------------------------------------
*********************************
Install software on compute nodes
*********************************
#. Power on the compute servers and force them to network boot with the
appropriate BIOS boot options for your particular server.
#. As the compute servers boot, a message appears on their console instructing
you to configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered compute
hosts (hostname=None):
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | None | None | locked | disabled | offline |
| 4 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host ids, set the personality of these hosts to 'worker' and assign hostnames:
::
system host-update 3 personality=worker hostname=compute-0
system host-update 4 personality=worker hostname=compute-1
This initiates the install of software on compute nodes.
This can take 5-10 minutes, depending on the performance of the host machine.
#. Wait for the install of software on the computes to complete, the computes to
reboot and to both show as locked/disabled/online in 'system host-list'.
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | worker | locked | disabled | online |
| 4 | compute-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
***********************
Configure compute nodes
***********************
#. Assign the cluster-host network to the MGMT interface for the compute nodes:
(Note that the MGMT interfaces are partially set up automatically by the
network install procedure.)
::
for COMPUTE in compute-0 compute-1; do
system interface-network-assign $COMPUTE mgmt0 cluster-host
done
#. Configure data interfaces for compute nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
.. important::
This step is **required** for OpenStack.
This step is optional for Kubernetes: Do this step if using SRIOV network
attachments in hosted application containers.
For Kubernetes SRIOV network attachments:
* Configure the SRIOV device plugin:
::
for COMPUTE in compute-0 compute-1; do
system host-label-assign ${COMPUTE} sriovdp=enabled
done
* If planning on running DPDK in containers on these hosts, configure the number
of 1G Huge pages required on both NUMA nodes:
::
for COMPUTE in compute-0 compute-1; do
system host-memory-modify ${COMPUTE} 0 -1G 100
system host-memory-modify ${COMPUTE} 1 -1G 100
done
For both Kubernetes and OpenStack:
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# configure the datanetworks in sysinv, prior to referencing it
# in the ``system host-if-modify`` command'.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
for COMPUTE in compute-0 compute-1; do
echo "Configuring interface for: $COMPUTE"
set -ex
system host-port-list ${COMPUTE} --nowrap > ${SPL}
system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
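The pipeline above resolves each DATA port name to a PCI address and then to an interface UUID by scraping two table-formatted CLI listings. That lookup logic can be illustrated against mocked tables (all rows below are invented; on a live system they come from :command:`system host-port-list` and :command:`system host-if-list`):

```shell
# Mocked CLI tables; field positions mirror the awk commands above
# ($2/$4/$8 in the port list, $2/$12 in the interface list).
SPL=/tmp/mock-port-list
SPIL=/tmp/mock-if-list
cat > ${SPL} <<'EOF'
| 11111111-aaaa | enp24s0f0 | ethernet | 0000:18:00.0 |
| 22222222-bbbb | enp24s0f1 | ethernet | 0000:18:00.1 |
EOF
cat > ${SPIL} <<'EOF'
| 33333333-cccc | data0 | data | ethernet | None | [u'enp24s0f0'] |
| 44444444-dddd | data1 | data | ethernet | None | [u'enp24s0f1'] |
EOF

# Port name -> PCI address -> port name cell -> interface UUID:
DATA0IF=enp24s0f0
DATA0PCIADDR=$(grep $DATA0IF $SPL | awk '{print $8}')
DATA0PORTNAME=$(grep ${DATA0PCIADDR} $SPL | awk '{print $4}')
DATA0IFUUID=$(awk -v P=$DATA0PORTNAME '($12 ~ P) {print $2}' $SPIL)
echo "data0 interface UUID: ${DATA0IFUUID}"
```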
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OpenStack-specific host configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. important::
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
support of installing the stx-openstack manifest and helm-charts later.
::
for NODE in compute-0 compute-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
group, which is needed for stx-openstack nova ephemeral disks.
::
for COMPUTE in compute-0 compute-1; do
echo "Configuring Nova local for: $COMPUTE"
ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${COMPUTE} nova-local
system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
done
for COMPUTE in compute-0 compute-1; do
echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
done
********************
Unlock compute nodes
********************
Unlock compute nodes in order to bring them into service:
::
for COMPUTE in compute-0 compute-1; do
system host-unlock $COMPUTE
done
The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
@@ -0,0 +1,23 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers with each server providing all three cloud functions
(controller, compute, and storage).
An AIO-DX configuration provides the following benefits:
* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple virtual machines on a single pair of
physical servers
* High availability (HA) services run on the controller function across two
physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two servers
* Virtual machines scheduled on both compute functions
* Protection against overall server hardware fault, where
* All controller HA services go active on the remaining healthy server
* All virtual machines are recovered on the remaining healthy server
.. figure:: ../figures/starlingx-deployment-options-duplex.png
:scale: 50%
:alt: All-in-one Duplex deployment configuration
*Figure 1: All-in-one Duplex deployment configuration*
@@ -0,0 +1,18 @@
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single server with the
following benefits:
* Requires only a small amount of cloud processing and storage power
* Application consolidation using multiple virtual machines on a single
physical server
* A storage backend solution using a single-node CEPH deployment
.. figure:: ../figures/starlingx-deployment-options-simplex.png
:scale: 50%
:alt: All-in-one Simplex deployment configuration
*Figure 1: All-in-one Simplex deployment configuration*
An AIO-SX deployment gives no protection against overall server hardware fault.
Hardware component protection can be enabled with, for example, a hardware RAID
or 2x Port LAG in the deployment.
@@ -0,0 +1,22 @@
The Standard with Controller Storage deployment option provides two high
availability (HA) controller nodes and a pool of up to 10 compute nodes.
A Standard with Controller Storage configuration provides the following benefits:
* A pool of up to 10 compute nodes
* High availability (HA) services run across the controller nodes in either
active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
controller servers
* Protection against overall controller and compute node failure, where
* On overall controller node failure, all controller HA services go active on
the remaining healthy controller node
* On overall compute node failure, virtual machines and containers are
recovered on the remaining healthy compute nodes
.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
:scale: 50%
:alt: Standard with Controller Storage deployment configuration
*Figure 1: Standard with Controller Storage deployment configuration*
@@ -0,0 +1,17 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, compute, and storage nodes.
A Standard with Dedicated Storage configuration provides the following benefits:
* A pool of up to 100 compute nodes
* A 2x node high availability (HA) controller cluster with HA services running
across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a two-to-9x node HA CEPH storage cluster
that supports a replication factor of two or three
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage nodes
.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
:scale: 50%
:alt: Standard with Dedicated Storage deployment configuration
*Figure 1: Standard with Dedicated Storage deployment configuration*
@@ -0,0 +1,51 @@
===========================
StarlingX R2.0 Installation
===========================
StarlingX provides a pre-defined set of standard
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment options may
be installed in a virtual environment or on bare metal.
-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment
-----------------------------------------------------
.. toctree::
:maxdepth: 2
virtual/aio_simplex
virtual/aio_duplex
virtual/controller_storage
virtual/dedicated_storage
------------------------------------------
Install StarlingX Kubernetes on bare metal
------------------------------------------
.. toctree::
:maxdepth: 2
bare_metal/aio_simplex
bare_metal/aio_duplex
bare_metal/controller_storage
bare_metal/dedicated_storage
bare_metal/ironic
bare_metal/ansible_bootstrap_configs
-----------------
Access Kubernetes
-----------------
.. toctree::
:maxdepth: 2
kubernetes_access
-------------------
StarlingX OpenStack
-------------------
.. toctree::
:maxdepth: 2
openstack/index
@@ -0,0 +1,10 @@
.. note::
By default, StarlingX uses IPv4. To use StarlingX with IPv6:
* The entire infrastructure and cluster configuration must be IPv6, with the
exception of the PXE boot network.
* Not all external servers are reachable via IPv6 addresses (for example
Docker registries). Depending on your infrastructure, it may be necessary
to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6.
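For example, the minimal bootstrap overrides shown in these guides would carry IPv6 values instead of IPv4 ones. The addresses below are only placeholders from the reserved documentation prefix (2001:db8::/32), not recommended values; see the IPv6 bootstrap reference for the full option set:

```shell
# Hypothetical IPv6 variants of the example override keys used in this guide;
# 2001:db8::/32 is the reserved documentation prefix, so substitute real values.
cat <<EOF > /tmp/localhost-ipv6-example.yml
system_mode: duplex
dns_servers:
  - 2001:4860:4860::8888
external_oam_subnet: 2001:db8:0:1::/64
external_oam_gateway_address: 2001:db8:0:1::1
external_oam_floating_address: 2001:db8:0:1::2
EOF
```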
@@ -1,6 +1,9 @@
===========================
Access StarlingX Kubernetes
===========================
================================
Access StarlingX Kubernetes R2.0
================================
Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
Kubernetes and hosted containerized applications.
.. contents::
:local:
@@ -0,0 +1,7 @@
Your Kubernetes cluster is now up and running.
For instructions on how to access StarlingX Kubernetes see
:doc:`../kubernetes_access`.
For instructions on how to install and access StarlingX OpenStack see
:doc:`../openstack/index`.
@@ -2,6 +2,9 @@
Access StarlingX OpenStack
==========================
Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications.
.. contents::
:local:
:depth: 1

===================
StarlingX OpenStack
===================
This section describes the steps to install and access StarlingX OpenStack.
Other than the OpenStack-specific configurations required in the underlying
StarlingX Kubernetes infrastructure (described in the installation steps for
StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX
is independent of deployment configuration.
.. toctree::
:maxdepth: 2
install
access
uninstall_delete
@@ -1,12 +1,8 @@
=================
Install OpenStack
=================
===========================
Install StarlingX OpenStack
===========================
.. contents::
:local:
:depth: 1
These installation instructions assume that you have completed the following
These instructions assume that you have completed the following
OpenStack-specific configuration tasks that are required by the underlying
StarlingX Kubernetes platform:
@@ -60,12 +56,10 @@ Install application manifest and helm-charts
watch -n 5 system application-list
When it completes, your OpenStack cloud is up and running.
----------
Next steps
----------
--------------------------
Access StarlingX OpenStack
--------------------------
Your OpenStack cloud is now up and running.
.. include:: virtual_aio_simplex.rst
:start-after: incl-access-starlingx-openstack-start:
:end-before: incl-access-starlingx-openstack-end:
See :doc:`access` for details on how to access StarlingX OpenStack.
@@ -1,9 +1,9 @@
===================
Uninstall OpenStack
===================
=============================
Uninstall StarlingX OpenStack
=============================
This section provides additional commands for uninstalling and deleting the
OpenStack application.
StarlingX OpenStack application.
.. warning::
@@ -0,0 +1,21 @@
===========================================
Virtual All-in-one Duplex Installation R2.0
===========================================
--------
Overview
--------
.. include:: ../desc_aio_duplex.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
aio_duplex_environ
aio_duplex_install_kubernetes
@@ -0,0 +1,52 @@
============================
Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual All-in-one Duplex** deployment configuration.
.. contents::
:local:
:depth: 1
------------------------------------
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers
---------------------------------------
The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual All-in-one Duplex deployment
configuration.
#. Prepare virtual environment.
Set up the virtual platform networks for virtual deployment:
::
bash setup_network.sh
#. Prepare virtual servers.
Create the XML definitions for the virtual servers required by this
configuration option. This will create the XML virtual server definition for:
* duplex-controller-0
* duplex-controller-1
The following command will start/virtually power on:
* The 'duplex-controller-0' virtual server
* The X-based graphical virt-manager application
::
bash setup_configuration.sh -c duplex -i ./bootimage.iso
If there is no X-server present, then errors are returned.
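Since the script launches the graphical virt-manager, a simple pre-check (illustrative only, not part of the StarlingX tools) can confirm an X display is available first:

```shell
# Illustrative check: virt-manager needs an X display; warn when none is set.
check_display() {
    if [ -z "${DISPLAY:-}" ]; then
        echo "No X display detected; virt-manager will fail to launch." >&2
        return 1
    fi
    echo "X display found: ${DISPLAY}"
}

check_display || echo "run the setup script from a desktop session instead"
```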
@@ -0,0 +1,21 @@
============================================
Virtual All-in-one Simplex Installation R2.0
============================================
--------
Overview
--------
.. include:: ../desc_aio_simplex.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
aio_simplex_environ
aio_simplex_install_kubernetes
@@ -0,0 +1,50 @@
============================
Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual All-in-one Simplex** deployment configuration.
.. contents::
:local:
:depth: 1
------------------------------------
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers
---------------------------------------
The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual All-in-one Simplex deployment
configuration.
#. Prepare virtual environment.
Set up the virtual platform networks for virtual deployment:
::
bash setup_network.sh
#. Prepare virtual servers.
Create the XML definitions for the virtual servers required by this
configuration option. This will create the XML virtual server definition for:
* simplex-controller-0
The following command will start/virtually power on:
* The 'simplex-controller-0' virtual server
* The X-based graphical virt-manager application
::
bash setup_configuration.sh -c simplex -i ./bootimage.iso
If there is no X-server present, then errors will occur.
@@ -1,185 +1,19 @@
===============================
Virtual All-in-one Simplex R2.0
===============================
==============================================
Install StarlingX Kubernetes on Virtual AIO-SX
==============================================
This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 virtual All-in-one Simplex** deployment configuration.
.. contents::
:local:
:depth: 1
-----------
Description
-----------
.. incl-aio-simplex-intro-start:
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single server.
An AIO-SX configuration provides the following benefits:
* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple virtual machines on a single pair of
physical servers
* A storage backend solution using a single-node CEPH deployment
An AIO-SX deployment provides no protection against overall server hardware
fault, as protection is either not required or provided at a higher level.
Hardware component protection can be enable with, for example, a hardware RAID
or 2x Port LAG in the deployment.
.. figure:: figures/starlingx-deployment-options-simplex.png
:scale: 50%
:alt: All-in-one Simplex deployment configuration
*Figure 1: All-in-one Simplex deployment configuration*
.. incl-aio-simplex-intro-end:
.. incl-ipv6-note-start:
.. note::
By default, StarlingX uses IPv4. To use StarlingX with IPv6:
* The entire infrastructure and cluster configuration must be IPv6, with the
exception of the PXE boot network.
* Not all external servers are reachable via IPv6 addresses (e.g. Docker
registries). Depending on your infrastructure, it may be necessary to deploy
a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6.
.. incl-ipv6-note-end:
------------------------------------
Physical host requirements and setup
------------------------------------
.. incl-virt-physical-host-req-start:
This section describes:
* System requirements for the workstation hosting the virtual machine(s) where
StarlingX will be deployed
* Host setup
*********************
Hardware requirements
*********************
The host system should have at least:
* **Processor:** x86_64 only supported architecture with BIOS enabled hardware
virtualization extensions
* **Cores:** 8
* **Memory:** 32GB RAM
* **Hard Disk:** 500GB HDD
* **Network:** One network adapter with active Internet connection
*********************
Software requirements
*********************
The host system should have at least:
* A workstation computer with Ubuntu 16.04 LTS 64-bit
All other required packages will be installed by scripts in the StarlingX tools repository.
**********
Host setup
**********
Set up the host with the following steps:
#. Update OS:
::
apt-get update
#. Clone the StarlingX tools repository:
::
apt-get install -y git
cd $HOME
git clone https://opendev.org/starlingx/tools
#. Install required packages:
::
cd $HOME/tools/deployment/libvirt/
bash install_packages.sh
apt install -y apparmor-profiles
apt-get install -y ufw
ufw disable
ufw status
.. note::
On Ubuntu 16.04, if apparmor-profile modules were installed as shown in
the example above, you must reboot the server to fully install the
apparmor-profile modules.
#. Get the StarlingX ISO. This can be from a private StarlingX build or from the public Cengn
StarlingX build off 'master' branch, as shown below:
::
wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso
.. incl-virt-physical-host-req-end:
---------------------------------------
Prepare virtual environment and servers
---------------------------------------
On the host, prepare the virtual environment and virtual servers.
#. Set up virtual platform networks for virtual deployment:
::
bash setup_network.sh
#. Create the XML definitions for the virtual servers required by this
configuration option. This creates the XML virtual server definition for:
* simplex-controller-0
The following command will start/virtually power on:
* the 'simplex-controller-0' virtual server
* the X-based graphical virt-manager application
If there is no X-server present, then errors will occur.
::
bash setup_configuration.sh -c simplex -i ./bootimage.iso
--------------------
StarlingX Kubernetes
--------------------
*****************************************
Install the StarlingX Kubernetes platform
*****************************************
--------------------------------
Install software on controller-0
--------------------------------
In the last step of "Prepare virtual environment and servers", the
controller-0 virtual server 'simplex-controller-0' was started by the
:command:`setup_configuration.sh` command.
Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.
--------------------------------
Bootstrap system on controller-0
--------------------------------
On virtual controller-0:
Wait for Ansible bootstrap playbook to complete.
This can take 5-10 minutes, depending on the performance of the host machine.
----------------------
Configure controller-0
----------------------
On virtual controller-0:
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
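The pipeline above relies on a common shell pattern: ``awk`` filters the CLI's
table output for the ``/dev/sdb`` row and prints its second column (the disk
UUID), which ``xargs -i`` then substitutes into the follow-on command. A
minimal stand-in, with ``printf`` playing the part of the real ``system`` CLI
output:

```shell
# Two fake rows of "device uuid" output; awk keeps the /dev/sdb row and
# prints column 2, and xargs -i substitutes that UUID for {}.
printf '/dev/sda uuid-aaa\n/dev/sdb uuid-bbb\n' \
  | awk '/\/dev\/sdb/{print $2}' \
  | xargs -i echo "would run: system host-stor-add controller-0 {}"
# prints: would run: system host-stor-add controller-0 uuid-bbb
```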
*************************************
OpenStack-specific host configuration
*************************************
.. incl-config-controller-0-openstack-specific-aio-simplex-start:
.. important::
system host-label-assign controller-0 openvswitch=enabled
system host-label-assign controller-0 sriov=enabled
#. **For OpenStack only:** A vSwitch is required.
The default vSwitch is containerized OVS that is packaged with the
stx-openstack manifest/helm-charts. StarlingX provides the option to use
echo ">>> Wait for partition $NOVA_PARTITION_UUID to be ready."
while true; do system host-disk-partition-list $COMPUTE --nowrap | grep $NOVA_PARTITION_UUID | grep Ready; if [ $? -eq 0 ]; then break; fi; sleep 1; done
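The loop above polls ``system host-disk-partition-list`` forever. If you
script this yourself, it can be safer to bound the wait; the following is a
sketch of the same poll-until-ready idea with a timeout (the ``wait_for``
helper is our illustration, not part of the StarlingX CLI):

```shell
# Poll a command once per second until its output matches a pattern,
# giving up after 'timeout' seconds instead of spinning forever.
wait_for() {
  timeout=$1; pattern=$2; shift 2
  elapsed=0
  while ! "$@" | grep -q "$pattern"; do
    sleep 1
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for: $pattern" >&2
      return 1
    fi
  done
}

# Example: a command whose output is ready immediately.
wait_for 5 Ready echo "partition state: Ready" && echo "partition is Ready"
# prints: partition is Ready
```

Against the real CLI this would look like
``wait_for 600 Ready system host-disk-partition-list $COMPUTE --nowrap``
(an untested sketch).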
.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------
Unlock virtual controller-0 to bring it into service:
Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
When it completes, your Kubernetes cluster is up and running.
----------
Next steps
----------
***************************
Access StarlingX Kubernetes
***************************
.. incl-access-starlingx-kubernetes-start:
Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
Kubernetes and hosted containerized applications. Refer to details on accessing
the StarlingX Kubernetes cluster in the
:doc:`Access StarlingX Kubernetes guide <access_starlingx_kubernetes>`.
.. incl-access-starlingx-kubernetes-end:
-------------------
StarlingX OpenStack
-------------------
***************************
Install StarlingX OpenStack
***************************
.. incl-install-starlingx-openstack-start:
Other than the OpenStack-specific configurations required in the underlying
StarlingX/Kubernetes infrastructure (described in the installation steps for the
StarlingX Kubernetes platform above), the installation of containerized OpenStack
for StarlingX is independent of deployment configuration. Refer to the
:doc:`Install OpenStack guide <install_openstack>`
for installation instructions.
.. incl-install-starlingx-openstack-end:
**************************
Access StarlingX OpenStack
**************************
.. incl-access-starlingx-openstack-start:
Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications. Refer to details on accessing
StarlingX OpenStack in the
:doc:`Access StarlingX OpenStack guide <access_starlingx_openstack>`.
.. incl-access-starlingx-openstack-end:
*****************************
Uninstall StarlingX OpenStack
*****************************
.. incl-uninstall-starlingx-openstack-start:
Refer to the :doc:`Uninstall OpenStack guide <uninstall_delete_openstack>` for
instructions on how to uninstall and delete the OpenStack application.
.. incl-uninstall-starlingx-openstack-end:
.. include:: ../kubernetes_install_next.txt

View File

@@ -0,0 +1,21 @@
==========================================================
Virtual Standard with Controller Storage Installation R2.0
==========================================================
--------
Overview
--------
.. include:: ../desc_controller_storage.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
controller_storage_environ
controller_storage_install_kubernetes

View File

@@ -0,0 +1,54 @@
============================
Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual Standard with Controller Storage** deployment
configuration.
.. contents::
:local:
:depth: 1
------------------------------------
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers
---------------------------------------
The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual Standard with Controller Storage
deployment configuration.
#. Prepare virtual environment.
Set up virtual platform networks for virtual deployment:
::
bash setup_network.sh
#. Prepare virtual servers.
Create the XML definitions for the virtual servers required by this
configuration option. This will create the XML virtual server definition for:
* controllerstorage-controller-0
* controllerstorage-controller-1
* controllerstorage-worker-0
* controllerstorage-worker-1
The following command will start/virtually power on:
* The 'controllerstorage-controller-0' virtual server
* The X-based graphical virt-manager application
::
bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso
If there is no X-server present, then errors are returned.

View File

@@ -0,0 +1,21 @@
=========================================================
Virtual Standard with Dedicated Storage Installation R2.0
=========================================================
--------
Overview
--------
.. include:: ../desc_dedicated_storage.txt
.. include:: ../ipv6_note.txt
------------
Installation
------------
.. toctree::
:maxdepth: 2
dedicated_storage_environ
dedicated_storage_install_kubernetes

View File

@@ -0,0 +1,56 @@
============================
Prepare Host and Environment
============================
This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual Standard with Dedicated Storage** deployment
configuration.
.. contents::
:local:
:depth: 1
------------------------------------
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers
---------------------------------------
The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual Standard with Dedicated Storage
deployment configuration.
#. Prepare virtual environment.
Set up virtual platform networks for virtual deployment:
::
bash setup_network.sh
#. Prepare virtual servers.
Create the XML definitions for the virtual servers required by this
configuration option. This will create the XML virtual server definition for:
* dedicatedstorage-controller-0
* dedicatedstorage-controller-1
* dedicatedstorage-storage-0
* dedicatedstorage-storage-1
* dedicatedstorage-worker-0
* dedicatedstorage-worker-1
The following command will start/virtually power on:
* The 'dedicatedstorage-controller-0' virtual server
* The X-based graphical virt-manager application
::
bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso
If there is no X-server present, then errors are returned.

View File

@@ -0,0 +1,75 @@
The following sections describe system requirements and host setup for a
workstation hosting virtual machine(s) where StarlingX will be deployed.
*********************
Hardware requirements
*********************
The host system should have at least:
* **Processor:** x86_64 (the only supported architecture), with hardware
  virtualization extensions enabled in the BIOS
* **Cores:** 8
* **Memory:** 32GB RAM
* **Hard Disk:** 500GB HDD
* **Network:** One network adapter with active Internet connection
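Before continuing, the minimums above can be sanity-checked on the host. A
small sketch, assuming a Linux host with ``nproc``, ``free``, and
``/proc/cpuinfo`` available (as on Ubuntu 16.04):

```shell
# Compare host resources against the minimums listed above.
cores=$(nproc)
mem_gb=$(free -g | awk '/^Mem:/{print $2}')
echo "cores=${cores} mem_gb=${mem_gb}"
[ "${cores}" -ge 8 ]   || echo "warning: fewer than 8 cores"
[ "${mem_gb}" -ge 31 ] || echo "warning: less than 32GB RAM (free -g rounds down)"
grep -qE 'vmx|svm' /proc/cpuinfo \
  || echo "warning: hardware virtualization extensions not detected"
```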
*********************
Software requirements
*********************
The host system should have at least:
* A workstation computer with Ubuntu 16.04 LTS 64-bit
All other required packages will be installed by scripts in the StarlingX tools repository.
**********
Host setup
**********
Set up the host with the following steps:
#. Update OS:
::
apt-get update
#. Clone the StarlingX tools repository:
::
apt-get install -y git
cd $HOME
git clone https://opendev.org/starlingx/tools
#. Install required packages:
::
cd $HOME/tools/deployment/libvirt/
bash install_packages.sh
apt install -y apparmor-profiles
apt-get install -y ufw
ufw disable
ufw status
.. note::

   On Ubuntu 16.04, if the apparmor-profiles modules were installed as shown
   in the example above, you must reboot the server to complete their
   installation.
#. Get the StarlingX ISO. This can be from a private StarlingX build or from
   the public CENGN StarlingX mirror, as shown below:
::
wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso

View File

-----------------------
Latest release (stable)
-----------------------
StarlingX R2.0 is the latest officially released version of StarlingX.
*************************
R2.0 virtual installation
*************************
.. toctree::
:maxdepth: 1
current/virtual_aio_simplex
current/virtual_aio_duplex
current/virtual_controller_storage
current/virtual_dedicated_storage
****************************
R2.0 bare metal installation
****************************
.. toctree::
:maxdepth: 1
current/bare_metal_aio_simplex
current/bare_metal_aio_duplex
current/bare_metal_controller_storage
current/bare_metal_dedicated_storage
current/bare_metal_ironic
.. toctree::
:maxdepth: 1
:hidden:
current/access_starlingx_kubernetes
current/access_starlingx_openstack
current/install_openstack
current/uninstall_delete_openstack
current/ansible_bootstrap_configs
current/index
---------------------
Upcoming R3.0 release