R5 updates to landing page and installation

Patch set 5: Updated R5 RN.
Patch set 4: Resolved merge conflict.
Patch set 3: Removed R5 install guides. Local docs build is successful.
Patch set 2: Added R6 install guides. Removed R1-R4 install guides. R5 will be
removed next patch set.

Updated main docs landing page with R5 links.

Installation index page: promoted R5 to supported, moved R4 to the archive
section, created R6 (latest) section.

Change-Id: Ic0e0409f2385a9a6f29b83a2eda12753ea4ac1a3
Signed-off-by: MCamp859 <maryx.camp@intel.com>
@@ -16,7 +16,7 @@ Set proxy at bootstrap
 
 To set the Docker proxy at bootstrap time, refer to :doc:`Ansible Bootstrap
 Configurations
-<../deploy_install_guides/r3_release/ansible_bootstrap_configs>`.
+<../deploy_install_guides/r6_release/ansible_bootstrap_configs>`.
 
 .. r3_end
 
@@ -108,7 +108,7 @@ Certificate Authority.
 
 Currently the Kubernetes root CA certificate and key can only be updated at
 Ansible bootstrap time. For details, see
-:ref:`Kubernetes root CA certificate and key <k8s-root-ca-cert-key-r4>`.
+:ref:`Kubernetes root CA certificate and key <k8s-root-ca-cert-key-r6>`.
 
 ---------------------
 Local Docker registry
@@ -10,7 +10,7 @@ Standard Configuration with Controller Storage
 back-end for Kubernetes |PVCs| deployed on the
 controller nodes instead of using dedicated storage nodes.
 
-.. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png
+.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-controller-storage.png
    :width: 800
 
 See :ref:`Common Components <common-components>` for a description of common
@@ -14,7 +14,7 @@ redundant pair of hosts.
    :local:
    :depth: 1
 
-.. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-duplex.png
+.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-duplex.png
    :width: 800
 
 See :ref:`Common Components <common-components>` for a description of common
@@ -97,7 +97,7 @@ Up to fifty worker/compute nodes can be added to the All-in-one Duplex
 deployment, allowing a capacity growth path if starting with an AIO-Duplex
 deployment.
 
-.. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-duplex-extended.png
+.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-duplex-extended.png
    :width: 800
 
 The extended capacity is limited up to fifty worker/compute nodes as the
@@ -10,7 +10,7 @@ The AIO Simplex deployment configuration provides a scaled-down |prod| that
 combines controller, storage, and worker functionality on a single
 non-redundant host.
 
-.. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-simplex.png
+.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-simplex.png
    :width: 800
 
 .. note::
@@ -9,7 +9,7 @@ Standard Configuration with Dedicated Storage
 Deployment of |prod| with dedicated storage nodes provides the highest capacity
 \(single region\), performance, and scalability.
 
-.. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-dedicated-storage.png
+.. image:: /deploy_install_guides/r6_release/figures/starlingx-deployment-options-dedicated-storage.png
    :width: 800
 
 See :ref:`Common Components <common-components>` for a description of common
@@ -6,41 +6,28 @@ Installation and deployment guides for StarlingX are release-specific.
 Each guide provides instruction on a specific StarlingX configuration
 (e.g. All-in-one Simplex).
 
 .. _latest_release:
 
-------------------------
-Supported release (R4.0)
-------------------------
-
-StarlingX R4.0 is the most recent supported release of StarlingX.
-
-.. toctree::
-   :maxdepth: 1
-
-   r4_release/index
-
 -------------------------
 Upcoming release (latest)
 -------------------------
 
-StarlingX R5.0 is under development.
+StarlingX R6.0 is under development.
 
 .. toctree::
    :maxdepth: 1
 
-   r5_release/index
+   r6_release/index
 
------------------
-Archived releases
------------------
+-------------------------------
+Supported and archived releases
+-------------------------------
 
-.. toctree::
-   :maxdepth: 1
+StarlingX R5.0 is the most recent supported release of StarlingX.
 
-   r3_release/index
-   r2_release/index
-   r1_release/index
+To view the R5.0 documentation, use the **Version** selector in the upper right
+or go directly to `Installation guides for R5.0 and older releases
+<https://docs.starlingx.io/r/stx.5.0/deploy_install_guides/index.html>`_.
 
 
 .. Add common files to toctree
@@ -52,16 +39,18 @@ Archived releases
    bootable_usb
    nvme_config
 
+.. Docs note: Starting with R5 (May 2021), team agreed that the latest/working
+   branch will include the current install guides only. The archived releases
+   will only be available in a release-specific branch. The instructions below
+   are modified to reflect this change.
+
 .. Making a new release
-.. 1. Archive the previous 'supported' release.
-      Move the toctree link from the Supported release section into the Archived
-      releases toctree.
-.. 2. Make the previous 'upcoming' release the new 'supported'.
-      Move the toctree link from the Upcoming release section into the Supported
-      release. Update intro text for the Supported release section to use the
+.. 1. Make the previous 'upcoming' release the new 'supported' release.
+      Copy the folder to the release-specific branch.
+      Copy the toctree link into the Supported section of install landing page.
+      Update intro text for the Supported release section to use the
       latest version.
-.. 3. Add new 'upcoming' release, aka 'Latest' on the version button.
+.. 2. Add new 'upcoming' release, aka 'Latest' on the version button.
       If new upcoming release docs aren't ready, remove toctree from Upcoming
       section and just leave intro text. Update text for the upcoming
       release version. Once the new upcoming docs are ready, add them in the
@@ -70,9 +59,9 @@ Archived releases
 .. Adding new release docs
 .. 1. Make sure the most recent release versioned docs are complete for that
       release.
-.. 2. Make a copy of the most recent release folder e.g. 'r4_release.' Rename
-      the folder for the new release e.g. 'r5_release'.
+.. 2. Make a copy of the most recent release folder e.g. 'r6_release.' Rename
+      the folder for the new release e.g. 'r7_release'.
 .. 3. Search and replace all references to previous release number with the new
-      release number. For example replace all 'R4.0' with 'R5.0.' Also search
+      release number. For example replace all 'R6.0' with 'R7.0.' Also search
       and replace any links that may have a specific release number in the path.
 .. 4. Link new version on this page (the index page).
@@ -1,918 +0,0 @@
======================
Dedicated storage R1.0
======================

.. contents::
   :local:
   :depth: 1

**NOTE:** The instructions to set up a StarlingX Cloud with Dedicated
Storage with containerized OpenStack services in this guide
are under development.
For approved instructions, see the
`StarlingX Cloud with Dedicated Storage wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandardStorage>`__.

----------------------
Deployment description
----------------------

Cloud with Dedicated Storage is the standard StarlingX deployment option with
independent controller, compute, and storage nodes.

This deployment option provides the maximum capacity for a single region
deployment, with a supported growth path to a multi-region deployment option by
adding a secondary region.

.. figure:: figures/starlingx-deployment-options-dedicated-storage.png
   :scale: 50%
   :alt: Dedicated Storage deployment configuration

   *Dedicated Storage deployment configuration*

Cloud with Dedicated Storage includes:

- 2x node HA controller cluster with HA services running across the controller
  nodes in either active/active or active/standby mode.
- Pool of up to 100 compute nodes for hosting virtual machines and virtual
  networks.
- 2-9x node HA Ceph storage cluster for hosting virtual volumes, images, and
  object storage that supports a replication factor of 2 or 3.

  Storage nodes are deployed in replication groups of 2 or 3. Replication
  of objects is done strictly within the replication group.

  Supports up to 4 groups of 2x storage nodes, or up to 3 groups of 3x storage
  nodes.

-----------------------------------
Preparing dedicated storage servers
-----------------------------------

**********
Bare metal
**********

Required Servers:

-  Controllers: 2
-  Storage

   -  Replication factor of 2: 2 - 8
   -  Replication factor of 3: 3 - 9

-  Computes: 2 - 100

^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^

The recommended minimum requirements for the physical servers where
Dedicated Storage will be deployed include:

-  Minimum processor:

   -  Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket

-  Memory:

   -  64 GB controller, storage
   -  32 GB compute

-  BIOS:

   -  Hyper-Threading technology: Enabled
   -  Virtualization technology: Enabled
   -  VT for directed I/O: Enabled
   -  CPU power and performance policy: Performance
   -  CPU C state control: Disabled
   -  Plug & play BMC detection: Disabled

-  Primary disk:

   -  500 GB SSD or NVMe controller
   -  120 GB (min. 10K RPM) compute and storage

-  Additional disks:

   -  1 or more 500 GB disks (min. 10K RPM) storage, compute

-  Network ports\*

   -  Management: 10GE controller, storage, compute
   -  OAM: 10GE controller
   -  Data: n x 10GE compute

*******************
Virtual environment
*******************

Run the libvirt/qemu setup scripts. Set up the virtualized OAM and
management networks:

::

   $ bash setup_network.sh

Build the XML definitions for the virtual servers:

::

   $ bash setup_configuration.sh -c dedicatedstorage -i <starlingx iso image>

The default XML server definitions created by the previous script are:

- dedicatedstorage-controller-0
- dedicatedstorage-controller-1
- dedicatedstorage-compute-0
- dedicatedstorage-compute-1
- dedicatedstorage-storage-0
- dedicatedstorage-storage-1

^^^^^^^^^^^^^^^^^^^^^^^^^
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^

To power up a virtual server, run the following command:

::

    $ sudo virsh start <server-xml-name>

For example:

::

    $ sudo virsh start dedicatedstorage-controller-0

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access virtual server consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The XML for the virtual servers in the stx-tools repo, deployment/libvirt,
provides both graphical and text consoles.

Access the graphical console in virt-manager by right-clicking on the
domain (the server) and selecting "Open".

Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.

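For example, to attach to the text console of the first controller defined by
the setup script above (press Ctrl+] to detach from the console):

::

    $ sudo virsh console dedicatedstorage-controller-0
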
When booting controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. You can select the serial or graphical console for controller-0.
For the other nodes, however, only the serial console is used, regardless of
which option is selected.

Open the graphic console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run the "virsh
console $DOMAIN" command promptly after power on to see the initial boot
sequence that follows the boot device selection. You have only a few seconds
to do this.

--------------------------------
Installing the controller-0 host
--------------------------------

Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes controller-0.

Procedure:

#. Power on the server that will be controller-0 with the StarlingX ISO
   on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.

*************************
Initializing controller-0
*************************

This section describes how to initialize StarlingX in host controller-0.
Except where noted, all the commands must be executed from a console of
the host.

Power on the host to be configured as controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:

-  **Standard Controller Configuration**

   -  When the installer is loaded and the installer welcome screen
      appears in the controller-0 host, select the type of installation
      "Standard Controller Configuration".

-  **Graphical Console**

   -  Select the "Graphical Console" as the console to use during
      installation.

-  **Standard Security Boot Profile**

   -  Select "Standard Security Boot Profile" as the security profile.

Monitor the initialization. When it is complete, a reboot is initiated
on the controller-0 host, which briefly displays a GNU GRUB screen and then
boots automatically into the StarlingX image.

Log into controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):

::

   Changing password for wrsroot.
   (current) UNIX Password:

Enter a new password for the wrsroot account:

::

   New password:

Enter the new password again to confirm it:

::

   Retype new password:

controller-0 is initialized with StarlingX, and is ready for configuration.

************************
Configuring controller-0
************************

This section describes how to perform the controller-0 configuration
interactively, just to bootstrap the system with the minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).

When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:

-  For the virtual environment, you can accept all the default values
   immediately after 'system date and time'.
-  For a physical deployment, answer the bootstrap configuration
   questions with answers applicable to your particular physical setup.

The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:

::

   controller-0:~$ sudo config_controller
   System Configuration
   ================
   Enter ! at any prompt to abort...
   ...

Accept all the default values immediately after 'system date and time':

::

   ...
   Applying configuration (this will take several minutes):

   01/08: Creating bootstrap configuration ... DONE
   02/08: Applying bootstrap manifest ... DONE
   03/08: Persisting local configuration ... DONE
   04/08: Populating initial system inventory ... DONE
   05:08: Creating system configuration ... DONE
   06:08: Applying controller manifest ... DONE
   07:08: Finalize controller configuration ... DONE
   08:08: Waiting for service activation ... DONE

   Configuration was applied

   Please complete any out of service commissioning steps with system commands and unlock controller to proceed.

After the config_controller bootstrap configuration, the REST API, CLI, and
Horizon interfaces are enabled on the controller-0 OAM IP address. The
remaining installation instructions will use the CLI.

------------------------------------
Provisioning controller-0 and system
------------------------------------

On controller-0, acquire Keystone administrative privileges:

::

   controller-0:~$ source /etc/nova/openrc

*********************************************
Configuring provider networks at installation
*********************************************

You must set up provider networks at installation so that you can attach
data interfaces and unlock the compute nodes.

Set up one provider network of the vlan type, named providernet-a:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
   [wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a

*********************************************
Adding a Ceph storage backend at installation
*********************************************

Add the Ceph storage backend:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova

   WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.

   By confirming this operation, Ceph backend will be created.
   A minimum of 2 storage nodes are required to complete the configuration.
   Please set the 'confirmed' field to execute this operation for the ceph backend.

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova --confirmed

   System configuration has changed.
   Please follow the administrator guide to complete configuring the system.

   +--------------------------------------+------------+---------+-------------+--------------------+----------+...
   | uuid                                 | name       | backend | state       | task               | services |...
   +--------------------------------------+------------+---------+-------------+--------------------+----------+...
   | 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configuring | applying-manifests | cinder,  |...
   |                                      |            |         |             |                    | glance,  |...
   |                                      |            |         |             |                    | swift    |...
   |                                      |            |         |             |                    | nova     |...
   |                                      |            |         |             |                    |          |...
   | 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured  | None               | glance   |...
   +--------------------------------------+------------+---------+-------------+--------------------+----------+...

Confirm that the Ceph storage backend is configured:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
   +--------------------------------------+------------+---------+------------+-------------------+-----------+...
   | uuid                                 | name       | backend | state      | task              | services  |...
   +--------------------------------------+------------+---------+------------+-------------------+-----------+...
   | 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph    | configured | provision-storage | cinder,   |...
   |                                      |            |         |            |                   | glance,   |...
   |                                      |            |         |            |                   | swift     |...
   |                                      |            |         |            |                   | nova      |...
   |                                      |            |         |            |                   |           |...
   | 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file    | configured | None              | glance    |...
   +--------------------------------------+------------+---------+------------+-------------------+-----------+...

**********************
Unlocking controller-0
**********************

You must unlock controller-0 so that you can use it to install the remaining
hosts. Use the system host-unlock command:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0

The host is rebooted. During the reboot, the command line is unavailable, and
any ssh connections are dropped. To monitor the progress of the reboot, use the
controller-0 console.

****************************************
Verifying the controller-0 configuration
****************************************

On controller-0, acquire Keystone administrative privileges:

::

   controller-0:~$ source /etc/nova/openrc

Verify that the StarlingX controller services are running:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system service-list
   +-----+-------------------------------+--------------+----------------+
   | id  | service_name                  | hostname     | state          |
   +-----+-------------------------------+--------------+----------------+
   ...
   | 1   | oam-ip                        | controller-0 | enabled-active |
   | 2   | management-ip                 | controller-0 | enabled-active |
   ...
   +-----+-------------------------------+--------------+----------------+

Verify that controller-0 is unlocked, enabled, and available:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+

*******************************
Provisioning filesystem storage
*******************************

List the controller file systems with status and current sizes:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
   +--------------------------------------+-----------------+------+--------------------+------------+-------+
   | UUID                                 | FS Name         | Size | Logical Volume     | Replicated | State |
   |                                      |                 | in   |                    |            |       |
   |                                      |                 | GiB  |                    |            |       |
   +--------------------------------------+-----------------+------+--------------------+------------+-------+
   | 4e31c4ea-6970-4fc6-80ba-431fdcdae15f | backup          | 5    | backup-lv          | False      | None  |
   | 6c689cd7-2bef-4755-a2fb-ddd9504692f3 | database        | 5    | pgsql-lv           | True       | None  |
   | 44c7d520-9dbe-41be-ac6a-5d02e3833fd5 | extension       | 1    | extension-lv       | True       | None  |
   | 809a5ed3-22c0-4385-9d1e-dd250f634a37 | glance          | 8    | cgcs-lv            | True       | None  |
   | 9c94ef09-c474-425c-a8ba-264e82d9467e | gnocchi         | 5    | gnocchi-lv         | False      | None  |
   | 895222b3-3ce5-486a-be79-9fe21b94c075 | img-conversions | 8    | img-conversions-lv | False      | None  |
   | 5811713f-def2-420b-9edf-6680446cd379 | scratch         | 8    | scratch-lv         | False      | None  |
   +--------------------------------------+-----------------+------+--------------------+------------+-------+

Modify the filesystem sizes as needed:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12

-------------------------------------------------------
Installing controller-1 / storage hosts / compute hosts
-------------------------------------------------------

After initializing and configuring an active controller, you can add and
configure a backup controller and additional compute or storage hosts.
For each host, do the following:

*****************
Initializing host
*****************

Power on the host. In the host console you will see:

::

   Waiting for this node to be configured.

   Please configure the personality for this node from the
   controller node in order to proceed.

**********************************
Updating host name and personality
**********************************

On controller-0, acquire Keystone administrative privileges:

::

   controller-0:~$ source /etc/nova/openrc

Wait for controller-0 to discover the new host; list the hosts until the new
UNKNOWN host shows up in the table:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | None         | None        | locked         | disabled    | offline      |
   +----+--------------+-------------+----------------+-------------+--------------+

Use system host-add to update the host personality attribute:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>

**REMARK:** Use the MAC address of the specific network interface the host
will be connected through, e.g. the OAM network interface for the controller-1
node, and the management network interface for compute and storage nodes.

Check the **NIC** MAC address in the Virtual Manager GUI under "Show virtual
hardware details" (the **i** button in the main banner) --> NIC: --> the
specific "Bridge name:" under the MAC address text field.

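For example, a sketch of adding the second controller, assuming its OAM NIC
reports the placeholder MAC address shown here:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n controller-1 -p controller -m 08:00:27:aa:bb:cc
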
***************
Monitoring host
***************

On controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
   | install_output      | text                                 |
   | install_state       | booting                              |
   | install_state_info  | None                                 |

Wait while the host is configured and rebooted. Up to 20 minutes may be
required for a reboot, depending on hardware. When the reboot is
complete, the host is reported as locked, disabled, and online.

*************
Listing hosts
*************

Once all nodes have been installed, configured, and rebooted, list the hosts
on controller-0:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | controller-1 | controller  | locked         | disabled    | online       |
   | 3  | compute-0    | compute     | locked         | disabled    | online       |
   | 4  | compute-1    | compute     | locked         | disabled    | online       |
   | 5  | storage-0    | storage     | locked         | disabled    | online       |
   | 6  | storage-1    | storage     | locked         | disabled    | online       |
   +----+--------------+-------------+----------------+-------------+--------------+

-------------------------
Provisioning controller-1
-------------------------

On controller-0, list the hosts:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   ...
   | 2  | controller-1 | controller  | locked         | disabled    | online       |
   ...
   +----+--------------+-------------+----------------+-------------+--------------+

***********************************************
Provisioning network interfaces on controller-1
***********************************************

To list the hardware port names, types, and PCI addresses that have
been discovered:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1

Provision the OAM interface for controller-1:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>

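For example, if host-port-list showed the OAM port as enp0s3 (a hypothetical
name; substitute the port discovered on your system):

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n enp0s3 -c platform --networks oam controller-1 enp0s3
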
**********************
Unlocking controller-1
**********************

Unlock controller-1:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1

Wait while controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.

**REMARK:** controller-1 will remain in a degraded state until
data-syncing is complete. The duration is dependent on the
virtualization host's configuration - i.e., the number and configuration
of physical disks used to host the nodes' virtual disks. Also, the
management network is expected to have a link capacity of 10000 (1000 is
not supported due to excessive data-sync time). Use 'fm alarm-list' to
confirm status.

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
   ...

-------------------------
Provisioning storage host
-------------------------

**************************************
Provisioning storage on a storage host
**************************************

List the available physical disks in storage-N:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
   +--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
   | uuid                                 | device_no | device_ | device_ | size_ | available_ | rpm          |...
   |                                      | de        | num     | type    | gib   | gib        |              |...
   +--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
   | a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda  | 2048    | HDD     | 292.  | 0.0        | Undetermined |...
   |                                      |           |         |         | 968   |            |              |...
   |                                      |           |         |         |       |            |              |...
   | c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     | Undetermined |...
   |                                      |           |         |         |       |            |              |...
   |                                      |           |         |         |       |            |              |...
   | 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc  | 2080    | HDD     | 4.0   | 3.997      |...
   |                                      |           |         |         |       |            |              |...
   |                                      |           |         |         |       |            |              |...
   +--------------------------------------+-----------+---------+---------+-------+------------+--------------+...

List the available storage tiers in storage-N:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
   +--------------------------------------+---------+--------+--------------------------------------+
   | uuid                                 | name    | status | backend_using                        |
   +--------------------------------------+---------+--------+--------------------------------------+
   | 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
   +--------------------------------------+---------+--------+--------------------------------------+

Create a storage function (i.e. OSD) in storage-N. At least two unlocked and
enabled hosts with monitors are required. Candidates are: controller-0,
controller-1, and storage-0.

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 c7cc08e6-ff18-4229-a79d-a04187de7b8d
   +------------------+--------------------------------------------------+
   | Property         | Value                                            |
   +------------------+--------------------------------------------------+
   | osdid            | 0                                                |
   | function         | osd                                              |
   | journal_location | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
   | journal_size_gib | 1024                                             |
   | journal_path     | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
   | journal_node     | /dev/sdb2                                        |
   | uuid             | 34989bad-67fc-49ea-9e9c-38ca4be95fad             |
   | ihost_uuid       | 4a5ed4fc-1d2b-4607-acf9-e50a3759c994             |
   | idisk_uuid       | c7cc08e6-ff18-4229-a79d-a04187de7b8d             |
   | tier_uuid        | 4398d910-75e4-4e99-a57f-fc147fb87bdb             |
   | tier_name        | storage                                          |
   | created_at       | 2018-08-16T00:39:44.409448+00:00                 |
   | updated_at       | 2018-08-16T00:40:07.626762+00:00                 |
   +------------------+--------------------------------------------------+

Create the remaining available storage functions (OSDs) in storage-N,
based on the number of available physical disks.

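For example, to turn the /dev/sdc disk listed earlier into a second OSD on
storage-0, reusing that disk's UUID from the host-disk-list output above:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661
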
List the OSDs:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
   +--------------------------------------+----------+-------+--------------+--------------------------------------+
   | uuid                                 | function | osdid | capabilities | idisk_uuid                           |
   +--------------------------------------+----------+-------+--------------+--------------------------------------+
   | 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd      | 0     | {}           | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
   +--------------------------------------+----------+-------+--------------+--------------------------------------+

Unlock storage-N:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0

**REMARK:** Before you continue, repeat the Provisioning Storage steps on the
remaining storage nodes.

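For example, the same sequence for storage-1 would look like this (the disk
UUID is a placeholder; take it from that host's own host-disk-list output):

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-1
   [wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-1 <disk uuid>
   [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-1
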
---------------------------
Provisioning a compute host
---------------------------

You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each compute host, do the following:

On controller-0, acquire Keystone administrative privileges:

::

   controller-0:~$ source /etc/nova/openrc

*************************************************
Provisioning network interfaces on a compute host
*************************************************

On controller-0, list the hardware port names, types, and
PCI addresses that have been discovered:

-  **Only in virtual environment**: Ensure that the interface used is
   one of those attached to a host bridge with model type "virtio" (i.e.,
   eth1000 and eth1001). The model type "e1000" emulated devices will
   not work for provider networks.

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0

Provision the data interface for the compute host:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000

***************************
VSwitch virtual environment
***************************

**Only in virtual environment**. If the compute host has more than 4 CPUs,
the system will auto-configure the vswitch to use 2 cores. However, some
virtual environments do not properly support the multi-queue feature required
in a multi-CPU environment. Therefore, run the following command to reduce the
vswitch cores to 1:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
   +--------------------------------------+-------+-----------+-------+--------+...
   | uuid                                 | log_c | processor | phy_c | thread |...
   |                                      | ore   |           | ore   |        |...
   +--------------------------------------+-------+-----------+-------+--------+...
   | a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0     | 0         | 0     | 0      |...
   | f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1     | 0         | 1     | 0      |...
   | 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2     | 0         | 2     | 0      |...
   | 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3     | 0         | 3     | 0      |...
   +--------------------------------------+-------+-----------+-------+--------+...

**************************************
Provisioning storage on a compute host
**************************************

Review the available disk space and capacity, and obtain the uuid(s) of
the physical disk(s) to be used for nova local:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
   +--------------------------------------+-----------+---------+---------+-------+------------+...
   | uuid                                 | device_no | device_ | device_ | size_ | available_ |...
   |                                      | de        | num     | type    | gib   | gib        |...
   +--------------------------------------+-----------+---------+---------+-------+------------+
   | 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda  | 2048    | HDD     | 292.  | 265.132    |...
   | a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb  | 2064    | HDD     | 100.0 | 99.997     |...
   +--------------------------------------+-----------+---------+---------+-------+------------+...

Create the 'nova-local' local volume group:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
   +-----------------+-------------------------------------------------------------------+
   | Property        | Value                                                             |
   +-----------------+-------------------------------------------------------------------+
   | lvm_vg_name     | nova-local                                                        |
   | vg_state        | adding                                                            |
   | uuid            | 37f4c178-f0fe-422d-b66e-24ae057da674                              |
   | ihost_uuid      | f56921a6-8784-45ac-bd72-c0372cd95964                              |
   | lvm_vg_access   | None                                                              |
   | lvm_max_lv      | 0                                                                 |
   | lvm_cur_lv      | 0                                                                 |
   | lvm_max_pv      | 0                                                                 |
   | lvm_cur_pv      | 0                                                                 |
   | lvm_vg_size_gib | 0.00                                                              |
   | lvm_vg_total_pe | 0                                                                 |
   | lvm_vg_free_pe  | 0                                                                 |
   | created_at      | 2018-08-16T00:57:46.340454+00:00                                  |
   | updated_at      | None                                                              |
   | parameters      | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
   +-----------------+-------------------------------------------------------------------+

Create a disk partition to add to the volume group based on the uuid of the
physical disk:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local a639914b-23a9-4071-9f25-a5f1960846cc
   +--------------------------+--------------------------------------------+
   | Property                 | Value                                      |
   +--------------------------+--------------------------------------------+
   | uuid                     | 56fdb63a-1078-4394-b1ce-9a0b3bff46dc       |
   | pv_state                 | adding                                     |
   | pv_type                  | disk                                       |
   | disk_or_part_uuid        | a639914b-23a9-4071-9f25-a5f1960846cc       |
   | disk_or_part_device_node | /dev/sdb                                   |
   | disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
   | lvm_pv_name              | /dev/sdb                                   |
   | lvm_vg_name              | nova-local                                 |
   | lvm_pv_uuid              | None                                       |
   | lvm_pv_size_gib          | 0.0                                        |
   | lvm_pe_total             | 0                                          |
   | lvm_pe_alloced           | 0                                          |
   | ihost_uuid               | f56921a6-8784-45ac-bd72-c0372cd95964       |
   | created_at               | 2018-08-16T01:05:59.013257+00:00           |
   | updated_at               | None                                       |
   +--------------------------+--------------------------------------------+

A remote RAW Ceph storage backend will be used to back the nova local
ephemeral volumes:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local

************************
Unlocking a compute host
************************

On controller-0, use the system host-unlock command to unlock
compute-N:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0

Wait while compute-N is rebooted. Up to 10 minutes may be required
for a reboot, depending on hardware. The host is rebooted, and its
availability state is reported as in-test, followed by unlocked/enabled.

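To watch the availability state transition, the host-show command used
earlier can be filtered down to the relevant field:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-show compute-0 | grep availability
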
-------------------
System health check
-------------------

***********************
Listing StarlingX nodes
***********************

On controller-0, after a few minutes, all nodes shall be reported as
unlocked, enabled, and available:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
   | 3  | compute-0    | compute     | unlocked       | enabled     | available    |
   | 4  | compute-1    | compute     | unlocked       | enabled     | available    |
   | 5  | storage-0    | storage     | unlocked       | enabled     | available    |
   | 6  | storage-1    | storage     | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+

******************************
Checking StarlingX Ceph health
******************************

::

   [wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
       cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
        health HEALTH_OK
        monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.204:6789/0}
               election epoch 22, quorum 0,1,2 controller-0,controller-1,storage-0
        osdmap e84: 2 osds: 2 up, 2 in
               flags sortbitwise,require_jewel_osds
         pgmap v168: 1600 pgs, 5 pools, 0 bytes data, 0 objects
               87444 kB used, 197 GB / 197 GB avail
                   1600 active+clean
   controller-0:~$

*****************
System alarm list
*****************

When all nodes are unlocked, enabled, and available, check 'fm alarm-list'
for issues.
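
For example (on a healthy system the alarm list is empty):

::

   [wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list
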
Your StarlingX deployment is now up and running with 2x HA controllers with
Cinder storage, 2x computes, 2x storages, and all OpenStack services up and
running. You can now proceed with standard OpenStack APIs, CLIs and/or Horizon
to load Glance images, configure Nova Flavors, configure Neutron networks and
launch Nova virtual machines.
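
As a minimal illustration of that next step (a sketch only; the image file,
flavor name, network name, and server name below are hypothetical
placeholders, not part of this guide's procedure):

::

   [wrsroot@controller-0 ~(keystone_admin)]$ openstack image create --disk-format qcow2 --file cirros.img cirros
   [wrsroot@controller-0 ~(keystone_admin)]$ openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
   [wrsroot@controller-0 ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.tiny --network <tenant network> vm-1
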
----------------------
Deployment terminology
----------------------

.. include:: deployment_terminology.rst
   :start-after: incl-standard-controller-deployment-terminology:
   :end-before: incl-standard-controller-deployment-terminology-end:

.. include:: deployment_terminology.rst
   :start-after: incl-dedicated-storage-deployment-terminology:
   :end-before: incl-dedicated-storage-deployment-terminology-end:

.. include:: deployment_terminology.rst
   :start-after: incl-common-deployment-terminology:
   :end-before: incl-common-deployment-terminology-end:
@@ -1,119 +0,0 @@
.. _incl-simplex-deployment-terminology:

**All-in-one controller node**
    A single physical node that provides a controller function, compute
    function, and storage function.

.. _incl-simplex-deployment-terminology-end:


.. _incl-standard-controller-deployment-terminology:

**Controller node / function**
    A node that runs cloud control functions for managing cloud resources.

    - Runs all OpenStack control functions (e.g. managing images, virtual
      volumes, virtual networks, and virtual machines).
    - Can be part of a two-node HA control node cluster for running control
      functions either active/active or active/standby.

**Compute ( & network ) node / function**
    A node that hosts applications in virtual machines using compute resources
    such as CPU, memory, and disk.

    - Runs a virtual switch for realizing virtual networks.
    - Provides L3 routing and NET services.

.. _incl-standard-controller-deployment-terminology-end:


.. _incl-dedicated-storage-deployment-terminology:

**Storage node / function**
    A node that contains a set of disks (e.g. SATA, SAS, SSD, and/or NVMe).

    - Runs CEPH distributed storage software.
    - Part of an HA multi-node CEPH storage cluster supporting a replication
      factor of two or three, journal caching, and class tiering.
    - Provides HA persistent storage for images, virtual volumes
      (i.e. block storage), and object storage.

.. _incl-dedicated-storage-deployment-terminology-end:
.. _incl-common-deployment-terminology:

**OAM network**
    The network on which all external StarlingX platform APIs are exposed
    (i.e. REST APIs, Horizon web server, SSH, and SNMP), typically 1GE.

    Only controller type nodes are required to be connected to the OAM
    network.

**Management network**
    A private network (i.e. not connected externally), typically 10GE,
    used for the following:

    - Internal OpenStack / StarlingX monitoring and control.
    - VM I/O access to a storage cluster.

    All nodes are required to be connected to the management network.

**Data network(s)**
    Networks on which the OpenStack / Neutron provider networks are realized
    and become the VM tenant networks.

    Only compute type and all-in-one type nodes are required to be connected
    to the data network(s). These node types require one or more interface(s)
    on the data network(s).

**IPMI network**
    An optional network on which IPMI interfaces of all nodes are connected.
    The network must be reachable using L3/IP from the controller's OAM
    interfaces.

    You can optionally connect all node types to the IPMI network.

**PXEBoot network**
    An optional network for controllers to boot/install other nodes over the
    network.

    By default, controllers use the management network for boot/install of
    other nodes in the OpenStack cloud. If this optional network is used, all
    node types are required to be connected to the PXEBoot network.

    A PXEBoot network is required for a variety of special case situations:

    - Cases where the management network must be IPv6:

      - IPv6 does not support PXEBoot. Therefore, an IPv4 PXEBoot network
        must be configured.

    - Cases where the management network must be VLAN tagged:

      - Most servers' BIOS implementations do not support PXE booting over
        tagged networks. Therefore, you must configure an untagged PXEBoot
        network.

    - Cases where a management network must be shared across regions but
      individual regions' controllers want to only network boot/install nodes
      of their own region:

      - You must configure separate, per-region PXEBoot networks.

**Infra network**
    A deprecated optional network that was historically used for access to the
    storage cluster.

    If this optional network is used, all node types are required to be
    connected to the INFRA network.

**Node interfaces**
    All nodes' network interfaces can, in general, optionally be either:

    - Untagged single port.
    - Untagged two-port LAG, optionally split between redundant L2 switches
      running vPC (Virtual Port-Channel), also known as multichassis
      EtherChannel (MEC).
    - VLAN on either single-port ETH interface or two-port LAG interface.

.. _incl-common-deployment-terminology-end:
@@ -1,300 +0,0 @@
===========================
StarlingX R1.0 Installation
===========================

.. important::

   Significant changes in the underlying StarlingX infrastructure have occurred
   since the R1.0 release. Due to these changes, the R1.0 installation
   instructions may not work as described.

   Installation of the current :ref:`latest_release` is recommended.

This is the installation guide for the StarlingX R1.0 release. If this is not
the installation guide you want to use, see the :doc:`available installation
guides </deploy_install_guides/index>`.

------------
Introduction
------------

StarlingX may be installed in:

-  **Bare metal**: Real deployments of StarlingX are only supported on
   physical servers.
-  **Virtual environment**: Should only be used for evaluation or
   development purposes.

StarlingX installed in a virtual environment has two options:

- :doc:`Libvirt/QEMU <installation_libvirt_qemu>`
- VirtualBox

------------
Requirements
------------

Different use cases require different configurations.
**********
Bare metal
**********

The minimum requirements for the physical servers where StarlingX is
deployed include:

-  **Controller hosts**

   -  Minimum processor:

      -  Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
         cores/socket

   -  Minimum memory: 64 GB
   -  Hard drives:

      -  Primary hard drive, minimum 500 GB for OS and system databases.
      -  Secondary hard drive, minimum 500 GB for persistent VM storage.

   -  2 physical Ethernet interfaces: OAM and MGMT network.
   -  USB boot support.
   -  PXE boot support.

-  **Storage hosts**

   -  Minimum processor:

      -  Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
         cores/socket.

   -  Minimum memory: 64 GB.
   -  Hard drives:

      -  Primary hard drive, minimum 500 GB for OS.
      -  1 or more additional hard drives for CEPH OSD storage, and
      -  Optionally 1 or more SSD or NVMe drives for CEPH journals.

   -  1 physical Ethernet interface: MGMT network.
   -  PXE boot support.

-  **Compute hosts**

   -  Minimum processor:

      -  Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8
         cores/socket.

   -  Minimum memory: 32 GB.
   -  Hard drives:

      -  Primary hard drive, minimum 500 GB for OS.
      -  1 or more additional hard drives for ephemeral VM storage.

   -  2 or more physical Ethernet interfaces: MGMT network and 1 or more
      provider networks.
   -  PXE boot support.

-  **All-In-One Simplex or Duplex, controller + compute hosts**

   -  Minimum processor:

      -  Typical hardware form factor:

         - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket

      -  Low cost / low power hardware form factor:

         - Single-CPU Intel Xeon D-15xx family, 8 cores

   -  Minimum memory: 64 GB.
   -  Hard drives:

      -  Primary hard drive, minimum 500 GB SSD or NVMe.
      -  0 or more 500 GB disks (min. 10K RPM).

   -  Network ports:

      **NOTE:** Duplex and Simplex configurations require one or more data
      ports. The Duplex configuration requires a management port.

      - Management: 10GE (Duplex only)
      - OAM: 10GE
      - Data: n x 10GE

The recommended minimum requirements for the physical servers are
described later in each StarlingX deployment guide.
^^^^^^^^^^^^^^^^^^^^^^^^
NVMe drive as boot drive
^^^^^^^^^^^^^^^^^^^^^^^^

To use a Non-Volatile Memory Express (NVMe) drive as the boot drive for any of
your nodes, you must configure your host and adjust kernel parameters during
installation:

- Configure the host to be in UEFI mode.
- Edit the kernel boot parameter. After you are presented with the StarlingX
  ISO boot options and after you have selected the preferred installation option
  (e.g. Standard Configuration / All-in-One Controller Configuration), press the
  TAB key to edit the kernel boot parameters. Modify **boot_device** and
  **rootfs_device** from the default **sda** to the correct device name for
  the NVMe drive (e.g. "nvme0n1").

  ::

     vmlinuz rootwait console=tty0 inst.text inst.stage2=hd:LABEL=oe_iso_boot
     inst.ks=hd:LABEL=oe_iso_boot:/smallsystem_ks.cfg boot_device=nvme0n1
     rootfs_device=nvme0n1 biosdevname=0 usbcore.autosuspend=-1 inst.gpt
     security_profile=standard user_namespace.enable=1 initrd=initrd.img
*******************
Virtual environment
*******************

The recommended minimum requirements for the workstation hosting the
virtual machine(s) where StarlingX will be deployed include:

^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^

A workstation computer with:

-  Processor: x86_64 is the only supported architecture; hardware
   virtualization extensions must be enabled in the BIOS
-  Cores: 8 (4 with careful monitoring of CPU load)
-  Memory: At least 32GB RAM
-  Hard disk: 500GB HDD
-  Network: Two network adapters with active Internet connection

^^^^^^^^^^^^^^^^^^^^^
Software requirements
^^^^^^^^^^^^^^^^^^^^^

A workstation computer with:

-  Operating system: Freshly installed Ubuntu 16.04 LTS 64-bit
-  Proxy settings configured (if applicable)
-  Git
-  KVM/VirtManager
-  Libvirt library
-  QEMU full-system emulation binaries
-  stx-tools project
-  StarlingX ISO image
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Deployment environment setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section describes how to set up the workstation computer that will
host the virtual machine(s) where StarlingX will be deployed.

''''''''''''''''''''''''''''''
Updating your operating system
''''''''''''''''''''''''''''''

Before proceeding with the build, ensure your OS is up to date. You'll
first need to update the local database of available packages:

::

   $ sudo apt-get update
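
Then upgrade any outdated packages (a suggested follow-up step to finish
bringing the OS up to date; review the proposed changes before confirming):

::

   $ sudo apt-get upgrade
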
'''''''''''''''''''''''''
Install stx-tools project
'''''''''''''''''''''''''

Clone the stx-tools project. Usually you'll want to clone it under your
user's home directory.

::

   $ cd $HOME
   $ git clone https://git.starlingx.io/stx-tools
''''''''''''''''''''''''''''''''''''''''
Installing requirements and dependencies
''''''''''''''''''''''''''''''''''''''''

Navigate to the stx-tools installation libvirt directory:

::

   $ cd $HOME/stx-tools/deployment/libvirt/

Install the required packages:

::

   $ bash install_packages.sh
''''''''''''''''''
Disabling firewall
''''''''''''''''''

Unload the firewall and disable it on boot:

::

   $ sudo ufw disable
   Firewall stopped and disabled on system startup
   $ sudo ufw status
   Status: inactive
-------------------------------
Getting the StarlingX ISO image
-------------------------------

Follow the instructions from the :doc:`/developer_resources/build_guide` to build a
StarlingX ISO image.

**********
Bare metal
**********

A bootable USB flash drive containing the StarlingX ISO image is required.
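
To prepare the flash drive, write the ISO to the USB device (an illustrative
sketch; /dev/sdX is a placeholder for your actual USB device, and dd
overwrites everything on that device):

::

   $ sudo dd if=<starlingx iso image> of=/dev/sdX bs=4M status=progress oflag=sync
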
*******************
Virtual environment
*******************

Copy the StarlingX ISO image to the stx-tools deployment libvirt project
directory:

::

   $ cp <starlingx iso image> $HOME/stx-tools/deployment/libvirt/
------------------
Deployment options
------------------

-  Standard controller

   .. toctree::
      :maxdepth: 1

      controller_storage
      dedicated_storage

-  All-in-one

   .. toctree::
      :maxdepth: 1

      simplex
      duplex


.. toctree::
   :hidden:

   installation_libvirt_qemu
@@ -1,204 +0,0 @@
==============================
Installation libvirt qemu R1.0
==============================

Installation for StarlingX R1.0 using Libvirt/QEMU virtualization.

---------------------
Hardware requirements
---------------------

A workstation computer with:

-  Processor: x86_64 is the only supported architecture; hardware
   virtualization extensions must be enabled in the BIOS
-  Memory: At least 32GB RAM
-  Hard disk: 500GB HDD
-  Network: One network adapter with active Internet connection

---------------------
Software requirements
---------------------

A workstation computer with:

-  Operating system: This process is known to work on Ubuntu 16.04 and
   is likely to work on other Linux distributions with some appropriate
   adjustments.
-  Proxy settings configured (if applicable)
-  Git
-  KVM/VirtManager
-  Libvirt library
-  QEMU full-system emulation binaries
-  stx-tools project
-  StarlingX ISO image

----------------------------
Deployment environment setup
----------------------------
*************
Configuration
*************

These scripts are configured using environment variables that all have
built-in defaults. On shared systems you probably do not want to use the
defaults. The simplest way to handle this is to keep an rc file that can
be sourced into an interactive shell to configure everything. Here's
an example called stxcloud.rc:

::

   export CONTROLLER=stxcloud
   export COMPUTE=stxnode
   export STORAGE=stxstorage
   export BRIDGE_INTERFACE=stxbr
   export INTERNAL_NETWORK=172.30.20.0/24
   export INTERNAL_IP=172.30.20.1/24
   export EXTERNAL_NETWORK=192.168.20.0/24
   export EXTERNAL_IP=192.168.20.1/24

This rc file shows the defaults baked into the scripts:

::

   export CONTROLLER=controller
   export COMPUTE=compute
   export STORAGE=storage
   export BRIDGE_INTERFACE=stxbr
   export INTERNAL_NETWORK=10.10.10.0/24
   export INTERNAL_IP=10.10.10.1/24
   export EXTERNAL_NETWORK=192.168.204.0/24
   export EXTERNAL_IP=192.168.204.1/24
*************************
Install stx-tools project
*************************

Clone the stx-tools project into a working directory.

::

   git clone https://git.openstack.org/openstack/stx-tools.git

It is convenient to set up a shortcut to the deployment script
directory:

::

   SCRIPTS=$(pwd)/stx-tools/deployment/libvirt

If you created a configuration, load it from stxcloud.rc:

::

   source stxcloud.rc
****************************************
Installing requirements and dependencies
****************************************

Install the required packages and configure QEMU. This only needs to be
done once per host. (NOTE: this script only knows about Ubuntu at this
time.)

::

   $SCRIPTS/install_packages.sh
******************
Disabling firewall
******************

Unload the firewall and disable it on boot:

::

   sudo ufw disable
   sudo ufw status
******************
Configure networks
******************

Configure the network bridges using setup_network.sh before doing
anything else. It will create 4 bridges named stxbr1, stxbr2, stxbr3 and
stxbr4. Set the BRIDGE_INTERFACE environment variable if you need to
change stxbr to something unique.

::

   $SCRIPTS/setup_network.sh

The destroy_network.sh script does the reverse, and should not be used
lightly. It should also only be used after all of the VMs created below
have been destroyed.

There is also a script, cleanup_network.sh, that will remove networking
configuration from libvirt.
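
As a quick sanity check (assuming the default stxbr bridge prefix), confirm
that the four bridges now exist:

::

   ip link show | grep stxbr
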
*********************
Configure controllers
*********************

One script exists for building different StarlingX cloud configurations:
setup_configuration.sh.

The script selects the cloud configuration with the -c option:

- simplex
- duplex
- controllerstorage
- dedicatedstorage

You need an ISO file for the installation; the script takes a file name
with the -i option:

::

   $SCRIPTS/setup_configuration.sh -c <cloud configuration> -i <starlingx iso image>

The setup will then begin. The scripts create one or more VMs and start
the boot of the first controller, named, oddly enough, ``controller-0``.
If you have X windows available, virt-manager will start. If not, and the
attempt does not return to a shell prompt, press Ctrl-C. Then connect to
the serial console:

::

   virsh console controller-0

Continue the usual StarlingX installation from this point forward.

Tear down the VMs using destroy_configuration.sh.

::

   $SCRIPTS/destroy_configuration.sh -c <cloud configuration>
--------
Continue
--------

Pick up the installation in one of the existing guides at the initializing
controller-0 step.

-  Standard controller

   - :doc:`StarlingX Cloud with Dedicated Storage Virtual Environment <dedicated_storage>`
   - :doc:`StarlingX Cloud with Controller Storage Virtual Environment <controller_storage>`

-  All-in-one

   - :doc:`StarlingX Cloud Duplex Virtual Environment <duplex>`
   - :doc:`StarlingX Cloud Simplex Virtual Environment <simplex>`
@@ -1,748 +0,0 @@
=======================
All-in-one Simplex R1.0
=======================

.. contents::
   :local:
   :depth: 1

**NOTE:** The instructions to set up a StarlingX One Node Configuration
(AIO-SX) system with containerized OpenStack services in this guide are
under development. For approved instructions, see the
`One Node Configuration wiki page <https://wiki.openstack.org/wiki/StarlingX/Containers/Installation>`__.
----------------------
Deployment description
----------------------

The All-In-One Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single physical server. With
these cloud functions, multiple application types can be deployed and
consolidated onto a single physical server. For example, with an AIO-SX
deployment you can do the following:

- Consolidate legacy applications that must run standalone on a server by using
  multiple virtual machines on a single physical server.
- Consolidate legacy applications that run on different operating systems or
  different distributions of operating systems by using multiple virtual
  machines on a single physical server.

Only a small amount of cloud processing / storage power is required with an
All-In-One Simplex deployment.

.. figure:: figures/starlingx-deployment-options-simplex.png
   :scale: 50%
   :alt: All-In-One Simplex deployment configuration

   *All-In-One Simplex deployment configuration*

An All-In-One Simplex deployment provides no protection against an overall
server hardware fault. Protection against overall server hardware faults is
either not required, or is provided at a higher level. Hardware component
protection can be enabled by, for example, using HW RAID or 2x port LAG in
the deployment.
--------------------------------------
Preparing an All-In-One Simplex server
--------------------------------------

**********
Bare metal
**********

Required server:

-  Combined server (controller + compute): 1

^^^^^^^^^^^^^^^^^^^^^
Hardware requirements
^^^^^^^^^^^^^^^^^^^^^

The recommended minimum requirements for the physical servers where
All-In-One Simplex is deployed are as follows:

-  Minimum processor:

   -  Typical hardware form factor:

      - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket

   -  Low cost / low power hardware form factor:

      - Single-CPU Intel Xeon D-15xx family, 8 cores

-  Memory: 64 GB
-  BIOS:

   -  Hyper-Threading technology: Enabled
   -  Virtualization technology: Enabled
   -  VT for directed I/O: Enabled
   -  CPU power and performance policy: Performance
   -  CPU C state control: Disabled
   -  Plug & play BMC detection: Disabled

-  Primary disk:

   -  500 GB SSD or NVMe

-  Additional disks:

   -  Zero or more 500 GB disks (min. 10K RPM)

-  Network ports:

   **NOTE:** The All-In-One Simplex configuration requires one or more data
   ports. This configuration does not require a management port.

   -  OAM: 10GE
   -  Data: n x 10GE
*******************
Virtual environment
*******************

Run the libvirt QEMU setup scripts to set up virtualized OAM and
management networks:

::

   $ bash setup_network.sh

Build the XML definitions for the virtual servers:

::

   $ bash setup_configuration.sh -c simplex -i <starlingx iso image>

The default XML server definition created by the previous script is as follows:

- simplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^
Power up a virtual server
^^^^^^^^^^^^^^^^^^^^^^^^^

To power up the virtual server, run the following command:

::

    $ sudo virsh start <server-xml-name>

Here is an example:

::

    $ sudo virsh start simplex-controller-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access a virtual server console
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The XML definitions for virtual servers in the stx-tools repo, under
deployment/libvirt, provide both graphical and text consoles.
Follow these steps to access a virtual server console:

#. Access the graphical console in virt-manager by right-clicking on the
   domain (i.e. the server) and selecting "Open".

#. Access the textual console using the command "virsh console $DOMAIN",
   where DOMAIN is the name of the server shown in virsh (see the example
   after this list).

#. When booting controller-0 for the first time, both the serial and
   graphical consoles present the initial configuration menu for the
   cluster. You can select the serial or graphical console for controller-0.
   However, for the other nodes, you can only use the serial console
   regardless of the selected option.

#. Open the graphic console on all servers before powering them on to
   observe the boot device selection and PXE boot progress. Run the "virsh
   console $DOMAIN" command promptly after powering up to see the initial boot
   sequence that follows the boot device selection. The sequence is visible
   for only a few seconds.
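
For example, to list the defined domains and attach to the text console of
the controller (the domain name comes from the XML definitions created
above):

::

   $ sudo virsh list --all
   $ sudo virsh console simplex-controller-0
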
------------------------------
Installing the controller host
------------------------------

Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes controller-0.

The general procedure is as follows:

#. Be sure the StarlingX ISO is on a USB device, plug it into the USB port
   of the server that will be controller-0, and then power on the server.

#. Configure the controller using the config_controller script.

*************************
Initializing controller-0
*************************

This section describes how to initialize StarlingX on host controller-0.
Except where noted, you must execute all the commands from a console of
the host.

#. Be sure the StarlingX ISO is on a USB device and it is plugged into
   the USB port of the server that will be controller-0.

#. Power on the server.
#. Wait for the console to show the StarlingX ISO booting options:

   - **All-in-one Controller Configuration**

     - When the installer is loaded and the installer welcome screen
       appears on the controller-0 host, select "All-in-one Controller
       Configuration" as the type of installation.

   - **Graphical Console**

     - Select "Graphical Console" as the console to use during
       installation.

   - **Standard Security Boot Profile**

     - Select "Standard Security Boot Profile" as the Security Profile.

#. Monitor the initialization. When the installation is complete, a reboot
   is initiated on the controller-0 host. The GNU GRUB screen displays
   briefly, and then the system boots automatically into the StarlingX image.

#. Log into controller-0 as user wrsroot and use wrsroot as the password. The
   first time you log in as wrsroot, you are required to change your
   password. Enter the current password (i.e. wrsroot):

   ::

      Changing password for wrsroot.
      (current) UNIX Password:

#. Enter a new password for the wrsroot account:

   ::

      New password:

#. Enter the new password again to confirm it:

   ::

      Retype new password:

#. Controller-0 is now initialized with StarlingX and is ready for
   configuration.
************************
Configuring controller-0
************************

This section describes how to interactively configure controller-0
to bootstrap the system with minimal critical data.
Except where noted, you must execute all commands from the console
of the active controller (i.e. controller-0).

When run interactively, the config_controller script presents a series
of prompts for the initial configuration of StarlingX:

-  For the virtual environment, you can accept all the default values
   immediately after "system date and time".
-  For a physical deployment, answer the bootstrap configuration
   questions with answers applicable to your particular physical setup.

The script configures the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area.

Follow this procedure to interactively configure controller-0:
#. Start the script with no parameters:

   ::

      controller-0:~$ sudo config_controller
      System Configuration
      ================
      Enter ! at any prompt to abort...
      ...

#. Select [y] for System date and time:

   ::

      System date and time:
      -----------------------------

      Is the current date and time correct?  [y/N]: y

#. For System mode choose "simplex":

   ::

      ...
      1) duplex-direct: two node-redundant configuration. Management and
      infrastructure networks are directly connected to peer ports
      2) duplex - two node redundant configuration
      3) simplex - single node non-redundant configuration
      System mode [duplex-direct]: 3

#. After System date and time and System mode:

   ::

      Applying configuration (this will take several minutes):

      01/08: Creating bootstrap configuration ... DONE
      02/08: Applying bootstrap manifest ... DONE
      03/08: Persisting local configuration ... DONE
      04/08: Populating initial system inventory ... DONE
      05:08: Creating system configuration ... DONE
      06:08: Applying controller manifest ... DONE
      07:08: Finalize controller configuration ... DONE
      08:08: Waiting for service activation ... DONE

      Configuration was applied

      Please complete any out of service commissioning steps with system
      commands and unlock controller to proceed.

#. After config_controller bootstrap configuration, the REST API, CLI, and
   Horizon interfaces are enabled on the controller-0 OAM IP address. The
   remaining installation instructions use the CLI.
--------------------------------
Provisioning the controller host
--------------------------------

On controller-0, acquire Keystone administrative privileges:

::

   controller-0:~$ source /etc/nova/openrc

*********************************************
Configuring provider networks at installation
*********************************************

Set up one provider network of the vlan type and name it providernet-a:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
   [wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
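
To verify the result, you can list the provider networks (this assumes the
matching list command is available in this StarlingX-extended Neutron build):

::

   [wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-list
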
*****************************************
Providing data interfaces on controller-0
*****************************************

Follow these steps:

#. List all interfaces:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-0
      +--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
      | uuid                                 | name    | class    |...| vlan | ports        | uses | used by | attributes |..
      |                                      |         |          |...| id   |              | i/f  | i/f     |            |..
      +--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
      | 49fd8938-e76f-49f1-879e-83c431a9f1af | enp0s3  | platform |...| None | [u'enp0s3']  | []   | []      | MTU=1500   |..
      | 8957bb2c-fec3-4e5d-b4ed-78071f9f781c | eth1000 | None     |...| None | [u'eth1000'] | []   | []      | MTU=1500   |..
      | bf6f4cad-1022-4dd7-962b-4d7c47d16d54 | eth1001 | None     |...| None | [u'eth1001'] | []   | []      | MTU=1500   |..
      | f59b9469-7702-4b46-bad5-683b95f0a1cb | enp0s8  | platform |...| None | [u'enp0s8']  | []   | []      | MTU=1500   |..
      +--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..

#. Configure the data interfaces:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a
      +------------------+--------------------------------------+
      | Property         | Value                                |
      +------------------+--------------------------------------+
      | ifname           | eth1000                              |
      | iftype           | ethernet                             |
      | ports            | [u'eth1000']                         |
      | providernetworks | providernet-a                        |
      | imac             | 08:00:27:c4:ad:3e                    |
      | imtu             | 1500                                 |
      | ifclass          | data                                 |
      | aemode           | None                                 |
      | schedpolicy      | None                                 |
      | txhashpolicy     | None                                 |
      | uuid             | 8957bb2c-fec3-4e5d-b4ed-78071f9f781c |
      | ihost_uuid       | 9c332b27-6f22-433b-bf51-396371ac4608 |
      | vlan_id          | None                                 |
      | uses             | []                                   |
      | used_by          | []                                   |
      | created_at       | 2018-08-28T12:50:51.820151+00:00     |
      | updated_at       | 2018-08-28T14:46:18.333109+00:00     |
      | sriov_numvfs     | 0                                    |
      | ipv4_mode        | disabled                             |
      | ipv6_mode        | disabled                             |
      | accelerated      | [True]                               |
      +------------------+--------------------------------------+
*************************************
Configuring Cinder on controller disk
*************************************

Follow these steps:

#. Review the available disk space and capacity and obtain the UUID of the
   physical disk:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
      +--------------------------------------+-----------+---------+---------+---------+------------+...
      | uuid                                 | device_no | device_ | device_ | size_mi | available_ |...
      |                                      | de        | num     | type    | b       | mib        |...
      +--------------------------------------+-----------+---------+---------+---------+------------+...
      | 6b42c9dc-f7c0-42f1-a410-6576f5f069f1 | /dev/sda  | 2048    | HDD     | 600000  | 434072     |...
      | 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 | /dev/sdb  | 2064    | HDD     | 16240   | 16237      |...
      | 146195b2-f3d7-42f9-935d-057a53736929 | /dev/sdc  | 2080    | HDD     | 16240   | 16237      |...
      +--------------------------------------+-----------+---------+---------+---------+------------+...

#. Create the 'cinder-volumes' local volume group:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
      +-----------------+--------------------------------------+
      | lvm_vg_name     | cinder-volumes                       |
      | vg_state        | adding                               |
      | uuid            | 61cb5cd2-171e-4ef7-8228-915d3560cdc3 |
      | ihost_uuid      | 9c332b27-6f22-433b-bf51-396371ac4608 |
      | lvm_vg_access   | None                                 |
      | lvm_max_lv      | 0                                    |
      | lvm_cur_lv      | 0                                    |
      | lvm_max_pv      | 0                                    |
      | lvm_cur_pv      | 0                                    |
      | lvm_vg_size     | 0.00                                 |
      | lvm_vg_total_pe | 0                                    |
      | lvm_vg_free_pe  | 0                                    |
      | created_at      | 2018-08-28T13:45:20.218905+00:00     |
      | updated_at      | None                                 |
      | parameters      | {u'lvm_type': u'thin'}               |
      +-----------------+--------------------------------------+
#. Create a disk partition to add to the volume group:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 16237 -t lvm_phys_vol
      +-------------+--------------------------------------------------+
      | Property    | Value                                            |
      +-------------+--------------------------------------------------+
      | device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part1 |
      | device_node | /dev/sdb1                                        |
      | type_guid   | ba5eba11-0000-1111-2222-000000000001             |
      | type_name   | None                                             |
      | start_mib   | None                                             |
      | end_mib     | None                                             |
      | size_mib    | 16237                                            |
      | uuid        | 0494615f-bd79-4490-84b9-dcebbe5f377a             |
      | ihost_uuid  | 9c332b27-6f22-433b-bf51-396371ac4608             |
      | idisk_uuid  | 534352d8-fec2-4ca5-bda7-0e0abe5a8e17             |
      | ipv_uuid    | None                                             |
      | status      | Creating                                         |
      | created_at  | 2018-08-28T13:45:48.512226+00:00                 |
      | updated_at  | None                                             |
      +-------------+--------------------------------------------------+

#. Wait for the new partition to be created (i.e. status=Ready):

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 534352d8-fec2-4ca5-bda7-0e0abe5a8e17
      +--------------------------------------+...+------------+...+---------------------+----------+--------+
      | uuid                                 |...| device_nod |...| type_name           | size_mib | status |
      |                                      |...| e          |...|                     |          |        |
      +--------------------------------------+...+------------+...+---------------------+----------+--------+
      | 0494615f-bd79-4490-84b9-dcebbe5f377a |...| /dev/sdb1  |...| LVM Physical Volume | 16237    | Ready  |
      +--------------------------------------+...+------------+...+---------------------+----------+--------+

#. Add the partition to the volume group:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 0494615f-bd79-4490-84b9-dcebbe5f377a
      +--------------------------+--------------------------------------------------+
      | Property                 | Value                                            |
      +--------------------------+--------------------------------------------------+
      | uuid                     | 9a0ad568-0ace-4d57-9e03-e7a63f609cf2             |
      | pv_state                 | adding                                           |
      | pv_type                  | partition                                        |
      | disk_or_part_uuid        | 0494615f-bd79-4490-84b9-dcebbe5f377a             |
      | disk_or_part_device_node | /dev/sdb1                                        |
      | disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part1 |
      | lvm_pv_name              | /dev/sdb1                                        |
      | lvm_vg_name              | cinder-volumes                                   |
      | lvm_pv_uuid              | None                                             |
      | lvm_pv_size              | 0                                                |
      | lvm_pe_total             | 0                                                |
      | lvm_pe_alloced           | 0                                                |
      | ihost_uuid               | 9c332b27-6f22-433b-bf51-396371ac4608             |
      | created_at               | 2018-08-28T13:47:39.450763+00:00                 |
      | updated_at               | None                                             |
      +--------------------------+--------------------------------------------------+
*********************************************
Adding an LVM storage backend at installation
*********************************************

Follow these steps:

#. Ensure requirements are met to add LVM storage:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder

      WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.

      By confirming this operation, the LVM backend will be created.

      Please refer to the system admin guide for minimum spec for LVM
      storage. Set the 'confirmed' field to execute this operation
      for the lvm backend.

#. Add the LVM storage backend:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed

      System configuration has changed.
      Please follow the administrator guide to complete configuring the system.

      +--------------------------------------+------------+---------+-------------+...+----------+--------------+
      | uuid                                 | name       | backend | state       |...| services | capabilities |
      +--------------------------------------+------------+---------+-------------+...+----------+--------------+
      | 6d750a68-115a-4c26-adf4-58d6e358a00d | file-store | file    | configured  |...| glance   | {}           |
      | e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store  | lvm     | configuring |...| cinder   | {}           |
      +--------------------------------------+------------+---------+-------------+...+----------+--------------+

#. Wait for the LVM storage backend to be configured (i.e. state=configured):

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
      +--------------------------------------+------------+---------+------------+------+----------+--------------+
      | uuid                                 | name       | backend | state      | task | services | capabilities |
      +--------------------------------------+------------+---------+------------+------+----------+--------------+
      | 6d750a68-115a-4c26-adf4-58d6e358a00d | file-store | file    | configured | None | glance   | {}           |
      | e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store  | lvm     | configured | None | cinder   | {}           |
      +--------------------------------------+------------+---------+------------+------+----------+--------------+

***********************************************
Configuring VM local storage on controller disk
***********************************************

Follow these steps:

#. Review the available disk space and capacity and obtain the uuid of the
   physical disk:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
      +--------------------------------------+-----------+---------+---------+---------+------------+...
      | uuid                                 | device_no | device_ | device_ | size_mi | available_ |...
      |                                      | de        | num     | type    | b       | mib        |...
      +--------------------------------------+-----------+---------+---------+---------+------------+...
      | 6b42c9dc-f7c0-42f1-a410-6576f5f069f1 | /dev/sda  | 2048    | HDD     | 600000  | 434072     |...
      |                                      |           |         |         |         |            |...
      |                                      |           |         |         |         |            |...
      | 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 | /dev/sdb  | 2064    | HDD     | 16240   | 0          |...
      |                                      |           |         |         |         |            |...
      |                                      |           |         |         |         |            |...
      | 146195b2-f3d7-42f9-935d-057a53736929 | /dev/sdc  | 2080    | HDD     | 16240   | 16237      |...
      |                                      |           |         |         |         |            |...
      |                                      |           |         |         |         |            |...
      +--------------------------------------+-----------+---------+---------+---------+------------+...

#. Create the 'nova-local' volume group:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 nova-local
      +-----------------+-------------------------------------------------------------------+
      | Property        | Value                                                             |
      +-----------------+-------------------------------------------------------------------+
      | lvm_vg_name     | nova-local                                                        |
      | vg_state        | adding                                                            |
      | uuid            | 517d313e-8aa0-4b4d-92e6-774b9085f336                              |
      | ihost_uuid      | 9c332b27-6f22-433b-bf51-396371ac4608                              |
      | lvm_vg_access   | None                                                              |
      | lvm_max_lv      | 0                                                                 |
      | lvm_cur_lv      | 0                                                                 |
      | lvm_max_pv      | 0                                                                 |
      | lvm_cur_pv      | 0                                                                 |
      | lvm_vg_size     | 0.00                                                              |
      | lvm_vg_total_pe | 0                                                                 |
      | lvm_vg_free_pe  | 0                                                                 |
      | created_at      | 2018-08-28T14:02:58.486716+00:00                                  |
      | updated_at      | None                                                              |
      | parameters      | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
      +-----------------+-------------------------------------------------------------------+

#. Create a disk partition to add to the volume group:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 146195b2-f3d7-42f9-935d-057a53736929 16237 -t lvm_phys_vol
      +-------------+--------------------------------------------------+
      | Property    | Value                                            |
      +-------------+--------------------------------------------------+
      | device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1 |
      | device_node | /dev/sdc1                                        |
      | type_guid   | ba5eba11-0000-1111-2222-000000000001             |
      | type_name   | None                                             |
      | start_mib   | None                                             |
      | end_mib     | None                                             |
      | size_mib    | 16237                                            |
      | uuid        | 009ce3b1-ed07-46e9-9560-9d2371676748             |
      | ihost_uuid  | 9c332b27-6f22-433b-bf51-396371ac4608             |
      | idisk_uuid  | 146195b2-f3d7-42f9-935d-057a53736929             |
      | ipv_uuid    | None                                             |
      | status      | Creating                                         |
      | created_at  | 2018-08-28T14:04:29.714030+00:00                 |
      | updated_at  | None                                             |
      +-------------+--------------------------------------------------+

#. Wait for the new partition to be created (i.e. status=Ready):

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 146195b2-f3d7-42f9-935d-057a53736929
      +--------------------------------------+...+------------+...+---------------------+----------+--------+
      | uuid                                 |...| device_nod |...| type_name           | size_mib | status |
      |                                      |...| e          |...|                     |          |        |
      +--------------------------------------+...+------------+...+---------------------+----------+--------+
      | 009ce3b1-ed07-46e9-9560-9d2371676748 |...| /dev/sdc1  |...| LVM Physical Volume | 16237    | Ready  |
      |                                      |...|            |...|                     |          |        |
      |                                      |...|            |...|                     |          |        |
      +--------------------------------------+...+------------+...+---------------------+----------+--------+

#. Add the partition to the volume group:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 nova-local 009ce3b1-ed07-46e9-9560-9d2371676748
      +--------------------------+--------------------------------------------------+
      | Property                 | Value                                            |
      +--------------------------+--------------------------------------------------+
      | uuid                     | 830c9dc8-c71a-4cb2-83be-c4d955ef4f6b             |
      | pv_state                 | adding                                           |
      | pv_type                  | partition                                        |
      | disk_or_part_uuid        | 009ce3b1-ed07-46e9-9560-9d2371676748             |
      | disk_or_part_device_node | /dev/sdc1                                        |
      | disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1 |
      | lvm_pv_name              | /dev/sdc1                                        |
      | lvm_vg_name              | nova-local                                       |
      | lvm_pv_uuid              | None                                             |
      | lvm_pv_size              | 0                                                |
      | lvm_pe_total             | 0                                                |
      | lvm_pe_alloced           | 0                                                |
      | ihost_uuid               | 9c332b27-6f22-433b-bf51-396371ac4608             |
      | created_at               | 2018-08-28T14:06:05.705546+00:00                 |
      | updated_at               | None                                             |
      +--------------------------+--------------------------------------------------+

**********************
Unlocking controller-0
**********************

You must unlock controller-0 so that you can use it to install
controller-1. Use the :command:`system host-unlock` command:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0

The host reboots. During the reboot, the command line is
unavailable and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.

****************************************
Verifying the controller-0 configuration
****************************************

Follow these steps:

#. On controller-0, acquire Keystone administrative privileges:

   ::

      controller-0:~$ source /etc/nova/openrc

#. Verify that the controller-0 services are running:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system service-list
      +-----+-------------------------------+--------------+----------------+
      | id  | service_name                  | hostname     | state          |
      +-----+-------------------------------+--------------+----------------+
      ...
      | 1   | oam-ip                        | controller-0 | enabled-active |
      | 2   | management-ip                 | controller-0 | enabled-active |
      ...
      +-----+-------------------------------+--------------+----------------+

#. Verify that controller-0 has controller and compute subfunctions:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-show 1 | grep subfunctions
      | subfunctions        | controller,compute                         |

#. Verify that controller-0 is unlocked, enabled, and available:

   ::

      [wrsroot@controller-0 ~(keystone_admin)]$ system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      +----+--------------+-------------+----------------+-------------+--------------+

*****************
System alarm list
*****************

When all nodes are unlocked, enabled, and available, check
:command:`fm alarm-list` for issues.

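For example, run from the same Keystone admin session used in the steps above;
a healthy system reports no alarms:

::

   [wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list
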
Your StarlingX deployment is now up and running with one controller with Cinder
storage and all OpenStack services. You can now proceed with standard OpenStack
APIs, CLIs and/or Horizon to load Glance images, configure Nova Flavors,
configure Neutron networks, and launch Nova virtual machines.

----------------------
Deployment terminology
----------------------

.. include:: deployment_terminology.rst
   :start-after: incl-simplex-deployment-terminology:
   :end-before: incl-simplex-deployment-terminology-end:

.. include:: deployment_terminology.rst
   :start-after: incl-standard-controller-deployment-terminology:
   :end-before: incl-standard-controller-deployment-terminology-end:

.. include:: deployment_terminology.rst
   :start-after: incl-common-deployment-terminology:
   :end-before: incl-common-deployment-terminology-end:
@@ -1,246 +0,0 @@
================================
Ansible Bootstrap Configurations
================================

This section describes additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios.

.. contents::
   :local:
   :depth: 1

----
IPv6
----

If you are using IPv6, provide IPv6 configuration overrides for the Ansible
bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be
updated to IPv6 addressing.

Example IPv6 override values are shown below:

::

   dns_servers:
   - 2001:4860:4860::8888
   - 2001:4860:4860::8844
   pxeboot_subnet: 169.254.202.0/24
   management_subnet: 2001:db8:2::/64
   cluster_host_subnet: 2001:db8:3::/64
   cluster_pod_subnet: 2001:db8:4::/64
   cluster_service_subnet: 2001:db8:4::/112
   external_oam_subnet: 2001:db8:1::/64
   external_oam_gateway_address: 2001:db8::1
   external_oam_floating_address: 2001:db8::2
   external_oam_node_0_address: 2001:db8::3
   external_oam_node_1_address: 2001:db8::4
   management_multicast_subnet: ff08::1:1:0/124

.. note::

   The `external_oam_node_0_address` and `external_oam_node_1_address` parameters
   are not required for the AIO-SX installation.

----------------
Private registry
----------------

Bootstrapping StarlingX requires pulling container images for multiple system
services. By default these container images are pulled from the public
registries k8s.gcr.io, gcr.io, quay.io, and docker.io.

It may be required (or desired) to copy the container images to a private
registry and pull the images from the private registry (instead of the public
registries) as part of the StarlingX bootstrap. For example, a private registry
would be required if a StarlingX system was deployed in an air-gapped network
environment.

Use the `docker_registries` structure in the bootstrap overrides file to specify
alternate registries for the public registries from which container images are
pulled. These alternate registries are used during the bootstrapping of
controller-0, and on :command:`system application-apply` of application packages.

The `docker_registries` structure is a map of public registries to the
alternate registry values for each public registry. Each key is the fully
scoped registry name of a public registry (for example "k8s.gcr.io"), and its
value supplies the alternate registry URL and username/password (if
authenticated).

url
   The fully scoped registry name (and optionally namespace/) for the alternate
   registry location from which the images associated with this public registry
   should now be pulled.

   Valid formats for the `url` value are:

   * Domain. For example:

     ::

       example.domain

   * Domain with port. For example:

     ::

       example.domain:5000

   * IPv4 address. For example:

     ::

       1.2.3.4

   * IPv4 address with port. For example:

     ::

       1.2.3.4:5000

   * IPv6 address. For example:

     ::

       FD01::0100

   * IPv6 address with port. For example:

     ::

       [FD01::0100]:5000

username
   The username for logging into the alternate registry, if authenticated.

password
   The password for logging into the alternate registry, if authenticated.

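For example, a sketch of a per-registry override, assuming a hypothetical
authenticated mirror of docker.io (the URL and credentials are placeholders):

::

   docker_registries:
     docker.io:
       # hypothetical mirror; replace with your registry and credentials
       url: my.mirror.example.com/docker.io
       username: mymirroruser
       password: mymirrorP@ssw0rd
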
Additional configuration options in the `docker_registries` structure are:

unified
   A special public registry key which, if defined, specifies that images
   from all public registries should be retrieved from this single source.
   Alternate registry values, if specified, are ignored. The `unified` key
   supports the same set of alternate registry values of `url`, `username`, and
   `password`.

is_secure_registry
   Specifies whether the registries support HTTPS (secure) or HTTP (not secure).
   Applies to all alternate registries. A boolean value. The default value is
   True (secure, HTTPS).

If an alternate registry is specified to be secure (using HTTPS), the certificate
used by the registry may not be signed by a well-known Certificate Authority (CA).
This causes the :command:`docker pull` of images from this registry to fail.
Use the `ssl_ca_cert` override to specify the public certificate of the CA that
signed the alternate registry's certificate. This adds the CA as a trusted
CA to the StarlingX system.

ssl_ca_cert
   The `ssl_ca_cert` value is the absolute path of the certificate file. The
   certificate must be in PEM format and the file may contain a single CA
   certificate or multiple CA certificates in a bundle.

The following example specifies a single alternate registry from which to
bootstrap StarlingX, where the images of the public registries have been
copied to the single alternate registry. It additionally defines an alternate
registry certificate:

::

   docker_registries:
     k8s.gcr.io:
       url:
     gcr.io:
       url:
     quay.io:
       url:
     docker.io:
       url:
     unified:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd
     is_secure_registry: True

   ssl_ca_cert: /path/to/ssl_ca_cert_file

------------
Docker proxy
------------

If the StarlingX OAM interface or network is behind an HTTP/HTTPS proxy, relative
to the Docker registries used by StarlingX or applications running on StarlingX,
then Docker within StarlingX must be configured to use these HTTP/HTTPS proxies.

Use the following configuration overrides to configure your Docker proxy settings.

docker_http_proxy
   Specify the HTTP proxy URL to use. For example:

   ::

      docker_http_proxy: http://my.proxy.com:1080

docker_https_proxy
   Specify the HTTPS proxy URL to use. For example:

   ::

      docker_https_proxy: https://my.proxy.com:1443

docker_no_proxy
   A no-proxy address list can be provided for registries not on the other side
   of the proxies. This list will be added to the default no-proxy list derived
   from localhost, loopback, management, and OAM floating addresses at run time.
   Each address in the no-proxy list must not contain a wildcard or use subnet
   format. For example:

   ::

      docker_no_proxy:
        - 1.2.3.4
        - 5.6.7.8

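Taken together, a bootstrap overrides file might combine these settings as
follows (a sketch reusing the placeholder values from the examples above):

::

   docker_http_proxy: http://my.proxy.com:1080
   docker_https_proxy: https://my.proxy.com:1443
   docker_no_proxy:
     - 1.2.3.4
     - 5.6.7.8
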
-------------------------------
K8S Root CA Certificate and Key
-------------------------------

By default the K8S Root CA Certificate and Key are auto-generated and result in
the use of self-signed certificates for the Kubernetes API server. In the case
where self-signed certificates are not acceptable, use the bootstrap override
values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the certificate and
key for the Kubernetes root CA.

k8s_root_ca_cert
   Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert`
   value is the absolute path of the certificate file. The certificate must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_key`. The playbook will not proceed if only one value is provided.

k8s_root_ca_key
   Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key`
   value is the absolute path of the key file. The key must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_cert`. The playbook will not proceed if only one value is provided.

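For example, a minimal override sketch supplying the pair (the file paths are
placeholders for your own CA material):

::

   k8s_root_ca_cert: /home/sysadmin/k8s_root_ca.crt
   k8s_root_ca_key: /home/sysadmin/k8s_root_ca.key
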
.. important::

   The default length for the generated Kubernetes root CA certificate is 10
   years. Replacing the root CA certificate is an involved process, so the
   custom certificate expiry should be as long as possible. We recommend
   ensuring the root CA certificate has an expiry of at least 5-10 years.

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans
   Specifies a list of Subject Alternative Name entries that will be added to the
   Kubernetes API server certificate. Each entry in the list must be an IP address
   or domain name. For example:

   ::

      apiserver_cert_sans:
        - hostname.domain
        - 198.51.100.75

StarlingX automatically updates this parameter to include IP records for the OAM
floating IP and both OAM unit IP addresses.
@@ -1,26 +0,0 @@
==============================================
Bare metal All-in-one Duplex Installation R2.0
==============================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

The bare metal AIO-DX deployment configuration may be extended with up to four
worker/compute nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_hardware
   aio_duplex_install_kubernetes
   aio_duplex_extend
@@ -1,192 +0,0 @@
================================================
Extend Capacity with Worker and/or Compute Nodes
================================================

This section describes the steps to extend capacity with worker and/or compute
nodes on a **StarlingX R2.0 bare metal All-in-one Duplex** deployment
configuration.

.. contents::
   :local:
   :depth: 1

---------------------------------
Install software on compute nodes
---------------------------------

#. Power on the compute servers and force them to network boot with the
   appropriate BIOS boot options for your particular server.

#. As the compute servers boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered compute
   hosts (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | None         | None        | locked         | disabled    | offline      |
      | 4  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host ids, set the personality of these hosts to 'worker' and
   assign them hostnames:

   ::

      system host-update 3 personality=worker hostname=compute-0
      system host-update 4 personality=worker hostname=compute-1

   This initiates the install of software on the compute nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. Wait for the software installation on the compute nodes to complete, for the
   nodes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

-----------------------
Configure compute nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for the compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign compute-0 sriovdp=enabled
        system host-label-assign compute-1 sriovdp=enabled

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        system host-memory-modify compute-0 0 -1G 100
        system host-memory-modify compute-0 1 -1G 100
        system host-memory-modify compute-1 0 -1G 100
        system host-memory-modify compute-1 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' commands below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up the disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done

--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

  for COMPUTE in compute-0 compute-1; do
     system host-unlock $COMPUTE
  done

The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

@@ -1,58 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for the bare metal servers are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 2                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         | or                                                        |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)|
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|                         |   for VM local ephemeral storage                          |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE                                    |
|                         | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt
@@ -1,435 +0,0 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-DX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-aio-simplex-start:
   :end-before: incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired, as
      sketched below.
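
      For example, a sketch assuming the default file locations listed above:

      ::

         cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml $HOME/localhost.yml
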
   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment configuration
      as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
      applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r2_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment.

   ::

     OAM_IF=<OAM-PORT>
     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   ::

     system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the number
     of 1G Huge pages required on both NUMA nodes:

     ::

       system host-memory-modify controller-0 0 -1G 100
       system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
 | 
			
		||||
      DATA1IF=<DATA-1-PORT>
 | 
			
		||||
      export COMPUTE=controller-0
 | 
			
		||||
      PHYSNET0='physnet0'
 | 
			
		||||
      PHYSNET1='physnet1'
 | 
			
		||||
      SPL=/tmp/tmp-system-port-list
 | 
			
		||||
      SPIL=/tmp/tmp-system-host-if-list
 | 
			
		||||
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
 | 
			
		||||
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
 | 
			
		||||
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 | 
			
		||||
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 | 
			
		||||
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 | 
			
		||||
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
 | 
			
		||||
 | 
			
		||||
      system datanetwork-add ${PHYSNET0} vlan
 | 
			
		||||
      system datanetwork-add ${PHYSNET1} vlan
 | 
			
		||||
 | 
			
		||||
      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
 | 
			
		||||
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
 | 
			
		||||
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
 | 
			
		||||
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
 | 
			
		||||
 | 
			
		||||
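   As a quick sanity check, you can list what was just created (read-only
   commands from the same CLI; ``datanetwork-list`` is assumed to be available
   alongside ``datanetwork-add``):

   ::

      system datanetwork-list
      system host-if-list ${COMPUTE}
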
#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the ``sdb`` disk:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-unlock-controller-0-aio-simplex-start:
   :end-before: incl-unlock-controller-0-aio-simplex-end:

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

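   Optionally confirm the change before the install completes (``system
   host-show`` is the same CLI used elsewhere in this guide; ``2`` is the host
   id from the listing above):

   ::

      system host-show 2
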
#. Wait for the software installation on controller-1 to complete, for controller-1 to
   reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'.

   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

#. Configure the OAM interface of controller-1 and complete the MGMT interface
   setup, specifying the attached networks. Use the OAM port name, for example
   eth0, that is applicable to your deployment environment:

   (Note that the MGMT interface is partially set up automatically by the
   network install procedure, which is why only the cluster-host assignment is
   needed here.)

   ::

      OAM_IF=<OAM-PORT>
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host

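   A quick read-only check of the resulting assignments (same CLI as above):

   ::

      system interface-network-list controller-1
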
#. Configure data interfaces for controller-1. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign controller-1 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the number
     of 1G Huge pages required on both NUMA nodes:

     ::

        system host-memory-modify controller-1 0 -1G 100
        system host-memory-modify controller-1 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export COMPUTE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-1
      system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
      system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled
      system host-label-assign controller-1 openstack-compute-node=enabled
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      export COMPUTE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

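   To verify the new partition and volume group, the listing counterparts of
   the commands above can be used (read-only; assumed to be available in your
   release):

   ::

      system host-disk-partition-list ${COMPUTE}
      system host-lvg-list ${COMPUTE}
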
-------------------
Unlock controller-1
-------------------

Unlock controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

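You can monitor progress with the same listing command used earlier, waiting
for controller-1 to show as unlocked/enabled/available:

::

  system host-list
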
----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,21 +0,0 @@
===============================================
Bare metal All-in-one Simplex Installation R2.0
===============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_hardware
   aio_simplex_install_kubernetes

@@ -1,58 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       |  1                                                        |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         | or                                                        |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD         |
|                         |   journal)                                                |
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K     |
|                         |   RPM) for VM local ephemeral storage                     |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -1,347 +0,0 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-SX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-aio-simplex-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'All-in-one Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
      your terminal access to the console port
   #. Third menu: Select 'Standard Security Profile'

#. Wait for the non-interactive software install to complete and the server to
   reboot. This can take 5-10 minutes, depending on the performance of the server.

.. incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Login using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

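   For example (read-only checks; ``-c 4`` simply limits ping to four packets):

   ::

      ip addr
      ping -c 4 8.8.8.8
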
   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment configuration
      as shown in the example below. Use the OAM IP subnet and IP addresses
      applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r2_release/ansible_bootstrap_configs`
   for information on additional configuration options for advanced Ansible
   bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP Servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the number
     of 1G Huge pages required on both NUMA nodes:

     ::

       system host-memory-modify controller-0 0 -1G 100
       system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

     DATA0IF=<DATA-0-PORT>
     DATA1IF=<DATA-1-PORT>
     export COMPUTE=controller-0
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list
     system host-port-list ${COMPUTE} --nowrap > ${SPL}
     system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
     DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
     DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
     DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
     DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
     DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
     DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
     DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
     DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
     system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
     system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
     system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the ``sdb`` disk:

   ::

     echo ">>> Add OSDs to primary tier"
     system host-disk-list controller-0
     system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
     system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled
     system host-label-assign controller-0 openstack-compute-node=enabled
     system host-label-assign controller-0 openvswitch=enabled
     system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has the (kernel-based) OVS vSwitch configured as the default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK should be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

     system modify --vswitch_type none

   Do not run any vSwitch directly on the host; instead, use the containerized
   OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
   supported only on bare metal hardware), run the following command:

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for computes.

   When using OVS-DPDK, virtual machines must be configured to use a flavor with
   the property ``hw:mem_page_size=large`` (an example follows the note below).

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all computes (and/or AIO controllers) to
      apply the change.

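   For example, once the stx-openstack application is running, the property can
   be set with the standard OpenStack client (``<FLAVOR>`` is a placeholder for
   a flavor name in your deployment):

   ::

     openstack flavor set <FLAVOR> --property hw:mem_page_size=large
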
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     export COMPUTE=controller-0

     echo ">>> Getting root disk info"
     ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
     ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

     echo ">>>> Configuring nova-local"
     NOVA_SIZE=34
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
     system host-lvg-add ${COMPUTE} nova-local
     system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-aio-simplex-start:

Unlock controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-controller-0-aio-simplex-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,22 +0,0 @@
=============================================================
Bare metal Standard with Controller Storage Installation R2.0
=============================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   controller_storage_hardware
   controller_storage_install_kubernetes

@@ -1,56 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------+-----------------------------+
| Minimum Requirement     | Controller Node             | Compute Node                |
+=========================+=============================+=============================+
| Number of servers       | 2                           | 2-10                        |
+-------------------------+-----------------------------+-----------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
+-------------------------+-----------------------------+-----------------------------+
| Minimum memory          | 64 GB                       | 32 GB                       |
+-------------------------+-----------------------------+-----------------------------+
| Primary disk            | 500 GB SSD or NVMe (see     | 120 GB (Minimum 10k RPM)    |
|                         | :doc:`../../nvme_config`)   |                             |
+-------------------------+-----------------------------+-----------------------------+
| Additional disks        | - 1 or more 500 GB (min.    | - For OpenStack, recommend  |
|                         |   10K RPM) for Ceph OSD     |   1 or more 500 GB (min.    |
|                         | - Recommended, but not      |   10K RPM) for VM local     |
|                         |   required: 1 or more SSDs  |   ephemeral storage         |
|                         |   or NVMe drives for Ceph   |                             |
|                         |   journals (min. 1024 MiB   |                             |
|                         |   per OSD journal)          |                             |
+-------------------------+-----------------------------+-----------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE      | - Mgmt/Cluster: 1x10GE      |
|                         | - OAM: 1x1GE                | - Data: 1 or more x 10GE    |
+-------------------------+-----------------------------+-----------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------+-----------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -1,586 +0,0 @@
===========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Controller Storage
===========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-------------------
Create bootable USB
-------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-standard-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'Standard Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
      your terminal access to the console port
   #. Third menu: Select 'Standard Security Profile'

#. Wait for the non-interactive software install to complete and the server to
   reboot. This can take 5-10 minutes, depending on the performance of the server.

.. incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-sys-controller-0-standard-start:

#. Login using the username / password of "sysadmin" / "sysadmin".

   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment configuration
      as shown in the example below. Use the OAM IP subnet and IP addresses
      applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r2_release/ansible_bootstrap_configs`
   for information on additional configuration options for advanced Ansible
   bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-storage-start:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment.

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP Servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has the (kernel-based) OVS vSwitch configured as the default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK should be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

     system modify --vswitch_type none

   Do not run any vSwitch directly on the host; instead, use the containerized
   OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
   supported only on bare metal hardware), run the following command:

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for computes.

   When using OVS-DPDK, virtual machines must be configured to use a flavor with
   the property ``hw:mem_page_size=large``.

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all computes (and/or AIO controllers) to
      apply the change.

.. incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the compute-0 and
   compute-1 servers. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on compute-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

      system host-update 3 personality=worker hostname=compute-0

   Repeat for compute-1. Power on compute-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

      system host-update 4 personality=worker hostname=compute-1

#. Wait for the software installation on controller-1, compute-0, and compute-1 to
   complete, for all servers to reboot, and for all to show as locked/disabled/online in
   'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-start:

Configure the OAM and MGMT interfaces of controller-1 and specify the attached
networks. Use the OAM and MGMT port names, for example eth0, that are applicable
to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)

::

   OAM_IF=<OAM-PORT>
   MGMT_IF=<MGMT-PORT>
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 $MGMT_IF cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest and helm-charts later.

::

   system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-start:

Unlock controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-controller-1-end:

-----------------------
 | 
			
		||||
Configure compute nodes
 | 
			
		||||
-----------------------
 | 
			
		||||
 | 
			
		||||
#. Add the third Ceph monitor to compute-0:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

     system ceph-mon-add compute-0

#. Wait for the compute node monitor to complete configuration:

   ::

     system ceph-mon-list
     +--------------------------------------+-------+--------------+------------+------+
     | uuid                                 | ceph_ | hostname     | state      | task |
     |                                      | mon_g |              |            |      |
     |                                      | ib    |              |            |      |
     +--------------------------------------+-------+--------------+------------+------+
     | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
     | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
     | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
     +--------------------------------------+-------+--------------+------------+------+

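   Instead of re-running the command manually, a small polling sketch in the
   same style as the OSD wait loops later in this guide:

   ::

     while system ceph-mon-list | grep compute-0 | grep -q configuring; do
        sleep 5
     done
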
#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

     for COMPUTE in compute-0 compute-1; do
        system interface-network-assign $COMPUTE mgmt0 cluster-host
     done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       for COMPUTE in compute-0 compute-1; do
          system host-label-assign ${COMPUTE} sriovdp=enabled
       done

   * If planning on running DPDK in containers on this host, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

       for COMPUTE in compute-0 compute-1; do
          system host-memory-modify ${COMPUTE} 0 -1G 100
          system host-memory-modify ${COMPUTE} 1 -1G 100
       done

   For both Kubernetes and OpenStack:

   ::

     DATA0IF=<DATA-0-PORT>
     DATA1IF=<DATA-1-PORT>
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list

     # Configure the datanetworks in sysinv, prior to referencing them
     # in the 'system host-if-modify' command.
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for COMPUTE in compute-0 compute-1; do
       echo "Configuring interface for: $COMPUTE"
       set -ex
       system host-port-list ${COMPUTE} --nowrap > ${SPL}
       system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
       DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
       DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
       system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
       set +ex
     done

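   To verify the result before moving on, list the interfaces and their data
   network assignments (a quick, optional check):

   ::

     system host-if-list compute-0
     system interface-datanetwork-list compute-0
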
*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     for NODE in compute-0 compute-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     for COMPUTE in compute-0 compute-1; do
       echo "Configuring Nova local for: $COMPUTE"
       ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${COMPUTE} nova-local
       system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     done

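   A quick way to confirm the volume group and physical volume were created
   (optional; same sysinv CLI as above):

   ::

     system host-lvg-list compute-0
     system host-pv-list compute-0
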
--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------------------------
Add Ceph OSDs to controllers
----------------------------

#. Add OSDs to controller-0. The following example adds OSDs to the `sdb` disk:

   ::

     HOST=controller-0
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

#. Add OSDs to controller-1. The following example adds OSDs to the `sdb` disk:

   ::

     HOST=controller-1
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

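With OSDs configured on both controllers, the Ceph cluster should report
healthy. A quick check (optional; the ceph CLI is available on the
controllers):

::

   ceph -s
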
----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,21 +0,0 @@
============================================================
Bare metal Standard with Dedicated Storage Installation R2.0
============================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   dedicated_storage_hardware
   dedicated_storage_install_kubernetes

@@ -1,61 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for
various host types are:

+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum Requirement | Controller Node           | Storage Node          | Compute Node          |
+=====================+===========================+=======================+=======================+
| Number of servers   | 2                         | 2-9                   | 2-100                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum processor   | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket         |
| class               |                                                                           |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum memory      | 64 GB                     | 64 GB                 | 32 GB                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Primary disk        | 500 GB SSD or NVMe (see   | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|                     | :doc:`../../nvme_config`) |                       |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Additional disks    | None                      | - 1 or more 500 GB    | - For OpenStack,      |
|                     |                           |   (min. 10K RPM) for  |   recommend 1 or more |
|                     |                           |   Ceph OSD            |   500 GB (min. 10K    |
|                     |                           | - Recommended, but    |   RPM) for VM         |
|                     |                           |   not required: 1 or  |   ephemeral storage   |
|                     |                           |   more SSDs or NVMe   |                       |
|                     |                           |   drives for Ceph     |                       |
|                     |                           |   journals (min. 1024 |                       |
|                     |                           |   MiB per OSD         |                       |
|                     |                           |   journal)            |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum network     | - Mgmt/Cluster:           | - Mgmt/Cluster:       | - Mgmt/Cluster:       |
| ports               |   1x10GE                  |   1x10GE              |   1x10GE              |
|                     | - OAM: 1x1GE              |                       | - Data: 1 or more     |
|                     |                           |                       |   x 10GE              |
+---------------------+---------------------------+-----------------------+-----------------------+
| BIOS settings       | - Hyper-Threading technology enabled                                      |
|                     | - Virtualization technology enabled                                       |
|                     | - VT for directed I/O enabled                                             |
|                     | - CPU power and performance policy set to performance                     |
|                     | - CPU C state control disabled                                            |
|                     | - Plug & play BMC detection disabled                                      |
+---------------------+---------------------------+-----------------------+-----------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -1,362 +0,0 @@
==========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Dedicated Storage
==========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-------------------
Create bootable USB
-------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-standard-start:
   :end-before: incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-sys-controller-0-standard-start:
   :end-before: incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-storage-start:
   :end-before: incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
------------------------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | None         | None        | locked         | disabled    | offline      |
     +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

     system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the storage-0 and
   storage-1 servers. Set the personality to 'storage' and assign a unique
   hostname for each.

   For example, power on storage-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

     system host-update 3 personality=storage

   Repeat for storage-1. Power on storage-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 4 personality=storage

   This initiates the software installation on storage-0 and storage-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the compute-0 and
   compute-1 servers. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on compute-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

     system host-update 5 personality=worker hostname=compute-0

   Repeat for compute-1. Power on compute-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 6 personality=worker hostname=compute-1

   This initiates the install of software on compute-0 and compute-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   compute-0, and compute-1 to complete, for all servers to reboot, and for all
   to show as locked/disabled/online in 'system host-list'.

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | storage-0    | storage     | locked         | disabled    | online       |
     | 4  | storage-1    | storage     | locked         | disabled    | online       |
     | 5  | compute-0    | compute     | locked         | disabled    | online       |
     | 6  | compute-1    | compute     | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-start:
   :end-before: incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-start:
   :end-before: incl-unlock-controller-1-end:

-----------------------
Configure storage nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the storage nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

     for STORAGE in storage-0 storage-1; do
        system interface-network-assign $STORAGE mgmt0 cluster-host
     done

#. Add OSDs to storage-0. The following example adds OSDs to the `sdb` disk:

   ::

     HOST=storage-0
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

#. Add OSDs to storage-1. The following example adds OSDs to the `sdb` disk:

   ::

     HOST=storage-1
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

--------------------
Unlock storage nodes
--------------------

Unlock storage nodes in order to bring them into service:

::

   for STORAGE in storage-0 storage-1; do
      system host-unlock $STORAGE
   done

The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

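Once the storage nodes are unlocked and enabled, the new OSDs should join the
Ceph cluster. A quick check from controller-0 (optional; the ceph CLI is
available on the controllers):

::

   ceph osd tree
   ceph -s
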
-----------------------
Configure compute nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

     for COMPUTE in compute-0 compute-1; do
        system interface-network-assign $COMPUTE mgmt0 cluster-host
     done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       for COMPUTE in compute-0 compute-1; do
          system host-label-assign ${COMPUTE} sriovdp=enabled
       done

   * If planning on running DPDK in containers on this host, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

       for COMPUTE in compute-0 compute-1; do
          system host-memory-modify ${COMPUTE} 0 -1G 100
          system host-memory-modify ${COMPUTE} 1 -1G 100
       done

   For both Kubernetes and OpenStack:

   ::

     DATA0IF=<DATA-0-PORT>
     DATA1IF=<DATA-1-PORT>
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list

     # Configure the datanetworks in sysinv, prior to referencing them
     # in the 'system host-if-modify' command.
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for COMPUTE in compute-0 compute-1; do
       echo "Configuring interface for: $COMPUTE"
       set -ex
       system host-port-list ${COMPUTE} --nowrap > ${SPL}
       system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
       DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
       DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
       system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
       set +ex
     done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     for NODE in compute-0 compute-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     for COMPUTE in compute-0 compute-1; do
       echo "Configuring Nova local for: $COMPUTE"
       ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${COMPUTE} nova-local
       system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     done

--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,66 +0,0 @@
====================================
Bare metal Standard with Ironic R2.0
====================================

--------
Overview
--------

Ironic is an OpenStack project that provisions bare metal machines. For
information about the Ironic project, see
`Ironic Documentation <https://docs.openstack.org/ironic>`__.

End user applications can be deployed on bare metal servers (instead of
virtual machines) by configuring OpenStack Ironic and deploying a pool of one
or more bare metal servers.

.. figure:: ../figures/starlingx-deployment-options-ironic.png
   :scale: 90%
   :alt: Standard with Ironic deployment configuration

   *Figure 1: Standard with Ironic deployment configuration*

Bare metal servers must be connected to:

* IPMI for OpenStack Ironic control
* The ironic-provisioning-net tenant network via their untagged physical
  interface, which supports PXE booting

As part of configuring OpenStack Ironic in StarlingX:

* An ironic-provisioning-net tenant network must be identified as the boot
  network for bare metal nodes.
* An additional untagged physical interface must be configured on controller
  nodes and connected to the ironic-provisioning-net tenant network. The
  OpenStack Ironic tftpboot server will PXE boot the bare metal servers over
  this interface.

.. note::

   Bare metal servers are NOT:

   * Running any OpenStack / StarlingX software; they are running end user
     applications (for example, Glance images).
   * To be connected to the internal management network.

------------
Installation
------------

StarlingX currently supports only a bare metal installation of Ironic with a
standard configuration, either:

* :doc:`controller_storage`

* :doc:`dedicated_storage`

This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute node, with the StarlingX OpenStack
application (stx-openstack) applied.

.. toctree::
   :maxdepth: 1

   ironic_hardware
   ironic_install

@@ -1,51 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R2.0 bare metal Ironic** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

* One or more bare metal hosts to act as Ironic nodes and tenant instance
  nodes.

* BMC support on the bare metal hosts, and controller node connectivity to
  the BMC IP address of the bare metal hosts.

For controller nodes:

* An additional NIC port on both controller nodes for connecting to the
  ironic-provisioning-net.

For compute nodes:

* If using a flat data network for the Ironic provisioning network, an
  additional NIC port on one of the compute nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network
  and simply add the new data network to an existing interface on the compute
  node.

* Additional switch ports / configuration for new ports on controller,
  compute, and Ironic nodes, for connectivity to the Ironic provisioning
  network.

-----------------------------------
BMC configuration of Ironic node(s)
-----------------------------------

Enable BMC and allocate a static IP, username, and password in the BIOS
settings. For example, set:

IP address
  10.10.10.126

username
  root

password
  test123

@@ -1,392 +0,0 @@
================================
Install Ironic on StarlingX R2.0
================================

This section describes the steps to install Ironic on a standard configuration,
either:

* **StarlingX R2.0 bare metal Standard with Controller Storage** deployment
  configuration

* **StarlingX R2.0 bare metal Standard with Dedicated Storage** deployment
  configuration

.. contents::
   :local:
   :depth: 1

---------------------
Enable Ironic service
---------------------

This section describes the pre-configuration required to enable the Ironic
service. All the commands in this section are for the StarlingX platform.

First acquire administrative privileges:

::

   source /etc/platform/openrc

********************************
Download Ironic deployment image
********************************

The Ironic service requires a deployment image (kernel and ramdisk) which is
used to clean Ironic nodes and install the end user's image. The cleaning done
by the deployment image wipes the disks and tests connectivity to the Ironic
conductor on the controller nodes via the Ironic Python Agent (IPA).

The Ironic deployment Stein image (**Ironic-kernel** and **Ironic-ramdisk**)
can be found here:

* `Ironic-kernel coreos_production_pxe-stable-stein.vmlinuz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-stable-stein.vmlinuz>`__
* `Ironic-ramdisk coreos_production_pxe_image-oem-stable-stein.cpio.gz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-stable-stein.cpio.gz>`__

*******************************************************
Configure Ironic network on deployed standard StarlingX
*******************************************************

#. Add an address pool for the Ironic network. This example uses `ironic-pool`:

   ::

      system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24

#. Add the Ironic platform network. This example uses `ironic-net`:

   ::

      system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false

#. Add the Ironic tenant network. This example uses `ironic-data`:

   .. note::

      The tenant network is not the same as the platform network described in
      the previous step. You can specify any name for the tenant network other
      than 'ironic'. If the name 'ironic' is used, a user override must be
      generated to indicate the tenant network name.

      Refer to section `Generate user Helm overrides`_ for details.

   ::

      system datanetwork-add ironic-data flat

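   To confirm the address pool and networks were created as expected, you can
   list them (an optional check using the same CLI):

   ::

      system addrpool-list
      system network-list
      system datanetwork-list
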
#. Configure the new interfaces (for Ironic) on the controller nodes and
   assign them to the platform network. The host must be locked. This example
   uses the platform network `ironic-net` that was named in a previous step.

   These new interfaces to the controllers are used to connect to the Ironic
   provisioning network:

   **controller-0**

   ::

      system interface-network-assign controller-0 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
      --ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-0 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-0

   **controller-1**

   ::

      system interface-network-assign controller-1 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
      --ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-1 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute nodes and
   assign it to the Ironic data network. This example uses the data network
   `ironic-data` that was named in a previous step.

   ::

      system interface-datanetwork-assign compute-0 eno1 ironic-data
      system host-if-modify -n ironicdata -c data compute-0 eno1

****************************
Generate user Helm overrides
****************************

Ironic Helm Charts are included in the stx-openstack application. By default,
Ironic is disabled.

To enable Ironic, update the following Ironic Helm Chart attributes:

::

   system helm-override-update stx-openstack ironic openstack \
   --set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \
   --set network.pxe.neutron_subnet_gateway=10.10.20.1 \
   --set network.pxe.neutron_provider_network=ironic-data

:command:`network.pxe.neutron_subnet_alloc_start` sets the DHCP start IP that
Neutron uses for Ironic node provisioning, and reserves several IPs for the
platform.

If the data network name for Ironic is changed, modify
:command:`network.pxe.neutron_provider_network` in the command above
accordingly:

::

   --set network.pxe.neutron_provider_network=ironic-data

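You can confirm the overrides were recorded before applying them (an optional
check):

::

   system helm-override-show stx-openstack ironic openstack
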
*******************************
Apply stx-openstack application
*******************************

Re-apply the stx-openstack application to apply the changes to Ironic:

::

   system helm-chart-attribute-modify stx-openstack ironic openstack \
   --enabled true

   system application-apply stx-openstack

--------------------
Start an Ironic node
--------------------

All the commands in this section are for the OpenStack application with
administrative privileges.

From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

::

   mkdir -p /etc/openstack

   tee /etc/openstack/clouds.yaml << EOF
   clouds:
     openstack_helm:
       region_name: RegionOne
       identity_api_version: 3
       endpoint_type: internalURL
       auth:
         username: 'admin'
         password: 'Li69nux*'
         project_name: 'admin'
         project_domain_name: 'default'
         user_domain_name: 'default'
         auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
   EOF

   export OS_CLOUD=openstack_helm

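A quick way to confirm the credentials work before proceeding (optional;
standard openstackclient command):

::

   openstack token issue
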
********************
Create Glance images
********************

#. Create the **ironic-kernel** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe-stable-stein.vmlinuz \
      --disk-format aki \
      --container-format aki \
      --public \
      ironic-kernel

#. Create the **ironic-ramdisk** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \
      --disk-format ari \
      --container-format ari \
      --public \
      ironic-ramdisk

#. Create the end user application image (for example, CentOS):

   ::

      openstack image create \
      --file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \
      --public --disk-format qcow2 --container-format bare centos

*********************
Create an Ironic node
*********************

#. Create a node:

   ::

      openstack baremetal node create --driver ipmi --name ironic-test0

#. Add IPMI information:

   ::

      openstack baremetal node set \
      --driver-info ipmi_address=10.10.10.126 \
      --driver-info ipmi_username=root \
      --driver-info ipmi_password=test123 \
      --driver-info ipmi_terminal_port=623 ironic-test0

#. Set the `ironic-kernel` and `ironic-ramdisk` image driver information on
   this bare metal node:

   ::

      openstack baremetal node set \
      --driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \
      --driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \
      ironic-test0

#. Set resource properties on this bare metal node based on actual Ironic node
   capacities:

   ::

      openstack baremetal node set \
      --property cpus=4 \
      --property cpu_arch=x86_64 \
      --property capabilities="boot_option:local" \
      --property memory_mb=65536 \
      --property local_gb=400 \
      --resource-class bm ironic-test0

#. Add the pxe_template location:

   ::

      openstack baremetal node set --driver-info \
      pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \
      ironic-test0

#. Create a port to identify the specific port used by the Ironic node.
   Substitute **a4:bf:01:2b:3b:c8** with the MAC address for the Ironic node
   port which connects to the Ironic network:

   ::

      openstack baremetal port create \
      --node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \
      --pxe-enabled true a4:bf:01:2b:3b:c8

#. Change the node state to `manage`:

   ::

      openstack baremetal node manage ironic-test0

#. Make the node available for deployment:

   ::

      openstack baremetal node provide ironic-test0

#. Wait for ironic-test0 to reach provision-state `available`:

   ::

      openstack baremetal node show ironic-test0

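   Rather than re-running the show command by hand, you can poll just the
   provision state (a sketch using standard openstackclient output flags):

   ::

      while [ "$(openstack baremetal node show ironic-test0 -f value -c provision_state)" != "available" ]; do
         sleep 10
      done
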
---------------------------------
Deploy an instance on Ironic node
---------------------------------

All the commands in this section are for the OpenStack application, but this
time with *tenant* specific privileges.

#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

   ::

      mkdir -p /etc/openstack

      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'joeuser'
            password: 'mypasswrd'
            project_name: 'intel'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF

      export OS_CLOUD=openstack_helm

#. Create a flavor.

   Set the resource CUSTOM_BM, corresponding to **--resource-class bm**:

   ::

      openstack flavor create --ram 4096 --vcpus 4 --disk 400 \
      --property resources:CUSTOM_BM=1 \
      --property resources:VCPU=0 \
      --property resources:MEMORY_MB=0 \
      --property resources:DISK_GB=0 \
      --property capabilities:boot_option='local' \
      bm-flavor

   See `Adding scheduling information
   <https://docs.openstack.org/ironic/latest/install/enrollment.html#adding-scheduling-information>`__
   and `Configure Nova flavors
   <https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__
   for more information.

#. Enable the compute service.

   List the compute services:

   ::

      openstack compute service list

   Set compute service properties:

   ::

      openstack compute service set --enable controller-0 nova-compute

#. Create an instance.

   .. note::

      The :command:`keypair create` command is optional. It is not required to
      enable a bare metal instance.

   ::

      openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

   Create 2 new servers, one bare metal and one virtual:

   ::

      openstack server create --image centos --flavor bm-flavor \
      --network baremetal --key-name mykey bm

      openstack server create --image centos --flavor m1.small \
      --network baremetal --key-name mykey vm

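   To watch the bare metal deployment progress, check the server and node
   status periodically (optional; both are standard commands):

   ::

      openstack server list
      openstack baremetal node list
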
@@ -1,23 +0,0 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers, with each server providing all three cloud functions
(controller, compute, and storage).

An AIO-DX configuration provides the following benefits:

* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple virtual machines on a single pair of
  physical servers
* High availability (HA) services run on the controller function across two
  physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
  servers
* Virtual machines scheduled on both compute functions
* Protection against overall server hardware fault, where

  * All controller HA services go active on the remaining healthy server
  * All virtual machines are recovered on the remaining healthy server

.. figure:: ../figures/starlingx-deployment-options-duplex.png
   :scale: 50%
   :alt: All-in-one Duplex deployment configuration

   *Figure 1: All-in-one Duplex deployment configuration*

@@ -1,18 +0,0 @@
 | 
			
		||||
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
 | 
			
		||||
functions (controller, compute, and storage) on a single server with the
 | 
			
		||||
following benefits:
 | 
			
		||||
 | 
			
		||||
* Requires only a small amount of cloud processing and storage power
 | 
			
		||||
* Application consolidation using multiple virtual machines on a single pair of
 | 
			
		||||
  physical servers
 | 
			
		||||
* A storage backend solution using a single-node CEPH deployment
 | 
			
		||||
 | 
			
		||||
.. figure:: ../figures/starlingx-deployment-options-simplex.png
 | 
			
		||||
   :scale: 50%
 | 
			
		||||
   :alt: All-in-one Simplex deployment configuration
 | 
			
		||||
 | 
			
		||||
   *Figure 1: All-in-one Simplex deployment configuration*
 | 
			
		||||
 | 
			
		||||
An AIO-SX deployment gives no protection against overall server hardware fault.
 | 
			
		||||
Hardware component protection can be enabled with, for example, a hardware RAID
 | 
			
		||||
or 2x Port LAG in the deployment.
 | 
			
		||||
@@ -1,22 +0,0 @@
 | 
			
		||||
The Standard with Controller Storage deployment option provides two high
 | 
			
		||||
availability (HA) controller nodes and a pool of up to 10 compute nodes.
 | 
			
		||||
 | 
			
		||||
A Standard with Controller Storage configuration provides the following benefits:
 | 
			
		||||
 | 
			
		||||
* A pool of up to 10 compute nodes
 | 
			
		||||
* High availability (HA) services run across the controller nodes in either
 | 
			
		||||
  active/active or active/standby mode
 | 
			
		||||
* A storage back end solution using a two-node CEPH deployment across two
 | 
			
		||||
  controller servers
 | 
			
		||||
* Protection against overall controller and compute node failure, where
 | 
			
		||||
 | 
			
		||||
  * On overall controller node failure, all controller HA services go active on
 | 
			
		||||
    the remaining healthy controller node
 | 
			
		||||
  * On overall compute node failure, virtual machines and containers are
 | 
			
		||||
    recovered on the remaining healthy compute nodes
 | 
			
		||||
 | 
			
		||||
.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
 | 
			
		||||
   :scale: 50%
 | 
			
		||||
   :alt: Standard with Controller Storage deployment configuration
 | 
			
		||||
 | 
			
		||||
   *Figure 1: Standard with Controller Storage deployment configuration*
 | 
			
		||||
@@ -1,17 +0,0 @@
 | 
			
		||||
The Standard with Dedicated Storage deployment option is a standard installation
 | 
			
		||||
with independent controller, compute, and storage nodes.
 | 
			
		||||
 | 
			
		||||
A Standard with Dedicated Storage configuration provides the following benefits:
 | 
			
		||||
 | 
			
		||||
* A pool of up to 100 compute nodes
 | 
			
		||||
* A 2x node high availability (HA) controller cluster with HA services running
 | 
			
		||||
  across the controller nodes in either active/active or active/standby mode
 | 
			
		||||
* A storage back end solution using a two-to-9x node HA CEPH storage cluster
 | 
			
		||||
  that supports a replication factor of two or three
 | 
			
		||||
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage nodes
 | 
			
		||||
 | 
			
		||||
.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
 | 
			
		||||
   :scale: 50%
 | 
			
		||||
   :alt: Standard with Dedicated Storage deployment configuration
 | 
			
		||||
 | 
			
		||||
   *Figure 1: Standard with Dedicated Storage deployment configuration*
 | 
			
		||||
@@ -1,63 +0,0 @@

===========================
StarlingX R2.0 Installation
===========================

.. important::

   Changes in the underlying StarlingX infrastructure have occurred
   since the R2.0 release. Due to these changes, the R2.0 installation
   instructions may not work as described.

   Installation of the current :ref:`latest_release` is recommended.

StarlingX provides a pre-defined set of standard :doc:`deployment configurations
</introduction/deploy_options>`. Most deployment options may be installed in a
virtual environment or on bare metal.

-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment
-----------------------------------------------------

.. toctree::
   :maxdepth: 1

   virtual/aio_simplex
   virtual/aio_duplex
   virtual/controller_storage
   virtual/dedicated_storage

------------------------------------------
Install StarlingX Kubernetes on bare metal
------------------------------------------

.. toctree::
   :maxdepth: 1

   bare_metal/aio_simplex
   bare_metal/aio_duplex
   bare_metal/controller_storage
   bare_metal/dedicated_storage
   bare_metal/ironic

.. toctree::
   :hidden:

   ansible_bootstrap_configs

-----------------
Access Kubernetes
-----------------

.. toctree::
   :maxdepth: 1

   kubernetes_access

--------------------------
Access StarlingX OpenStack
--------------------------

.. toctree::
   :maxdepth: 1

   openstack/index

@@ -1,181 +0,0 @@

================================
Access StarlingX Kubernetes R2.0
================================

Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
Kubernetes and hosted containerized applications.

.. contents::
   :local:
   :depth: 1

----------
Local CLIs
----------

To access the StarlingX and Kubernetes commands on controller-0, first
follow these steps:

#. Log in to controller-0 via the console or SSH with a sysadmin/<sysadmin-password>.

#. Acquire Keystone admin and Kubernetes admin credentials:

   ::

      source /etc/platform/openrc
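
After sourcing ``/etc/platform/openrc``, the shell prompt reflects the acquired
Keystone admin context. The prompt below is illustrative:

::

   [sysadmin@controller-0 ~(keystone_admin)]$
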
*********************************************
StarlingX system and host management commands
*********************************************

Access StarlingX system and host management commands using the :command:`system`
command. For example:

::

   system host-list

   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+

Use the :command:`system help` command for the full list of options.

***********************************
StarlingX fault management commands
***********************************

Access StarlingX fault management commands using the :command:`fm` command, for example:

::

   fm alarm-list

*******************
Kubernetes commands
*******************

Access Kubernetes commands using the :command:`kubectl` command, for example:

::

   kubectl get nodes

   NAME           STATUS   ROLES    AGE     VERSION
   controller-0   Ready    master   5d19h   v1.13.5

See https://kubernetes.io/docs/reference/kubectl/overview/ for details.

-----------
Remote CLIs
-----------

Documentation coming soon.

---
GUI
---

.. note::

   For a virtual installation, run the browser on the host machine.

*********************
StarlingX Horizon GUI
*********************

Access the StarlingX Horizon GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   `\http://<oam-floating-ip-address>:8080`

   Discover your OAM floating IP address with the :command:`system oam-show` command.

#. Log in to Horizon with an admin/<sysadmin-password>.

********************
Kubernetes dashboard
********************

The Kubernetes dashboard is not installed by default.

To install the Kubernetes dashboard, execute the following steps on controller-0:

#. Use the kubernetes-dashboard helm chart from the stable helm repository with
   the override values shown below:

   ::

      cat <<EOF > dashboard-values.yaml
      service:
        type: NodePort
        nodePort: 30000

      rbac:
        create: true
        clusterAdminRole: true

      serviceAccount:
        create: true
        name: kubernetes-dashboard
      EOF

      helm repo update

      helm install stable/kubernetes-dashboard --name dashboard -f dashboard-values.yaml

#. Create an ``admin-user`` service account with ``cluster-admin`` privileges, and
   display its token for logging into the Kubernetes dashboard (a sketch for
   capturing the token into a shell variable follows this list):

   ::

      cat <<EOF > admin-login.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
      EOF

      kubectl apply -f admin-login.yaml

      kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
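
If you prefer to store the token in a shell variable rather than copying it
from the :command:`describe` output, the following sketch extracts just the
token field. It assumes the ``admin-user`` secret created above and the
standard ``kubectl describe`` output format:

::

   # Find the admin-user secret and capture only its 'token:' field.
   SECRET=$(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
   TOKEN=$(kubectl -n kube-system describe secret ${SECRET} | grep '^token:' | awk '{print $2}')
   echo ${TOKEN}
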
Access the Kubernetes dashboard GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   `\https://<oam-floating-ip-address>:30000`.

   Discover your OAM floating IP address with the :command:`system oam-show` command.

#. Log in to the Kubernetes dashboard using the ``admin-user`` token.

---------
REST APIs
---------

List the StarlingX platform-related public REST API endpoints using the
following command:

::

   openstack endpoint list | grep public

Use these URLs as the prefix for the URL target of StarlingX platform services'
REST API messages, as shown in the sketch below.
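
As a minimal sketch, the following request reads the host inventory through the
public system inventory (sysinv) endpoint. The port and path shown are
assumptions for illustration; substitute the actual prefix reported by
:command:`openstack endpoint list` and a valid Keystone token:

::

   # Assumption: 'openstack endpoint list | grep public' reported the sysinv
   # endpoint as http://<oam-floating-ip-address>:6385/v1.
   curl -i http://<oam-floating-ip-address>:6385/v1/ihosts \
        -H "Accept: application/json" \
        -H "X-Auth-Token: ${TOKEN}"
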
@@ -1,273 +0,0 @@

==========================
Access StarlingX OpenStack
==========================

Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications.

.. contents::
   :local:
   :depth: 1

------------------------------
Configure helm endpoint domain
------------------------------

Containerized OpenStack services in StarlingX are deployed behind an ingress
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
The ingress controller routes packets to the specific OpenStack service, such as
the Cinder service or the Neutron service, by parsing the FQDN in the packet.
For example, `neutron.openstack.svc.cluster.local` is for the Neutron service,
and `cinder-api.openstack.svc.cluster.local` is for the Cinder service.

This routing requires that access to OpenStack REST APIs be via an FQDN,
or by using a remote OpenStack CLI that uses the REST APIs. You cannot access
OpenStack REST APIs using an IP address.

FQDNs (such as `cinder-api.openstack.svc.cluster.local`) must be in a DNS server
that is publicly accessible.

.. note::

   There is a way to wild-card a set of FQDNs to the same IP address in a DNS
   server configuration so that you don't need to update the DNS server every
   time an OpenStack service is added. Check your particular DNS server for
   details on how to wild-card a set of FQDNs.

In a "real" deployment, that is, not a lab scenario, you cannot use the default
`openstack.svc.cluster.local` domain name externally. You must set a unique
domain name for your StarlingX system. StarlingX provides the
:command:`system service-parameter-add` command to configure and set the
OpenStack domain name:

::

  system service-parameter-add openstack helm endpoint_domain=<domain_name>

`<domain_name>` should be a fully qualified domain name that you own, such that
you can configure the DNS server that owns `<domain_name>` with the OpenStack
service names underneath the domain.

For example:

::

  system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
  system application-apply stx-openstack

This command updates the helm charts of all OpenStack services and restarts them.
For example, it would change `cinder-api.openstack.svc.cluster.local` to
`cinder-api.my-starlingx-domain.my-company.com`, and so on for all OpenStack
services.

.. note::

   This command also changes the containerized OpenStack Horizon to listen on
   `horizon.my-starlingx-domain.my-company.com:80` instead of the initial
   `<oam-floating-ip>:31000`.

You must configure `{ '*.my-starlingx-domain.my-company.com':  -->  oam-floating-ip-address }`
in the external DNS server that owns `my-company.com`, as sketched below.
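
For a DNS server that uses BIND-style zone files, the wildcard mapping could
look like the following minimal sketch. The zone fragment, TTL, and the
10.10.10.2 OAM floating address are illustrative assumptions; adapt them to
your DNS server and network:

::

   ; Zone-file fragment for my-company.com: wildcard all OpenStack service
   ; FQDNs under the StarlingX domain to the OAM floating IP address.
   *.my-starlingx-domain.my-company.com.  3600  IN  A  10.10.10.2
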
---------
Local CLI
---------

Access OpenStack using the local CLI with the following steps:

#. Log in to controller-0 via the console or SSH with a sysadmin/<sysadmin-password>.
   *Do not use* ``source /etc/platform/openrc``.

#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up
   OpenStack admin credentials:

   ::

      sudo su -
      mkdir -p /etc/openstack
      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'admin'
            password: '<sysadmin-password>'
            project_name: 'admin'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF
      exit

      export OS_CLOUD=openstack_helm

**********************
OpenStack CLI commands
**********************

Access OpenStack CLI commands for the StarlingX OpenStack cloud application
using the :command:`openstack` command. For example:

::

   [sysadmin@controller-0 ~(keystone_admin)]$ openstack flavor list
   [sysadmin@controller-0 ~(keystone_admin)]$ openstack image list

----------
Remote CLI
----------

Documentation coming soon.
---
GUI
---

Access the StarlingX containerized OpenStack Horizon GUI in your browser at the
following address:

::

   http://<oam-floating-ip-address>:31000

Log in to the containerized OpenStack Horizon GUI with an admin/<sysadmin-password>.

---------
REST APIs
---------

This section provides an overview of accessing REST APIs with examples of
`curl`-based REST API commands.

****************
Public endpoints
****************

Use the `Local CLI`_ to display OpenStack public REST API endpoints. For example:

::

   openstack endpoint list

The public endpoints will look like:

* `\http://keystone.openstack.svc.cluster.local:80/v3`
* `\http://nova.openstack.svc.cluster.local:80/v2.1/%(tenant_id)s`
* `\http://neutron.openstack.svc.cluster.local:80/`
* `etc.`

If you have set a unique domain name, then the public endpoints will look like:

* `\http://keystone.my-starlingx-domain.my-company.com:80/v3`
* `\http://nova.my-starlingx-domain.my-company.com:80/v2.1/%(tenant_id)s`
* `\http://neutron.my-starlingx-domain.my-company.com:80/`
* `etc.`

Documentation for the OpenStack REST APIs is available at
`OpenStack API Documentation <https://docs.openstack.org/api-quick-start/index.html>`_.

***********
Get a token
***********

The following command will request a Keystone token:

::

   curl -i -H "Content-Type: application/json" -d \
   '{ "auth": {
       "identity": {
         "methods": ["password"],
         "password": {
           "user": {
             "name": "admin",
             "domain": { "id": "default" },
             "password": "St8rlingX*"
           }
         }
       },
       "scope": {
         "project": {
           "name": "admin",
           "domain": { "id": "default" }
         }
       }
     }
   }'   http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens

The token will be returned in the "X-Subject-Token" header field of the response:

::

   HTTP/1.1 201 CREATED
   Date: Wed, 02 Oct 2019 18:27:38 GMT
   Content-Type: application/json
   Content-Length: 8128
   Connection: keep-alive
   X-Subject-Token: gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S-RdJhg0fnqWuMGyMUhYUUJSossuUIitrvu2VXYXDNPbnaGzFveOoXxYTPlM6Fgo1aCl6wW85zzuXqT6AsxoCn95OMFhj_HHeYNPTkcyjbuW-HH_rJfhuUXt85iytZ_YAQQUfSXM7N3zAk7Pg
   Vary: X-Auth-Token
   x-openstack-request-id: req-d1bbe060-32f0-4cf1-ba1d-7b38c56b79fb

   {"token": {"is_domain": false,

      ...

You can set an environment variable to hold the token value from the response.
For example:

::

   TOKEN=gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S

*****************
List Nova flavors
*****************

The following command will request a list of all Nova flavors:

::

   curl -i http://nova.openstack.svc.cluster.local:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool

The list will be returned in the response. For example:

::

    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                    Dload  Upload   Total   Spent    Left  Speed
   100  2529  100  2529    0     0  24187      0 --:--:-- --:--:-- --:--:-- 24317
   {
       "flavors": [
           {
               "id": "04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "bookmark"
                   }
               ],
               "name": "m1.tiny"
           },
           {
               "id": "14c725b1-1658-48ec-90e6-05048d269e89",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "bookmark"
                   }
               ],
               "name": "medium.dpdk"
           },
           {

               ...
@@ -1,65 +0,0 @@

===========================
Install StarlingX OpenStack
===========================

These instructions assume that you have completed the following
OpenStack-specific configuration tasks that are required by the underlying
StarlingX Kubernetes platform:

* All nodes have been labeled appropriately for their OpenStack role(s).
* The vSwitch type has been configured.
* The nova-local volume group has been configured on any host running the
  compute function.

--------------------------------------------
Install application manifest and helm-charts
--------------------------------------------

#. Get the StarlingX OpenStack application (stx-openstack) manifest and helm-charts.
   This can be from a private StarlingX build or, as shown below, from the public
   CENGN StarlingX build off the ``master`` branch:

   ::

      wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/helm-charts/stx-openstack-1.0-17-centos-stable-latest.tgz

#. Load the stx-openstack application's package into StarlingX. The tarball
   package contains stx-openstack's Airship Armada manifest and stx-openstack's
   set of helm charts:

   ::

      system application-upload stx-openstack-1.0-17-centos-stable-latest.tgz

   This will:

   * Load the Armada manifest and helm charts.
   * Internally manage helm chart override values for each chart.
   * Automatically generate system helm chart overrides for each chart based on
     the current state of the underlying StarlingX Kubernetes platform and the
     recommended StarlingX configuration of OpenStack services.

#. Apply the stx-openstack application in order to bring StarlingX OpenStack into
   service:

   ::

      system application-apply stx-openstack

#. Wait for the activation of stx-openstack to complete.

   This can take 5-10 minutes, depending on the performance of your host machine.

   Monitor progress with the command below; a sketch for checking the final
   status follows this list:

   ::

      watch -n 5 system application-list
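
Once the watch output settles, you can confirm the final state of the
application. This is a minimal sketch; the ``applied`` status mentioned in the
comment is what :command:`system application-list` reports on success, and
:command:`system application-show` is assumed to be available in this release:

::

   # Expect the stx-openstack 'status' column to read 'applied'.
   system application-list
   system application-show stx-openstack
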
----------
Next steps
----------

Your OpenStack cloud is now up and running.

See :doc:`access` for details on how to access StarlingX OpenStack.

@@ -1,21 +0,0 @@

===========================================
Virtual All-in-one Duplex Installation R2.0
===========================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_environ
   aio_duplex_install_kubernetes

@@ -1,53 +0,0 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual All-in-one Duplex deployment
configuration.

#. Prepare the virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * duplex-controller-0
   * duplex-controller-1

   The following command will start/virtually power on:

   * The 'duplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c duplex -i ./bootimage.iso

   If there is no X-server present, errors will occur and the X-based GUI for
   the virt-manager application will not start. The virt-manager GUI is not
   absolutely required, so you can safely ignore these errors and continue.

@@ -1,423 +0,0 @@

==============================================
Install StarlingX Kubernetes on Virtual AIO-DX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_duplex_environ`, the controller-0 virtual server
'duplex-controller-0' was started by the :command:`setup_configuration.sh`
command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

  virsh console duplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

     Login: sysadmin
     Password:
     Changing password for sysadmin.
     (current) UNIX Password: sysadmin
     New Password:
     (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

     export CONTROLLER0_OAM_CIDR=10.10.10.3/24
     export DEFAULT_OAM_GATEWAY=10.10.10.1
     sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
     sudo ip link set up dev enp7s1
     sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r2_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete; a successful run ends
   with a play recap like the sketch after this list. This can take 5-10
   minutes, depending on the performance of the host machine.
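
The exact task counts vary from system to system; the recap below is only
illustrative of a successful bootstrap (``unreachable=0`` and ``failed=0``):

::

   PLAY RECAP *********************************************************
   localhost                  : ok=...   changed=...   unreachable=0    failed=0
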
----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

    source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instances' clocks are synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
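
Once the reboot completes, you can confirm that controller-0 is back in
service. The listing below is illustrative of the expected
unlocked/enabled/available state:

::

   system host-list
   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+
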
-------------------------------------
Install software on controller-1 node
-------------------------------------

#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'.
   It will automatically attempt to network boot over the management network:

   ::

      virsh start duplex-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console duplex-controller-1

   As controller-1 VM boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

    system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | None         | None        | locked         | disabled    | offline      |
    +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, controller-1 to
   reboot, and controller-1 to show as locked/disabled/online in
   :command:`system host-list`. This can take 5-10 minutes, depending on the
   performance of the host machine.

   ::

    system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | controller-1 | controller  | locked         | disabled    | online       |
    +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

On virtual controller-0:

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Note that the MGMT interface is partially set up
   automatically by the network install procedure.

   ::

      OAM_IF=enp7s1
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export COMPUTE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   ::

    echo ">>> Add OSDs to primary tier"
    system host-disk-list controller-1
    system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
    system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

    system host-label-assign controller-1 openstack-control-plane=enabled
    system host-label-assign controller-1 openstack-compute-node=enabled
    system host-label-assign controller-1 openvswitch=enabled
    system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks (a
   verification sketch follows this list):

   ::

      export COMPUTE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
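
To confirm that the volume group and physical volume were created, the
following sketch assumes the :command:`system host-lvg-list` and
:command:`system host-pv-list` commands are available in this release:

::

   # Expect a 'nova-local' volume group backed by the new partition.
   system host-lvg-list controller-1
   system host-pv-list controller-1
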
-------------------
 | 
			
		||||
Unlock controller-1
-------------------

Unlock virtual controller-1 to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,21 +0,0 @@

============================================
Virtual All-in-one Simplex Installation R2.0
============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_environ
   aio_simplex_install_kubernetes

@@ -1,52 +0,0 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual All-in-one Simplex deployment
configuration.

#. Prepare virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * simplex-controller-0

   The following command will start (virtually power on):

   * The 'simplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c simplex -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not strictly
   required, so you can safely ignore these errors and continue.

@@ -1,284 +0,0 @@

==============================================
Install StarlingX Kubernetes on Virtual AIO-SX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_simplex_environ`, the controller-0 virtual server
'simplex-controller-0' was started by the :command:`setup_configuration.sh`
command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus and ensure you are at the
   first installer menu. To detach from the virsh console later, press Ctrl+].

::

  virsh console simplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

    Login: sysadmin
    Password:
    Changing password for sysadmin.
    (current) UNIX Password: sysadmin
    New Password:
    (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

    export CONTROLLER0_OAM_CIDR=10.10.10.3/24
    export DEFAULT_OAM_GATEWAY=10.10.10.1
    sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
    sudo ip link set up dev enp7s1
    sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
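   A quick optional sanity check for external reachability, reusing the
   example gateway above and one public address (substitute your own values
   where your network differs):

   ::

    # Confirm the OAM gateway and an external address are reachable.
    ping -c 3 $DEFAULT_OAM_GATEWAY
    ping -c 3 8.8.8.8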
#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (follow the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the
     example below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
        - 8.8.8.8
        - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r2_release/ansible_bootstrap_configs`
   for additional Ansible bootstrap configurations for advanced scenarios.
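   If you hand-edited the override file, a quick syntax check before running
   the playbook can save a failed bootstrap. A minimal sketch, assuming
   python3 with PyYAML is available on the controller:

   ::

    # Run from $HOME, where localhost.yml was created above.
    python3 -c "import yaml; yaml.safe_load(open('localhost.yml'))" && echo "localhost.yml parses cleanly"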
#. Run the Ansible bootstrap playbook:

   ::

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

     OAM_IF=enp7s1
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

    system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

    DATA0IF=eth1000
    DATA1IF=eth1001
    export COMPUTE=controller-0
    PHYSNET0='physnet0'
    PHYSNET1='physnet1'
    SPL=/tmp/tmp-system-port-list
    SPIL=/tmp/tmp-system-host-if-list
    system host-port-list ${COMPUTE} --nowrap > ${SPL}
    system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
    # Resolve the PCI address, port UUID, port name, and interface UUID
    # for each data interface from the captured listings.
    DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
    DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
    DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
    DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
    DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
    DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
    DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
    DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

    # Create the data networks before referencing them below.
    system datanetwork-add ${PHYSNET0} vlan
    system datanetwork-add ${PHYSNET1} vlan

    # Reclassify the interfaces as data interfaces and attach them
    # to the data networks.
    system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
    system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
    system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
    system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

    # Use /dev/sdb as the Ceph OSD disk.
    system host-disk-list controller-0
    system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
    system host-stor-list controller-0
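The OSD can take a short while to leave the 'configuring' state. A small
polling sketch, using the same pattern the controller-storage guide applies
later in this document:

::

  # Poll until the OSD on controller-0 is no longer configuring.
  while system host-stor-list controller-0 | grep -q configuring; do
     sleep 1
  done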
*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled
     system host-label-assign controller-0 openstack-compute-node=enabled
     system host-label-assign controller-0 openvswitch=enabled
     system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is the containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS. Therefore, simply use the default OVS vSwitch here.

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

     export COMPUTE=controller-0

     echo ">>> Getting root disk info"
     ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
     ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

     echo ">>>> Configuring nova-local"
     NOVA_SIZE=34
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
     system host-lvg-add ${COMPUTE} nova-local
     system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:
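To confirm that the volume group was created and the physical volume attached,
the following checks can help (assuming the ``host-lvg-list`` and
``host-pv-list`` subcommands are available in your build):

::

  system host-lvg-list controller-0
  system host-pv-list controller-0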
-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
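Rather than watching the console, you can poll the host state from a new shell
on controller-0 once it is reachable again; a minimal sketch reusing commands
shown above:

::

  source /etc/platform/openrc
  # Poll until controller-0 reports unlocked/enabled/available.
  until system host-list | grep controller-0 | grep -q available; do
     sleep 30
  done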
----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,56 +0,0 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual Standard with Controller Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * controllerstorage-controller-0
   * controllerstorage-controller-1
   * controllerstorage-worker-0
   * controllerstorage-worker-1

   The following command will start (virtually power on):

   * The 'controllerstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not strictly
   required, so you can safely ignore these errors and continue.

@@ -1,550 +0,0 @@

========================================================================
Install StarlingX Kubernetes on Virtual Standard with Controller Storage
========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`controller_storage_environ`, the controller-0 virtual
server 'controllerstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus and ensure you are at the
   first installer menu.

::

  virsh console controllerstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   sysadmin home directory ($HOME)
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (follow the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the
     example below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r2_release/ansible_bootstrap_configs`
   for additional Ansible bootstrap configurations for advanced scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      # Remove the default network assignments from the loopback interface
      # before reassigning them to the platform interfaces.
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP here.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
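Before continuing, it is worth confirming that the oam, mgmt, and cluster-host
networks landed on the intended interfaces:

::

  # The listing should show oam on $OAM_IF and mgmt plus cluster-host
  # on $MGMT_IF.
  system interface-network-list controller-0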
*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

    system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is the containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS. Therefore, simply use the default OVS vSwitch here.

.. incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'controllerstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start controllerstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console controllerstorage-controller-1

   As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly
   discovered controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'controllerstorage-worker-0' and
   'controllerstorage-worker-1'. Set the personality to 'worker' and assign a
   unique hostname to each.

   For example, start 'controllerstorage-worker-0' from the host:

   ::

      virsh start controllerstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0 (see the polling sketch after
   this list), then set its personality and hostname:

   ::

      system host-update 3 personality=worker hostname=compute-0

   Repeat for 'controllerstorage-worker-1'. On the host:

   ::

      virsh start controllerstorage-worker-1

   Wait for the new host (hostname=None) to be discovered, then:

   ::

      system host-update 4 personality=worker hostname=compute-1

#. Wait for the software installation on controller-1, compute-0, and compute-1
   to complete, for all virtual servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+
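Discovery can be scripted rather than watched manually; a small sketch that
polls until the next network-booted node appears as an undiscovered
(hostname=None) entry:

::

  # Poll until an undiscovered host shows up in the inventory.
  until system host-list | grep -q None; do
     sleep 10
  done
  system host-list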
----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-virt-controller-storage-start:

Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
attached networks. Note that the MGMT interface is partially set up by the
network install procedure.

::

  OAM_IF=enp7s1
  system host-if-modify controller-1 $OAM_IF -c platform
  system interface-network-assign controller-1 $OAM_IF oam
  system interface-network-assign controller-1 mgmt0 cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest/helm-charts later:

::

  system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-virt-controller-storage-start:

Unlock virtual controller-1 to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure compute nodes
-----------------------

On virtual controller-0:

#. Add the third Ceph monitor to compute-0:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

      system ceph-mon-add compute-0

#. Wait for the compute node monitor to complete configuration:

   ::

      system ceph-mon-list
      +--------------------------------------+-------+--------------+------------+------+
      | uuid                                 | ceph_ | hostname     | state      | task |
      |                                      | mon_g |              |            |      |
      |                                      | ib    |              |            |      |
      +--------------------------------------+-------+--------------+------------+------+
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
      +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the compute nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the data networks in sysinv before referencing them
      # in the 'system host-if-modify' commands below.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        # Resolve the PCI address, port UUID, port name, and interface UUID
        # for each data interface from the captured listings.
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done
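   The assigned labels can be verified per node before moving on (assuming
   the ``host-label-list`` subcommand is available in your build):

   ::

      system host-label-list compute-0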
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done
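The ``PARTITION_SIZE`` of 10 GiB above is a minimum suitable for the virtual
environment. Before sizing it differently, you can inspect the disks and any
existing partitions on a node (assuming the ``host-disk-partition-list``
subcommand is available in your build):

::

  system host-disk-list compute-0
  system host-disk-partition-list compute-0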
--------------------
Unlock compute nodes
--------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual compute nodes to bring them into service:

::

  for COMPUTE in compute-0 compute-1; do
     system host-unlock $COMPUTE
  done

The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:

----------------------------
Add Ceph OSDs to controllers
----------------------------

On virtual controller-0:

#. Add OSDs to controller-0:

   ::

      HOST=controller-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      # Add each disk as an OSD on the storage tier, then wait until it
      # leaves the 'configuring' state.
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to controller-1:

   ::

      HOST=controller-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST
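Once both controllers have OSDs, overall cluster health can be checked with the
standard Ceph client (assuming the ``ceph`` CLI is available on the controller):

::

  ceph -s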
----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,58 +0,0 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R2.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R2.0 virtual Standard with Dedicated Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * dedicatedstorage-controller-0
   * dedicatedstorage-controller-1
   * dedicatedstorage-storage-0
   * dedicatedstorage-storage-1
   * dedicatedstorage-worker-0
   * dedicatedstorage-worker-1

   The following command will start (virtually power on):

   * The 'dedicatedstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not strictly
   required, so you can safely ignore these errors and continue.

@@ -1,390 +0,0 @@

=======================================================================
Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage
=======================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R2.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`dedicated_storage_environ`, the controller-0 virtual
server 'dedicatedstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus and ensure you are at the
   first installer menu.

::

  virsh console dedicatedstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
   :end-before: incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-virt-controller-storage-start:
   :end-before: incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
------------------------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'dedicatedstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start dedicatedstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console dedicatedstorage-controller-1

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates software installation on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'dedicatedstorage-storage-0' and 'dedicatedstorage-storage-1'.
   Set the personality to 'storage' and assign a unique hostname to each.

   For example, start 'dedicatedstorage-storage-0' from the host:

   ::

      virsh start dedicatedstorage-storage-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0 (the polling sketch in the
   controller-storage guide above can automate this wait), then:

   ::

      system host-update 3 personality=storage

   Repeat for 'dedicatedstorage-storage-1'. On the host:

   ::

      virsh start dedicatedstorage-storage-1

   Wait for the new host (hostname=None) to be discovered, then:

   ::

      system host-update 4 personality=storage

   This initiates software installation on storage-0 and storage-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'dedicatedstorage-worker-0' and 'dedicatedstorage-worker-1'.
   Set the personality to 'worker' and assign a unique hostname to each.

   For example, start 'dedicatedstorage-worker-0' from the host:

   ::

      virsh start dedicatedstorage-worker-0

   Wait for the new host (hostname=None) to be discovered, then:

   ::

      system host-update 5 personality=worker hostname=compute-0

   Repeat for 'dedicatedstorage-worker-1'. On the host:

   ::

      virsh start dedicatedstorage-worker-1

   Wait for the new host (hostname=None) to be discovered, then:

   ::

      system host-update 6 personality=worker hostname=compute-1

   This initiates software installation on compute-0 and compute-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   compute-0, and compute-1 to complete, for all virtual servers to reboot, and
   for all to show as locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | storage-0    | storage     | locked         | disabled    | online       |
      | 4  | storage-1    | storage     | locked         | disabled    | online       |
      | 5  | compute-0    | compute     | locked         | disabled    | online       |
      | 6  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-virt-controller-storage-start:
   :end-before: incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-virt-controller-storage-start:
   :end-before: incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure storage nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the storage nodes.

   Note that the MGMT interfaces are partially set up by the network install
   procedure.

   ::

      for STORAGE in storage-0 storage-1; do
         system interface-network-assign $STORAGE mgmt0 cluster-host
      done

#. Add OSDs to storage-0:

   ::

      HOST=storage-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
      done

      system host-stor-list $HOST

#. Add OSDs to storage-1:

   ::

      HOST=storage-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
      done

      system host-stor-list $HOST
--------------------
 | 
			
		||||
Unlock storage nodes
 | 
			
		||||
--------------------
 | 
			
		||||
 | 
			
		||||
Unlock virtual storage nodes in order to bring them into service:
 | 
			
		||||
 | 
			
		||||
::
 | 
			
		||||
 | 
			
		||||
  for STORAGE in storage-0 storage-1; do
 | 
			
		||||
     system host-unlock $STORAGE
 | 
			
		||||
  done
 | 
			
		||||
 | 
			
		||||
The storage nodes will reboot in order to apply configuration changes and come
 | 
			
		||||
into service. This can take 5-10 minutes, depending on the performance of the host machine.
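
Once the storage nodes are unlocked and available, you can optionally verify
that the Ceph cluster is healthy and the OSDs have come up. This is a minimal
sketch, assuming the standard Ceph CLI is available on controller-0:

::

   ceph -s
   ceph osd tree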

-----------------------
Configure compute nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the compute nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

   Configure the datanetworks in sysinv, prior to referencing them in the
   :command:`system host-if-modify` command.

   ::

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        # Capture the port and interface inventory for this host.
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        # Resolve PCI addresses, port UUIDs, and interface UUIDs for both data ports.
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        # Set the interfaces to class 'data' and attach them to the data networks.
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        # Find the root disk and create a 10 GB LVM partition on it.
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        # Create the nova-local volume group backed by the new partition.
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done
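
   To confirm the volume group was created on each node, you can list the
   local volume groups. This is a minimal check, assuming the standard sysinv
   CLI:

   ::

      for COMPUTE in compute-0 compute-1; do
         system host-lvg-list $COMPUTE
      done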

--------------------
Unlock compute nodes
--------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
   :end-before: incl-unlock-compute-nodes-virt-controller-storage-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,78 +0,0 @@

The following sections describe system requirements and host setup for a
workstation hosting virtual machine(s) where StarlingX will be deployed.

*********************
Hardware requirements
*********************

The host system should have at least:

* **Processor:** x86_64 (the only supported architecture), with hardware
  virtualization extensions enabled in the BIOS

* **Cores:** 8

* **Memory:** 32 GB RAM

* **Hard Disk:** 500 GB HDD

* **Network:** One network adapter with an active Internet connection

*********************
Software requirements
*********************

The host system should have at least:

* A workstation computer with Ubuntu 16.04 LTS 64-bit

All other required packages will be installed by scripts in the StarlingX
tools repository.

**********
Host setup
**********

Set up the host with the following steps:

#. Update the OS package lists:

   ::

      apt-get update

#. Clone the StarlingX tools repository:

   ::

      apt-get install -y git
      cd $HOME
      git clone https://opendev.org/starlingx/tools.git

#. Install the required packages:

   ::

      cd $HOME/tools/deployment/libvirt/
      bash install_packages.sh
      apt install -y apparmor-profiles
      apt-get install -y ufw
      ufw disable
      ufw status

   .. note::

      On Ubuntu 16.04, if the apparmor-profile modules were installed as shown
      in the example above, you must reboot the server to fully install the
      apparmor-profile modules.
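
   Because hardware virtualization support is a prerequisite (see the hardware
   requirements above), you may also want to verify it before creating any VMs.
   This optional check uses the ``cpu-checker`` package, which is an assumption
   of this example rather than part of the StarlingX tooling:

   ::

      apt-get install -y cpu-checker
      kvm-ok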

#. Get the StarlingX ISO. This can be from a private StarlingX build or from
   the public `CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.

   For example, to get the ISO for the StarlingX R2.0 build, use the command
   shown below:

   ::

      wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso

@@ -1,422 +0,0 @@

================================
Ansible Bootstrap Configurations
================================

This section describes Ansible bootstrap configuration options.

.. contents::
   :local:
   :depth: 1

.. _install-time-only-params:

----------------------------
Install-time-only parameters
----------------------------

Some Ansible bootstrap parameters cannot be changed, or are very difficult to
change, after installation is complete.

Review the set of install-time-only parameters before installation and confirm
that your values for these parameters are correct for the desired installation.

.. note::

   If you notice an incorrect install-time-only parameter value *before you
   unlock controller-0 for the first time*, you can re-run the Ansible bootstrap
   playbook with updated override values and the updated values will take effect.
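
For example, after correcting the values in your override file (typically
``$HOME/localhost.yml``), simply re-run the playbook:

::

   ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml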

****************************
Install-time-only parameters
****************************

**System Properties**

* ``system_mode``
* ``distributed_cloud_role``

**Network Properties**

* ``pxeboot_subnet``
* ``pxeboot_start_address``
* ``pxeboot_end_address``
* ``management_subnet``
* ``management_start_address``
* ``management_end_address``
* ``cluster_host_subnet``
* ``cluster_host_start_address``
* ``cluster_host_end_address``
* ``cluster_pod_subnet``
* ``cluster_pod_start_address``
* ``cluster_pod_end_address``
* ``cluster_service_subnet``
* ``cluster_service_start_address``
* ``cluster_service_end_address``
* ``management_multicast_subnet``
* ``management_multicast_start_address``
* ``management_multicast_end_address``

**Docker Proxies**

* ``docker_http_proxy``
* ``docker_https_proxy``
* ``docker_no_proxy``

**Docker Registry Overrides**

* ``docker_registries``

  * ``k8s.gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``quay.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.elastic.co``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``defaults``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

**Certificates**

* ``k8s_root_ca_cert``
* ``k8s_root_ca_key``

**Kubernetes Parameters**

* ``apiserver_oidc``

  * ``client_id``
  * ``issuer_url``
  * ``username_claim``

----
IPv6
----

If you are using IPv6, provide IPv6 configuration overrides for the Ansible
bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be
updated to IPv6 addressing.

Example IPv6 override values are shown below:

::

   dns_servers:
   - 2001:4860:4860::8888
   - 2001:4860:4860::8844
   pxeboot_subnet: 169.254.202.0/24
   management_subnet: 2001:db8:2::/64
   cluster_host_subnet: 2001:db8:3::/64
   cluster_pod_subnet: 2001:db8:4::/64
   cluster_service_subnet: 2001:db8:4::/112
   external_oam_subnet: 2001:db8:1::/64
   external_oam_gateway_address: 2001:db8::1
   external_oam_floating_address: 2001:db8::2
   external_oam_node_0_address: 2001:db8::3
   external_oam_node_1_address: 2001:db8::4
   management_multicast_subnet: ff08::1:1:0/124

.. note::

   The `external_oam_node_0_address` and `external_oam_node_1_address`
   parameters are not required for the AIO-SX installation.

----------------
Private registry
----------------

To bootstrap StarlingX you must pull container images for multiple system
services. By default these container images are pulled from the public
registries k8s.gcr.io, gcr.io, quay.io, and docker.io.

It may be required (or desired) to copy the container images to a private
registry and pull the images from the private registry (instead of the public
registries) as part of the StarlingX bootstrap. For example, a private registry
would be required if a StarlingX system was deployed in an air-gapped network
environment.

Use the `docker_registries` structure in the bootstrap overrides file to specify
alternate registries for the public registries from which container images are
pulled. These alternate registries are used during the bootstrapping of
controller-0, and on :command:`system application-apply` of application packages.

The `docker_registries` structure is a map of public registries and the
alternate registry values for each public registry. For each public registry,
the key is the fully scoped registry name of a public registry (for example
"k8s.gcr.io") and the value holds the alternate registry URL and
username/password (if authenticated).

url
   The fully scoped registry name (and optionally namespace/) of the alternate
   registry location from which the images associated with this public registry
   should now be pulled.

   Valid formats for the `url` value are:

   * Domain. For example:

     ::

       example.domain

   * Domain with port. For example:

     ::

       example.domain:5000

   * IPv4 address. For example:

     ::

       1.2.3.4

   * IPv4 address with port. For example:

     ::

       1.2.3.4:5000

   * IPv6 address. For example:

     ::

       FD01::0100

   * IPv6 address with port. For example:

     ::

       [FD01::0100]:5000

username
   The username for logging into the alternate registry, if authenticated.

password
   The password for logging into the alternate registry, if authenticated.

Additional configuration options in the `docker_registries` structure are:

defaults
   A special public registry key which defines common values to be applied to
   all overrideable public registries. If only the `defaults` registry is
   defined, its `url`, `username`, and `password` apply to all registries.

   If values under specific registries are defined, they override the values
   defined in the defaults registry.

   .. note::

      The `defaults` key was formerly called `unified`. It was renamed
      in StarlingX R3.0 and updated semantics were applied.

      This change affects anyone with a StarlingX installation prior to R3.0
      that specifies alternate Docker registries using the `unified` key.

secure
   Specifies whether the registry supports HTTPS (secure) or HTTP (not secure).
   Applies to all alternate registries. A boolean value. The default value is
   True (secure, HTTPS).

.. note::

   The ``secure`` parameter was formerly called ``is_secure_registry``. It was
   renamed in StarlingX R3.0.
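
For example, to pull all images from a single alternate registry that is only
served over HTTP, the `secure` flag can be set under `defaults`. The registry
name below is illustrative:

::

   docker_registries:
     defaults:
       url: my.insecureregistry.io
       secure: False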

If an alternate registry is specified to be secure (using HTTPS), the certificate
used by the registry may not be signed by a well-known Certificate Authority (CA).
This causes the :command:`docker pull` of images from this registry to fail.
Use the `ssl_ca_cert` override to specify the public certificate of the CA that
signed the alternate registry's certificate. This will add the CA as a trusted
CA to the StarlingX system.

ssl_ca_cert
   The `ssl_ca_cert` value is the absolute path of the certificate file. The
   certificate must be in PEM format and the file may contain a single CA
   certificate or multiple CA certificates in a bundle.

The following example will apply `url`, `username`, and `password` to all
registries.

::

   docker_registries:
     defaults:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd

The next example applies `username` and `password` from the defaults registry
to all public registries. `url` is different for each public registry. It
additionally specifies an alternate CA certificate.

::

   docker_registries:
     k8s.gcr.io:
       url: my.k8sregistry.io
     gcr.io:
       url: my.gcrregistry.io
     quay.io:
       url: my.quayregistry.io
     docker.io:
       url: my.dockerregistry.io
     defaults:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd

   ssl_ca_cert: /path/to/ssl_ca_cert_file

------------
Docker proxy
------------

If the StarlingX OAM interface or network is behind an HTTP/HTTPS proxy,
relative to the Docker registries used by StarlingX or applications running on
StarlingX, then Docker within StarlingX must be configured to use these
HTTP/HTTPS proxies.

Use the following configuration overrides to configure your Docker proxy
settings.

docker_http_proxy
   Specify the HTTP proxy URL to use. For example:

   ::

      docker_http_proxy: http://my.proxy.com:1080

docker_https_proxy
   Specify the HTTPS proxy URL to use. For example:

   ::

      docker_https_proxy: https://my.proxy.com:1443

docker_no_proxy
   A no-proxy address list can be provided for registries not on the other side
   of the proxies. This list will be added to the default no-proxy list derived
   from localhost, loopback, management, and OAM floating addresses at run time.
   Each address in the no-proxy list must neither contain a wildcard nor have
   subnet format. For example:

   ::

      docker_no_proxy:
        - 1.2.3.4
        - 5.6.7.8
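
Taken together, a typical proxy configuration in the bootstrap overrides file
might look like the following sketch; the proxy host and addresses are the
placeholder values reused from the examples above:

::

   docker_http_proxy: http://my.proxy.com:1080
   docker_https_proxy: https://my.proxy.com:1443
   docker_no_proxy:
     - 1.2.3.4
     - 5.6.7.8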

--------------------------------------
Kubernetes root CA certificate and key
--------------------------------------

By default the Kubernetes root CA certificate and key are auto-generated and
result in the use of self-signed certificates for the Kubernetes API server. In
the case where self-signed certificates are not acceptable, use the bootstrap
override values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the
certificate and key for the Kubernetes root CA.

k8s_root_ca_cert
   Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert`
   value is the absolute path of the certificate file. The certificate must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_key`. The playbook will not proceed if only one value is
   provided.

k8s_root_ca_key
   Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key`
   value is the absolute path of the key file. The key must be in PEM format
   and the value must be provided as part of a pair with `k8s_root_ca_cert`.
   The playbook will not proceed if only one value is provided.

.. important::

   The default expiry for the generated Kubernetes root CA certificate is 10
   years. Replacing the root CA certificate is an involved process, so the
   custom certificate expiry should be as long as possible. We recommend
   ensuring the root CA certificate has an expiry of at least 5-10 years.
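
As an illustration, a self-managed root CA with a 10-year expiry could be
generated with OpenSSL and then referenced from the overrides file. The file
paths and subject below are placeholders, not StarlingX defaults:

::

   # Generate a 10-year self-signed root CA certificate and key (PEM format).
   openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
     -subj "/CN=my-k8s-root-ca" \
     -keyout /home/sysadmin/k8s_root_ca.key \
     -out /home/sysadmin/k8s_root_ca.crt

   # Then, in the bootstrap overrides file:
   # k8s_root_ca_cert: /home/sysadmin/k8s_root_ca.crt
   # k8s_root_ca_key: /home/sysadmin/k8s_root_ca.key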

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans
   Specifies a list of Subject Alternative Name entries that will be added to the
   Kubernetes API server certificate. Each entry in the list must be an IP address
   or domain name. For example:

   ::

      apiserver_cert_sans:
        - hostname.domain
        - 198.51.100.75

StarlingX automatically updates this parameter to include IP records for the OAM
floating IP and both OAM unit IP addresses.

----------------------------------------------------
OpenID Connect authentication for Kubernetes cluster
----------------------------------------------------

The Kubernetes cluster can be configured to use an external OpenID Connect
:abbr:`IDP (identity provider)`, such as Azure Active Directory, Salesforce, or
Google, for Kubernetes API authentication.

By default, OpenID Connect authentication is disabled. To enable OpenID Connect,
use the following configuration values in the Ansible bootstrap overrides file
to specify the IDP for OpenID Connect:

::

    apiserver_oidc:
      client_id:
      issuer_url:
      username_claim:

When the three required fields of the `apiserver_oidc` parameter are defined,
OpenID Connect is considered active. The values will be used to configure the
Kubernetes cluster to use the specified external OpenID Connect IDP for
Kubernetes API authentication.
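
For instance, a Google-backed configuration might look like the sketch below;
the client ID and username claim are illustrative values, not defaults:

::

    apiserver_oidc:
      client_id: my-oidc-client-id.apps.googleusercontent.com
      issuer_url: https://accounts.google.com
      username_claim: email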

In addition, you will need to configure the external OpenID Connect IDP and any
required OpenID client application according to the specific IDP's documentation.

If not configuring OpenID Connect, all values should be absent from the
configuration file.

.. note::

   Default authentication via service account tokens is always supported,
   even when OpenID Connect authentication is configured.

@@ -1,7 +0,0 @@

.. important::

   Some Ansible bootstrap parameters cannot be changed or are very difficult to
   change after installation is complete.

   Review the set of install-time-only parameters before installation and
   confirm that your values for these parameters are correct for the desired
   installation.

   Refer to :ref:`Ansible install-time-only parameters <install-time-only-params>`
   for details.

@@ -1,26 +0,0 @@

==============================================
Bare metal All-in-one Duplex Installation R3.0
==============================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

The bare metal AIO-DX deployment configuration may be extended with up to four
worker nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_hardware
   aio_duplex_install_kubernetes
   aio_duplex_extend

@@ -1,192 +0,0 @@

=================================
Extend Capacity with Worker Nodes
=================================

This section describes the steps to extend capacity with worker nodes on a
**StarlingX R3.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on worker nodes
--------------------------------

#. Power on the worker node servers and force them to network boot with the
   appropriate BIOS boot options for your particular server.

#. As the worker nodes boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   worker node hosts (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | None         | None        | locked         | disabled    | offline      |
      | 4  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of each host to 'worker':

   ::

      system host-update 3 personality=worker hostname=worker-0
      system host-update 4 personality=worker hostname=worker-1

   This initiates the install of software on the worker nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. Wait for the install of software on the worker nodes to complete, for the
   worker nodes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | worker-0     | worker      | locked         | disabled    | online       |
      | 4  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure worker nodes
----------------------

#. Assign the cluster-host network to the MGMT interface for the worker nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for NODE in worker-0 worker-1; do
         system interface-network-assign $NODE mgmt0 cluster-host
      done

#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for NODE in worker-0 worker-1; do
           system host-label-assign $NODE sriovdp=enabled
        done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        for NODE in worker-0 worker-1; do
           system host-memory-modify $NODE 0 -1G 100
           system host-memory-modify $NODE 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for NODE in worker-0 worker-1; do
        echo "Configuring interface for: $NODE"
        set -ex
        system host-port-list ${NODE} --nowrap > ${SPL}
        system host-if-list -a ${NODE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${NODE} nova-local
        system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done

-------------------
Unlock worker nodes
-------------------

Unlock worker nodes in order to bring them into service:

::

  for NODE in worker-0 worker-1; do
     system host-unlock $NODE
  done

The worker nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

@@ -1,58 +0,0 @@

=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 2                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         | or                                                        |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)|
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|                         |   for VM local ephemeral storage                          |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE                                    |
|                         | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -1,523 +0,0 @@

=================================================
Install StarlingX Kubernetes on Bare Metal AIO-DX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-aio-simplex-start:
   :end-before: incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH, and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration as shown in the example below. Use the OAM IP SUBNET and IP
      ADDRESSing applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall, etc. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

   The image below shows a typical successful run.

   .. figure:: ../figures/starlingx-release3-ansible-bootstrap-simplex.png
      :alt: ansible bootstrap install screen
      :width: 800

      *Figure 3: StarlingX Ansible Bootstrap*

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment.

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
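
   You can review the resulting NTP configuration afterwards; this assumes the
   standard sysinv NTP query command:

   ::

      system ntp-show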
 | 
			
		||||
 | 
			
		||||
#. Configure data interfaces for controller-0. Use the DATA port names, for example
 | 
			
		||||
   eth0, applicable to your deployment environment.
 | 
			
		||||
 | 
			
		||||
   .. important::
 | 
			
		||||
 | 
			
		||||
      This step is **required** for OpenStack.
 | 
			
		||||
 | 
			
		||||
      This step is optional for Kubernetes: Do this step if using SRIOV network
 | 
			
		||||
      attachments in hosted application containers.
 | 
			
		||||
 | 
			
		||||
   For Kubernetes SRIOV network attachments:
 | 
			
		||||
 | 
			
		||||
   * Configure the SRIOV device plugin
 | 
			
		||||
 | 
			
		||||
     ::
 | 
			
		||||
 | 
			
		||||
       system host-label-assign controller-0 sriovdp=enabled
 | 
			
		||||
 | 
			
		||||
   * If planning on running DPDK in containers on this host, configure the number
 | 
			
		||||
     of 1G Huge pages required on both NUMA nodes.
 | 
			
		||||
 | 
			
		||||
     ::
 | 
			
		||||
 | 
			
		||||
       system host-memory-modify controller-0 0 -1G 100
 | 
			
		||||
       system host-memory-modify controller-0 1 -1G 100
 | 
			
		||||
 | 
			
		||||
   For both Kubernetes and OpenStack:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      DATA0IF=<DATA-0-PORT>
 | 
			
		||||
      DATA1IF=<DATA-1-PORT>
 | 
			
		||||
      export NODE=controller-0
 | 
			
		||||
      PHYSNET0='physnet0'
 | 
			
		||||
      PHYSNET1='physnet1'
 | 
			
		||||
      SPL=/tmp/tmp-system-port-list
 | 
			
		||||
      SPIL=/tmp/tmp-system-host-if-list
 | 
			
		||||
      system host-port-list ${NODE} --nowrap > ${SPL}
 | 
			
		||||
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
 | 
			
		||||
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 | 
			
		||||
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 | 
			
		||||
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 | 
			
		||||
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
 | 
			
		||||
 | 
			
		||||
      system datanetwork-add ${PHYSNET0} vlan
 | 
			
		||||
      system datanetwork-add ${PHYSNET1} vlan
 | 
			
		||||
 | 
			
		||||
      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
 | 
			
		||||
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
 | 
			
		||||
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
 | 
			
		||||
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
 | 
			
		||||
 | 
			
		||||
#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
 | 
			
		||||
   to the `sdb` disk:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      echo ">>> Add OSDs to primary tier"
 | 
			
		||||
      system host-disk-list controller-0
 | 
			
		||||
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
 | 
			
		||||
      system host-stor-list controller-0
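
   If you prefer an explicit two-step flow over the awk/xargs pipeline above,
   an equivalent sketch (still assuming ``sdb`` is the target disk) is:

   ::

      DISK_UUID=$(system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}')
      system host-stor-add controller-0 ${DISK_UUID}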

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

       system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.
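
   For example, a proxy can typically be configured post-bootstrap with
   service-parameter commands along the following lines; this is a sketch only,
   with placeholder proxy URLs, so see the linked guide for the authoritative
   procedure:

   ::

      system service-parameter-add platform docker http_proxy http://my.proxy.com:1080
      system service-parameter-add platform docker https_proxy https://my.proxy.com:1443
      system service-parameter-apply platform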


*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-unlock-controller-0-aio-simplex-start:
   :end-before: incl-unlock-controller-0-aio-simplex-end:

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, for controller-1 to
   reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'.

   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment:

   (Note that the MGMT interface is partially set up automatically by the network
   install procedure.)

   ::

      OAM_IF=<OAM-PORT>
      MGMT_IF=<MGMT-PORT>
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: perform it if you are using SRIOV
      network attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        system host-label-assign controller-1 sriovdp=enabled

   * If you plan to run DPDK in containers on this host, configure the number
     of 1G huge pages required on both NUMA nodes:

     ::

        system host-memory-modify controller-1 0 -1G 100
        system host-memory-modify controller-1 1 -1G 100


   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export NODE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-1
      system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
      system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled
      system host-label-assign controller-1 openstack-compute-node=enabled
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled
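
   A quick optional sanity check of the assigned labels:

   ::

      system host-label-list controller-1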

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      export NODE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2
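
   Optionally confirm that the volume group and physical volume were created;
   a minimal sketch:

   ::

      system host-lvg-list ${NODE}
      system host-pv-list ${NODE}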

-------------------
Unlock controller-1
-------------------

Unlock controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

::

 [sysadmin@controller-1 ~(keystone_admin)]$ system host-list
 +----+--------------+-------------+----------------+-------------+--------------+
 | id | hostname     | personality | administrative | operational | availability |
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
 +----+--------------+-------------+----------------+-------------+--------------+

 [sysadmin@controller-1 ~(keystone_admin)]$ system host-show controller-1
 +-----------------------+----------------------------------------------------------------------+
 | Property              | Value                                                                |
 +-----------------------+----------------------------------------------------------------------+
 | action                | none                                                                 |
 | administrative        | unlocked                                                             |
 | availability          | available                                                            |
 | bm_ip                 | None                                                                 |
 | bm_type               | none                                                                 |
 | bm_username           | None                                                                 |
 | boot_device           | /dev/sda                                                             |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
 | clock_synchronization | ntp                                                                  |
 | config_applied        | 19e0dada-c2ac-4faf-a513-b713c07441af                                 |
 | config_status         | None                                                                 |
 | config_target         | 19e0dada-c2ac-4faf-a513-b713c07441af                                 |
 | console               | ttyS0,115200                                                         |
 | created_at            | 2020-04-22T02:42:06.956004+00:00                                     |
 | hostname              | controller-1                                                         |
 | id                    | 2                                                                    |
 | install_output        | text                                                                 |
 | install_state         | completed                                                            |
 | install_state_info    | None                                                                 |
 | inv_state             | inventoried                                                          |
 | invprovision          | provisioned                                                          |
 | location              | {}                                                                   |
 | mgmt_ip               | 10.10.53.12                                                          |
 | mgmt_mac              | a4:bf:01:55:03:bb                                                    |
 | operational           | enabled                                                              |
 | personality           | controller                                                           |
 | reserved              | False                                                                |
 | rootfs_device         | /dev/sda                                                             |
 | serialid              | None                                                                 |
 | software_load         | 20.01                                                                |
 | subfunction_avail     | available                                                            |
 | subfunction_oper      | enabled                                                              |
 | subfunctions          | controller,worker                                                    |
 | task                  |                                                                      |
 | tboot                 | false                                                                |
 | ttys_dcd              | None                                                                 |
 | updated_at            | 2020-04-22T12:20:25.248838+00:00                                     |
 | uptime                | 10587                                                                |
 | uuid                  | 41296fac-5b16-4c52-9296-5b720100f8b5                                 |
 | vim_progress_status   | services-enabled                                                     |
 +-----------------------+----------------------------------------------------------------------+

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt
@@ -1,21 +0,0 @@
===============================================
Bare metal All-in-one Simplex Installation R3.0
===============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_hardware
   aio_simplex_install_kubernetes
@@ -1,510 +0,0 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-SX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-aio-simplex-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'All-in-one Controller Configuration'.
   #. Second menu: Select 'Graphical Console' or 'Serial Console' depending on
      your terminal access to the console port.

      .. figure:: ../figures/starlingx-aio-controller-configuration.png
         :alt: starlingx-controller-configuration

         *Figure 1: StarlingX Controller Configuration*


      .. figure:: ../figures/starlingx-aio-serial-console.png
         :alt: starlingx-serial-console

         *Figure 2: StarlingX Serial Console*

Wait for the non-interactive software installation to complete and the server
to reboot. This can take 5-10 minutes, depending on the performance of the
server.

.. incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image performs DHCP on all interfaces, so the server may
   already have an IP address and external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration with a brief description for each parameter in the file comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment configuration
      as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
      applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

   The image below shows a typical successful run.

   .. figure:: ../figures/starlingx-release3-ansible-bootstrap-simplex.png
      :alt: ansible bootstrap install screen
      :width: 800

      *Figure 3: StarlingX Ansible Bootstrap*

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. At this stage, you can check the controller status; it will be in the
   locked state:

   ::

    [sysadmin@localhost ~(keystone_admin)]$ system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | locked         | disabled    | online       |
    +----+--------------+-------------+----------------+-------------+--------------+

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name that is applicable to your deployment
   environment, for example eth0:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. If the system is a subcloud in a distributed cloud environment, then the mgmt
   and cluster-host networks must be configured on an actual interface
   and not left on the loopback interface.

   .. important::

      Complete this step only if the system is a subcloud in a distributed cloud
      environment!

   For example:

   ::

     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: perform it if you are using SRIOV
      network attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       system host-label-assign controller-0 sriovdp=enabled

   * If you plan to run DPDK in containers on this host, configure the number
     of 1G huge pages required on both NUMA nodes:

     ::

       system host-memory-modify controller-0 0 -1G 100
       system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

     DATA0IF=<DATA-0-PORT>
     DATA1IF=<DATA-1-PORT>
     export NODE=controller-0
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list
     system host-port-list ${NODE} --nowrap > ${SPL}
     system host-if-list -a ${NODE} --nowrap > ${SPIL}
     DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
     DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
     DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
     DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
     DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
     DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
     DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
     DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
     system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
     system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
     system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the `sdb` disk:

   ::

     echo ">>> Add OSDs to primary tier"
     system host-disk-list controller-0
     system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
     system host-stor-list controller-0

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

       system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled
     system host-label-assign controller-0 openstack-compute-node=enabled
     system host-label-assign controller-0 openvswitch=enabled
     system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) configured as the default vSwitch:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK (OVS with the Data Plane
   Development Kit, which is supported only on bare metal hardware) should be
   used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

     system modify --vswitch_type none

   Do not run any vSwitch directly on the host; instead, use the containerized
   OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK, run the following commands:

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequently created nodes will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for compute-labeled worker nodes.

   When using OVS-DPDK, configure vSwitch memory per NUMA node with the following
   command:

   ::

      system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -f vswitch -1G 1 worker-0 0

   VMs created in an OVS-DPDK environment must be configured to use huge pages
   to enable networking, and must use a flavor with the property
   ``hw:mem_page_size=large``.

   Configure the huge pages for VMs in an OVS-DPDK environment with the command:

   ::

      system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify worker-0 0 -1G 10
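
   As noted above, a VM flavor must carry the large page property in this
   environment. A minimal sketch with the standard OpenStack client, assuming
   a hypothetical flavor named ``my-flavor``:

   ::

      openstack flavor set my-flavor --property hw:mem_page_size=large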

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all compute-labeled worker nodes (and/or AIO
      controllers) to apply the change.
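
      A sketch of that lock/unlock cycle for a single hypothetical worker node:

      ::

         system host-lock worker-0
         system host-unlock worker-0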
 | 
			
		||||
 | 
			
		||||
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
 | 
			
		||||
   which is needed for stx-openstack nova ephemeral disks.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
     export NODE=controller-0
 | 
			
		||||
 | 
			
		||||
     echo ">>> Getting root disk info"
 | 
			
		||||
     ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
 | 
			
		||||
     ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
 | 
			
		||||
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"
 | 
			
		||||
 | 
			
		||||
     echo ">>>> Configuring nova-local"
 | 
			
		||||
     NOVA_SIZE=34
 | 
			
		||||
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
 | 
			
		||||
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
 | 
			
		||||
     system host-lvg-add ${NODE} nova-local
 | 
			
		||||
     system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
 | 
			
		||||
     sleep 2
 | 
			
		||||
 | 
			
		||||
.. incl-config-controller-0-openstack-specific-aio-simplex-end:
 | 
			
		||||
 | 
			
		||||
-------------------
 | 
			
		||||
Unlock controller-0
 | 
			
		||||
-------------------
 | 
			
		||||
 | 
			
		||||
.. incl-unlock-controller-0-aio-simplex-start:
 | 
			
		||||
 | 
			
		||||
Unlock controller-0 in order to bring it into service:
 | 
			
		||||
 | 
			
		||||
::
 | 
			
		||||
 | 
			
		||||
  system host-unlock controller-0
 | 
			
		||||
 | 
			
		||||
Controller-0 will reboot in order to apply configuration changes and come into
 | 
			
		||||
service. This can take 5-10 minutes, depending on the performance of the host machine.
 | 
			
		||||
 | 
			
		||||
.. note::
 | 
			
		||||
 | 
			
		||||
   Once the controller comes back up, check the status of controller-0. It should
 | 
			
		||||
   now show "unlocked", "enabled", "available" and "provisioned".
 | 
			
		||||
 | 
			
		||||
::
 | 
			
		||||
 | 
			
		||||
 [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
 | 
			
		||||
 +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
 | id | hostname     | personality | administrative | operational | availability |
 | 
			
		||||
 +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 
			
		||||
 +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
 [sysadmin@controller-0 ~(keystone_admin)]$
 | 
			
		||||
 | 
			
		||||
 ===============================================
 | 
			
		||||
  [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-0
 | 
			
		||||
 +-----------------------+------------------------------------------------------------ ----------+
 | 
			
		||||
 | Property              | Value                                                                |
 | 
			
		||||
 +-----------------------+------------------------------------------------------------ ----------+
 | 
			
		||||
 | action                | none                                                                 |
 | 
			
		||||
 | administrative        | unlocked                                                             |
 | 
			
		||||
 | availability          | available                                                            |
 | 
			
		||||
 | bm_ip                 | None                                                                 |
 | 
			
		||||
 | bm_type               | none                                                                 |
 | 
			
		||||
 | bm_username           | None
 | 
			
		||||
 | boot_device           | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0                           |
 | 
			
		||||
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
 | 
			
		||||
 | clock_synchronization | ntp                                                                  |
 | 
			
		||||
 | config_applied        | 03e22d8b-1b1f-4c52-9500-96afad295d9a                                 |
 | 
			
		||||
 | config_status         | None                                                                 |
 | 
			
		||||
 | config_target         | 03e22d8b-1b1f-4c52-9500-96afad295d9a                                 |
 | 
			
		||||
 | console               | ttyS0,115200                                                         |
 | 
			
		||||
 | created_at            | 2020-03-09T12:34:34.866469+00:00                                     |
 | 
			
		||||
 | hostname              | controller-0                                                         |
 | 
			
		||||
 | id                    | 1                                                                    |
 | 
			
		||||
 | install_output        | text                                                                 |
 | 
			
		||||
 | install_state         | None                                                                 |
 | 
			
		||||
 | install_state_info    | None                                                                 |
 | 
			
		||||
 | inv_state             | inventoried                                                          |
 | 
			
		||||
 | invprovision          | provisioned                                                          |
 | 
			
		||||
 | location              | {}                                                                   |
 | 
			
		||||
 | mgmt_ip               | 192.168.204.2                                                        |
 | 
			
		||||
 | mgmt_mac              | 00:00:00:00:00:00                                                    |
 | 
			
		||||
 | operational           | enabled                                                              |
 | 
			
		||||
 | personality           | controller                                                           |
 | 
			
		||||
 | reserved              | False                                                                |
 | 
			
		||||
 | rootfs_device         | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0                           |
 | 
			
		||||
 | serialid              | None                                                                 |
 | 
			
		||||
 | software_load         | 19.12                                                                |
 | 
			
		||||
 | subfunction_avail     | available                                                            |
 | 
			
		||||
 | subfunction_oper      | enabled                                                              |
 | 
			
		||||
 | subfunctions          | controller,worker                                                    |
 | 
			
		||||
 | task                  |                                                                      |
 | 
			
		||||
 | tboot                 | false                                                                |
 | 
			
		||||
 | ttys_dcd              | None                                                                 |
 | 
			
		||||
 | updated_at            | 2020-03-09T14:10:42.362846+00:00                                     |
 | 
			
		||||
 | uptime                | 991                                                                  |
 | 
			
		||||
 | uuid                  | 66aa842e-84a2-4041-b93e-f0275cde8784                                 |
 | 
			
		||||
 | vim_progress_status   | services-enabled                                                     |
 | 
			
		||||
 +-----------------------+------------------------------------------------------------ ----------+
 | 
			
		||||
 | 
			
		||||
.. incl-unlock-controller-0-aio-simplex-end:
 | 
			
		||||
 | 
			
		||||
----------
 | 
			
		||||
Next steps
 | 
			
		||||
----------
 | 
			
		||||
 | 
			
		||||
.. include:: ../kubernetes_install_next.txt
 | 
			
		||||
@@ -1,22 +0,0 @@
 | 
			
		||||
=============================================================
 | 
			
		||||
Bare metal Standard with Controller Storage Installation R3.0
 | 
			
		||||
=============================================================
 | 
			
		||||
 | 
			
		||||
--------
 | 
			
		||||
Overview
 | 
			
		||||
--------
 | 
			
		||||
 | 
			
		||||
.. include:: ../desc_controller_storage.txt
 | 
			
		||||
 | 
			
		||||
.. include:: ../ipv6_note.txt
 | 
			
		||||
 | 
			
		||||
 | 
			
		||||
------------
 | 
			
		||||
Installation
 | 
			
		||||
------------
 | 
			
		||||
 | 
			
		||||
.. toctree::
 | 
			
		||||
   :maxdepth: 1
 | 
			
		||||
 | 
			
		||||
   controller_storage_hardware
 | 
			
		||||
   controller_storage_install_kubernetes
 | 
			
		||||
@@ -1,56 +0,0 @@
 | 
			
		||||
=====================
 | 
			
		||||
Hardware Requirements
 | 
			
		||||
=====================
 | 
			
		||||
 | 
			
		||||
This section describes the hardware requirements and server preparation for a
 | 
			
		||||
**StarlingX R3.0 bare metal Standard with Controller Storage** deployment
 | 
			
		||||
configuration.
 | 
			
		||||
 | 
			
		||||
.. contents::
 | 
			
		||||
   :local:
 | 
			
		||||
   :depth: 1
 | 
			
		||||
 | 
			
		||||
-----------------------------
 | 
			
		||||
Minimum hardware requirements
 | 
			
		||||
-----------------------------
 | 
			
		||||
 | 
			
		||||
The recommended minimum hardware requirements for bare metal servers for various
 | 
			
		||||
host types are:
 | 
			
		||||
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| Minimum Requirement     | Controller Node             | Worker Node                 |
 | 
			
		||||
+=========================+=============================+=============================+
 | 
			
		||||
| Number of servers       | 2                           | 2-10                        |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
 | 
			
		||||
|                         |   8 cores/socket                                          |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| Minimum memory          | 64 GB                       | 32 GB                       |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| Primary disk            | 500 GB SSD or NVMe (see     | 120 GB (Minimum 10k RPM)    |
 | 
			
		||||
|                         | :doc:`../../nvme_config`)   |                             |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| Additional disks        | - 1 or more 500 GB (min.    | - For OpenStack, recommend  |
 | 
			
		||||
|                         |   10K RPM) for Ceph OSD     |   1 or more 500 GB (min.    |
 | 
			
		||||
|                         | - Recommended, but not      |   10K RPM) for VM local     |
 | 
			
		||||
|                         |   required: 1 or more SSDs  |   ephemeral storage         |
 | 
			
		||||
|                         |   or NVMe drives for Ceph   |                             |
 | 
			
		||||
|                         |   journals (min. 1024 MiB   |                             |
 | 
			
		||||
|                         |   per OSD journal)          |                             |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| Minimum network ports   | - Mgmt/Cluster: 1x10GE      | - Mgmt/Cluster: 1x10GE      |
 | 
			
		||||
|                         | - OAM: 1x1GE                | - Data: 1 or more x 10GE    |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
| BIOS settings           | - Hyper-Threading technology enabled                      |
 | 
			
		||||
|                         | - Virtualization technology enabled                       |
 | 
			
		||||
|                         | - VT for directed I/O enabled                             |
 | 
			
		||||
|                         | - CPU power and performance policy set to performance     |
 | 
			
		||||
|                         | - CPU C state control disabled                            |
 | 
			
		||||
|                         | - Plug & play BMC detection disabled                      |
 | 
			
		||||
+-------------------------+-----------------------------+-----------------------------+
 | 
			
		||||
 | 
			
		||||
--------------------------
 | 
			
		||||
Prepare bare metal servers
 | 
			
		||||
--------------------------
 | 
			
		||||
 | 
			
		||||
.. include:: prep_servers.txt
 | 
			
		||||
@@ -1,773 +0,0 @@
 | 
			
		||||
===========================================================================
 | 
			
		||||
Install StarlingX Kubernetes on Bare Metal Standard with Controller Storage
 | 
			
		||||
===========================================================================
 | 
			
		||||
 | 
			
		||||
This section describes the steps to install the StarlingX Kubernetes platform
 | 
			
		||||
on a **StarlingX R3.0 bare metal Standard with Controller Storage** deployment
 | 
			
		||||
configuration.
 | 
			
		||||
 | 
			
		||||
.. contents::
 | 
			
		||||
   :local:
 | 
			
		||||
   :depth: 1
 | 
			
		||||
 | 
			
		||||
-------------------
 | 
			
		||||
Create bootable USB
 | 
			
		||||
-------------------
 | 
			
		||||
 | 
			
		||||
Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
 | 
			
		||||
create a bootable USB with the StarlingX ISO on your system.
 | 
			
		||||
 | 
			
		||||
--------------------------------
 | 
			
		||||
Install software on controller-0
 | 
			
		||||
--------------------------------
 | 
			
		||||
 | 
			
		||||
.. incl-install-software-controller-0-standard-start:
 | 
			
		||||
 | 
			
		||||
#. Insert the bootable USB into a bootable USB port on the host you are
 | 
			
		||||
   configuring as controller-0.
 | 
			
		||||
 | 
			
		||||
#. Power on the host.
 | 
			
		||||
 | 
			
		||||
#. Attach to a console, ensure the host boots from the USB, and wait for the
 | 
			
		||||
   StarlingX Installer Menus.
 | 
			
		||||
 | 
			
		||||
#. Make the following menu selections in the installer:
 | 
			
		||||
 | 
			
		||||
   #. First menu: Select 'Standard Controller Configuration'.
 | 
			
		||||
   #. Second menu: Select 'Graphical Console' or 'Serial Console' depending on
 | 
			
		||||
      your terminal access to the console port.
 | 
			
		||||
 | 
			
		||||
      .. figure:: ../figures/starlingx-standard-controller-configuration.png
 | 
			
		||||
         :scale: 47%
 | 
			
		||||
         :alt: starlingx-controller-configuration
 | 
			
		||||
 | 
			
		||||
         *Figure 1: StarlingX Controller Configuration*
 | 
			
		||||
 | 
			
		||||
 | 
			
		||||
      .. figure:: ../figures/starlingx-aio-serial-console.png
 | 
			
		||||
         :alt: starlingx--serial-console
 | 
			
		||||
 | 
			
		||||
         *Figure 2: StarlingX Serial Console*
 | 
			
		||||
 | 
			
		||||
      Wait for non-interactive install of software to complete and server to reboot.
 | 
			
		||||
      This can take 5-10 minutes, depending on the performance of the server.
 | 
			
		||||
 | 
			
		||||
.. incl-install-software-controller-0-standard-end:
 | 
			
		||||
 | 
			
		||||
--------------------------------
 | 
			
		||||
Bootstrap system on controller-0
 | 
			
		||||
--------------------------------
 | 
			
		||||
 | 
			
		||||
.. incl-bootstrap-sys-controller-0-standard-start:
 | 
			
		||||
 | 
			
		||||
#. Login using the username / password of "sysadmin" / "sysadmin".
 | 
			
		||||
 | 
			
		||||
   When logging in for the first time, you will be forced to change the password.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      Login: sysadmin
 | 
			
		||||
      Password:
 | 
			
		||||
      Changing password for sysadmin.
 | 
			
		||||
      (current) UNIX Password: sysadmin
 | 
			
		||||
      New Password:
 | 
			
		||||
      (repeat) New Password:
 | 
			
		||||
 | 
			
		||||
#. Verify and/or configure IP connectivity.
 | 
			
		||||
 | 
			
		||||
   External connectivity is required to run the Ansible bootstrap playbook. The
 | 
			
		||||
   StarlingX boot image will DHCP out all interfaces so the server may have
 | 
			
		||||
   obtained an IP address and have external IP connectivity if a DHCP server is
 | 
			
		||||
   present in your environment. Verify this using the :command:`ip addr` and
 | 
			
		||||
   :command:`ping 8.8.8.8` commands.
 | 
			
		||||
 | 
			
		||||
   Otherwise, manually configure an IP address and default IP route. Use the
 | 
			
		||||
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
 | 
			
		||||
   deployment environment.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
 | 
			
		||||
      sudo ip link set up dev <PORT>
 | 
			
		||||
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
 | 
			
		||||
      ping 8.8.8.8
 | 
			
		||||
 | 
			
		||||
#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.
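
      For example, one way to do this (a simple sketch, assuming the default
      paths listed above):

      ::

        cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml $HOME/localhost.yml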

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration, as shown in the example below. Use the OAM IP subnet and
      IP addressing applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r6_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   scenarios, such as deploying behind a firewall. Refer to
   :doc:`/configuration/docker_proxy_config` for details about Docker proxy
   settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
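
   If your override file is stored somewhere other than ``$HOME``, standard
   Ansible extra-vars syntax can reference it explicitly (a sketch; the
   playbook auto-imports ``$HOME/localhost.yml``, so this is only needed for
   non-default locations):

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml -e @/path/to/my-overrides.yml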

   Wait for the Ansible bootstrap playbook to complete. This can take 5-10
   minutes, depending on the performance of the host machine.

   The image below shows a typical successful run.

   .. figure:: ../figures/starlingx-release3-ansible-bootstrap-simplex.png
      :alt: ansible bootstrap install screen
      :width: 800

      *Figure 3: StarlingX Ansible Bootstrap*

.. incl-bootstrap-sys-controller-0-standard-end:


----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-storage-start:

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that
   are applicable to your deployment environment.

   ::

     OAM_IF=<OAM-PORT>
     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host
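
   Optionally, confirm the resulting network assignments (this uses the same
   command the snippet above queries):

   ::

     system interface-network-list controller-0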

#. Configure NTP servers for network time synchronization:

   ::

     system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
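
   To confirm the configured servers afterwards (a quick check; this assumes
   the ``system ntp-show`` command is available in this release):

   ::

     system ntp-show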

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server; a sketch of one possible post-bootstrap approach follows
   this list.

   #. List the current Docker proxy parameters:

      ::

       system service-parameter-list platform docker

   #. Refer to :doc:`/configuration/docker_proxy_config` for details about
      Docker proxy settings.
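
   For example, a proxy could be added after bootstrap via service parameters
   (a sketch only; the parameter names and the apply step should be checked
   against the linked guide):

   ::

     system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
     system service-parameter-apply platform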

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has the OVS (kernel-based) vSwitch configured as the default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, use OVS-DPDK (OVS with the Data Plane
   Development Kit), which is supported only on bare metal hardware:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch
     function.

   To deploy the default containerized OVS:

   ::

     system modify --vswitch_type none

   Do not run any vSwitch directly on the host; instead, use the containerized
   OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK, run the following commands:

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequently created nodes will
   automatically be assigned 1 vSwitch core on AIO controllers and 2 vSwitch
   cores on compute-labeled worker nodes.

   When using OVS-DPDK, configure the vSwitch memory per NUMA node with the
   following command:

   ::

      system host-memory-modify -f <function> -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -f vswitch -1G 1 worker-0 0

   VMs created in an OVS-DPDK environment must be configured to use huge pages
   to enable networking, and must use a flavor with the property
   ``hw:mem_page_size=large``.
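
   For example, to tag a flavor accordingly (a sketch using the standard
   OpenStack client; the flavor name is illustrative):

   ::

      openstack flavor set my-dpdk-flavor --property hw:mem_page_size=large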

   Configure the huge pages for VMs in an OVS-DPDK environment with the
   command:

   ::

      system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>

   For example:

   ::

      system host-memory-modify -1G 10 worker-0 0

   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all compute-labeled worker nodes (and/or AIO
      controllers) to apply the change.

.. incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-storage-start:

Unlock controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. note::

   Once the controller comes back up, check the status of controller-0. It
   should now show "unlocked", "enabled", "available" and "provisioned".

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-0
 +-----------------------+----------------------------------------------------------------------+
 | Property              | Value                                                                |
 +-----------------------+----------------------------------------------------------------------+
 | action                | none                                                                 |
 | administrative        | unlocked                                                             |
 | availability          | available                                                            |
 | bm_ip                 | None                                                                 |
 | bm_type               | none                                                                 |
 | bm_username           | None                                                                 |
 | boot_device           | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0                           |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
 | clock_synchronization | ntp                                                                  |
 | config_applied        | 4bb28c10-7546-49be-8cf9-afc82fccf6fb                                 |
 | config_status         | None                                                                 |
 | config_target         | 4bb28c10-7546-49be-8cf9-afc82fccf6fb                                 |
 | console               | ttyS0,115200                                                         |
 | created_at            | 2020-04-22T06:03:45.823265+00:00                                     |
 | hostname              | controller-0                                                         |
 | id                    | 1                                                                    |
 | install_output        | text                                                                 |
 | install_state         | None                                                                 |
 | install_state_info    | None                                                                 |
 | inv_state             | inventoried                                                          |
 | invprovision          | provisioned                                                          |
 | location              | {}                                                                   |
 | mgmt_ip               | 10.10.54.11                                                          |
 | mgmt_mac              | a4:bf:01:54:97:96                                                    |
 | operational           | enabled                                                              |
 | personality           | controller                                                           |
 | reserved              | False                                                                |
 | rootfs_device         | /dev/disk/by-path/pci-0000:00:17.0-ata-1.0                           |
 | serialid              | None                                                                 |
 | software_load         | 20.01                                                                |
 | task                  |                                                                      |
 | tboot                 | false                                                                |
 | ttys_dcd              | None                                                                 |
 | updated_at            | 2020-04-22T10:36:37.034403+00:00                                     |
 | uptime                | 15224                                                                |
 | uuid                  | 770f6596-f631-489b-83c0-e64099748243                                 |
 | vim_progress_status   | services-enabled                                                     |
 +-----------------------+----------------------------------------------------------------------+

.. incl-unlock-controller-0-storage-end:

-------------------------------------------------
Install software on controller-1 and worker nodes
-------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list the hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | None         | None        | locked         | disabled    | offline      |
     +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

     system host-update 2 personality=controller

   This initiates the software installation on controller-1. This can take
   5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the worker nodes.
   Set the personality to 'worker' and assign a unique hostname for each.

   For example, power on worker-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

     system host-update 3 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, worker-0, and worker-1
   to complete, for all servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | worker-0     | worker      | locked         | disabled    | online       |
     | 4  | worker-1     | worker      | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-start:

Configure the OAM and MGMT interfaces of controller-1 and specify the attached
networks. Use the OAM and MGMT port names, for example eth0, that are
applicable to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)

::

  OAM_IF=<OAM-PORT>
  MGMT_IF=<MGMT-PORT>
  system host-if-modify controller-1 $OAM_IF -c platform
  system interface-network-assign controller-1 $OAM_IF oam
  system interface-network-assign controller-1 $MGMT_IF cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest and helm-charts later.

::

  system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-start:

Unlock controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-1
 +-----------------------+-----------------------------------------------------------------------+
 | Property              | Value                                                                 |
 +-----------------------+-----------------------------------------------------------------------+
 | action                | none                                                                  |
 | administrative        | unlocked                                                              |
 | availability          | available                                                             |
 | bm_ip                 | None                                                                  |
 | bm_type               | none                                                                  |
 | bm_username           | None                                                                  |
 | boot_device           | /dev/sda                                                              |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Standby'} |
 | clock_synchronization | ntp                                                                   |
 | config_applied        | 4bb28c10-7546-49be-8cf9-afc82fccf6fb                                  |
 | config_status         | None                                                                  |
 | config_target         | 4bb28c10-7546-49be-8cf9-afc82fccf6fb                                  |
 | console               | ttyS0,115200                                                          |
 | created_at            | 2020-04-22T06:45:14.914066+00:00                                      |
 | hostname              | controller-1                                                          |
 | id                    | 4                                                                     |
 | install_output        | text                                                                  |
 | install_state         | completed                                                             |
 | install_state_info    | None                                                                  |
 | inv_state             | inventoried                                                           |
 | invprovision          | provisioned                                                           |
 | location              | {}                                                                    |
 | mgmt_ip               | 10.10.54.12                                                           |
 | mgmt_mac              | a4:bf:01:55:05:eb                                                     |
 | operational           | enabled                                                               |
 | personality           | controller                                                            |
 | reserved              | False                                                                 |
 | rootfs_device         | /dev/sda                                                              |
 | serialid              | None                                                                  |
 | software_load         | 20.01                                                                 |
 | task                  |                                                                       |
 | tboot                 | false                                                                 |
 | ttys_dcd              | None                                                                  |
 | updated_at            | 2020-04-22T10:37:11.563565+00:00                                      |
 | uptime                | 1611                                                                  |
 | uuid                  | c296480a-3303-4e1a-a131-178a0d7d5a6d                                  |
 | vim_progress_status   | services-enabled                                                      |
 +-----------------------+-----------------------------------------------------------------------+

.. incl-unlock-controller-1-end:

----------------------
Configure worker nodes
----------------------

#. Add the third Ceph monitor to a worker node:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

     system ceph-mon-add worker-0

#. Wait for the worker node monitor to complete configuration:

   ::

     system ceph-mon-list
     +--------------------------------------+-------+--------------+------------+------+
     | uuid                                 | ceph_ | hostname     | state      | task |
     |                                      | mon_g |              |            |      |
     |                                      | ib    |              |            |      |
     +--------------------------------------+-------+--------------+------------+------+
     | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
     | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
     | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
     +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the worker nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

     for NODE in worker-0 worker-1; do
        system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

      for NODE in worker-0 worker-1; do
         system host-label-assign ${NODE} sriovdp=enabled
      done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

        for NODE in worker-0 worker-1; do
           system host-memory-modify ${NODE} 0 -1G 100
           system host-memory-modify ${NODE} 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' commands.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for NODE in worker-0 worker-1; do
        echo "Configuring interface for: $NODE"
        set -ex
        # Snapshot the port and interface inventories for this node.
        system host-port-list ${NODE} --nowrap > ${SPL}
        system host-if-list -a ${NODE} --nowrap > ${SPIL}
        # Look up the PCI address, port UUID, port name, and interface UUID
        # for each data port.
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        # Set the interfaces to class 'data' and attach the data networks.
        system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     for NODE in worker-0 worker-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack Nova ephemeral disks.

   ::

     for NODE in worker-0 worker-1; do
       echo "Configuring Nova local for: $NODE"
       ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${NODE} nova-local
       system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done
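
   To confirm the volume group on each node (a quick check; this assumes the
   ``system host-lvg-list`` command is available in this release):

   ::

     system host-lvg-list worker-0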

-------------------
Unlock worker nodes
-------------------

.. incl-unlock-worker-baremetal-start:

Unlock worker nodes in order to bring them into service:

::

  for NODE in worker-0 worker-1; do
     system host-unlock $NODE
  done

The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

.. incl-unlock-worker-baremetal-end:

----------------------------
Add Ceph OSDs to controllers
----------------------------

#. Add OSDs to controller-0. The following example adds OSDs to the ``sdb``
   disk:

   ::

     HOST=controller-0
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        # Poll until the OSD leaves the 'configuring' state.
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

#. Add OSDs to controller-1. The following example adds OSDs to the ``sdb``
   disk:

   ::

     HOST=controller-1
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        # Poll until the OSD leaves the 'configuring' state.
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST
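
Optionally, verify the overall Ceph cluster health from the active controller
(a sketch; ``ceph -s`` is the standard Ceph status command):

::

  ceph -s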

.. note::

   Check the status of the controller and worker nodes. They should now show
   "unlocked", "enabled" and "available".

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
 +----+--------------+-------------+----------------+-------------+--------------+
 | id | hostname     | personality | administrative | operational | availability |
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | worker-0     | worker      | unlocked       | enabled     | available    |
 | 3  | worker-1     | worker      | unlocked       | enabled     | available    |
 | 4  | controller-1 | controller  | unlocked       | enabled     | available    |
 +----+--------------+-------------+----------------+-------------+--------------+

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,22 +0,0 @@

============================================================
Bare metal Standard with Dedicated Storage Installation R3.0
============================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   dedicated_storage_hardware
   dedicated_storage_install_kubernetes

@@ -1,61 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum Requirement | Controller Node           | Storage Node          | Worker Node           |
+=====================+===========================+=======================+=======================+
| Number of servers   | 2                         | 2-9                   | 2-100                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum processor   | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket         |
| class               |                                                                           |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum memory      | 64 GB                     | 64 GB                 | 32 GB                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Primary disk        | 500 GB SSD or NVMe (see   | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|                     | :doc:`../../nvme_config`) |                       |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Additional disks    | None                      | - 1 or more 500 GB    | - For OpenStack,      |
|                     |                           |   (min. 10K RPM) for  |   recommend 1 or more |
|                     |                           |   Ceph OSD            |   500 GB (min. 10K    |
|                     |                           | - Recommended, but    |   RPM) for VM         |
|                     |                           |   not required: 1 or  |   ephemeral storage   |
|                     |                           |   more SSDs or NVMe   |                       |
|                     |                           |   drives for Ceph     |                       |
|                     |                           |   journals (min. 1024 |                       |
|                     |                           |   MiB per OSD         |                       |
|                     |                           |   journal)            |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum network     | - Mgmt/Cluster:           | - Mgmt/Cluster:       | - Mgmt/Cluster:       |
| ports               |   1x10GE                  |   1x10GE              |   1x10GE              |
|                     | - OAM: 1x1GE              |                       | - Data: 1 or more     |
|                     |                           |                       |   x 10GE              |
+---------------------+---------------------------+-----------------------+-----------------------+
| BIOS settings       | - Hyper-Threading technology enabled                                      |
|                     | - Virtualization technology enabled                                       |
|                     | - VT for directed I/O enabled                                             |
|                     | - CPU power and performance policy set to performance                     |
|                     | - CPU C state control disabled                                            |
|                     | - Plug & play BMC detection disabled                                      |
+---------------------+---------------------------+-----------------------+-----------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -1,367 +0,0 @@
==========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Dedicated Storage
==========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-------------------
Create bootable USB
-------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-standard-start:
   :end-before: incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-sys-controller-0-standard-start:
   :end-before: incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-storage-start:
   :end-before: incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-0-storage-start:
   :end-before: incl-unlock-controller-0-storage-end:

-----------------------------------------------------------------
Install software on controller-1, storage nodes, and worker nodes
-----------------------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list the hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | None         | None        | locked         | disabled    | offline      |
     +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

     system host-update 2 personality=controller

   This initiates the software installation on controller-1. This can take
   5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the storage-0 and
   storage-1 servers. Set the personality to 'storage' and assign a unique
   hostname for each.

   For example, power on storage-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

     system host-update 3 personality=storage

   Repeat for storage-1. Power on storage-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 4 personality=storage

   This initiates the software installation on storage-0 and storage-1. This
   can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the worker nodes.
   Set the personality to 'worker' and assign a unique hostname for each.

   For example, power on worker-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

     system host-update 5 personality=worker hostname=worker-0

   Repeat for worker-1. Power on worker-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 6 personality=worker hostname=worker-1

   This initiates the software installation on worker-0 and worker-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   worker-0, and worker-1 to complete, for all servers to reboot, and for all
   to show as locked/disabled/online in 'system host-list'.

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | storage-0    | storage     | locked         | disabled    | online       |
     | 4  | storage-1    | storage     | locked         | disabled    | online       |
     | 5  | worker-0     | worker      | locked         | disabled    | online       |
     | 6  | worker-1     | worker      | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-start:
   :end-before: incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-start:
   :end-before: incl-unlock-controller-1-end:

-----------------------
Configure storage nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the storage nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

     for NODE in storage-0 storage-1; do
        system interface-network-assign $NODE mgmt0 cluster-host
     done

#. Add OSDs to storage-0. The following example adds OSDs to the ``sdb`` disk:

   ::

     HOST=storage-0
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

#. Add OSDs to storage-1. The following example adds OSDs to the ``sdb`` disk:

   ::

     HOST=storage-1
     DISKS=$(system host-disk-list ${HOST})
     TIERS=$(system storage-tier-list ceph_cluster)
     OSDs="/dev/sdb"
     for OSD in $OSDs; do
        system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
        while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
     done

     system host-stor-list $HOST

--------------------
Unlock storage nodes
--------------------

Unlock storage nodes in order to bring them into service:

::

  for STORAGE in storage-0 storage-1; do
     system host-unlock $STORAGE
  done

The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------------------
 | 
			
		||||
Configure worker nodes
 | 
			
		||||
----------------------
 | 
			
		||||
 | 
			
		||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
 | 
			
		||||
 | 
			
		||||
   (Note that the MGMT interfaces are partially set up automatically by the
 | 
			
		||||
   network install procedure.)
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
	for NODE in worker-0 worker-1; do
 | 
			
		||||
	   system interface-network-assign $NODE mgmt0 cluster-host
 | 
			
		||||
	done

#. Configure data interfaces for worker nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

		for NODE in worker-0 worker-1; do
		   system host-label-assign ${NODE} sriovdp=enabled
		done

   * If planning on running DPDK in containers on this host, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

		for NODE in worker-0 worker-1; do
		   system host-memory-modify ${NODE} 0 -1G 100
		   system host-memory-modify ${NODE} 1 -1G 100
		done

   For both Kubernetes and OpenStack:

   ::

		DATA0IF=<DATA-0-PORT>
		DATA1IF=<DATA-1-PORT>
		PHYSNET0='physnet0'
		PHYSNET1='physnet1'
		SPL=/tmp/tmp-system-port-list
		SPIL=/tmp/tmp-system-host-if-list

		# Configure the datanetworks in sysinv, prior to referencing them
		# in the ``system host-if-modify`` command.
		system datanetwork-add ${PHYSNET0} vlan
		system datanetwork-add ${PHYSNET1} vlan

		for NODE in worker-0 worker-1; do
		  echo "Configuring interface for: $NODE"
		  set -ex
		  system host-port-list ${NODE} --nowrap > ${SPL}
		  system host-if-list -a ${NODE} --nowrap > ${SPIL}
		  DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
		  DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
		  DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
		  DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
		  DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
		  DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
		  DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
		  DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
		  system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
		  system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
		  system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
		  system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
		  set +ex
		done
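
   To spot-check the result, you can list the interfaces again; the new
   ``data0`` and ``data1`` interfaces should appear (a verification sketch):

   ::

		system host-if-list -a worker-0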

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

	for NODE in worker-0 worker-1; do
	  system host-label-assign $NODE openstack-compute-node=enabled
	  system host-label-assign $NODE openvswitch=enabled
	  system host-label-assign $NODE sriov=enabled
	done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

	for NODE in worker-0 worker-1; do
	  echo "Configuring Nova local for: $NODE"
	  ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
	  ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
	  PARTITION_SIZE=10
	  NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
	  NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
	  system host-lvg-add ${NODE} nova-local
	  system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
	done
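
   To confirm the partition and volume group, you can list them per node (a
   verification sketch; the new partition may briefly show as "Creating"):

   ::

	system host-disk-partition-list worker-0
	system host-lvg-list worker-0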

-------------------
Unlock worker nodes
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-worker-baremetal-start:
   :end-before: incl-unlock-worker-baremetal-end:

.. note::

   Check the status of the controller, worker, and storage nodes. They should
   now show "unlocked", "enabled", and "available".

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
 +----+--------------+-------------+----------------+-------------+--------------+
 | id | hostname     | personality | administrative | operational | availability |
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | compute-0    | worker      | unlocked       | enabled     | available    |
 | 3  | compute-1    | worker      | unlocked       | enabled     | available    |
 | 4  | controller-1 | controller  | unlocked       | enabled     | available    |
 | 5  | storage-0    | storage     | unlocked       | enabled     | available    |
 | 6  | storage-1    | storage     | unlocked       | enabled     | available    |
 +----+--------------+-------------+----------------+-------------+--------------+

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,51 +0,0 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R3.0 bare metal Ironic** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

* One or more bare metal hosts to serve as Ironic nodes and tenant instance
  nodes.

* BMC support on the bare metal hosts, and controller node connectivity to the
  BMC IP addresses of the bare metal hosts.

For controller nodes:

* An additional NIC port on both controller nodes for connecting to the
  ironic-provisioning-net.

For worker nodes:

* If using a flat data network for the Ironic provisioning network, an
  additional NIC port on one of the worker nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network
  and simply add the new data network to an existing interface on the worker
  node.

* Additional switch ports / configuration for new ports on controller, worker,
  and Ironic nodes, for connectivity to the Ironic provisioning network.

-----------------------------------
BMC configuration of Ironic node(s)
-----------------------------------

Enable BMC and allocate a static IP address, username, and password in the
BIOS settings. For example, set:

IP address
  10.10.10.126

username
  root

password
  test123

@@ -1,392 +0,0 @@
================================
Install Ironic on StarlingX R3.0
================================

This section describes the steps to install Ironic on a standard configuration,
either:

* **StarlingX R3.0 bare metal Standard with Controller Storage** deployment
  configuration

* **StarlingX R3.0 bare metal Standard with Dedicated Storage** deployment
  configuration

.. contents::
   :local:
   :depth: 1

---------------------
Enable Ironic service
---------------------

This section describes the pre-configuration required to enable the Ironic
service. All the commands in this section are for the StarlingX platform.

First acquire administrative privileges:

::

   source /etc/platform/openrc

********************************
Download Ironic deployment image
********************************

The Ironic service requires a deployment image (kernel and ramdisk) which is
used to clean Ironic nodes and install the end-user's image. The cleaning done
by the deployment image wipes the disks and tests connectivity to the Ironic
conductor on the controller nodes via the Ironic Python Agent (IPA).

The Stein Ironic deployment images (**Ironic-kernel** and **Ironic-ramdisk**)
can be found here:

* `Ironic-kernel coreos_production_pxe-stable-stein.vmlinuz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-stable-stein.vmlinuz>`__
* `Ironic-ramdisk coreos_production_pxe_image-oem-stable-stein.cpio.gz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-stable-stein.cpio.gz>`__
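
For example, you could fetch both images onto controller-0 with ``wget`` (a
download sketch; saving to the home directory is an assumption that matches
the paths used in the Glance image steps below):

::

   cd ~
   wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-stable-stein.vmlinuz
   wget https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-stable-stein.cpio.gz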

*******************************************************
Configure Ironic network on deployed standard StarlingX
*******************************************************

#. Add an address pool for the Ironic network. This example uses `ironic-pool`:

   ::

      system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24

#. Add the Ironic platform network. This example uses `ironic-net`:

   ::

      system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false

#. Add the Ironic tenant network. This example uses `ironic-data`:

   .. note::

      The tenant network is not the same as the platform network described in
      the previous step. You can specify any name for the tenant network other
      than 'ironic'. If the name 'ironic' is used, a user override must be
      generated to indicate the tenant network name.

      Refer to section `Generate user Helm overrides`_ for details.

   ::

      system datanetwork-add ironic-data flat

#. Configure the new interfaces (for Ironic) on the controller nodes and assign
   them to the platform network. The host must be locked. This example uses the
   platform network `ironic-net` that was named in a previous step.

   These new interfaces to the controllers are used to connect to the Ironic
   provisioning network:

   **controller-0**

   ::

      system interface-network-assign controller-0 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
      --ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-0 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-0

   **controller-1**

   ::

      system interface-network-assign controller-1 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
      --ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-1 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute-labeled
   worker nodes and assign it to the Ironic data network. This example uses
   the data network `ironic-data` that was named in a previous step.

   ::

      system interface-datanetwork-assign worker-0 eno1 ironic-data
      system host-if-modify -n ironicdata -c data worker-0 eno1

****************************
Generate user Helm overrides
****************************

Ironic Helm Charts are included in the stx-openstack application. By default,
Ironic is disabled.

To enable Ironic, update the following Ironic Helm Chart attributes:

::

   system helm-override-update stx-openstack ironic openstack \
   --set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \
   --set network.pxe.neutron_subnet_gateway=10.10.20.1 \
   --set network.pxe.neutron_provider_network=ironic-data

:command:`network.pxe.neutron_subnet_alloc_start` sets the DHCP start IP
handed to Neutron for Ironic node provisioning, and reserves several IPs for
the platform.

If the data network name for Ironic is changed, modify
:command:`network.pxe.neutron_provider_network` in the command above
accordingly:

::

   --set network.pxe.neutron_provider_network=ironic-data

*******************************
Apply stx-openstack application
*******************************

Re-apply the stx-openstack application to apply the changes to Ironic:

::

   system helm-chart-attribute-modify stx-openstack ironic openstack \
   --enabled true

   system application-apply stx-openstack
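
You can follow the re-apply until the application returns to the ``applied``
status (a monitoring sketch):

::

   watch -n 5 system application-list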

--------------------
Start an Ironic node
--------------------

All the commands in this section are for the OpenStack application with
administrative privileges.

From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

::

   mkdir -p /etc/openstack

   tee /etc/openstack/clouds.yaml << EOF
   clouds:
     openstack_helm:
       region_name: RegionOne
       identity_api_version: 3
       endpoint_type: internalURL
       auth:
         username: 'admin'
         password: 'Li69nux*'
         project_name: 'admin'
         project_domain_name: 'default'
         user_domain_name: 'default'
         auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
   EOF

   export OS_CLOUD=openstack_helm

********************
Create Glance images
********************

#. Create the **ironic-kernel** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe-stable-stein.vmlinuz \
      --disk-format aki \
      --container-format aki \
      --public \
      ironic-kernel

#. Create the **ironic-ramdisk** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \
      --disk-format ari \
      --container-format ari \
      --public \
      ironic-ramdisk

#. Create the end user application image (for example, CentOS):

   ::

      openstack image create \
      --file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \
      --public --disk-format qcow2 \
      --container-format bare centos

*********************
Create an Ironic node
*********************

#. Create a node:

   ::

      openstack baremetal node create --driver ipmi --name ironic-test0

#. Add IPMI information:

   ::

      openstack baremetal node set \
      --driver-info ipmi_address=10.10.10.126 \
      --driver-info ipmi_username=root \
      --driver-info ipmi_password=test123 \
      --driver-info ipmi_terminal_port=623 ironic-test0

#. Set the `ironic-kernel` and `ironic-ramdisk` image driver information on
   this bare metal node:

   ::

      openstack baremetal node set \
      --driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \
      --driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \
      ironic-test0

#. Set resource properties on this bare metal node based on actual Ironic node
   capacities:

   ::

      openstack baremetal node set \
      --property cpus=4 \
      --property cpu_arch=x86_64 \
      --property capabilities="boot_option:local" \
      --property memory_mb=65536 \
      --property local_gb=400 \
      --resource-class bm ironic-test0

#. Add the pxe_template location:

   ::

      openstack baremetal node set --driver-info \
      pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \
      ironic-test0

#. Create a port to identify the specific port used by the Ironic node.
   Substitute **a4:bf:01:2b:3b:c8** with the MAC address for the Ironic node
   port which connects to the Ironic network:

   ::

      openstack baremetal port create \
      --node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \
      --pxe-enabled true a4:bf:01:2b:3b:c8

#. Change the node state to `manage`:

   ::

      openstack baremetal node manage ironic-test0

#. Make the node available for deployment:

   ::

      openstack baremetal node provide ironic-test0

#. Wait for ironic-test0 to reach the `available` provision state:

   ::

      openstack baremetal node show ironic-test0
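
   To watch just the provisioning state while cleaning completes, you can poll
   that single field (a sketch; ``-f value -c provision_state`` are standard
   OpenStack client output-formatting flags):

   ::

      watch -n 10 "openstack baremetal node show ironic-test0 -f value -c provision_state"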

---------------------------------
Deploy an instance on Ironic node
---------------------------------

All the commands in this section are for the OpenStack application, but this
time with *tenant* specific privileges.

#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

   ::

      mkdir -p /etc/openstack

      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'joeuser'
            password: 'mypasswrd'
            project_name: 'intel'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF

      export OS_CLOUD=openstack_helm

#. Create a flavor.

   Set the resource CUSTOM_BM corresponding to **--resource-class bm**:

   ::

      openstack flavor create --ram 4096 --vcpus 4 --disk 400 \
      --property resources:CUSTOM_BM=1 \
      --property resources:VCPU=0 \
      --property resources:MEMORY_MB=0 \
      --property resources:DISK_GB=0 \
      --property capabilities:boot_option='local' \
      bm-flavor

   See `Adding scheduling information
   <https://docs.openstack.org/ironic/latest/install/enrollment.html#adding-scheduling-information>`__
   and `Configure Nova flavors
   <https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__
   for more information.

#. Enable the compute service.

   List the compute services:

   ::

      openstack compute service list

   Set the compute service properties:

   ::

      openstack compute service set --enable controller-0 nova-compute

#. Create an instance.

   .. note::

      The :command:`keypair create` command is optional. It is not required to
      enable a bare metal instance.

   ::

      openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

   Create 2 new servers, one bare metal and one virtual:

   ::

      openstack server create --image centos --flavor bm-flavor \
      --network baremetal --key-name mykey bm

      openstack server create --image centos --flavor m1.small \
      --network baremetal --key-name mykey vm
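
   You can then track both servers until they reach the ACTIVE status (a
   verification sketch):

   ::

      openstack server list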

@@ -1,17 +0,0 @@
Prior to starting the StarlingX installation, the bare metal servers must be
in the following state:

* Physically installed

* Cabled for power

* Cabled for networking

  * Far-end switch ports should be properly configured to realize the
    networking shown in Figure 1.

* All disks wiped

  * This ensures that the servers will boot from either the network or USB
    storage (if present).

* Powered off

@@ -1,23 +0,0 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, worker, and storage nodes.

A Standard with Dedicated Storage configuration provides the following benefits:

* A pool of up to 100 worker nodes
* A 2x node high availability (HA) controller cluster with HA services running
  across the controller nodes in either active/active or active/standby mode
* A storage back-end solution using a 2x to 9x node HA Ceph storage cluster
  that supports a replication factor of two or three
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage
  nodes

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
   :scale: 50%
   :alt: Standard with Dedicated Storage deployment configuration

   *Figure 1: Standard with Dedicated Storage deployment configuration*

@@ -1,310 +0,0 @@
===================================
Distributed Cloud Installation R3.0
===================================

This section describes how to install and configure the StarlingX distributed
cloud deployment.

.. contents::
   :local:
   :depth: 1

--------
Overview
--------

Distributed cloud configuration supports an edge computing solution by
providing central management and orchestration for a geographically
distributed network of StarlingX Kubernetes edge systems/clusters.

The StarlingX distributed cloud implements the OpenStack Edge Computing
Group's MVP `Edge Reference Architecture
<https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures>`_,
specifically the "Distributed Control Plane" scenario.

The StarlingX distributed cloud deployment is designed to meet the needs of
edge-based data centers with centralized orchestration and independent control
planes, and in which Network Function Cloudification (NFC) worker resources
are localized for maximum responsiveness. The architecture features:

- Centralized orchestration of edge cloud control planes.
- Fully synchronized control planes at edge clouds (that is, Kubernetes cluster
  master and nodes), with greater benefits for local services, such as:

  - Reduced network latency.
  - Operational availability, even if northbound connectivity
    to the central cloud is lost.

The system supports a scalable number of StarlingX Kubernetes edge
systems/clusters, which are centrally managed and synchronized over L3
networks from a central cloud. Each edge system is also highly scalable, from
a single-node StarlingX Kubernetes deployment to a full standard cloud
configuration with controller, worker, and storage nodes.

------------------------------
Distributed cloud architecture
------------------------------

A distributed cloud system consists of a central cloud, and one or more
subclouds connected to the System Controller region central cloud over L3
networks, as shown in Figure 1.

- **Central cloud**

  The central cloud provides a *RegionOne* region for managing the physical
  platform of the central cloud and the *SystemController* region for managing
  and orchestrating over the subclouds.

  - **RegionOne**

    In the Horizon GUI, RegionOne is the name of the access mode, or region,
    used to manage the nodes in the central cloud.

  - **SystemController**

    In the Horizon GUI, SystemController is the name of the access mode, or
    region, used to manage the subclouds.

    You can use the System Controller to add subclouds, synchronize select
    configuration data across all subclouds and monitor subcloud operations
    and alarms. System software updates for the subclouds are also centrally
    managed and applied from the System Controller.

    DNS, NTP, and other select configuration settings are centrally managed
    at the System Controller and pushed to the subclouds in parallel to
    maintain synchronization across the distributed cloud.

- **Subclouds**

  The subclouds are StarlingX Kubernetes edge systems/clusters used to host
  containerized applications. Any type of StarlingX Kubernetes configuration
  (including simplex, duplex, or standard with or without storage nodes) can
  be used for a subcloud. The two edge clouds shown in Figure 1 are subclouds.

  Alarms raised at the subclouds are sent to the System Controller for
  central reporting.

.. figure:: ../figures/starlingx-deployment-options-distributed-cloud.png
   :scale: 45%
   :alt: Distributed cloud deployment configuration

   *Figure 1: Distributed cloud deployment configuration*

--------------------
Network requirements
--------------------

Subclouds are connected to the System Controller through both the OAM and the
Management interfaces. Because each subcloud is on a separate L3 subnet, the
OAM, Management, and PXE boot L2 networks are local to the subclouds. They are
not connected via L2 to the central cloud; they are only connected via L3
routing. The settings required to connect a subcloud to the System Controller
are specified when a subcloud is defined. A gateway router is required to
complete the L3 connections, which will provide IP routing between the
subcloud Management and OAM IP subnet and the System Controller Management and
OAM IP subnet, respectively. The System Controller bootstraps the subclouds via
the OAM network, and manages them via the management network. For more
information, see the `Install a Subcloud`_ section later in this guide.

.. note::

    All messaging between System Controllers and Subclouds uses the ``admin``
    REST API service endpoints which, in this distributed cloud environment,
    are all configured for secure HTTPS. Certificates for these HTTPS
    connections are managed internally by StarlingX.

---------------------------------------
Install and provision the central cloud
---------------------------------------

Installing the central cloud is similar to installing a standard
StarlingX Kubernetes system. The central cloud supports either an AIO-duplex
deployment configuration or a standard with dedicated storage nodes deployment
configuration.

To configure controller-0 as a distributed cloud central controller, you must
set certain system parameters during the initial bootstrapping of
controller-0. Set the system parameter *distributed_cloud_role* to
*systemcontroller* in the Ansible bootstrap override file. Also, set the
management network IP address range to exclude IP addresses reserved for
gateway routers providing routing to the subclouds' management subnets.

Procedure:

- Follow the StarlingX R3.0 installation procedures with the extra step noted below:

  - AIO-duplex:
    `Bare metal All-in-one Duplex Installation R3.0 <https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/aio_duplex.html>`_

  - Standard with dedicated storage nodes:
    `Bare metal Standard with Dedicated Storage Installation R3.0 <https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/dedicated_storage.html>`_

- For the step "Bootstrap system on controller-0", add the following
  parameters to the Ansible bootstrap override file.

  .. code:: yaml

     distributed_cloud_role: systemcontroller
     management_start_address: <X.Y.Z.2>
     management_end_address: <X.Y.Z.50>

------------------
Install a subcloud
------------------

At the subcloud location:

1. Physically install and cable all subcloud servers.
2. Physically install the top of rack switch and configure it for the
   required networks.
3. Physically install the gateway routers which will provide IP routing
   between the subcloud OAM and Management subnets and the System Controller
   OAM and management subnets.
4. On the server designated for controller-0, install the StarlingX
   Kubernetes software from USB or a PXE Boot server.

5. Establish an L3 connection to the System Controller by enabling the OAM
   interface (with OAM IP/subnet) on the subcloud controller using the
   ``config_management`` script. This step is for subcloud Ansible bootstrap
   preparation.

   .. note:: This step should **not** use an interface that uses the MGMT
             IP/subnet because the MGMT IP subnet will get moved to the loopback
             address by the Ansible bootstrap playbook during installation.

   Be prepared to provide the following information:

   - Subcloud OAM interface name (for example, enp0s3).
   - Subcloud OAM interface address, in CIDR format (for example, 10.10.10.12/24).

     .. note:: This must match the *external_oam_floating_address* supplied in
               the subcloud's Ansible bootstrap override file.

   - Subcloud gateway address on the OAM network
     (for example, 10.10.10.1). A default value is shown.
   - System Controller OAM subnet (for example, 10.10.10.0/24).

   .. note:: To exit without completing the script, use ``CTRL+C``. Allow a few minutes for
             the script to finish.

   .. note:: The ``config_management`` script in the snippet below configures
             the OAM interface/address/gateway.

   .. code:: sh

        $ sudo config_management
        Enabling interfaces... DONE
        Waiting 120 seconds for LLDP neighbor discovery... Retrieving neighbor details... DONE
        Available interfaces:
        local interface     remote port
        ---------------     ----------
        enp0s3              08:00:27:c4:6c:7a
        enp0s8              08:00:27:86:7a:13
        enp0s9              unknown

        Enter management interface name: enp0s3
        Enter management address CIDR: 10.10.10.12/24
        Enter management gateway address [10.10.10.1]:
        Enter System Controller subnet: 10.10.10.0/24
        Disabling non-management interfaces... DONE
        Configuring management interface... DONE
        RTNETLINK answers: File exists
        Adding route to System Controller... DONE

At the System Controller:

1. Create a ``bootstrap-values.yml`` override file for the subcloud. For
   example:

   .. code:: yaml

      system_mode: duplex
      name: "subcloud1"
      description: "Ottawa Site"
      location: "YOW"

      management_subnet: 192.168.101.0/24
      management_start_address: 192.168.101.2
      management_end_address: 192.168.101.50
      management_gateway_address: 192.168.101.1

      external_oam_subnet: 10.10.10.0/24
      external_oam_gateway_address: 10.10.10.1
      external_oam_floating_address: 10.10.10.12

      systemcontroller_gateway_address: 192.168.204.101

   .. important:: The `management_*` entries in the override file are required
      for all installation types, including AIO-Simplex.

   .. important:: The `management_subnet` must not overlap with that of any
      other subcloud.

   .. note:: The `systemcontroller_gateway_address` is the address of the
             central cloud's management network gateway.

2. Add the subcloud using the CLI command below:

   .. code:: sh

      dcmanager subcloud add --bootstrap-address <ip_address>
      --bootstrap-values <config-file>

   Where:

   - *<ip_address>* is the OAM interface address set earlier on the subcloud.
   - *<config_file>* is the Ansible override configuration file, ``bootstrap-values.yml``,
     created earlier in step 1.

   You will be prompted for the Linux password of the subcloud. This command
   will take 5-10 minutes to complete. You can monitor the progress of the
   subcloud bootstrap through the logs:

   .. code:: sh

      tail -f /var/log/dcmanager/<subcloud name>_bootstrap_<time stamp>.log

3. Confirm that the subcloud was deployed successfully:

   .. code:: sh

      dcmanager subcloud list

      +----+-----------+------------+--------------+---------------+---------+
      | id | name      | management | availability | deploy status | sync    |
      +----+-----------+------------+--------------+---------------+---------+
      | 1  | subcloud1 | unmanaged  | offline      | complete      | unknown |
      +----+-----------+------------+--------------+---------------+---------+

4. Continue provisioning the subcloud system as required using the StarlingX
   R3.0 installation procedures, starting from the 'Configure controller-0'
   step.

   - For AIO-Simplex:
     `Bare metal All-in-one Simplex Installation R3.0 <https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/aio_simplex.html>`_

   - For AIO-Duplex:
     `Bare metal All-in-one Duplex Installation R3.0 <https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/aio_duplex.html>`_

   - For Standard with controller storage:
     `Bare metal Standard with Controller Storage Installation R3.0 <https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/controller_storage.html>`_

   - For Standard with dedicated storage nodes:
     `Bare metal Standard with Dedicated Storage Installation R3.0 <https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/dedicated_storage.html>`_

On the active controller for each subcloud:

#. Add a route from the subcloud to the controller management network to enable
   the subcloud to go online. For each host in the subcloud:

   .. code:: sh

      system host-route-add <host id> <mgmt.interface> \
                            <system controller mgmt.subnet> <prefix> <subcloud mgmt.gateway ip>

   For example:

   .. code:: sh

      system host-route-add 1 enp0s8 192.168.204.0 24 192.168.101.1
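
   To verify the route was added on the host, you can list the configured
   routes (a verification sketch):

   .. code:: sh

      system host-route-list 1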

@@ -1,65 +0,0 @@
===========================
StarlingX R3.0 Installation
===========================

StarlingX provides a pre-defined set of standard
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment options may
be installed in a virtual environment or on bare metal.

-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment
-----------------------------------------------------

.. toctree::
   :maxdepth: 1

   virtual/aio_simplex
   virtual/aio_duplex
   virtual/controller_storage
   virtual/dedicated_storage

------------------------------------------
Install StarlingX Kubernetes on bare metal
------------------------------------------

.. toctree::
   :maxdepth: 1

   bare_metal/aio_simplex
   bare_metal/aio_duplex
   bare_metal/controller_storage
   bare_metal/dedicated_storage
   bare_metal/ironic

.. toctree::
   :hidden:

   ansible_bootstrap_configs

-------------------------------------------------
Install StarlingX Distributed Cloud on bare metal
-------------------------------------------------

.. toctree::
   :maxdepth: 1

   distributed_cloud/index

-----------------
Access Kubernetes
-----------------

.. toctree::
   :maxdepth: 1

   kubernetes_access

--------------------------
Access StarlingX OpenStack
--------------------------

.. toctree::
   :maxdepth: 1

   openstack/index

@@ -1,14 +0,0 @@
.. note::

   By default, StarlingX uses IPv4. To use StarlingX with IPv6:

   * The entire infrastructure and cluster configuration must be IPv6, with the
     exception of the PXE boot network.

   * Not all external servers are reachable via IPv6 addresses (for example,
     Docker registries). Depending on your infrastructure, it may be necessary
     to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6.

   * Refer to the :doc:`/../developer_resources/stx_ipv6_deployment` guide
     for details on how to deploy a NAT64/DNS64 gateway to use StarlingX
     with IPv6.

@@ -1,181 +0,0 @@
================================
Access StarlingX Kubernetes R3.0
================================

Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
Kubernetes and hosted containerized applications.

.. contents::
   :local:
   :depth: 1

----------
Local CLIs
----------

In order to access the StarlingX and Kubernetes commands on controller-0, first
follow these steps:

#. Log in to controller-0 via the console or SSH with a sysadmin/<sysadmin-password>.

#. Acquire Keystone admin and Kubernetes admin credentials:

   ::

	source /etc/platform/openrc

*********************************************
StarlingX system and host management commands
*********************************************

Access StarlingX system and host management commands using the :command:`system`
command. For example:

::

	system host-list

	+----+--------------+-------------+----------------+-------------+--------------+
	| id | hostname     | personality | administrative | operational | availability |
	+----+--------------+-------------+----------------+-------------+--------------+
	| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
	+----+--------------+-------------+----------------+-------------+--------------+

Use the :command:`system help` command for the full list of options.

***********************************
StarlingX fault management commands
***********************************

Access StarlingX fault management commands using the :command:`fm` command, for example:

::

	fm alarm-list

*******************
Kubernetes commands
*******************

Access Kubernetes commands using the :command:`kubectl` command, for example:

::

	kubectl get nodes

	NAME           STATUS   ROLES    AGE     VERSION
	controller-0   Ready    master   5d19h   v1.13.5

See https://kubernetes.io/docs/reference/kubectl/overview/ for details.

-----------
Remote CLIs
-----------

Documentation coming soon.

---
GUI
---

.. note::

   For a virtual installation, run the browser on the host machine.

*********************
StarlingX Horizon GUI
*********************

Access the StarlingX Horizon GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   `\http://<oam-floating-ip-address>:8080`

   Discover your OAM floating IP address with the :command:`system oam-show` command.

#. Log in to Horizon with an admin/<sysadmin-password>.

********************
Kubernetes dashboard
********************

The Kubernetes dashboard is not installed by default.

To install the Kubernetes dashboard, execute the following steps on controller-0:

#. Use the kubernetes-dashboard helm chart from the stable helm repository with
   the override values shown below:

   ::

	cat <<EOF > dashboard-values.yaml
	service:
	  type: NodePort
	  nodePort: 30000

	rbac:
	  create: true
	  clusterAdminRole: true

	serviceAccount:
	  create: true
	  name: kubernetes-dashboard
	EOF

	helm repo update

	helm install stable/kubernetes-dashboard --name dashboard -f dashboard-values.yaml

#. Create an ``admin-user`` service account with ``cluster-admin`` privileges, and
   display its token for logging into the Kubernetes dashboard.

   ::

	cat <<EOF > admin-login.yaml
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  name: admin-user
	  namespace: kube-system
	---
	apiVersion: rbac.authorization.k8s.io/v1
	kind: ClusterRoleBinding
	metadata:
	  name: admin-user
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: cluster-admin
	subjects:
	- kind: ServiceAccount
	  name: admin-user
	  namespace: kube-system
	EOF

	kubectl apply -f admin-login.yaml

	kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Access the Kubernetes dashboard GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   `\https://<oam-floating-ip-address>:30000`.

   Discover your OAM floating IP address with the :command:`system oam-show` command.

#. Log in to the Kubernetes dashboard using the ``admin-user`` token.

---------
REST APIs
---------

List the StarlingX platform-related public REST API endpoints using the
following command:

::

	openstack endpoint list | grep public

Use these URLs as the prefix for the URL target of StarlingX Platform Services'
REST API messages.
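
For example, you could query the platform sysinv API directly (a sketch; the
port 6385 endpoint path is an assumption based on a default configuration):

::

	TOKEN=$(openstack token issue -f value -c id)
	curl -s -H "X-Auth-Token: ${TOKEN}" http://<oam-floating-ip-address>:6385/v1/ihosts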

@@ -1,7 +0,0 @@
Your Kubernetes cluster is now up and running.

For instructions on how to access StarlingX Kubernetes, see
:doc:`../kubernetes_access`.

For instructions on how to install and access StarlingX OpenStack, see
:doc:`../openstack/index`.

@@ -1,331 +0,0 @@
==========================
Access StarlingX OpenStack
==========================

Use local/remote CLIs, GUIs and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications.

.. contents::
   :local:
   :depth: 1

------------------------------
Configure helm endpoint domain
------------------------------

Containerized OpenStack services in StarlingX are deployed behind an ingress
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
The ingress controller routes packets to the specific OpenStack service, such as
the Cinder service, or the Neutron service, by parsing the FQDN in the packet.
For example, `neutron.openstack.svc.cluster.local` is for the Neutron service,
`cinder-api.openstack.svc.cluster.local` is for the Cinder service.

This routing requires that access to OpenStack REST APIs must be via a FQDN
or by using a remote OpenStack CLI that uses the REST APIs. You cannot access
OpenStack REST APIs using an IP address.

FQDNs (such as `cinder-api.openstack.svc.cluster.local`) must be in a DNS
server that is publicly accessible.

.. note::

   There is a way to wild-card a set of FQDNs to the same IP address in a DNS
   server configuration so that you don't need to update the DNS server every
   time an OpenStack service is added. Check your particular DNS server for
   details on how to wild-card a set of FQDNs.

In a "real" deployment, that is, not a lab scenario, you cannot use the default
`openstack.svc.cluster.local` domain name externally. You must set a unique
domain name for your StarlingX system. StarlingX provides the
:command:`system service-parameter-add` command to configure and set the
OpenStack domain name:

::

  system service-parameter-add openstack helm endpoint_domain=<domain_name>

`<domain_name>` should be a fully qualified domain name that you own, such that
you can configure the DNS server that owns `<domain_name>` with the OpenStack
service names underneath the domain.

For example:

::

  system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
  system application-apply stx-openstack

This command updates the helm charts of all OpenStack services and restarts
them. For example, it would change `cinder-api.openstack.svc.cluster.local` to
`cinder-api.my-starlingx-domain.my-company.com`, and so on for all OpenStack
services.

.. note::

   This command also changes the containerized OpenStack Horizon to listen on
   `horizon.my-starlingx-domain.my-company.com:80` instead of the initial
   `<oam-floating-ip>:31000`.

You must configure `{ '*.my-starlingx-domain.my-company.com': --> oam-floating-ip-address }`
in the external DNS server that owns `my-company.com`.
 | 
			
		||||
 | 
			
		||||
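For example, a minimal sketch of the wild-card record in a BIND-style zone
file for `my-company.com` (the zone-file syntax is illustrative; adapt it to
your DNS server, and substitute your actual OAM floating IP address):

::

  ; Map every OpenStack service FQDN under the StarlingX domain
  ; to the OAM floating IP address (10.10.10.2 is a placeholder)
  *.my-starlingx-domain.my-company.com. IN A 10.10.10.2
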
---------
Local CLI
---------

Access OpenStack using the local CLI with one of the following methods.

**Method 1**

You can use this method on either controller, active or standby.

#. Log in to the desired controller via the console or SSH with
   sysadmin/<sysadmin-password> credentials.

   **Do not** use ``source /etc/platform/openrc``.

#. Set the CLI context to the StarlingX OpenStack Cloud Application and set up
   OpenStack admin credentials:

   ::

    sudo su -
    mkdir -p /etc/openstack
    tee /etc/openstack/clouds.yaml << EOF
    clouds:
      openstack_helm:
        region_name: RegionOne
        identity_api_version: 3
        endpoint_type: internalURL
        auth:
          username: 'admin'
          password: '<sysadmin-password>'
          project_name: 'admin'
          project_domain_name: 'default'
          user_domain_name: 'default'
          auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
    EOF
    exit

    export OS_CLOUD=openstack_helm

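To quickly verify these credentials, you can request a test token (an optional
check, not part of the original procedure):

::

    openstack token issue
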
**Method 2**

Use this method to access StarlingX Kubernetes commands and StarlingX OpenStack
commands in the same shell. You can only use this method on the active
controller.

#.  Log in to the active controller via the console or SSH with
    sysadmin/<sysadmin-password> credentials.

#.  Set the CLI context to the StarlingX OpenStack Cloud Application and set up
    OpenStack admin credentials:

    ::

        sed '/export OS_AUTH_URL/c\export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3' /etc/platform/openrc > ~/openrc.os
        source ~/openrc.os

    .. note::

        To switch between StarlingX Kubernetes/Platform credentials and StarlingX
        OpenStack credentials, use ``source /etc/platform/openrc`` or
        ``source ~/openrc.os`` respectively.

**********************
OpenStack CLI commands
**********************

Access OpenStack CLI commands for the StarlingX OpenStack cloud application
using the :command:`openstack` command. For example:

::

        controller-0:~$ export OS_CLOUD=openstack_helm
        controller-0:~$ openstack flavor list
        controller-0:~$ openstack image list

.. note::

    If you are using Method 2 described above, use these commands:

    ::

        controller-0:~$ source ~/openrc.os
        controller-0:~$ openstack flavor list
        controller-0:~$ openstack image list

The image below shows a typical successful run.

.. figure:: ../figures/starlingx-access-openstack-flavorlist.png
   :alt: starlingx-access-openstack-flavorlist
   :scale: 50%

   *Figure 1: StarlingX OpenStack Flavorlist*


.. figure:: ../figures/starlingx-access-openstack-command.png
   :alt: starlingx-access-openstack-command
   :scale: 50%

   *Figure 2: StarlingX OpenStack Commands*


----------
Remote CLI
----------

Documentation coming soon.

---
GUI
---

Access the StarlingX containerized OpenStack Horizon GUI in your browser at the
following address:

::

    http://<oam-floating-ip-address>:31000

Log in to the containerized OpenStack Horizon GUI with admin/<sysadmin-password>
credentials.

---------
REST APIs
---------

This section provides an overview of accessing REST APIs, with examples of
`curl`-based REST API commands.

****************
Public endpoints
****************

Use the `Local CLI`_ to display the OpenStack public REST API endpoints. For example:

::

  openstack endpoint list

The public endpoints will look like:

* `\http://keystone.openstack.svc.cluster.local:80/v3`
* `\http://nova.openstack.svc.cluster.local:80/v2.1/%(tenant_id)s`
* `\http://neutron.openstack.svc.cluster.local:80/`
* `etc.`

If you have set a unique domain name, then the public endpoints will look like:

* `\http://keystone.my-starlingx-domain.my-company.com:80/v3`
* `\http://nova.my-starlingx-domain.my-company.com:80/v2.1/%(tenant_id)s`
* `\http://neutron.my-starlingx-domain.my-company.com:80/`
* `etc.`

Documentation for the OpenStack REST APIs is available at
`OpenStack API Documentation <https://docs.openstack.org/api-quick-start/index.html>`_.

***********
Get a token
***********

The following command will request a Keystone token:

::

    curl -i -H "Content-Type: application/json" -d
    '{ "auth": {
        "identity": {
          "methods": ["password"],
          "password": {
            "user": {
              "name": "admin",
              "domain": { "id": "default" },
              "password": "St8rlingX*"
            }
          }
        },
        "scope": {
          "project": {
            "name": "admin",
            "domain": { "id": "default" }
          }
        }
      }
    }'   http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens

The token will be returned in the "X-Subject-Token" header field of the response:

::

    HTTP/1.1 201 CREATED
    Date: Wed, 02 Oct 2019 18:27:38 GMT
    Content-Type: application/json
    Content-Length: 8128
    Connection: keep-alive
    X-Subject-Token: gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S-RdJhg0fnqWuMGyMUhYUUJSossuUIitrvu2VXYXDNPbnaGzFveOoXxYTPlM6Fgo1aCl6wW85zzuXqT6AsxoCn95OMFhj_HHeYNPTkcyjbuW-HH_rJfhuUXt85iytZ_YAQQUfSXM7N3zAk7Pg
    Vary: X-Auth-Token
    x-openstack-request-id: req-d1bbe060-32f0-4cf1-ba1d-7b38c56b79fb

    {"token": {"is_domain": false,

        ...

You can set an environment variable to hold the token value from the response.
For example:

::

  TOKEN=gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S

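Instead of copying the header by hand, you can capture it directly from the
response. The following is a minimal shell sketch, assuming the same request
body as above has been saved in ``token_request.json`` (a hypothetical file
name used here for illustration):

::

  # -s silences the progress meter, -i keeps the response headers;
  # awk picks the token out of the X-Subject-Token header line
  TOKEN=$(curl -si -H "Content-Type: application/json" -d @token_request.json \
      http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens \
      | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
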
*****************
List Nova flavors
*****************

The following command will request a list of all Nova flavors:

::

    curl -i http://nova.openstack.svc.cluster.local:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool

The list will be returned in the response. For example:

::

      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2529  100  2529    0     0  24187      0 --:--:-- --:--:-- --:--:-- 24317
    {
        "flavors": [
            {
                "id": "04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                "links": [
                    {
                        "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                        "rel": "self"
                    },
                    {
                        "href": "http://nova.openstack.svc.cluster.local/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                        "rel": "bookmark"
                    }
                ],
                "name": "m1.tiny"
            },
            {
                "id": "14c725b1-1658-48ec-90e6-05048d269e89",
                "links": [
                    {
                        "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                        "rel": "self"
                    },
                    {
                        "href": "http://nova.openstack.svc.cluster.local/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                        "rel": "bookmark"
                    }
                ],
                "name": "medium.dpdk"
            },
            {

                ...

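If the ``jq`` utility is available, a compact variant of the same request that
extracts only the flavor names (an optional sketch, not part of the original
procedure):

::

    curl -s http://nova.openstack.svc.cluster.local:80/v2.1/flavors \
        -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" \
        | jq -r '.flavors[].name'
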
@@ -1,16 +0,0 @@

===================
StarlingX OpenStack
===================

This section describes the steps to install and access StarlingX OpenStack.
Other than the OpenStack-specific configurations required in the underlying
StarlingX Kubernetes infrastructure (described in the installation steps for
StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX
is independent of deployment configuration.

.. toctree::
   :maxdepth: 2

   install
   access
   uninstall_delete

@@ -1,75 +0,0 @@

===========================
Install StarlingX OpenStack
===========================

These instructions assume that you have completed the following
OpenStack-specific configuration tasks that are required by the underlying
StarlingX Kubernetes platform:

* All nodes have been labeled appropriately for their OpenStack role(s).
* The vSwitch type has been configured.
* The nova-local volume group has been configured on any host that runs the
  compute function.

--------------------------------------------
Install application manifest and helm-charts
--------------------------------------------

#. Get the latest StarlingX OpenStack application (stx-openstack) manifest and
   helm charts. Use one of the following options:

   *  Private StarlingX build. See :ref:`Build-stx-openstack-app` for details.
   *  Public download from the
      `CENGN StarlingX mirror <http://mirror.starlingx.cengn.ca/mirror/starlingx/>`_.

      After you select a release, helm charts are located in ``centos/outputs/helm-charts``.

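      For example, a download sketch assuming the path layout described above
      (the release directory and tarball version are placeholders; adjust them
      to the release you selected on the mirror):

      ::

        wget http://mirror.starlingx.cengn.ca/mirror/starlingx/<release>/centos/outputs/helm-charts/stx-openstack-<version>-centos-stable-versioned.tgz
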
#. Load the stx-openstack application's package into StarlingX. The tarball
   package contains stx-openstack's Airship Armada manifest and stx-openstack's
   set of helm charts. For example:

   ::

        system application-upload stx-openstack-<version>-centos-stable-versioned.tgz

   This will:

   * Load the Armada manifest and helm charts.
   * Internally manage helm chart override values for each chart.
   * Automatically generate system helm chart overrides for each chart based on
     the current state of the underlying StarlingX Kubernetes platform and the
     recommended StarlingX configuration of OpenStack services.

#. Apply the stx-openstack application in order to bring StarlingX OpenStack
   into service. If your environment is preconfigured with a proxy server, then
   make sure the HTTPS proxy is set before applying stx-openstack.

   ::

        system application-apply stx-openstack

   .. note::

        To set the HTTPS proxy at bootstrap time, refer to
        `Ansible Bootstrap Configurations <https://docs.starlingx.io/deploy_install_guides/r3_release/ansible_bootstrap_configs.html#docker-proxy>`_.

        To set the HTTPS proxy after installation, refer to
        `Docker Proxy Configuration <https://docs.starlingx.io/configuration/docker_proxy_config.html>`_.

#. Wait for the activation of stx-openstack to complete.

   This can take 5-10 minutes, depending on the performance of your host machine.

   Monitor progress with the command:

   ::

        watch -n 5 system application-list

----------
Next steps
----------

Your OpenStack cloud is now up and running.

See :doc:`access` for details on how to access StarlingX OpenStack.

@@ -1,33 +0,0 @@

=============================
Uninstall StarlingX OpenStack
=============================

This section provides additional commands for uninstalling and deleting the
StarlingX OpenStack application.

.. warning::

   Uninstalling the OpenStack application will terminate all OpenStack services.

-----------------------------
Bring down OpenStack services
-----------------------------

Use the system CLI to uninstall the OpenStack application:

::

   system application-remove stx-openstack
   system application-list

---------------------------------------
Delete OpenStack application definition
---------------------------------------

Use the system CLI to delete the OpenStack application definition:

::

   system application-delete stx-openstack
   system application-list

@@ -1,54 +0,0 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual All-in-one Duplex deployment
configuration.

#. Prepare the virtual environment.

   Set up the virtual platform networks for the virtual deployment:

   ::

     bash setup_network.sh

#. Prepare the virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * duplex-controller-0
   * duplex-controller-1

   The following command will start/virtually power on:

   * The 'duplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c duplex -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not
   absolutely required, so you can safely ignore the errors and continue.

@@ -1,532 +0,0 @@

==============================================
Install StarlingX Kubernetes on Virtual AIO-DX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_duplex_environ`, the controller-0 virtual server
'duplex-controller-0' was started by the :command:`setup_configuration.sh`
command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus to ensure you are at the
   first installer menu.

::

  virsh console duplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'.
#. Second menu: Select 'Serial Console'.

   .. figure:: ../figures/starlingx-aio-controller-configuration.png
      :alt: starlingx-controller-configuration

      *Figure 1: StarlingX Controller Configuration*


   .. figure:: ../figures/starlingx-aio-serial-console.png
      :alt: starlingx-serial-console

      *Figure 2: StarlingX Serial Console*

   Wait for the non-interactive install of software to complete and for the server
   to reboot. This can take 5-10 minutes, depending on the performance of the host
   machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

     Login: sysadmin
     Password:
     Changing password for sysadmin.
     (current) UNIX Password: sysadmin
     New Password:
     (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

     export CONTROLLER0_OAM_CIDR=10.10.10.3/24
     export DEFAULT_OAM_GATEWAY=10.10.10.1
     sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
     sudo ip link set up dev enp7s1
     sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

   Check the configured network:

   ::

    localhost:~$ ifconfig
    enp7s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 10.10.10.3  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::5054:ff:feb6:10d6  prefixlen 64  scopeid 0x20<link>
            ether 52:54:00:b6:10:d6  txqueuelen 1000  (Ethernet)
            RX packets 10  bytes 1151 (1.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 94  bytes 27958 (27.3 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

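   Before running the playbook, you can optionally confirm that the override
   file parses as valid YAML (a quick sanity check, not part of the original
   procedure; it assumes PyYAML is available, which Ansible itself requires):

   ::

      python -c 'import yaml; yaml.safe_load(open("localhost.yml"))' && echo "localhost.yml parses OK"
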
#. Run the Ansible bootstrap playbook:

   ::

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

   The image below shows a typical successful run.

   .. figure:: ../figures/starlingx-release3-ansible-bootstrap-simplex.png
      :alt: ansible bootstrap install screen
      :width: 800

      *Figure 3: StarlingX Ansible Bootstrap*

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

    source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      # Identify the ports and interface UUIDs for the two data interfaces
      DATA0IF=eth1000
      DATA1IF=eth1001
      export NODE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      # Create the data networks and assign the data interfaces to them
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

       system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.

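   For example, a sketch of adding a proxy parameter and applying it (the
   parameter names are assumptions here; confirm them against the Docker proxy
   configuration guide referenced above before use):

   ::

      system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
      system service-parameter-apply platform
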
*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-unlock-controller-0-virt-aio-simplex-start:
   :end-before: incl-unlock-controller-0-virt-aio-simplex-end:

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'.
   It will automatically attempt to network boot over the management network:

   ::

      virsh start duplex-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console duplex-controller-1

   As the controller-1 VM boots, a message appears on its console instructing
   you to configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

    system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | None         | None        | locked         | disabled    | offline      |
    +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, controller-1 to
   reboot, and controller-1 to show as locked/disabled/online in 'system host-list'.
   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

    system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | controller-1 | controller  | locked         | disabled    | online       |
    +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

On virtual controller-0:

#. Configure the OAM and MGMT interfaces of controller-1 and specify the
   attached networks. Note that the MGMT interface is partially set up
   automatically by the network install procedure.

   ::

      OAM_IF=enp7s1
      system host-if-modify controller-1 $OAM_IF -c platform
      system interface-network-assign controller-1 $OAM_IF oam
      system interface-network-assign controller-1 mgmt0 cluster-host

#. Configure data interfaces for controller-1.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export NODE=controller-1
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${NODE} --nowrap > ${SPL}
      system host-if-list -a ${NODE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-1 for Ceph:

   ::

    echo ">>> Add OSDs to primary tier"
    system host-disk-list controller-1
    system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
    system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

    system host-label-assign controller-1 openstack-control-plane=enabled
    system host-label-assign controller-1 openstack-compute-node=enabled
    system host-label-assign controller-1 openvswitch=enabled
    system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      export NODE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}

-------------------
Unlock controller-1
-------------------

Unlock virtual controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
 +----+--------------+-------------+----------------+-------------+--------------+
 | id | hostname     | personality | administrative | operational | availability |
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
 +----+--------------+-------------+----------------+-------------+--------------+
 [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-1
 +-----------------------+-----------------------------------------------------------------------+
 | Property              | Value                                                                 |
 +-----------------------+-----------------------------------------------------------------------+
 | action                | none                                                                  |
 | administrative        | unlocked                                                              |
 | availability          | available                                                             |
 | bm_ip                 | None                                                                  |
 | bm_type               | none                                                                  |
 | bm_username           | None                                                                  |
 | boot_device           | /dev/sda                                                              |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Standby'} |
 | clock_synchronization | ntp                                                                   |
 | config_applied        | c9884f2e-cc35-4fe1-bce2-9f2398073832                                  |
 | config_status         | None                                                                  |
 | config_target         | c9884f2e-cc35-4fe1-bce2-9f2398073832                                  |
 | console               | ttyS0,115200                                                          |
 | created_at            | 2020-04-22T10:21:15.029427+00:00                                      |
 | hostname              | controller-1                                                          |
 | id                    | 2                                                                     |
 | install_output        | text                                                                  |
 | install_state         | completed                                                             |
 | install_state_info    | None                                                                  |
 | inv_state             | inventoried                                                           |
 | invprovision          | provisioned                                                           |
 | location              | {}                                                                    |
 | mgmt_ip               | 192.168.204.12                                                        |
 | mgmt_mac              | 52:54:00:b4:fe:50                                                     |
 | operational           | enabled                                                               |
 | personality           | controller                                                            |
 | reserved              | False                                                                 |
 | rootfs_device         | /dev/sda                                                              |
 | serialid              | None                                                                  |
 | software_load         | 20.01                                                                 |
 | subfunction_avail     | available                                                             |
 | subfunction_oper      | enabled                                                               |
 | subfunctions          | controller,worker                                                     |
 | task                  |                                                                       |
 | tboot                 | false                                                                 |
 | ttys_dcd              | None                                                                  |
 | updated_at            | 2020-04-22T18:31:50.184695+00:00                                      |
 | uptime                | 28258                                                                 |
 | uuid                  | 52f357a9-09fb-4c6d-9e1d-9d36f5deb8b9                                  |
 | vim_progress_status   | services-enabled                                                      |
 +-----------------------+-----------------------------------------------------------------------+

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,52 +0,0 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual All-in-one Simplex deployment
configuration.

#. Prepare the virtual environment.

   Set up the virtual platform networks for the virtual deployment:

   ::

     bash setup_network.sh

#. Prepare the virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * simplex-controller-0

   The following command will start/virtually power on:

   * The 'simplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c simplex -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not
   absolutely required, so you can safely ignore the errors and continue.

@@ -1,419 +0,0 @@

==============================================
Install StarlingX Kubernetes on Virtual AIO-SX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_simplex_environ`, the controller-0 virtual server
'simplex-controller-0' was started by the :command:`setup_configuration.sh`
command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus to ensure you are at the
   first installer menu.

::

  virsh console simplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'.
#. Second menu: Select 'Serial Console'.

   .. figure:: ../figures/starlingx-aio-controller-configuration.png
      :alt: starlingx-controller-configuration

      *Figure 1: StarlingX Controller Configuration*


   .. figure:: ../figures/starlingx-aio-serial-console.png
      :alt: starlingx-serial-console

      *Figure 2: StarlingX Serial Console*

   Wait for the non-interactive install of software to complete and for the server
   to reboot. This can take 5-10 minutes, depending on the performance of the host
   machine.

--------------------------------
 | 
			
		||||
Bootstrap system on controller-0
 | 
			
		||||
--------------------------------
 | 
			
		||||
 | 
			
		||||
On virtual controller-0:
 | 
			
		||||
 | 
			
		||||
#. Log in using the username / password of "sysadmin" / "sysadmin".
 | 
			
		||||
   When logging in for the first time, you will be forced to change the password.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
    Login: sysadmin
 | 
			
		||||
    Password:
 | 
			
		||||
    Changing password for sysadmin.
 | 
			
		||||
    (current) UNIX Password: sysadmin
 | 
			
		||||
    New Password:
 | 
			
		||||
    (repeat) New Password:
 | 
			
		||||
 | 
			
		||||
#. External connectivity is required to run the Ansible bootstrap playbook.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
    export CONTROLLER0_OAM_CIDR=10.10.10.3/24
 | 
			
		||||
    export DEFAULT_OAM_GATEWAY=10.10.10.1
 | 
			
		||||
    sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
 | 
			
		||||
    sudo ip link set up dev enp7s1
 | 
			
		||||
    sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
 | 
			
		||||
 | 
			
		||||
   Check the configured network:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
    localhost:~$ ifconfig
 | 
			
		||||
    enp7s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
 | 
			
		||||
    inet 10.10.10.3  netmask 255.255.255.0  broadcast 0.0.0.0
 | 
			
		||||
    inet6 fe80::5054:ff:feb6:10d6  prefixlen 64  scopeid 0x20<link>
 | 
			
		||||
    ether 52:54:00:b6:10:d6  txqueuelen 1000  (Ethernet)
 | 
			
		||||
    RX packets 10  bytes 1151 (1.1 KiB)
 | 
			
		||||
    RX errors 0  dropped 0  overruns 0  frame 0
 | 
			
		||||
    TX packets 94  bytes 27958 (27.3 KiB)
 | 
			
		||||
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
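
   Optionally, confirm outbound connectivity before proceeding; for example
   (assuming ICMP is permitted by the upstream gateway):

   ::

    ping -c 3 8.8.8.8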

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
        - 8.8.8.8
        - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF
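
     As a quick sanity check, you can confirm that the file parses as valid
     YAML before bootstrapping; for example (a minimal sketch, assuming Python
     with PyYAML is available on the controller, as it is wherever Ansible is
     installed):

     ::

        python -c 'import yaml; yaml.safe_load(open("localhost.yml"))' && echo "localhost.yml OK"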

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

   The image below shows a typical successful run.

   .. figure:: ../figures/starlingx-release3-ansible-bootstrap-simplex.png
      :alt: ansible bootstrap install screen
      :width: 800

      *Figure 3: StarlingX Ansible Bootstrap*


----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. At this stage, you can see the controller status; it will be in the locked state.

   ::

    [sysadmin@localhost ~(keystone_admin)]$ system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | locked         | disabled    | online       |
    +----+--------------+-------------+----------------+-------------+--------------+

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

     OAM_IF=enp7s1
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

    system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
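
   You can confirm the configured servers afterwards; for example (assuming the
   ``system ntp-show`` command is available in your release):

   ::

    system ntp-show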

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

    DATA0IF=eth1000
    DATA1IF=eth1001
    export NODE=controller-0
    PHYSNET0='physnet0'
    PHYSNET1='physnet1'
    SPL=/tmp/tmp-system-port-list
    SPIL=/tmp/tmp-system-host-if-list
    system host-port-list ${NODE} --nowrap > ${SPL}
    system host-if-list -a ${NODE} --nowrap > ${SPIL}
    DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
    DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
    DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
    DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
    DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
    DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
    DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
    DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

    system datanetwork-add ${PHYSNET0} vlan
    system datanetwork-add ${PHYSNET1} vlan

    system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
    system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
    system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
    system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
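
   You can verify the resulting data network and interface configuration; for
   example:

   ::

    system datanetwork-list
    system host-if-list ${NODE}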

#. Add an OSD on controller-0 for Ceph:

   ::

    system host-disk-list controller-0
    system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
    system host-stor-list controller-0
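
   After controller-0 is unlocked and the OSD is in service, overall Ceph
   cluster health can also be checked with the standard Ceph client; for
   example (may require root privileges):

   ::

    ceph -s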

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

       system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.
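
      As a rough sketch of what a runtime proxy configuration can look like
      (the parameter names and values below are illustrative; confirm the
      exact names and the apply step in that guide):

      ::

       system service-parameter-add platform docker http_proxy http://my.proxy.com:1080
       system service-parameter-apply platform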


*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled
     system host-label-assign controller-0 openstack-compute-node=enabled
     system host-label-assign controller-0 openvswitch=enabled
     system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, in the virtual environment OVS-DPDK is NOT
   supported, only OVS is supported. Therefore, simply use the default OVS
   vSwitch here.
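
   For reference only: on bare metal deployments that do support it, OVS-DPDK
   is typically selected system-wide before any worker or AIO host is
   unlocked, along the lines of the following (not applicable in this virtual
   environment):

   ::

     system modify --vswitch_type ovs-dpdk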

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     export NODE=controller-0

     echo ">>> Getting root disk info"
     ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
     ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

     echo ">>>> Configuring nova-local"
     NOVA_SIZE=34
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
     system host-lvg-add ${NODE} nova-local
     system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     sleep 2
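
   Partition creation is asynchronous (hence the ``sleep``); you can check
   that the partition has been created before unlocking; for example:

   ::

     system host-disk-partition-list ${NODE}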

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-virt-aio-simplex-start:

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. note::

   Once the controller comes back up, check the status of controller-0. It should
   now show "unlocked", "enabled", "available", and "provisioned".

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
 +----+--------------+-------------+----------------+-------------+--------------+
 | id | hostname     | personality | administrative | operational | availability |
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 +----+--------------+-------------+----------------+-------------+--------------+
 [sysadmin@controller-0 ~(keystone_admin)]$

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-0
 +-----------------------+----------------------------------------------------------------------+
 | Property              | Value                                                                |
 +-----------------------+----------------------------------------------------------------------+
 | action                | none                                                                 |
 | administrative        | unlocked                                                             |
 | availability          | available                                                            |
 | bm_ip                 | None                                                                 |
 | bm_type               | none                                                                 |
 | bm_username           | None                                                                 |
 | boot_device           | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0                           |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
 | clock_synchronization | ntp                                                                  |
 | config_applied        | 03e22d8b-1b1f-4c52-9500-96afad295d9a                                 |
 | config_status         | None                                                                 |
 | config_target         | 03e22d8b-1b1f-4c52-9500-96afad295d9a                                 |
 | console               | ttyS0,115200                                                         |
 | created_at            | 2020-03-09T12:34:34.866469+00:00                                     |
 | hostname              | controller-0                                                         |
 | id                    | 1                                                                    |
 | install_output        | text                                                                 |
 | install_state         | None                                                                 |
 | install_state_info    | None                                                                 |
 | inv_state             | inventoried                                                          |
 | invprovision          | provisioned                                                          |
 | location              | {}                                                                   |
 | mgmt_ip               | 192.168.204.2                                                        |
 | mgmt_mac              | 00:00:00:00:00:00                                                    |
 | operational           | enabled                                                              |
 | personality           | controller                                                           |
 | reserved              | False                                                                |
 | rootfs_device         | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0                           |
 | serialid              | None                                                                 |
 | software_load         | 19.12                                                                |
 | subfunction_avail     | available                                                            |
 | subfunction_oper      | enabled                                                              |
 | subfunctions          | controller,worker                                                    |
 | task                  |                                                                      |
 | tboot                 | false                                                                |
 | ttys_dcd              | None                                                                 |
 | updated_at            | 2020-03-09T14:10:42.362846+00:00                                     |
 | uptime                | 991                                                                  |
 | uuid                  | 66aa842e-84a2-4041-b93e-f0275cde8784                                 |
 | vim_progress_status   | services-enabled                                                     |
 +-----------------------+----------------------------------------------------------------------+

.. incl-unlock-controller-0-virt-aio-simplex-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt
@@ -1,21 +0,0 @@
==========================================================
Virtual Standard with Controller Storage Installation R3.0
==========================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   controller_storage_environ
   controller_storage_install_kubernetes
@@ -1,56 +0,0 @@
============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R3.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R3.0 virtual Standard with Controller Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * controllerstorage-controller-0
   * controllerstorage-controller-1
   * controllerstorage-worker-0
   * controllerstorage-worker-1

   The following command will start/virtually power on:

   * The 'controllerstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso

   If no X-server is present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not strictly
   required; you can safely ignore the errors and continue.
@@ -1,721 +0,0 @@
========================================================================
Install StarlingX Kubernetes on Virtual Standard with Controller Storage
========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R3.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`controller_storage_environ`, the controller-0 virtual
server 'controllerstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate back through previous menus and confirm that
   you are at the first installer menu.

::

  virsh console controllerstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'.
#. Second menu: Select 'Serial Console'.

   .. figure:: ../figures/starlingx-standard-controller-configuration.png
      :scale: 47%
      :alt: starlingx-controller-configuration

      *Figure 1: StarlingX Controller Configuration*


   .. figure:: ../figures/starlingx-aio-serial-console.png
      :alt: starlingx-serial-console

      *Figure 2: StarlingX Serial Console*

   Wait for the non-interactive install of software to complete and for the
   server to reboot. This can take 5-10 minutes, depending on the performance
   of the host machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

   Check the configured network:

   ::

    localhost:~$ ifconfig
    enp7s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 10.10.10.3  netmask 255.255.255.0  broadcast 0.0.0.0
    inet6 fe80::5054:ff:feb6:10d6  prefixlen 64  scopeid 0x20<link>
    ether 52:54:00:b6:10:d6  txqueuelen 1000  (Ethernet)
    RX packets 10  bytes 1151 (1.1 KiB)
    RX errors 0  dropped 0  overruns 0  frame 0
    TX packets 94  bytes 27958 (27.3 KiB)
    TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <admin-password>
        ansible_become_pass: <sysadmin-password>

        # Add these lines to configure Docker to use a proxy server
        # docker_http_proxy: http://my.proxy.com:1080
        # docker_https_proxy: https://my.proxy.com:1443
        # docker_no_proxy:
        #   - 1.2.3.4

        EOF

   Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
   firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details about Docker proxy settings.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

   The image below shows a typical successful run.

   .. figure:: ../figures/starlingx-release3-ansible-bootstrap-simplex.png
      :alt: ansible bootstrap install screen
      :width: 800

      *Figure 3: StarlingX Ansible Bootstrap*

.. incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host
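
   You can review the resulting interface-to-network assignments; for example:

   ::

      system interface-network-list controller-0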

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP here.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. If required, and not already done as part of bootstrap, configure Docker to
   use a proxy server.

   #. List Docker proxy parameters:

      ::

       system service-parameter-list platform docker

   #. Refer to :doc:`/../../configuration/docker_proxy_config` for
      details about Docker proxy settings.

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

    system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, in the virtual environment OVS-DPDK is NOT
   supported, only OVS is supported. Therefore, simply use the default OVS
   vSwitch here.

.. incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-virt-controller-storage-start:

Unlock virtual controller-0 to bring it into service:

::

    system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. note::

   Once the controller comes back up, check the status of controller-0. It should
   now show "unlocked", "enabled", "available", and "provisioned".

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-0
 +-----------------------+----------------------------------------------------------------------+
 | Property              | Value                                                                |
 +-----------------------+----------------------------------------------------------------------+
 | action                | none                                                                 |
 | administrative        | unlocked                                                             |
 | availability          | available                                                            |
 | bm_ip                 | None                                                                 |
 | bm_type               | none                                                                 |
 | bm_username           | None                                                                 |
 | boot_device           | /dev/disk/by-path/pci-0000:00:08.0-ata-1.0                           |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Active'} |
 | clock_synchronization | ntp                                                                  |
 | config_applied        | 783e5df7-cd7c-44a4-9dca-640044e982fd                                 |
 | config_status         | None                                                                 |
 | config_target         | 783e5df7-cd7c-44a4-9dca-640044e982fd                                 |
 | console               | ttyS0,115200                                                         |
 | created_at            | 2020-04-22T06:26:08.656693+00:00                                     |
 | hostname              | controller-0                                                         |
 | id                    | 1                                                                    |
 | install_output        | text                                                                 |
 | install_state         | None                                                                 |
 | install_state_info    | None                                                                 |
 | inv_state             | inventoried                                                          |
 | invprovision          | provisioned                                                          |
 | location              | {}                                                                   |
 | mgmt_ip               | 192.168.204.11                                                       |
 | mgmt_mac              | 52:54:00:80:16:be                                                    |
 | operational           | enabled                                                              |
 | personality           | controller                                                           |
 | reserved              | False                                                                |
 | rootfs_device         | /dev/disk/by-path/pci-0000:00:08.0-ata-1.0                           |
 | serialid              | None                                                                 |
 | software_load         | 20.01                                                                |
 | task                  |                                                                      |
 | tboot                 | false                                                                |
 | ttys_dcd              | None                                                                 |
 | updated_at            | 2020-04-22T18:16:27.731120+00:00                                     |
 | uptime                | 40733                                                                |
 | uuid                  | 4befdadb-4fc0-4c33-a6e9-686d97279619                                 |
 | vim_progress_status   | services-enabled                                                     |
 +-----------------------+----------------------------------------------------------------------+

.. incl-unlock-controller-0-virt-controller-storage-end:

-------------------------------------------------
Install software on controller-1 and worker nodes
-------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'controllerstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start controllerstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console controllerstorage-controller-1

   As the controller-1 VM boots, a message appears on its console instructing
   you to configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly
   discovered controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.
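
   You can monitor the install from controller-0; for example:

   ::

      system host-show controller-1 | grep install_state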

#. While waiting for the previous step to complete, start up and set the
   personality for 'controllerstorage-worker-0' and 'controllerstorage-worker-1'.
   Set the personality to 'worker' and assign a unique hostname to each.

   For example, start 'controllerstorage-worker-0' from the host:

   ::

      virsh start controllerstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 3 personality=worker hostname=worker-0

   Repeat for 'controllerstorage-worker-1'. On the host:

   ::

      virsh start controllerstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 4 personality=worker hostname=worker-1

#. Wait for the software installation on controller-1, worker-0, and worker-1 to
   complete, for all virtual servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | worker-0     | worker      | locked         | disabled    | online       |
      | 4  | worker-1     | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-virt-controller-storage-start:

Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
attached networks. Note that the MGMT interface is partially set up by the
network install procedure.

::

  OAM_IF=enp7s1
  system host-if-modify controller-1 $OAM_IF -c platform
  system interface-network-assign controller-1 $OAM_IF oam
  system interface-network-assign controller-1 mgmt0 cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest/helm-charts later:

::

  system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-virt-controller-storage-start:

Unlock virtual controller-1 to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-show controller-1
 +-----------------------+-----------------------------------------------------------------------+
 | Property              | Value                                                                 |
 +-----------------------+-----------------------------------------------------------------------+
 | action                | none                                                                  |
 | administrative        | unlocked                                                              |
 | availability          | available                                                             |
 | bm_ip                 | None                                                                  |
 | bm_type               | none                                                                  |
 | bm_username           | None                                                                  |
 | boot_device           | /dev/sda                                                              |
 | capabilities          | {u'stor_function': u'monitor', u'Personality': u'Controller-Standby'} |
 | clock_synchronization | ntp                                                                   |
 | config_applied        | 122087b1-e611-4ce2-ba19-89d967b0c197                                  |
 | config_status         | None                                                                  |
 | config_target         | 122087b1-e611-4ce2-ba19-89d967b0c197                                  |
 | console               | ttyS0,115200                                                          |
 | created_at            | 2020-04-22T07:14:41.917528+00:00                                      |
 | hostname              | controller-1                                                          |
 | id                    | 2                                                                     |
 | install_output        | text                                                                  |
 | install_state         | completed                                                             |
 | install_state_info    | None                                                                  |
 | inv_state             | inventoried                                                           |
 | invprovision          | provisioned                                                           |
 | location              | {}                                                                    |
 | mgmt_ip               | 192.168.204.12                                                        |
 | mgmt_mac              | 52:54:00:e1:47:58                                                     |
 | operational           | enabled                                                               |
 | personality           | controller                                                            |
 | reserved              | False                                                                 |
 | rootfs_device         | /dev/sda                                                              |
 | serialid              | None                                                                  |
 | software_load         | 20.01                                                                 |
 | task                  |                                                                       |
 | tboot                 | false                                                                 |
 | ttys_dcd              | None                                                                  |
 | updated_at            | 2020-04-22T18:19:58.168304+00:00                                      |
 | uptime                | 25238                                                                 |
 | uuid                  | 902613c7-da3e-4449-9d7d-41b832420d74                                  |
 | vim_progress_status   | services-enabled                                                      |
 +-----------------------+-----------------------------------------------------------------------+

.. incl-unlock-controller-1-virt-controller-storage-end:
 | 
			
		||||
 | 
			
		||||
----------------------
 | 
			
		||||
Configure worker nodes
 | 
			
		||||
----------------------
 | 
			
		||||
 | 
			
		||||
On virtual controller-0:
 | 
			
		||||
 | 
			
		||||
#. Add the third Ceph monitor to a worker node:
 | 
			
		||||
 | 
			
		||||
   (The first two Ceph monitors are automatically assigned to controller-0 and
 | 
			
		||||
   controller-1.)
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      system ceph-mon-add worker-0
 | 
			
		||||
 | 
			
		||||
#. Wait for the worker node monitor to complete configuration:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      system ceph-mon-list
 | 
			
		||||
      +--------------------------------------+-------+--------------+------------+------+
 | 
			
		||||
      | uuid                                 | ceph_ | hostname     | state      | task |
 | 
			
		||||
      |                                      | mon_g |              |            |      |
 | 
			
		||||
      |                                      | ib    |              |            |      |
 | 
			
		||||
      +--------------------------------------+-------+--------------+------------+------+
 | 
			
		||||
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
 | 
			
		||||
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
 | 
			
		||||
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | worker-0     | configured | None |
 | 
			
		||||
      +--------------------------------------+-------+--------------+------------+------+
 | 
			
		||||
 | 
			
		||||
#. Assign the cluster-host network to the MGMT interface for the worker nodes.
 | 
			
		||||
 | 
			
		||||
   Note that the MGMT interfaces are partially set up automatically by the
 | 
			
		||||
   network install procedure.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      for NODE in worker-0 worker-1; do
 | 
			
		||||
         system interface-network-assign $NODE mgmt0 cluster-host
 | 
			
		||||
      done
 | 
			
		||||
 | 
			
		||||
#. Configure data interfaces for worker nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is
      no virtual NIC supporting SRIOV. For that reason, data interfaces are
      not applicable in the virtual environment for the Kubernetes-only
      scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv before referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for NODE in worker-0 worker-1; do
        echo "Configuring interface for: $NODE"
        set -ex
        system host-port-list ${NODE} --nowrap > ${SPL}
        system host-if-list -a ${NODE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

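   To confirm the result (an optional sanity check; a minimal sketch using
   the same ``system`` CLI), list the interfaces and their data network
   assignments on each worker:

   ::

      # data0/data1 should appear with class 'data', each bound to its physnet
      for NODE in worker-0 worker-1; do
        system host-if-list ${NODE}
        system interface-datanetwork-list ${NODE}
      done
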
*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

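   To confirm the labels were applied (optional; a minimal sketch using the
   same ``system`` CLI), list the labels on each worker:

   ::

      # expect openstack-compute-node, openvswitch and sriov set to enabled
      for NODE in worker-0 worker-1; do
        system host-label-list ${NODE}
      done
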
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks:

   ::

      for NODE in worker-0 worker-1; do
        echo "Configuring Nova local for: $NODE"
        ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${NODE} nova-local
        system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      done

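   To confirm the volume group and physical volume were created (optional; a
   minimal sketch using the same ``system`` CLI), list them on each worker:

   ::

      # nova-local should be listed, backed by the new partition
      # (PARTITION_SIZE above, in GiB)
      for NODE in worker-0 worker-1; do
        system host-lvg-list ${NODE}
        system host-pv-list ${NODE}
      done
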
-------------------
Unlock worker nodes
-------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual worker nodes to bring them into service:

::

  for NODE in worker-0 worker-1; do
     system host-unlock $NODE
  done

The worker nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:

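While the nodes reboot, you can poll their state from controller-0 (an
optional convenience; a minimal sketch assuming the ``watch`` utility is
available in the controller shell):

::

   # re-run host-list periodically until both workers are
   # unlocked/enabled/available
   watch -n 30 system host-list
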
----------------------------
Add Ceph OSDs to controllers
----------------------------

On virtual controller-0:

#. Add OSDs to controller-0:

   ::

      HOST=controller-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # wait for the OSD to leave the 'configuring' state before continuing
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to controller-1:

   ::

      HOST=controller-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         # wait for the OSD to leave the 'configuring' state before continuing
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

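#. Optionally confirm that the new OSDs have joined the Ceph cluster (a
   minimal sketch; the ``ceph`` CLI is available from the controller shell):

   ::

      # expect both OSDs reported as up and in, and the cluster reaching
      # HEALTH_OK once recovery settles
      ceph -s
      ceph osd tree
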
.. note::

   Check the status of the controller and worker nodes; each should now show
   "unlocked", "enabled" and "available".

::

 [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
 +----+--------------+-------------+----------------+-------------+--------------+
 | id | hostname     | personality | administrative | operational | availability |
 +----+--------------+-------------+----------------+-------------+--------------+
 | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
 | 3  | worker-0     | worker      | unlocked       | enabled     | available    |
 | 4  | worker-1     | worker      | unlocked       | enabled     | available    |
 +----+--------------+-------------+----------------+-------------+--------------+

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -1,21 +0,0 @@
=========================================================
Virtual Standard with Dedicated Storage Installation R3.0
=========================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   dedicated_storage_environ
   dedicated_storage_install_kubernetes