Add initial R4 install guides

Add copy of R3 install guides as the starting point for the R4
installation guides.

Change-Id: Ia2d71e05636ab7128eb8b3b05cc184a039077a5d
Signed-off-by: Kristal Dale <kristal.dale@intel.com>

@@ -25,6 +25,11 @@ Upcoming R4.0 release

StarlingX R4.0 is the forthcoming version of StarlingX under development.

.. toctree::
   :maxdepth: 1

   r4_release/index


-----------------
Archived releases

@@ -0,0 +1,422 @@
================================
Ansible Bootstrap Configurations
================================

This section describes Ansible bootstrap configuration options.

.. contents::
   :local:
   :depth: 1


.. _install-time-only-params-r4:

----------------------------
Install-time-only parameters
----------------------------

Some Ansible bootstrap parameters cannot be changed, or are very difficult to
change, after installation is complete.

Review the set of install-time-only parameters before installation and confirm
that your values for these parameters are correct for the desired installation.

.. note::

   If you notice an incorrect install-time-only parameter value *before you
   unlock controller-0 for the first time*, you can re-run the Ansible bootstrap
   playbook with updated override values, and the updated values will take
   effect.
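
   For example, after correcting the values in your ``$HOME/localhost.yml``
   overrides file, re-run the playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml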

****************************
Install-time-only parameters
****************************

**System Properties**

* ``system_mode``
* ``distributed_cloud_role``

**Network Properties**

* ``pxeboot_subnet``
* ``pxeboot_start_address``
* ``pxeboot_end_address``
* ``management_subnet``
* ``management_start_address``
* ``management_end_address``
* ``cluster_host_subnet``
* ``cluster_host_start_address``
* ``cluster_host_end_address``
* ``cluster_pod_subnet``
* ``cluster_pod_start_address``
* ``cluster_pod_end_address``
* ``cluster_service_subnet``
* ``cluster_service_start_address``
* ``cluster_service_end_address``
* ``management_multicast_subnet``
* ``management_multicast_start_address``
* ``management_multicast_end_address``

**Docker Proxies**

* ``docker_http_proxy``
* ``docker_https_proxy``
* ``docker_no_proxy``

**Docker Registry Overrides**

* ``docker_registries``

  * ``k8s.gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``gcr.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``quay.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.io``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``docker.elastic.co``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

  * ``defaults``

    * ``url``
    * ``username``
    * ``password``
    * ``secure``

**Certificates**

* ``k8s_root_ca_cert``
* ``k8s_root_ca_key``

**Kubernetes Parameters**

* ``apiserver_oidc``

  * ``client_id``
  * ``issuer_url``
  * ``username_claim``

----
IPv6
----

If you are using IPv6, provide IPv6 configuration overrides for the Ansible
bootstrap playbook. Note that all addressing, except pxeboot_subnet, should be
updated to IPv6 addressing.

Example IPv6 override values are shown below:

::

   dns_servers:
   - 2001:4860:4860::8888
   - 2001:4860:4860::8844
   pxeboot_subnet: 169.254.202.0/24
   management_subnet: 2001:db8:2::/64
   cluster_host_subnet: 2001:db8:3::/64
   cluster_pod_subnet: 2001:db8:4::/64
   cluster_service_subnet: 2001:db8:4::/112
   external_oam_subnet: 2001:db8:1::/64
   external_oam_gateway_address: 2001:db8::1
   external_oam_floating_address: 2001:db8::2
   external_oam_node_0_address: 2001:db8::3
   external_oam_node_1_address: 2001:db8::4
   management_multicast_subnet: ff08::1:1:0/124

.. note::

   The `external_oam_node_0_address` and `external_oam_node_1_address`
   parameters are not required for the AIO-SX installation.

----------------
Private registry
----------------

To bootstrap StarlingX you must pull container images for multiple system
services. By default these container images are pulled from the public
registries k8s.gcr.io, gcr.io, quay.io, and docker.io.

It may be required (or desired) to copy the container images to a private
registry and pull the images from the private registry (instead of the public
registries) as part of the StarlingX bootstrap. For example, a private registry
would be required if a StarlingX system was deployed in an air-gapped network
environment.

Use the `docker_registries` structure in the bootstrap overrides file to specify
alternate registries for the public registries from which container images are
pulled. These alternate registries are used during the bootstrapping of
controller-0, and on :command:`system application-apply` of application packages.

The `docker_registries` structure is a map of public registries to the
alternate registry values for each public registry. For each entry, the key is
the fully scoped name of a public registry (for example "k8s.gcr.io") and the
value specifies the alternate registry URL and username/password (if
authenticated).

url
   The fully scoped registry name (and optionally namespace/) for the alternate
   registry location where the images associated with this public registry
   should now be pulled from.

   Valid formats for the `url` value are:

   * Domain. For example:

     ::

       example.domain

   * Domain with port. For example:

     ::

       example.domain:5000

   * IPv4 address. For example:

     ::

       1.2.3.4

   * IPv4 address with port. For example:

     ::

       1.2.3.4:5000

   * IPv6 address. For example:

     ::

       FD01::0100

   * IPv6 address with port. For example:

     ::

       [FD01::0100]:5000

username
   The username for logging into the alternate registry, if authenticated.

password
   The password for logging into the alternate registry, if authenticated.


Additional configuration options in the `docker_registries` structure are:

defaults
   A special public registry key which defines common values to be applied to
   all overrideable public registries. If only the `defaults` registry
   is defined, it will apply `url`, `username`, and `password` for all
   registries.

   If values under specific registries are defined, they will override the
   values defined in the defaults registry.

   .. note::

      The `defaults` key was formerly called `unified`. It was renamed
      in StarlingX R3.0 and updated semantics were applied.

      This change affects anyone with a StarlingX installation prior to R3.0 that
      specifies alternate Docker registries using the `unified` key.

secure
   Specifies whether the registries support HTTPS (secure) or HTTP (not
   secure). A boolean value that applies to all alternate registries. The
   default value is True (secure, HTTPS).
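
   For example, a minimal sketch (reusing the hypothetical registry name from
   the examples below) that marks the alternate registries as HTTP rather than
   HTTPS:

   ::

      docker_registries:
        defaults:
          url: my.registry.io
          secure: False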

.. note::

   The ``secure`` parameter was formerly called ``is_secure_registry``. It was
   renamed in StarlingX R3.0.

If an alternate registry is specified to be secure (using HTTPS), the certificate
used by the registry may not be signed by a well-known Certificate Authority (CA).
This causes the :command:`docker pull` of images from this registry to fail.
Use the `ssl_ca_cert` override to specify the public certificate of the CA that
signed the alternate registry’s certificate. This will add the CA as a trusted
CA to the StarlingX system.

ssl_ca_cert
   The `ssl_ca_cert` value is the absolute path of the certificate file. The
   certificate must be in PEM format and the file may contain a single CA
   certificate or multiple CA certificates in a bundle.

The following example will apply `url`, `username`, and `password` to all
registries.

::

   docker_registries:
     defaults:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd

The next example applies `username` and `password` from the defaults registry
to all public registries. `url` is different for each public registry. It
additionally specifies an alternate CA certificate.

::

   docker_registries:
     k8s.gcr.io:
       url: my.k8sregistry.io
     gcr.io:
       url: my.gcrregistry.io
     quay.io:
       url: my.quayregistry.io
     docker.io:
       url: my.dockerregistry.io
     defaults:
       url: my.registry.io
       username: myreguser
       password: myregP@ssw0rd

   ssl_ca_cert: /path/to/ssl_ca_cert_file

------------
Docker proxy
------------

If the StarlingX OAM interface or network is behind an HTTP/HTTPS proxy,
relative to the Docker registries used by StarlingX or applications running on
StarlingX, then Docker within StarlingX must be configured to use these
proxies.

Use the following configuration overrides to configure your Docker proxy
settings.

docker_http_proxy
   Specify the HTTP proxy URL to use. For example:

   ::

      docker_http_proxy: http://my.proxy.com:1080

docker_https_proxy
   Specify the HTTPS proxy URL to use. For example:

   ::

      docker_https_proxy: https://my.proxy.com:1443

docker_no_proxy
   A no-proxy address list can be provided for registries not on the other side
   of the proxies. This list will be added to the default no-proxy list derived
   from localhost, loopback, management, and OAM floating addresses at run time.
   Each address in the no-proxy list must neither contain a wildcard nor have
   subnet format. For example:

   ::

      docker_no_proxy:
        - 1.2.3.4
        - 5.6.7.8
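
These overrides can be combined in the same bootstrap overrides file. For
example, a sketch reusing the illustrative proxy values shown above:

::

   docker_http_proxy: http://my.proxy.com:1080
   docker_https_proxy: https://my.proxy.com:1443
   docker_no_proxy:
     - 1.2.3.4
     - 5.6.7.8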

--------------------------------------
Kubernetes root CA certificate and key
--------------------------------------

By default the Kubernetes Root CA Certificate and Key are auto-generated and
result in the use of self-signed certificates for the Kubernetes API server. In
the case where self-signed certificates are not acceptable, use the bootstrap
override values `k8s_root_ca_cert` and `k8s_root_ca_key` to specify the
certificate and key for the Kubernetes root CA.

k8s_root_ca_cert
   Specifies the certificate for the Kubernetes root CA. The `k8s_root_ca_cert`
   value is the absolute path of the certificate file. The certificate must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_key`. The playbook will not proceed if only one value is provided.

k8s_root_ca_key
   Specifies the key for the Kubernetes root CA. The `k8s_root_ca_key`
   value is the absolute path of the key file. The key must be
   in PEM format and the value must be provided as part of a pair with
   `k8s_root_ca_cert`. The playbook will not proceed if only one value is provided.
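
For example (the file paths here are hypothetical placeholders for wherever you
have staged your CA files):

::

   k8s_root_ca_cert: /home/sysadmin/k8s_root_ca.crt
   k8s_root_ca_key: /home/sysadmin/k8s_root_ca.key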

.. important::

   The default duration for the generated Kubernetes root CA certificate is 10
   years. Replacing the root CA certificate is an involved process, so the
   custom certificate expiry should be as long as possible. We recommend
   ensuring the root CA certificate has an expiry of at least 5-10 years.

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans
   Specifies a list of Subject Alternative Name entries that will be added to the
   Kubernetes API server certificate. Each entry in the list must be an IP address
   or domain name. For example:

   ::

      apiserver_cert_sans:
        - hostname.domain
        - 198.51.100.75

StarlingX automatically updates this parameter to include IP records for the OAM
floating IP and both OAM unit IP addresses.

----------------------------------------------------
OpenID Connect authentication for Kubernetes cluster
----------------------------------------------------

The Kubernetes cluster can be configured to use an external OpenID Connect
:abbr:`IDP (identity provider)`, such as Azure Active Directory, Salesforce, or
Google, for Kubernetes API authentication.

By default, OpenID Connect authentication is disabled. To enable OpenID Connect,
use the following configuration values in the Ansible bootstrap overrides file
to specify the IDP for OpenID Connect:

::

    apiserver_oidc:
      client_id:
      issuer_url:
      username_claim:
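
For example, a sketch using Google as the IDP; the ``client_id`` shown is a
hypothetical placeholder (use the value issued by your IDP), and
``username_claim: email`` is one common choice:

::

    apiserver_oidc:
      client_id: stx-oidc-client-app
      issuer_url: https://accounts.google.com
      username_claim: email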

When the three required fields of the `apiserver_oidc` parameter are defined,
OpenID Connect is considered active. The values will be used to configure the
Kubernetes cluster to use the specified external OpenID Connect IDP for
Kubernetes API authentication.

In addition, you will need to configure the external OpenID Connect IDP and any
required OpenID client application according to the specific IDP's documentation.

If not configuring OpenID Connect, all values should be absent from the
configuration file.

.. note::

   Default authentication via service account tokens is always supported,
   even when OpenID Connect authentication is configured.

@@ -0,0 +1,7 @@
.. important::

      Some Ansible bootstrap parameters cannot be changed or are very difficult to change after installation is complete.

      Review the set of install-time-only parameters before installation and confirm that your values for these parameters are correct for the desired installation.

      Refer to :ref:`Ansible install-time-only parameters <install-time-only-params-r4>` for details.

@@ -0,0 +1,26 @@
==============================================
Bare metal All-in-one Duplex Installation R4.0
==============================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

The bare metal AIO-DX deployment configuration may be extended with up to four
worker/compute nodes (not shown in the diagram). Installation instructions for
these additional nodes are described in :doc:`aio_duplex_extend`.

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_hardware
   aio_duplex_install_kubernetes
   aio_duplex_extend

@@ -0,0 +1,192 @@
================================================
Extend Capacity with Worker and/or Compute Nodes
================================================

This section describes the steps to extend capacity with worker and/or compute
nodes on a **StarlingX R4.0 bare metal All-in-one Duplex** deployment
configuration.

.. contents::
   :local:
   :depth: 1

---------------------------------
Install software on compute nodes
---------------------------------

#. Power on the compute servers and force them to network boot with the
   appropriate BIOS boot options for your particular server.

#. As the compute servers boot, a message appears on their console instructing
   you to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   compute hosts (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | None         | None        | locked         | disabled    | offline      |
      | 4  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of these hosts to 'worker':

   ::

      system host-update 3 personality=worker hostname=compute-0
      system host-update 4 personality=worker hostname=compute-1

   This initiates the install of software on the compute nodes.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. Wait for the install of software on the compute nodes to complete, for the
   nodes to reboot, and for both to show as locked/disabled/online in
   'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | unlocked       | enabled     | available    |
      | 3  | compute-0    | worker      | locked         | disabled    | online       |
      | 4  | compute-1    | worker      | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

-----------------------
Configure compute nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-label-assign ${COMPUTE} sriovdp=enabled
        done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G Huge pages required on both NUMA nodes:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-memory-modify ${COMPUTE} 0 -1G 100
           system host-memory-modify ${COMPUTE} 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, needed for stx-openstack nova ephemeral disks.

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done


--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
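
To confirm that the compute nodes have come into service, list the hosts again;
compute-0 and compute-1 should now be reported as unlocked/enabled/available:

::

   system host-list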

@@ -0,0 +1,58 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R4.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       | 2                                                         |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         | or                                                        |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD journal)|
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K RPM)|
|                         |   for VM local ephemeral storage                          |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE                                    |
|                         | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -0,0 +1,437 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-DX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 bare metal All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-aio-simplex-start:
   :end-before: incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces, so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file
      comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment configuration
      as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing
      applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment.

   ::

     OAM_IF=<OAM-PORT>
     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   ::

     system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for example
   eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the number
     of 1G Huge pages required on both NUMA nodes:

     ::

       system host-memory-modify controller-0 0 -1G 100
       system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the `sdb` disk:

   ::

      echo ">>> Add OSDs to primary tier"
      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0
*************************************
 | 
			
		||||
OpenStack-specific host configuration
 | 
			
		||||
*************************************
 | 
			
		||||
 | 
			
		||||
.. include:: aio_simplex_install_kubernetes.rst
 | 
			
		||||
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
 | 
			
		||||
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:
 | 
			
		||||
 | 
			
		||||
-------------------
 | 
			
		||||
Unlock controller-0
 | 
			
		||||
-------------------
 | 
			
		||||
 | 
			
		||||
.. include:: aio_simplex_install_kubernetes.rst
 | 
			
		||||
   :start-after: incl-unlock-controller-0-aio-simplex-start:
 | 
			
		||||
   :end-before: incl-unlock-controller-0-aio-simplex-end:
 | 
			
		||||
 | 
			
		||||
-------------------------------------
 | 
			
		||||
Install software on controller-1 node
 | 
			
		||||
-------------------------------------
 | 
			
		||||
 | 
			
		||||
#. Power on the controller-1 server and force it to network boot with the
 | 
			
		||||
   appropriate BIOS boot options for your particular server.
 | 
			
		||||
 | 
			
		||||
#. As controller-1 boots, a message appears on its console instructing you to
 | 
			
		||||
   configure the personality of the node.
 | 
			
		||||
 | 
			
		||||
#. On the console of controller-0, list hosts to see newly discovered controller-1
 | 
			
		||||
   host (hostname=None):
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      system host-list
 | 
			
		||||
      +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
      | id | hostname     | personality | administrative | operational | availability |
 | 
			
		||||
      +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 
			
		||||
      | 2  | None         | None        | locked         | disabled    | offline      |
 | 
			
		||||
      +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
 | 
			
		||||
#. Using the host id, set the personality of this host to 'controller':
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      system host-update 2 personality=controller
 | 
			
		||||
 | 
			
		||||
#. Wait for the software installation on controller-1 to complete, for controller-1 to
 | 
			
		||||
   reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'.
 | 
			
		||||
 | 
			
		||||
   This can take 5-10 minutes, depending on the performance of the host machine.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      system host-list
 | 
			
		||||
      +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
      | id | hostname     | personality | administrative | operational | availability |
 | 
			
		||||
      +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
 | 
			
		||||
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
 | 
			
		||||
      +----+--------------+-------------+----------------+-------------+--------------+
 | 
			
		||||
 | 
			
		||||
----------------------
 | 
			
		||||
Configure controller-1
 | 
			
		||||
----------------------
 | 
			
		||||
 | 
			
		||||
#. Configure the OAM and MGMT interfaces of controller-1 and specify the
 | 
			
		||||
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
 | 
			
		||||
   applicable to your deployment environment:
 | 
			
		||||
 | 
			
		||||
   (Note that the MGMT interface is partially set up automatically by the network
 | 
			
		||||
   install procedure.)
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      OAM_IF=<OAM-PORT>
 | 
			
		||||
      MGMT_IF=<MGMT-PORT>
 | 
			
		||||
      system host-if-modify controller-1 $OAM_IF -c platform
 | 
			
		||||
      system interface-network-assign controller-1 $OAM_IF oam
 | 
			
		||||
      system interface-network-assign controller-1 mgmt0 cluster-host
 | 
			
		||||
 | 
			
		||||
#. Configure data interfaces for controller-1. Use the DATA port names, for example
 | 
			
		||||
   eth0, applicable to your deployment environment.
 | 
			
		||||
 | 
			
		||||
   .. important::
 | 
			
		||||
 | 
			
		||||
      This step is **required** for OpenStack.
 | 
			
		||||
 | 
			
		||||
      This step is optional for Kubernetes: Do this step if using SRIOV network
 | 
			
		||||
      attachments in hosted application containers.
 | 
			
		||||
 | 
			
		||||
   For Kubernetes SRIOV network attachments:
 | 
			
		||||
 | 
			
		||||
   * Configure the SRIOV device plugin:
 | 
			
		||||
 | 
			
		||||
     ::
 | 
			
		||||
 | 
			
		||||
        system host-label-assign controller-1 sriovdp=enabled
 | 
			
		||||
 | 
			
		||||
   * If planning on running DPDK in containers on this host, configure the number
 | 
			
		||||
     of 1G Huge pages required on both NUMA nodes:
 | 
			
		||||
 | 
			
		||||
     ::
 | 
			
		||||
 | 
			
		||||
        system host-memory-modify controller-1 0 -1G 100
 | 
			
		||||
        system host-memory-modify controller-1 1 -1G 100
 | 
			
		||||
 | 
			
		||||
 | 
			
		||||
   For both Kubernetes and OpenStack:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      DATA0IF=<DATA-0-PORT>
 | 
			
		||||
      DATA1IF=<DATA-1-PORT>
 | 
			
		||||
      export COMPUTE=controller-1
 | 
			
		||||
      PHYSNET0='physnet0'
 | 
			
		||||
      PHYSNET1='physnet1'
 | 
			
		||||
      SPL=/tmp/tmp-system-port-list
 | 
			
		||||
      SPIL=/tmp/tmp-system-host-if-list
 | 
			
		||||
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
 | 
			
		||||
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
 | 
			
		||||
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 | 
			
		||||
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 | 
			
		||||
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 | 
			
		||||
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
 | 
			
		||||
 | 
			
		||||
      system datanetwork-add ${PHYSNET0} vlan
 | 
			
		||||
      system datanetwork-add ${PHYSNET1} vlan
 | 
			
		||||
 | 
			
		||||
      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
 | 
			
		||||
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
 | 
			
		||||
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
 | 
			
		||||
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
 | 
			
		||||
 | 
			
		||||
#. Add an OSD on controller-1 for Ceph:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      echo ">>> Add OSDs to primary tier"
 | 
			
		||||
      system host-disk-list controller-1
 | 
			
		||||
      system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
 | 
			
		||||
      system host-stor-list controller-1
 | 
			
		||||
 | 
			
		||||
*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-1 openstack-control-plane=enabled
      system host-label-assign controller-1 openstack-compute-node=enabled
      system host-label-assign controller-1 openvswitch=enabled
      system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      export COMPUTE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

-------------------
Unlock controller-1
-------------------

Unlock controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,21 @@
===============================================
Bare metal All-in-one Simplex Installation R4.0
===============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_hardware
   aio_simplex_install_kubernetes

@@ -0,0 +1,58 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R4.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------------------------------------+
| Minimum Requirement     | All-in-one Controller Node                                |
+=========================+===========================================================+
| Number of servers       |  1                                                        |
+-------------------------+-----------------------------------------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
|                         |                                                           |
|                         | or                                                        |
|                         |                                                           |
|                         | - Single-CPU Intel® Xeon® D-15xx family, 8 cores          |
|                         |   (low-power/low-cost option)                             |
+-------------------------+-----------------------------------------------------------+
| Minimum memory          | 64 GB                                                     |
+-------------------------+-----------------------------------------------------------+
| Primary disk            | 500 GB SSD or NVMe (see :doc:`../../nvme_config`)         |
+-------------------------+-----------------------------------------------------------+
| Additional disks        | - 1 or more 500 GB (min. 10K RPM) for Ceph OSD            |
|                         | - Recommended, but not required: 1 or more SSDs or NVMe   |
|                         |   drives for Ceph journals (min. 1024 MiB per OSD         |
|                         |   journal)                                                |
|                         | - For OpenStack, recommend 1 or more 500 GB (min. 10K     |
|                         |   RPM) for VM local ephemeral storage                     |
+-------------------------+-----------------------------------------------------------+
| Minimum network ports   | - OAM: 1x1GE                                              |
|                         | - Data: 1 or more x 10GE                                  |
+-------------------------+-----------------------------------------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------------------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -0,0 +1,349 @@
=================================================
Install StarlingX Kubernetes on Bare Metal AIO-SX
=================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 bare metal All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

---------------------
Create a bootable USB
---------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-aio-simplex-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'All-in-one Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
      your terminal access to the console port
   #. Third menu: Select 'Standard Security Profile'

#. Wait for the non-interactive software install to complete and the server to
   reboot. This can take 5-10 minutes, depending on the performance of the server.

.. incl-install-software-controller-0-aio-simplex-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

#. Log in using the username / password of "sysadmin" / "sysadmin".

   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration as shown in the example below. Use the OAM IP SUBNET and IP
      ADDRESSing applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

     OAM_IF=<OAM-PORT>
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam

#. Configure NTP servers for network time synchronization:

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0. Use the DATA port names, for
   example eth0, applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       system host-label-assign controller-0 sriovdp=enabled

   * If planning on running DPDK in containers on this host, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

       system host-memory-modify controller-0 0 -1G 100
       system host-memory-modify controller-0 1 -1G 100

   For both Kubernetes and OpenStack:

   ::

     DATA0IF=<DATA-0-PORT>
     DATA1IF=<DATA-1-PORT>
     export COMPUTE=controller-0
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list
     system host-port-list ${COMPUTE} --nowrap > ${SPL}
     system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
     DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
     DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
     DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
     DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
     DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
     DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
     DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
     DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
     system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
     system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
     system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph. The following example adds an OSD
   to the `sdb` disk:

   ::

     echo ">>> Add OSDs to primary tier"
     system host-disk-list controller-0
     system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
     system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled
     system host-label-assign controller-0 openstack-compute-node=enabled
     system host-label-assign controller-0 openvswitch=enabled
     system host-label-assign controller-0 sriov=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK should be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

     system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses the
   containerized OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
   supported only on bare metal hardware), run the following commands:

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for computes.

   When using OVS-DPDK, virtual machines must be configured to use a flavor with
   the property ``hw:mem_page_size=large``, as shown in the example below.

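   For example, the following is a minimal sketch of setting that property on an
   existing flavor via the OpenStack CLI (``m1.vm`` is a placeholder flavor name,
   not one created by this guide):

   ::

     # Placeholder flavor name; substitute a flavor that exists in your deployment.
     openstack flavor set m1.vm --property hw:mem_page_size=large
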
   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all computes (and/or AIO controllers) to
      apply the change.

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     export COMPUTE=controller-0

     echo ">>> Getting root disk info"
     ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
     ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

     echo ">>>> Configuring nova-local"
     NOVA_SIZE=34
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
     system host-lvg-add ${COMPUTE} nova-local
     system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

.. incl-unlock-controller-0-aio-simplex-start:

Unlock controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

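Once the reboot completes, you can confirm that controller-0 has reached the
unlocked/enabled/available state (a quick check, reusing commands already shown
in this guide):

::

  source /etc/platform/openrc
  system host-list
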
.. incl-unlock-controller-0-aio-simplex-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,22 @@
=============================================================
Bare metal Standard with Controller Storage Installation R4.0
=============================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   controller_storage_hardware
   controller_storage_install_kubernetes

@@ -0,0 +1,56 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R4.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+-------------------------+-----------------------------+-----------------------------+
| Minimum Requirement     | Controller Node             | Compute Node                |
+=========================+=============================+=============================+
| Number of servers       | 2                           | 2-10                        |
+-------------------------+-----------------------------+-----------------------------+
| Minimum processor class | - Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge)      |
|                         |   8 cores/socket                                          |
+-------------------------+-----------------------------+-----------------------------+
| Minimum memory          | 64 GB                       | 32 GB                       |
+-------------------------+-----------------------------+-----------------------------+
| Primary disk            | 500 GB SSD or NVMe (see     | 120 GB (min. 10K RPM)       |
|                         | :doc:`../../nvme_config`)   |                             |
+-------------------------+-----------------------------+-----------------------------+
| Additional disks        | - 1 or more 500 GB (min.    | - For OpenStack, recommend  |
|                         |   10K RPM) for Ceph OSD     |   1 or more 500 GB (min.    |
|                         | - Recommended, but not      |   10K RPM) for VM local     |
|                         |   required: 1 or more SSDs  |   ephemeral storage         |
|                         |   or NVMe drives for Ceph   |                             |
|                         |   journals (min. 1024 MiB   |                             |
|                         |   per OSD journal)          |                             |
+-------------------------+-----------------------------+-----------------------------+
| Minimum network ports   | - Mgmt/Cluster: 1x10GE      | - Mgmt/Cluster: 1x10GE      |
|                         | - OAM: 1x1GE                | - Data: 1 or more x 10GE    |
+-------------------------+-----------------------------+-----------------------------+
| BIOS settings           | - Hyper-Threading technology enabled                      |
|                         | - Virtualization technology enabled                       |
|                         | - VT for directed I/O enabled                             |
|                         | - CPU power and performance policy set to performance     |
|                         | - CPU C state control disabled                            |
|                         | - Plug & play BMC detection disabled                      |
+-------------------------+-----------------------------+-----------------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -0,0 +1,588 @@
===========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Controller Storage
===========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 bare metal Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-------------------
Create bootable USB
-------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. incl-install-software-controller-0-standard-start:

#. Insert the bootable USB into a bootable USB port on the host you are
   configuring as controller-0.

#. Power on the host.

#. Attach to a console, ensure the host boots from the USB, and wait for the
   StarlingX Installer Menus.

#. Make the following menu selections in the installer:

   #. First menu: Select 'Standard Controller Configuration'
   #. Second menu: Select 'Graphical Console' or 'Textual Console' depending on
      your terminal access to the console port
   #. Third menu: Select 'Standard Security Profile'

#. Wait for the non-interactive software install to complete and the server to
   reboot. This can take 5-10 minutes, depending on the performance of the server.

.. incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-sys-controller-0-standard-start:

#. Log in using the username / password of "sysadmin" / "sysadmin".

   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. Verify and/or configure IP connectivity.

   External connectivity is required to run the Ansible bootstrap playbook. The
   StarlingX boot image will DHCP out all interfaces so the server may have
   obtained an IP address and have external IP connectivity if a DHCP server is
   present in your environment. Verify this using the :command:`ip addr` and
   :command:`ping 8.8.8.8` commands.

   Otherwise, manually configure an IP address and default IP route. Use the
   PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
   deployment environment.

   ::

      sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
      sudo ip link set up dev <PORT>
      sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
      ping 8.8.8.8

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   #. Use a copy of the default.yml file listed above to provide your overrides.

      The default.yml file lists all available parameters for bootstrap
      configuration, with a brief description of each parameter in the file comments.

      To use this method, copy the default.yml file listed above to
      ``$HOME/localhost.yml`` and edit the configurable values as desired.

   #. Create a minimal user configuration override file.

      To use this method, create your override file at ``$HOME/localhost.yml``
      and provide the minimum required parameters for the deployment
      configuration as shown in the example below. Use the OAM IP SUBNET and IP
      ADDRESSing applicable to your deployment environment.

      ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
        external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
        external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
        external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS>
        external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS>

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-storage-start:

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks. Use the OAM and MGMT port names, for example eth0, that are
   applicable to your deployment environment.

   ::

     OAM_IF=<OAM-PORT>
     MGMT_IF=<MGMT-PORT>
     system host-if-modify controller-0 lo -c none
     IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
     for UUID in $IFNET_UUIDS; do
         system interface-network-remove ${UUID}
     done
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam
     system host-if-modify controller-0 $MGMT_IF -c platform
     system interface-network-assign controller-0 $MGMT_IF mgmt
     system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   ::

     system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** Configure the system setting for the vSwitch.

   StarlingX has OVS (kernel-based) vSwitch configured as default:

   * Runs in a container; defined within the helm charts of the stx-openstack
     manifest.
   * Shares the core(s) assigned to the platform.

   If you require better performance, OVS-DPDK should be used:

   * Runs directly on the host (it is not containerized).
   * Requires that at least 1 core be assigned/dedicated to the vSwitch function.

   To deploy the default containerized OVS:

   ::

     system modify --vswitch_type none

   This does not run any vSwitch directly on the host; instead, it uses the
   containerized OVS defined in the helm charts of the stx-openstack manifest.

   To deploy OVS-DPDK (OVS with the Data Plane Development Kit, which is
   supported only on bare metal hardware), run the following commands:

   ::

     system modify --vswitch_type ovs-dpdk
     system host-cpu-modify -f vswitch -p0 1 controller-0

   Once vswitch_type is set to OVS-DPDK, any subsequent nodes created will
   default to automatically assigning 1 vSwitch core for AIO controllers and 2
   vSwitch cores for computes.

   When using OVS-DPDK, virtual machines must be configured to use a flavor with
   the property ``hw:mem_page_size=large``, as shown in the example below.

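   For example, the following is a minimal sketch of setting that property on an
   existing flavor via the OpenStack CLI (``m1.vm`` is a placeholder flavor name,
   not one created by this guide):

   ::

     # Placeholder flavor name; substitute a flavor that exists in your deployment.
     openstack flavor set m1.vm --property hw:mem_page_size=large
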
   .. note::

      After controller-0 is unlocked, changing vswitch_type requires
      locking and unlocking all computes (and/or AIO controllers) to
      apply the change.

.. incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | None         | None        | locked         | disabled    | offline      |
     +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

     system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the compute-0 and
   compute-1 servers. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on compute-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list':

   ::

     system host-update 3 personality=worker hostname=compute-0

   Repeat for compute-1. Power on compute-1 and wait for the new host
   (hostname=None) to be discovered by checking 'system host-list':

   ::

     system host-update 4 personality=worker hostname=compute-1

#. Wait for the software installation on controller-1, compute-0, and compute-1
   to complete, for all servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

     system host-list
     +----+--------------+-------------+----------------+-------------+--------------+
     | id | hostname     | personality | administrative | operational | availability |
     +----+--------------+-------------+----------------+-------------+--------------+
     | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
     | 2  | controller-1 | controller  | locked         | disabled    | online       |
     | 3  | compute-0    | worker      | locked         | disabled    | online       |
     | 4  | compute-1    | worker      | locked         | disabled    | online       |
     +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-start:

Configure the OAM and MGMT interfaces of controller-1 and specify the attached
networks. Use the OAM and MGMT port names, for example eth0, that are applicable
to your deployment environment.

(Note that the MGMT interface is partially set up automatically by the network
install procedure.)

::

   OAM_IF=<OAM-PORT>
   MGMT_IF=<MGMT-PORT>
   system host-if-modify controller-1 $OAM_IF -c platform
   system interface-network-assign controller-1 $OAM_IF oam
   system interface-network-assign controller-1 $MGMT_IF cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest and helm-charts later.

::

   system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-start:

Unlock controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

.. incl-unlock-controller-1-end:

-----------------------
Configure compute nodes
-----------------------

#. Add the third Ceph monitor to compute-0:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

     system ceph-mon-add compute-0

#. Wait for the compute node monitor to complete configuration:

   ::

     system ceph-mon-list
     +--------------------------------------+-------+--------------+------------+------+
     | uuid                                 | ceph_ | hostname     | state      | task |
     |                                      | mon_g |              |            |      |
     |                                      | ib    |              |            |      |
     +--------------------------------------+-------+--------------+------------+------+
     | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
     | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
     | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
     +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

     for COMPUTE in compute-0 compute-1; do
        system interface-network-assign $COMPUTE mgmt0 cluster-host
     done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

       for COMPUTE in compute-0 compute-1; do
          system host-label-assign ${COMPUTE} sriovdp=enabled
       done

   * If planning on running DPDK in containers on these hosts, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

       for COMPUTE in compute-0 compute-1; do
          system host-memory-modify ${COMPUTE} 0 -1G 100
          system host-memory-modify ${COMPUTE} 1 -1G 100
       done

   For both Kubernetes and OpenStack:

   ::

     DATA0IF=<DATA-0-PORT>
     DATA1IF=<DATA-1-PORT>
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     SPL=/tmp/tmp-system-port-list
     SPIL=/tmp/tmp-system-host-if-list

     # Configure the datanetworks in sysinv, prior to referencing them
     # in the 'system host-if-modify' command.
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     for COMPUTE in compute-0 compute-1; do
       echo "Configuring interface for: $COMPUTE"
       set -ex
       system host-port-list ${COMPUTE} --nowrap > ${SPL}
       system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
       DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
       DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
       DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
       DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
       DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
       DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
       DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
       DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
       system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
       system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
       system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
       system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
       set +ex
     done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

     for NODE in compute-0 compute-1; do
       system host-label-assign $NODE openstack-compute-node=enabled
       system host-label-assign $NODE openvswitch=enabled
       system host-label-assign $NODE sriov=enabled
     done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

     for COMPUTE in compute-0 compute-1; do
       echo "Configuring Nova local for: $COMPUTE"
       ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
       ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
       PARTITION_SIZE=10
       NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
       NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
       system host-lvg-add ${COMPUTE} nova-local
       system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     done

--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

  for COMPUTE in compute-0 compute-1; do
     system host-unlock $COMPUTE
  done

The compute nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------------------------
 | 
			
		||||
Add Ceph OSDs to controllers
 | 
			
		||||
----------------------------
 | 
			
		||||
 | 
			
		||||
#. Add OSDs to controller-0. The following example adds OSDs to the `sdb` disk:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
	 HOST=controller-0
 | 
			
		||||
	 DISKS=$(system host-disk-list ${HOST})
 | 
			
		||||
	 TIERS=$(system storage-tier-list ceph_cluster)
 | 
			
		||||
	 OSDs="/dev/sdb"
 | 
			
		||||
	 for OSD in $OSDs; do
 | 
			
		||||
	    system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
 | 
			
		||||
	    while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
 | 
			
		||||
	 done
 | 
			
		||||
 | 
			
		||||
	 system host-stor-list $HOST
 | 
			
		||||
 | 
			
		||||
#. Add OSDs to controller-1. The following example adds OSDs to the `sdb` disk:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
	 HOST=controller-1
 | 
			
		||||
	 DISKS=$(system host-disk-list ${HOST})
 | 
			
		||||
	 TIERS=$(system storage-tier-list ceph_cluster)
 | 
			
		||||
	 OSDs="/dev/sdb"
 | 
			
		||||
	 for OSD in $OSDs; do
 | 
			
		||||
	     system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
 | 
			
		||||
	     while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
 | 
			
		||||
	 done
 | 
			
		||||
 | 
			
		||||
	 system host-stor-list $HOST
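
   Once both controllers report their new storage devices as configured, you
   can optionally verify overall cluster health with the Ceph CLI, which is
   available on the controllers in a StarlingX deployment:

   ::

      ceph -s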

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,22 @@

============================================================
Bare metal Standard with Dedicated Storage Installation R4.0
============================================================

--------
Overview
--------

.. include:: ../desc_dedicated_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   dedicated_storage_hardware
   dedicated_storage_install_kubernetes

@@ -0,0 +1,61 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R4.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

The recommended minimum hardware requirements for bare metal servers for various
host types are:

+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum Requirement | Controller Node           | Storage Node          | Compute Node          |
+=====================+===========================+=======================+=======================+
| Number of servers   | 2                         | 2-9                   | 2-100                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum processor   | Dual-CPU Intel® Xeon® E5 26xx family (SandyBridge) 8 cores/socket         |
| class               |                                                                           |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum memory      | 64 GB                     | 64 GB                 | 32 GB                 |
+---------------------+---------------------------+-----------------------+-----------------------+
| Primary disk        | 500 GB SSD or NVMe (see   | 120 GB (min. 10k RPM) | 120 GB (min. 10k RPM) |
|                     | :doc:`../../nvme_config`) |                       |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Additional disks    | None                      | - 1 or more 500 GB    | - For OpenStack,      |
|                     |                           |   (min. 10K RPM) for  |   recommend 1 or more |
|                     |                           |   Ceph OSD            |   500 GB (min. 10K    |
|                     |                           | - Recommended, but    |   RPM) for VM         |
|                     |                           |   not required: 1 or  |   ephemeral storage   |
|                     |                           |   more SSDs or NVMe   |                       |
|                     |                           |   drives for Ceph     |                       |
|                     |                           |   journals (min. 1024 |                       |
|                     |                           |   MiB per OSD         |                       |
|                     |                           |   journal)            |                       |
+---------------------+---------------------------+-----------------------+-----------------------+
| Minimum network     | - Mgmt/Cluster:           | - Mgmt/Cluster:       | - Mgmt/Cluster:       |
| ports               |   1x10GE                  |   1x10GE              |   1x10GE              |
|                     | - OAM: 1x1GE              |                       | - Data: 1 or more     |
|                     |                           |                       |   x 10GE              |
+---------------------+---------------------------+-----------------------+-----------------------+
| BIOS settings       | - Hyper-Threading technology enabled                                      |
|                     | - Virtualization technology enabled                                       |
|                     | - VT for directed I/O enabled                                             |
|                     | - CPU power and performance policy set to performance                     |
|                     | - CPU C state control disabled                                            |
|                     | - Plug & play BMC detection disabled                                      |
+---------------------+---------------------------+-----------------------+-----------------------+

--------------------------
Prepare bare metal servers
--------------------------

.. include:: prep_servers.txt

@@ -0,0 +1,362 @@
==========================================================================
Install StarlingX Kubernetes on Bare Metal Standard with Dedicated Storage
==========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 bare metal Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

-------------------
Create bootable USB
-------------------

Refer to :doc:`/deploy_install_guides/bootable_usb` for instructions on how to
create a bootable USB with the StarlingX ISO on your system.

--------------------------------
Install software on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-install-software-controller-0-standard-start:
   :end-before: incl-install-software-controller-0-standard-end:

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-sys-controller-0-standard-start:
   :end-before: incl-bootstrap-sys-controller-0-standard-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-storage-start:
   :end-before: incl-config-controller-0-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock controller-0 in order to bring it into service:

::

   system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.

-------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
-------------------------------------------------------------------

#. Power on the controller-1 server and force it to network boot with the
   appropriate BIOS boot options for your particular server.

#. As controller-1 boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the storage-0 and
   storage-1 servers. Set the personality to 'storage' and assign a unique
   hostname for each.

   For example, power on storage-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list', then set its personality:

   ::

      system host-update 3 personality=storage

   Repeat for storage-1. Power on storage-1, wait for the new host
   (hostname=None) to be discovered by checking 'system host-list', then set
   its personality:

   ::

      system host-update 4 personality=storage

   This initiates the software installation on storage-0 and storage-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting for the previous step to complete, power on the compute-0 and
   compute-1 servers. Set the personality to 'worker' and assign a unique
   hostname for each.

   For example, power on compute-0 and wait for the new host (hostname=None) to
   be discovered by checking 'system host-list', then set its personality and
   hostname:

   ::

      system host-update 5 personality=worker hostname=compute-0

   Repeat for compute-1. Power on compute-1, wait for the new host
   (hostname=None) to be discovered by checking 'system host-list', then set
   its personality and hostname:

   ::

      system host-update 6 personality=worker hostname=compute-1

   This initiates the install of software on compute-0 and compute-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   compute-0, and compute-1 to complete, for all servers to reboot, and for all
   to show as locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | storage-0    | storage     | locked         | disabled    | online       |
      | 4  | storage-1    | storage     | locked         | disabled    | online       |
      | 5  | compute-0    | compute     | locked         | disabled    | online       |
      | 6  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-start:
   :end-before: incl-config-controller-1-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-start:
   :end-before: incl-unlock-controller-1-end:

-----------------------
Configure storage nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the storage nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for STORAGE in storage-0 storage-1; do
         system interface-network-assign $STORAGE mgmt0 cluster-host
      done
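
   You can optionally confirm the assignment on each node; this sketch assumes
   the interface-network listing command is available in the sysinv CLI for
   this release:

   ::

      system interface-network-list storage-0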

#. Add OSDs to storage-0. The following example adds OSDs to the `sdb` disk:

   ::

      HOST=storage-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to storage-1. The following example adds OSDs to the `sdb` disk:

   ::

      HOST=storage-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

--------------------
Unlock storage nodes
--------------------

Unlock storage nodes in order to bring them into service:

::

   for STORAGE in storage-0 storage-1; do
      system host-unlock $STORAGE
   done

The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

-----------------------
Configure compute nodes
-----------------------

#. Assign the cluster-host network to the MGMT interface for the compute nodes:

   (Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.)

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes. Use the DATA port names, for
   example eth0, that are applicable to your deployment environment.

   .. important::

      This step is **required** for OpenStack.

      This step is optional for Kubernetes: Do this step if using SRIOV network
      attachments in hosted application containers.

   For Kubernetes SRIOV network attachments:

   * Configure the SRIOV device plugin:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-label-assign ${COMPUTE} sriovdp=enabled
        done

   * If planning on running DPDK in containers on this host, configure the
     number of 1G huge pages required on both NUMA nodes:

     ::

        for COMPUTE in compute-0 compute-1; do
           system host-memory-modify ${COMPUTE} 0 -1G 100
           system host-memory-modify ${COMPUTE} 1 -1G 100
        done

   For both Kubernetes and OpenStack:

   ::

      DATA0IF=<DATA-0-PORT>
      DATA1IF=<DATA-1-PORT>
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the data networks in sysinv, prior to referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done
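
   After the loop completes, you can optionally spot-check the result on one
   node with the listing commands already used above:

   ::

      system host-if-list compute-0
      system interface-datanetwork-list compute-0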

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done

--------------------
Unlock compute nodes
--------------------

Unlock compute nodes in order to bring them into service:

::

   for COMPUTE in compute-0 compute-1; do
      system host-unlock $COMPUTE
   done

The compute nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,66 @@
====================================
Bare metal Standard with Ironic R4.0
====================================

--------
Overview
--------

Ironic is an OpenStack project that provisions bare metal machines. For
information about the Ironic project, see
`Ironic Documentation <https://docs.openstack.org/ironic>`__.

End user applications can be deployed on bare metal servers (instead of
virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or
more bare metal servers.

.. figure:: ../figures/starlingx-deployment-options-ironic.png
   :scale: 90%
   :alt: Standard with Ironic deployment configuration

   *Figure 1: Standard with Ironic deployment configuration*

Bare metal servers must be connected to:

* IPMI for OpenStack Ironic control
* ironic-provisioning-net tenant network via their untagged physical interface,
  which supports PXE booting

As part of configuring OpenStack Ironic in StarlingX:

* An ironic-provisioning-net tenant network must be identified as the boot
  network for bare metal nodes.
* An additional untagged physical interface must be configured on controller
  nodes and connected to the ironic-provisioning-net tenant network. The
  OpenStack Ironic tftpboot server will PXE boot the bare metal servers over
  this interface.

.. note::

   Bare metal servers are NOT:

   * Running any OpenStack / StarlingX software; they are running end user
     applications (for example, Glance Images).
   * To be connected to the internal management network.

------------
Installation
------------

StarlingX currently supports only a bare metal installation of Ironic with a
standard configuration, either:

* :doc:`controller_storage`

* :doc:`dedicated_storage`

This guide assumes that you have a standard deployment installed and configured
with 2x controllers and at least 1x compute node, with the StarlingX OpenStack
application (stx-openstack) applied.

.. toctree::
   :maxdepth: 1

   ironic_hardware
   ironic_install

@@ -0,0 +1,51 @@
=====================
Hardware Requirements
=====================

This section describes the hardware requirements and server preparation for a
**StarlingX R4.0 bare metal Ironic** deployment configuration.

.. contents::
   :local:
   :depth: 1

-----------------------------
Minimum hardware requirements
-----------------------------

* One or more bare metal hosts to serve as Ironic nodes, that is, as tenant
  bare metal instance nodes.

* BMC support on the bare metal hosts, and controller node connectivity to the
  BMC IP addresses of the bare metal hosts.

For controller nodes:

* An additional NIC port on both controller nodes for connecting to the
  ironic-provisioning-net.

For compute nodes:

* If using a flat data network for the Ironic provisioning network, an
  additional NIC port on one of the compute nodes is required.

* Alternatively, use a VLAN data network for the Ironic provisioning network
  and simply add the new data network to an existing interface on the compute
  node.

* Additional switch ports / configuration for new ports on controller, compute,
  and Ironic nodes, for connectivity to the Ironic provisioning network.

-----------------------------------
BMC configuration of Ironic node(s)
-----------------------------------

Enable BMC and allocate a static IP, username, and password in the BIOS
settings. For example, set:

IP address
  10.10.10.126

username
  root

password
  test123
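
As a quick sanity check, you can verify that the BMC is reachable from a
controller before enrolling the node. This sketch assumes ipmitool is
installed and uses the example values above:

::

   # Query chassis power status over IPMI-over-LAN
   ipmitool -I lanplus -H 10.10.10.126 -U root -P test123 chassis status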

@@ -0,0 +1,392 @@
================================
Install Ironic on StarlingX R4.0
================================

This section describes the steps to install Ironic on a standard configuration,
either:

* **StarlingX R4.0 bare metal Standard with Controller Storage** deployment
  configuration

* **StarlingX R4.0 bare metal Standard with Dedicated Storage** deployment
  configuration

.. contents::
   :local:
   :depth: 1

---------------------
Enable Ironic service
---------------------

This section describes the pre-configuration required to enable the Ironic
service. All the commands in this section are for the StarlingX platform.

First acquire administrative privileges:

::

   source /etc/platform/openrc

********************************
Download Ironic deployment image
********************************

The Ironic service requires a deployment image (kernel and ramdisk) which is
used to clean Ironic nodes and install the end-user's image. The cleaning done
by the deployment image wipes the disks and tests connectivity to the Ironic
conductor on the controller nodes via the Ironic Python Agent (IPA).

The Ironic deployment Stein image (**Ironic-kernel** and **Ironic-ramdisk**)
can be found here:

* `Ironic-kernel coreos_production_pxe-stable-stein.vmlinuz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe-stable-stein.vmlinuz>`__
* `Ironic-ramdisk coreos_production_pxe_image-oem-stable-stein.cpio.gz
  <https://tarballs.openstack.org/ironic-python-agent/coreos/files/coreos_production_pxe_image-oem-stable-stein.cpio.gz>`__

*******************************************************
Configure Ironic network on deployed standard StarlingX
*******************************************************

#. Add an address pool for the Ironic network. This example uses `ironic-pool`:

   ::

      system addrpool-add --ranges 10.10.20.1-10.10.20.100 ironic-pool 10.10.20.0 24

#. Add the Ironic platform network. This example uses `ironic-net`:

   ::

      system addrpool-list | grep ironic-pool | awk '{print$2}' | xargs system network-add ironic-net ironic false

#. Add the Ironic tenant network. This example uses `ironic-data`:

   .. note::

      The tenant network is not the same as the platform network described in
      the previous step. You can specify any name for the tenant network other
      than 'ironic'. If the name 'ironic' is used, a user override must be
      generated to indicate the tenant network name.

      Refer to section `Generate user Helm overrides`_ for details.

   ::

      system datanetwork-add ironic-data flat

#. Configure the new interfaces (for Ironic) on controller nodes and assign
   them to the platform network. The host must be locked. This example uses the
   platform network `ironic-net` that was named in a previous step.

   These new interfaces to the controllers are used to connect to the Ironic
   provisioning network:

   **controller-0**

   ::

      system interface-network-assign controller-0 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
      --ipv4-mode static --ipv4-pool ironic-pool controller-0 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-0 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-0

   **controller-1**

   ::

      system interface-network-assign controller-1 enp2s0 ironic-net
      system host-if-modify -n ironic -c platform \
      --ipv4-mode static --ipv4-pool ironic-pool controller-1 enp2s0

      # Apply the OpenStack Ironic node labels
      system host-label-assign controller-1 openstack-ironic=enabled

      # Unlock the node to apply changes
      system host-unlock controller-1

#. Configure the new interface (for Ironic) on one of the compute nodes and
   assign it to the Ironic data network. This example uses the data network
   `ironic-data` that was named in a previous step.

   ::

      system interface-datanetwork-assign compute-0 eno1 ironic-data
      system host-if-modify -n ironicdata -c data compute-0 eno1
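
   With the platform and data interfaces configured, an optional check is to
   list the interfaces on the hosts you modified, using the same sysinv
   commands as above:

   ::

      system host-if-list controller-0
      system host-if-list compute-0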

****************************
Generate user Helm overrides
****************************

Ironic Helm Charts are included in the stx-openstack application. By default,
Ironic is disabled.

To enable Ironic, update the following Ironic Helm Chart attributes:

::

   system helm-override-update stx-openstack ironic openstack \
   --set network.pxe.neutron_subnet_alloc_start=10.10.20.10 \
   --set network.pxe.neutron_subnet_gateway=10.10.20.1 \
   --set network.pxe.neutron_provider_network=ironic-data

:command:`network.pxe.neutron_subnet_alloc_start` sets the DHCP start IP that
is passed to Neutron for Ironic node provisioning, and reserves several IPs for
the platform.

If the data network name for Ironic is changed, modify the
:command:`network.pxe.neutron_provider_network` setting in the command above
accordingly:

::

   --set network.pxe.neutron_provider_network=ironic-data
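
To review the resulting chart overrides before applying, you can use the
helm-override-show command (assuming it is available in this release):

::

   system helm-override-show stx-openstack ironic openstack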

*******************************
Apply stx-openstack application
*******************************

Re-apply the stx-openstack application to apply the changes to Ironic:

::

   system helm-chart-attribute-modify stx-openstack ironic openstack \
   --enabled true

   system application-apply stx-openstack
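
The apply can take some time; one simple way to monitor progress is to poll
the application list until stx-openstack shows as applied:

::

   watch -n 5 system application-list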

--------------------
Start an Ironic node
--------------------

All the commands in this section are for the OpenStack application with
administrative privileges.

From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

::

   mkdir -p /etc/openstack

   tee /etc/openstack/clouds.yaml << EOF
   clouds:
     openstack_helm:
       region_name: RegionOne
       identity_api_version: 3
       endpoint_type: internalURL
       auth:
         username: 'admin'
         password: 'Li69nux*'
         project_name: 'admin'
         project_domain_name: 'default'
         user_domain_name: 'default'
         auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
   EOF

   export OS_CLOUD=openstack_helm
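
Optionally confirm that the credentials work before continuing; any read-only
command will do, for example:

::

   openstack endpoint list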

********************
Create Glance images
********************

#. Create the **ironic-kernel** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe-stable-stein.vmlinuz \
      --disk-format aki \
      --container-format aki \
      --public \
      ironic-kernel

#. Create the **ironic-ramdisk** image:

   ::

      openstack image create \
      --file ~/coreos_production_pxe_image-oem-stable-stein.cpio.gz \
      --disk-format ari \
      --container-format ari \
      --public \
      ironic-ramdisk

#. Create the end user application image (for example, CentOS):

   ::

      openstack image create \
      --file ~/CentOS-7-x86_64-GenericCloud-root.qcow2 \
      --public --disk-format qcow2 \
      --container-format bare centos

*********************
Create an Ironic node
*********************

#. Create a node:

   ::

      openstack baremetal node create --driver ipmi --name ironic-test0

#. Add IPMI information:

   ::

      openstack baremetal node set \
      --driver-info ipmi_address=10.10.10.126 \
      --driver-info ipmi_username=root \
      --driver-info ipmi_password=test123 \
      --driver-info ipmi_terminal_port=623 ironic-test0

#. Set the `ironic-kernel` and `ironic-ramdisk` image driver information on
   this bare metal node:

   ::

      openstack baremetal node set \
      --driver-info deploy_kernel=$(openstack image list | grep ironic-kernel | awk '{print$2}') \
      --driver-info deploy_ramdisk=$(openstack image list | grep ironic-ramdisk | awk '{print$2}') \
      ironic-test0

#. Set resource properties on this bare metal node based on the actual Ironic
   node capacities:

   ::

      openstack baremetal node set \
      --property cpus=4 \
      --property cpu_arch=x86_64 \
      --property capabilities="boot_option:local" \
      --property memory_mb=65536 \
      --property local_gb=400 \
      --resource-class bm ironic-test0

#. Add the pxe_template location:

   ::

      openstack baremetal node set --driver-info \
      pxe_template='/var/lib/openstack/lib64/python2.7/site-packages/ironic/drivers/modules/ipxe_config.template' \
      ironic-test0

#. Create a port to identify the specific port used by the Ironic node.
   Substitute **a4:bf:01:2b:3b:c8** with the MAC address of the Ironic node
   port which connects to the Ironic network:

   ::

      openstack baremetal port create \
      --node $(openstack baremetal node list | grep ironic-test0 | awk '{print$2}') \
      --pxe-enabled true a4:bf:01:2b:3b:c8

#. Change the node state to `manage`:

   ::

      openstack baremetal node manage ironic-test0

#. Make the node available for deployment:

   ::

      openstack baremetal node provide ironic-test0

#. Wait for ironic-test0 to reach provision-state `available`:

   ::

      openstack baremetal node show ironic-test0
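
   If you prefer to poll rather than re-run the show command manually, a small
   loop such as the following sketch works, assuming the provision_state field
   name used by the baremetal CLI:

   ::

      # Poll every 10 seconds until the node reports 'available'
      while ! openstack baremetal node show ironic-test0 -f value -c provision_state | grep -q available; do
         sleep 10
      done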

---------------------------------
Deploy an instance on Ironic node
---------------------------------

All the commands in this section are for the OpenStack application, but this
time with *tenant* specific privileges.

#. From a new shell as a root user, without sourcing ``/etc/platform/openrc``:

   ::

      mkdir -p /etc/openstack

      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'joeuser'
            password: 'mypasswrd'
            project_name: 'intel'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF

      export OS_CLOUD=openstack_helm

#. Create a flavor.

   Set the resource CUSTOM_BM, corresponding to **--resource-class bm**:

   ::

      openstack flavor create --ram 4096 --vcpus 4 --disk 400 \
      --property resources:CUSTOM_BM=1 \
      --property resources:VCPU=0 \
      --property resources:MEMORY_MB=0 \
      --property resources:DISK_GB=0 \
      --property capabilities:boot_option='local' \
      bm-flavor

   See `Adding scheduling information
   <https://docs.openstack.org/ironic/latest/install/enrollment.html#adding-scheduling-information>`__
   and `Configure Nova flavors
   <https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html>`__
   for more information.

#. Enable the compute service.

   List the compute services:

   ::

      openstack compute service list

   Set compute service properties:

   ::

      openstack compute service set --enable controller-0 nova-compute

#. Create an instance.

   .. note::

      The :command:`keypair create` command is optional. It is not required to
      enable a bare metal instance.

   ::

      openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

   Create 2 new servers, one bare metal and one virtual:

   ::

      openstack server create --image centos --flavor bm-flavor \
      --network baremetal --key-name mykey bm

      openstack server create --image centos --flavor m1.small \
      --network baremetal --key-name mykey vm
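
   You can then watch both instances come up, and the bare metal node change
   state, with the standard listing commands:

   ::

      openstack server list
      openstack baremetal node list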

@@ -0,0 +1,17 @@
Prior to starting the StarlingX installation, the bare metal servers must be
in the following condition:

* Physically installed

* Cabled for power

* Cabled for networking

  * Far-end switch ports should be properly configured to realize the
    networking shown in Figure 1.

* All disks wiped

  * Ensures that servers will boot from either the network or USB storage
    (if present)

* Powered off

@@ -0,0 +1,23 @@
The All-in-one Duplex (AIO-DX) deployment option provides a pair of high
availability (HA) servers with each server providing all three cloud functions
(controller, compute, and storage).

An AIO-DX configuration provides the following benefits:

* Only a small amount of cloud processing and storage power is required
* Application consolidation using multiple virtual machines on a single pair of
  physical servers
* High availability (HA) services run on the controller function across two
  physical servers in either active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
  servers
* Virtual machines scheduled on both compute functions
* Protection against overall server hardware fault, where

  * All controller HA services go active on the remaining healthy server
  * All virtual machines are recovered on the remaining healthy server

.. figure:: ../figures/starlingx-deployment-options-duplex.png
   :scale: 50%
   :alt: All-in-one Duplex deployment configuration

   *Figure 1: All-in-one Duplex deployment configuration*

@@ -0,0 +1,18 @@
The All-in-one Simplex (AIO-SX) deployment option provides all three cloud
functions (controller, compute, and storage) on a single server with the
following benefits:

* Requires only a small amount of cloud processing and storage power
* Application consolidation using multiple virtual machines on a single
  physical server
* A storage backend solution using a single-node CEPH deployment

.. figure:: ../figures/starlingx-deployment-options-simplex.png
   :scale: 50%
   :alt: All-in-one Simplex deployment configuration

   *Figure 1: All-in-one Simplex deployment configuration*

An AIO-SX deployment gives no protection against overall server hardware fault.
Hardware component protection can be enabled with, for example, a hardware RAID
or 2x Port LAG in the deployment.

@@ -0,0 +1,22 @@
The Standard with Controller Storage deployment option provides two high
availability (HA) controller nodes and a pool of up to 10 compute nodes.

A Standard with Controller Storage configuration provides the following benefits:

* A pool of up to 10 compute nodes
* High availability (HA) services run across the controller nodes in either
  active/active or active/standby mode
* A storage back end solution using a two-node CEPH deployment across two
  controller servers
* Protection against overall controller and compute node failure, where

  * On overall controller node failure, all controller HA services go active on
    the remaining healthy controller node
  * On overall compute node failure, virtual machines and containers are
    recovered on the remaining healthy compute nodes

.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
   :scale: 50%
   :alt: Standard with Controller Storage deployment configuration

   *Figure 1: Standard with Controller Storage deployment configuration*

@@ -0,0 +1,17 @@
The Standard with Dedicated Storage deployment option is a standard installation
with independent controller, compute, and storage nodes.

A Standard with Dedicated Storage configuration provides the following benefits:

* A pool of up to 100 compute nodes
* A 2x node high availability (HA) controller cluster with HA services running
  across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a 2x-to-9x node HA CEPH storage cluster
  that supports a replication factor of two or three
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage
  nodes

.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
   :scale: 50%
   :alt: Standard with Dedicated Storage deployment configuration

   *Figure 1: Standard with Dedicated Storage deployment configuration*

@@ -0,0 +1,289 @@
===================================
Distributed Cloud Installation R4.0
===================================

This section describes how to install and configure the StarlingX distributed
cloud deployment.

.. contents::
   :local:
   :depth: 1

--------
Overview
--------

Distributed cloud configuration supports an edge computing solution by
providing central management and orchestration for a geographically
distributed network of StarlingX Kubernetes edge systems/clusters.

The StarlingX distributed cloud implements the OpenStack Edge Computing
Group's MVP `Edge Reference Architecture
<https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures>`_,
specifically the "Distributed Control Plane" scenario.

The StarlingX distributed cloud deployment is designed to meet the needs of
edge-based data centers with centralized orchestration and independent control
planes, and in which Network Function Cloudification (NFC) worker resources
are localized for maximum responsiveness. The architecture features:

- Centralized orchestration of edge cloud control planes.
- Fully synchronized control planes at edge clouds (that is, Kubernetes cluster
  master and nodes), with greater benefits for local services, such as:

  - Reduced network latency.
  - Operational availability, even if northbound connectivity
    to the central cloud is lost.

The system supports a scalable number of StarlingX Kubernetes edge
systems/clusters, which are centrally managed and synchronized over L3
networks from a central cloud. Each edge system is also highly scalable, from
a single-node StarlingX Kubernetes deployment to a full standard cloud
configuration with controller, worker, and storage nodes.

------------------------------
Distributed cloud architecture
------------------------------

A distributed cloud system consists of a central cloud, and one or more
subclouds connected to the SystemController region central cloud over L3
networks, as shown in Figure 1.

- **Central cloud**

  The central cloud provides a *RegionOne* region for managing the physical
  platform of the central cloud and the *SystemController* region for managing
  and orchestrating over the subclouds.

  - **RegionOne**

    In the Horizon GUI, RegionOne is the name of the access mode, or region,
    used to manage the nodes in the central cloud.

  - **SystemController**

    In the Horizon GUI, SystemController is the name of the access mode, or
    region, used to manage the subclouds.

    You can use the SystemController to add subclouds, synchronize select
    configuration data across all subclouds and monitor subcloud operations
    and alarms. System software updates for the subclouds are also centrally
    managed and applied from the SystemController.

    DNS, NTP, and other select configuration settings are centrally managed
    at the SystemController and pushed to the subclouds in parallel to
    maintain synchronization across the distributed cloud.

- **Subclouds**

  The subclouds are StarlingX Kubernetes edge systems/clusters used to host
  containerized applications. Any type of StarlingX Kubernetes configuration
  (including simplex, duplex, or standard with or without storage nodes) can
  be used for a subcloud. The two edge clouds shown in Figure 1 are subclouds.

  Alarms raised at the subclouds are sent to the SystemController for
  central reporting.

.. figure:: ../figures/starlingx-deployment-options-distributed-cloud.png
   :scale: 45%
   :alt: Distributed cloud deployment configuration

   *Figure 1: Distributed cloud deployment configuration*

--------------------
Network requirements
--------------------

Subclouds are connected to the SystemController through both the OAM and the
Management interfaces. Because each subcloud is on a separate L3 subnet, the
OAM, Management and PXE boot L2 networks are local to the subclouds. They are
not connected via L2 to the central cloud; they are connected only via L3
routing. The settings required to connect a subcloud to the SystemController
are specified when a subcloud is defined. A gateway router is required to
complete the L3 connections, which will provide IP routing between the
subcloud Management and OAM IP subnet and the SystemController Management and
OAM IP subnet, respectively. For more information, see the
`Install a Subcloud`_ section later in this guide.
 | 
			
		||||
 | 
			
		||||
 | 
			
		||||
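As a rough sketch only (interface names, next hops, and subnets below are
hypothetical, and a dedicated router platform will have its own configuration
syntax), the static routes on Linux-based gateway routers completing these L3
connections might look like:

.. code:: sh

   # On the subcloud-side gateway: reach the SystemController management
   # subnet (assumed 192.168.204.0/24) via the next hop toward the central
   # cloud (assumed 10.10.10.254).
   ip route add 192.168.204.0/24 via 10.10.10.254 dev eth0

   # On the SystemController-side gateway: the mirror-image route back to
   # the subcloud management subnet (assumed 192.168.101.0/24).
   ip route add 192.168.101.0/24 via 192.168.204.254 dev eth1
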
---------------------------------------
Install and provision the central cloud
---------------------------------------

Installing the central cloud is similar to installing a standard
StarlingX Kubernetes system. The central cloud supports either an AIO-duplex
deployment configuration or a standard with dedicated storage nodes deployment
configuration.

To configure controller-0 as a distributed cloud central controller, you must
set certain system parameters during the initial bootstrapping of
controller-0. Set the system parameter *distributed_cloud_role* to
*systemcontroller* in the Ansible bootstrap override file. Also, set the
management network IP address range to exclude IP addresses reserved for
gateway routers providing routing to the subclouds' management subnets.

.. note:: Worker hosts and data networks are not used in the
          central cloud.

Procedure:

- Follow the StarlingX R4.0 installation procedures, with the extra step noted below:

  - AIO-duplex:
    `Bare metal All-in-one Duplex Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_duplex.html>`_

  - Standard with dedicated storage nodes:
    `Bare metal Standard with Dedicated Storage Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage.html>`_

- For the step "Bootstrap system on controller-0", add the following
  parameters to the Ansible bootstrap override file:

  .. code:: yaml

     distributed_cloud_role: systemcontroller
     management_start_address: <X.Y.Z.2>
     management_end_address: <X.Y.Z.50>

------------------
Install a subcloud
------------------

At the subcloud location:

1. Physically install and cable all subcloud servers.

2. Physically install the top-of-rack switch and configure it for the
   required networks.

3. Physically install the gateway routers which will provide IP routing
   between the subcloud OAM and management subnets and the SystemController
   OAM and management subnets.

4. On the server designated for controller-0, install the StarlingX
   Kubernetes software from USB or a PXE boot server.

5. Establish an L3 connection to the SystemController by enabling the OAM
   interface (with OAM IP/subnet) on the subcloud controller using the
   ``config_management`` script.

   .. note:: This step should **not** use an interface that uses the MGMT
             IP/subnet, because the MGMT IP subnet will get moved to the loopback
             address by the Ansible bootstrap playbook during installation.

   Be prepared to provide the following information:

   - Subcloud OAM interface name (for example, enp0s3).

   - Subcloud OAM interface address, in CIDR format (for example, 10.10.10.12/24).

     .. note:: This must match the *external_oam_floating_address* supplied in
               the subcloud's Ansible bootstrap override file.

   - Subcloud gateway address on the OAM network
     (for example, 10.10.10.1). A default value is shown.

   - System Controller OAM subnet (for example, 10.10.10.0/24).

   .. note:: To exit without completing the script, use ``CTRL+C``. Allow a few minutes for
             the script to finish.

   .. code:: sh

      $ sudo config_management
      Enabling interfaces... DONE
      Waiting 120 seconds for LLDP neighbor discovery... Retrieving neighbor details... DONE
      Available interfaces:
      local interface     remote port
      ---------------     ----------
      enp0s3              08:00:27:c4:6c:7a
      enp0s8              08:00:27:86:7a:13
      enp0s9              unknown

      Enter management interface name: enp0s3
      Enter management address CIDR: 10.10.10.12/24
      Enter management gateway address [10.10.10.1]:
      Enter System Controller subnet: 10.10.10.0/24
      Disabling non-management interfaces... DONE
      Configuring management interface... DONE
      RTNETLINK answers: File exists
      Adding route to System Controller... DONE

At the SystemController:

1. Create a ``bootstrap-values.yml`` overrides file for the subcloud, for
   example:

   .. code:: yaml

      system_mode: duplex
      name: "subcloud1"
      description: "Ottawa Site"
      location: "YOW"
      management_subnet: 192.168.101.0/24
      management_start_address: 192.168.101.2
      management_end_address: 192.168.101.50
      management_gateway_address: 192.168.101.1
      external_oam_subnet: 10.10.10.0/24
      external_oam_gateway_address: 10.10.10.1
      external_oam_floating_address: 10.10.10.12
      systemcontroller_gateway_address: 192.168.204.101

2. Add the subcloud using the CLI command below:

   .. code:: sh

      dcmanager subcloud add --bootstrap-address <ip_address> \
      --bootstrap-values <config_file>

   Where:

   - *<ip_address>* is the bootstrap IP address (the subcloud OAM address)
     set earlier on the subcloud.

   - *<config_file>* is the Ansible override configuration file, ``bootstrap-values.yml``,
     created earlier in step 1.

   You will be prompted for the Linux password of the subcloud. This command
   will take 5-10 minutes to complete. You can monitor the progress of the
   subcloud bootstrap through logs:

   .. code:: sh

      tail -f /var/log/dcmanager/<subcloud name>_bootstrap_<time stamp>.log

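   For example, using the sample values shown earlier in this guide (the OAM
   address 10.10.10.12 configured by ``config_management`` and the
   ``bootstrap-values.yml`` file from step 1), the invocation might look like:

   .. code:: sh

      # Example values only; substitute your own subcloud address and file.
      dcmanager subcloud add --bootstrap-address 10.10.10.12 \
      --bootstrap-values bootstrap-values.yml
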
3. Confirm that the subcloud was deployed successfully:

   .. code:: sh

      dcmanager subcloud list

      +----+-----------+------------+--------------+---------------+---------+
      | id | name      | management | availability | deploy status | sync    |
      +----+-----------+------------+--------------+---------------+---------+
      | 1  | subcloud1 | unmanaged  | offline      | complete      | unknown |
      +----+-----------+------------+--------------+---------------+---------+

4. Continue provisioning the subcloud system as required using the StarlingX
   R4.0 installation procedures, starting from the 'Configure controller-0'
   step:

   - For AIO-Simplex:
     `Bare metal All-in-one Simplex Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_simplex.html>`_

   - For AIO-Duplex:
     `Bare metal All-in-one Duplex Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_duplex.html>`_

   - For Standard with controller storage:
     `Bare metal Standard with Controller Storage Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/controller_storage.html>`_

   - For Standard with dedicated storage nodes:
     `Bare metal Standard with Dedicated Storage Installation R4.0 <https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/dedicated_storage.html>`_

5. Add routes from the subcloud to the controller management network:

   .. code:: sh

      system host-route-add <host id> <mgmt interface> \
                            <system controller mgmt subnet> <prefix> <subcloud mgmt gateway ip>

   For example:

   .. code:: sh

      system host-route-add 1 enp0s8 192.168.204.0 24 192.168.101.1

   Repeat this step for each host of the subcloud.
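
   If the subcloud has several hosts, a short loop sketch (host ids here are
   hypothetical; the other values repeat the example above, and the interface
   name may differ per host) avoids retyping the command:

   .. code:: sh

      # Add the same route on each subcloud host (example ids 1, 2, 3).
      for HOST_ID in 1 2 3; do
          system host-route-add ${HOST_ID} enp0s8 192.168.204.0 24 192.168.101.1
      done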
doc/source/deploy_install_guides/r4_release/index.rst
@@ -0,0 +1,65 @@
===========================
StarlingX R4.0 Installation
===========================

StarlingX provides a pre-defined set of standard
:doc:`deployment configurations </introduction/deploy_options>`. Most deployment options may
be installed in a virtual environment or on bare metal.

-----------------------------------------------------
Install StarlingX Kubernetes in a virtual environment
-----------------------------------------------------

.. toctree::
   :maxdepth: 1

   virtual/aio_simplex
   virtual/aio_duplex
   virtual/controller_storage
   virtual/dedicated_storage

------------------------------------------
Install StarlingX Kubernetes on bare metal
------------------------------------------

.. toctree::
   :maxdepth: 1

   bare_metal/aio_simplex
   bare_metal/aio_duplex
   bare_metal/controller_storage
   bare_metal/dedicated_storage
   bare_metal/ironic

.. toctree::
   :hidden:

   ansible_bootstrap_configs

-------------------------------------------------
Install StarlingX Distributed Cloud on bare metal
-------------------------------------------------

.. toctree::
   :maxdepth: 1

   distributed_cloud/index

-----------------
Access Kubernetes
-----------------

.. toctree::
   :maxdepth: 1

   kubernetes_access

--------------------------
Access StarlingX OpenStack
--------------------------

.. toctree::
   :maxdepth: 1

   openstack/index

doc/source/deploy_install_guides/r4_release/ipv6_note.txt
@@ -0,0 +1,10 @@
.. note::

   By default, StarlingX uses IPv4. To use StarlingX with IPv6:

   * The entire infrastructure and cluster configuration must be IPv6, with the
     exception of the PXE boot network.

   * Not all external servers are reachable via IPv6 addresses (for example,
     Docker registries). Depending on your infrastructure, it may be necessary
     to deploy a NAT64/DNS64 gateway to translate the IPv4 addresses to IPv6.
@@ -0,0 +1,181 @@
================================
Access StarlingX Kubernetes R4.0
================================

Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
Kubernetes and hosted containerized applications.

.. contents::
   :local:
   :depth: 1

----------
Local CLIs
----------

To access the StarlingX and Kubernetes commands on controller-0, first
follow these steps:

#. Log in to controller-0 via the console or SSH with sysadmin/<sysadmin-password>
   credentials.

#. Acquire Keystone admin and Kubernetes admin credentials:

   ::

      source /etc/platform/openrc

*********************************************
StarlingX system and host management commands
*********************************************

Access StarlingX system and host management commands using the :command:`system`
command. For example:

::

   system host-list

   +----+--------------+-------------+----------------+-------------+--------------+
   | id | hostname     | personality | administrative | operational | availability |
   +----+--------------+-------------+----------------+-------------+--------------+
   | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
   +----+--------------+-------------+----------------+-------------+--------------+

Use the :command:`system help` command for the full list of options.

***********************************
StarlingX fault management commands
***********************************

Access StarlingX fault management commands using the :command:`fm` command. For example:

::

   fm alarm-list

*******************
Kubernetes commands
*******************

Access Kubernetes commands using the :command:`kubectl` command. For example:

::

   kubectl get nodes

   NAME           STATUS   ROLES    AGE     VERSION
   controller-0   Ready    master   5d19h   v1.13.5

See https://kubernetes.io/docs/reference/kubectl/overview/ for details.

-----------
Remote CLIs
-----------

Documentation coming soon.

---
GUI
---

.. note::

   For a virtual installation, run the browser on the host machine.

*********************
StarlingX Horizon GUI
*********************

Access the StarlingX Horizon GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   `\http://<oam-floating-ip-address>:8080`

   Discover your OAM floating IP address with the :command:`system oam-show` command.

#. Log in to Horizon with admin/<sysadmin-password> credentials.

********************
Kubernetes dashboard
********************

The Kubernetes dashboard is not installed by default.

To install the Kubernetes dashboard, execute the following steps on controller-0:

#. Use the kubernetes-dashboard helm chart from the stable helm repository with
   the override values shown below:

   ::

      cat <<EOF > dashboard-values.yaml
      service:
        type: NodePort
        nodePort: 30000

      rbac:
        create: true
        clusterAdminRole: true

      serviceAccount:
        create: true
        name: kubernetes-dashboard
      EOF

      helm repo update

      helm install stable/kubernetes-dashboard --name dashboard -f dashboard-values.yaml

#. Create an ``admin-user`` service account with ``cluster-admin`` privileges, and
   display its token for logging in to the Kubernetes dashboard:

   ::

      cat <<EOF > admin-login.yaml
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: admin-user
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: admin-user
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: admin-user
        namespace: kube-system
      EOF

      kubectl apply -f admin-login.yaml

      kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Access the Kubernetes dashboard GUI with the following steps:

#. Enter the OAM floating IP address in your browser:
   `\https://<oam-floating-ip-address>:30000`

   Discover your OAM floating IP address with the :command:`system oam-show` command.

#. Log in to the Kubernetes dashboard using the ``admin-user`` token.

---------
REST APIs
---------

List the StarlingX platform-related public REST API endpoints using the
following command:

::

   openstack endpoint list | grep public

Use these URLs as the prefix for the URL target of StarlingX Platform Services'
REST API messages.
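
As a hedged sketch only (the endpoint URL and resource path below are
illustrative placeholders, and ``${TOKEN}`` is a Keystone token you must
obtain beforehand; neither comes from this guide), a request against one of
the public endpoints listed above might look like:

::

   # <platform-endpoint> is a public URL from 'openstack endpoint list';
   # the resource path depends on the service behind that endpoint.
   curl -i <platform-endpoint>/ihosts -H "Accept: application/json" \
        -H "X-Auth-Token: ${TOKEN}"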
@@ -0,0 +1,7 @@

Your Kubernetes cluster is now up and running.

For instructions on how to access StarlingX Kubernetes, see
:doc:`../kubernetes_access`.

For instructions on how to install and access StarlingX OpenStack, see
:doc:`../openstack/index`.
doc/source/deploy_install_guides/r4_release/openstack/access.rst
@@ -0,0 +1,273 @@
==========================
Access StarlingX OpenStack
==========================

Use local/remote CLIs, GUIs, and/or REST APIs to access and manage StarlingX
OpenStack and hosted virtualized applications.

.. contents::
   :local:
   :depth: 1

------------------------------
Configure helm endpoint domain
------------------------------

Containerized OpenStack services in StarlingX are deployed behind an ingress
controller (nginx) that listens on either port 80 (HTTP) or port 443 (HTTPS).
The ingress controller routes packets to the specific OpenStack service, such as
the Cinder service or the Neutron service, by parsing the FQDN in the packet.
For example, `neutron.openstack.svc.cluster.local` is for the Neutron service and
`cinder-api.openstack.svc.cluster.local` is for the Cinder service.

This routing requires that access to OpenStack REST APIs be via an FQDN
or via a remote OpenStack CLI that uses the REST APIs. You cannot access
OpenStack REST APIs using an IP address.

FQDNs (such as `cinder-api.openstack.svc.cluster.local`) must be in a DNS server
that is publicly accessible.

.. note::

   There is a way to wildcard a set of FQDNs to the same IP address in a DNS
   server configuration so that you don't need to update the DNS server every
   time an OpenStack service is added. Check your particular DNS server for
   details on how to wildcard a set of FQDNs.

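For quick ad hoc testing before DNS is in place, curl's ``--resolve`` option
can pin an FQDN to the OAM floating IP for a single request. This is a sketch
only; the FQDN is an example from this guide and the address is an assumed
OAM floating IP, not a required value:

::

   # Resolve the Cinder FQDN to an assumed OAM floating IP (10.10.10.2)
   # for this request only, bypassing DNS.
   curl --resolve cinder-api.openstack.svc.cluster.local:80:10.10.10.2 \
        http://cinder-api.openstack.svc.cluster.local:80/
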
In a "real" deployment, that is, not a lab scenario, you cannot use the default
`openstack.svc.cluster.local` domain name externally. You must set a unique
domain name for your StarlingX system. StarlingX provides the
:command:`system service-parameter-add` command to configure and set the
OpenStack domain name:

::

   system service-parameter-add openstack helm endpoint_domain=<domain_name>

`<domain_name>` should be a fully qualified domain name that you own, such that
you can configure the DNS server that owns `<domain_name>` with the OpenStack
service names underneath the domain.

For example:

::

   system service-parameter-add openstack helm endpoint_domain=my-starlingx-domain.my-company.com
   system application-apply stx-openstack

This command updates the helm charts of all OpenStack services and restarts them.
For example, it would change `cinder-api.openstack.svc.cluster.local` to
`cinder-api.my-starlingx-domain.my-company.com`, and so on for all OpenStack
services.

.. note::

   This command also changes the containerized OpenStack Horizon to listen on
   `horizon.my-starlingx-domain.my-company.com:80` instead of the initial
   `<oam-floating-ip>:31000`.

You must configure `{ '*.my-starlingx-domain.my-company.com': --> <oam-floating-ip-address> }`
in the external DNS server that owns `my-company.com`.

---------
Local CLI
---------

Access OpenStack using the local CLI with the following steps:

#. Log in to controller-0 via the console or SSH with sysadmin/<sysadmin-password>
   credentials. *Do not use* ``source /etc/platform/openrc``.

#. Set the CLI context to the StarlingX OpenStack cloud application and set up
   OpenStack admin credentials:

   ::

      sudo su -
      mkdir -p /etc/openstack
      tee /etc/openstack/clouds.yaml << EOF
      clouds:
        openstack_helm:
          region_name: RegionOne
          identity_api_version: 3
          endpoint_type: internalURL
          auth:
            username: 'admin'
            password: '<sysadmin-password>'
            project_name: 'admin'
            project_domain_name: 'default'
            user_domain_name: 'default'
            auth_url: 'http://keystone.openstack.svc.cluster.local/v3'
      EOF
      exit

      export OS_CLOUD=openstack_helm

**********************
OpenStack CLI commands
**********************

Access OpenStack CLI commands for the StarlingX OpenStack cloud application
using the :command:`openstack` command. For example:

::

   [sysadmin@controller-0 ~(keystone_admin)]$ openstack flavor list
   [sysadmin@controller-0 ~(keystone_admin)]$ openstack image list

----------
Remote CLI
----------

Documentation coming soon.

---
GUI
---

Access the StarlingX containerized OpenStack Horizon GUI in your browser at the
following address:

::

   http://<oam-floating-ip-address>:31000

Log in to the containerized OpenStack Horizon GUI with admin/<sysadmin-password>
credentials.

---------
REST APIs
---------

This section provides an overview of accessing REST APIs with examples of
`curl`-based REST API commands.

****************
Public endpoints
****************

Use the `Local CLI`_ to display OpenStack public REST API endpoints. For example:

::

   openstack endpoint list

The public endpoints will look like:

* `\http://keystone.openstack.svc.cluster.local:80/v3`
* `\http://nova.openstack.svc.cluster.local:80/v2.1/%(tenant_id)s`
* `\http://neutron.openstack.svc.cluster.local:80/`
* `etc.`

If you have set a unique domain name, then the public endpoints will look like:

* `\http://keystone.my-starlingx-domain.my-company.com:80/v3`
* `\http://nova.my-starlingx-domain.my-company.com:80/v2.1/%(tenant_id)s`
* `\http://neutron.my-starlingx-domain.my-company.com:80/`
* `etc.`

Documentation for the OpenStack REST APIs is available at
`OpenStack API Documentation <https://docs.openstack.org/api-quick-start/index.html>`_.

***********
Get a token
***********

The following command will request a Keystone token:

::

   curl -i -H "Content-Type: application/json" -d \
   '{ "auth": {
       "identity": {
         "methods": ["password"],
         "password": {
           "user": {
             "name": "admin",
             "domain": { "id": "default" },
             "password": "St8rlingX*"
           }
         }
       },
       "scope": {
         "project": {
           "name": "admin",
           "domain": { "id": "default" }
         }
       }
     }
   }' http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens

The token will be returned in the "X-Subject-Token" header field of the response:

::

   HTTP/1.1 201 CREATED
   Date: Wed, 02 Oct 2019 18:27:38 GMT
   Content-Type: application/json
   Content-Length: 8128
   Connection: keep-alive
   X-Subject-Token: gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S-RdJhg0fnqWuMGyMUhYUUJSossuUIitrvu2VXYXDNPbnaGzFveOoXxYTPlM6Fgo1aCl6wW85zzuXqT6AsxoCn95OMFhj_HHeYNPTkcyjbuW-HH_rJfhuUXt85iytZ_YAQQUfSXM7N3zAk7Pg
   Vary: X-Auth-Token
   x-openstack-request-id: req-d1bbe060-32f0-4cf1-ba1d-7b38c56b79fb

   {"token": {"is_domain": false,

      ...

You can set an environment variable to hold the token value from the response.
For example:

::

   TOKEN=gAAAAABdlOwafP71DXZjbyEf4gsNYA8ftso910S

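Alternatively, here is a sketch of capturing the token directly into the
variable, assuming the same JSON request body has been saved to a
hypothetical file ``token-request.json``:

::

   # Parse the X-Subject-Token header value out of the response headers.
   TOKEN=$(curl -si -H "Content-Type: application/json" \
     -d @token-request.json \
     http://keystone.openstack.svc.cluster.local:80/v3/auth/tokens \
     | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')
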
*****************
List Nova flavors
*****************

The following command will request a list of all Nova flavors:

::

   curl -i http://nova.openstack.svc.cluster.local:80/v2.1/flavors -X GET -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token:${TOKEN}" | tail -1 | python -m json.tool

The list will be returned in the response. For example:

::

     % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                    Dload  Upload   Total   Spent    Left  Speed
   100  2529  100  2529    0     0  24187      0 --:--:-- --:--:-- --:--:-- 24317
   {
       "flavors": [
           {
               "id": "04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/04cfe4e5-0d8c-49b3-ba94-54371e13ddce",
                       "rel": "bookmark"
                   }
               ],
               "name": "m1.tiny"
           },
           {
               "id": "14c725b1-1658-48ec-90e6-05048d269e89",
               "links": [
                   {
                       "href": "http://nova.openstack.svc.cluster.local/v2.1/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "self"
                   },
                   {
                       "href": "http://nova.openstack.svc.cluster.local/flavors/14c725b1-1658-48ec-90e6-05048d269e89",
                       "rel": "bookmark"
                   }
               ],
               "name": "medium.dpdk"
           },
           {

               ...

@@ -0,0 +1,16 @@

===================
StarlingX OpenStack
===================

This section describes the steps to install and access StarlingX OpenStack.
Other than the OpenStack-specific configurations required in the underlying
StarlingX Kubernetes infrastructure (described in the installation steps for
StarlingX Kubernetes), the installation of containerized OpenStack for StarlingX
is independent of deployment configuration.

.. toctree::
   :maxdepth: 2

   install
   access
   uninstall_delete

@@ -0,0 +1,65 @@

===========================
Install StarlingX OpenStack
===========================

These instructions assume that you have completed the following
OpenStack-specific configuration tasks, which are required by the underlying
StarlingX Kubernetes platform (an illustrative sketch of the related commands
follows this list):

* All nodes have been labeled appropriately for their OpenStack role(s).
* The vSwitch type has been configured.
* The nova-local volume group has been configured on any host running
  the compute function.

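As a hedged illustration only (host names, label values, and the disk are
examples, and the exact labels depend on the roles you assign), these
prerequisite tasks are typically performed with commands along these lines:

::

   # Label a combined controller/compute host for its OpenStack roles
   # (example labels; assign the ones matching your deployment).
   system host-label-assign controller-0 openstack-control-plane=enabled
   system host-label-assign controller-0 openstack-compute-node=enabled
   system host-label-assign controller-0 openvswitch=enabled

   # Configure the vSwitch type (example value).
   system modify --vswitch_type none

   # Create the nova-local volume group on a host running the compute
   # function, backed by an example physical volume.
   system host-lvg-add controller-0 nova-local
   system host-pv-add controller-0 nova-local <disk uuid>
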
--------------------------------------------
Install application manifest and helm-charts
--------------------------------------------

#. Get the StarlingX OpenStack application (stx-openstack) manifest and helm charts.
   These can come from a private StarlingX build or, as shown below, from the public
   CENGN StarlingX build of the ``master`` branch:

   ::

      wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/helm-charts/stx-openstack-1.0-17-centos-stable-latest.tgz

#. Load the stx-openstack application's package into StarlingX. The tarball
   package contains stx-openstack's Airship Armada manifest and stx-openstack's
   set of helm charts:

   ::

      system application-upload stx-openstack-1.0-17-centos-stable-latest.tgz

   This will:

   * Load the Armada manifest and helm charts.
   * Internally manage helm chart override values for each chart.
   * Automatically generate system helm chart overrides for each chart based on
     the current state of the underlying StarlingX Kubernetes platform and the
     recommended StarlingX configuration of OpenStack services.

#. Apply the stx-openstack application in order to bring StarlingX OpenStack into
   service:

   ::

      system application-apply stx-openstack

#. Wait for the activation of stx-openstack to complete.

   This can take 5-10 minutes depending on the performance of your host machine.

   Monitor progress with the command:

   ::

      watch -n 5 system application-list

----------
Next steps
----------

Your OpenStack cloud is now up and running.

See :doc:`access` for details on how to access StarlingX OpenStack.

@@ -0,0 +1,33 @@

=============================
Uninstall StarlingX OpenStack
=============================

This section provides additional commands for uninstalling and deleting the
StarlingX OpenStack application.

.. warning::

   Uninstalling the OpenStack application will terminate all OpenStack services.

-----------------------------
Bring down OpenStack services
-----------------------------

Use the system CLI to uninstall the OpenStack application:

::

   system application-remove stx-openstack
   system application-list

---------------------------------------
Delete OpenStack application definition
---------------------------------------

Use the system CLI to delete the OpenStack application definition:

::

   system application-delete stx-openstack
   system application-list

@@ -0,0 +1,21 @@

===========================================
Virtual All-in-one Duplex Installation R4.0
===========================================

--------
Overview
--------

.. include:: ../desc_aio_duplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_duplex_environ
   aio_duplex_install_kubernetes

@@ -0,0 +1,54 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R4.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual All-in-one Duplex deployment
configuration.

#. Prepare the virtual environment.

   Set up the virtual platform networks for the virtual deployment:

   ::

     bash setup_network.sh

#. Prepare the virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * duplex-controller-0
   * duplex-controller-1

   The following command will start (virtually power on):

   * The 'duplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c duplex -i ./bootimage.iso

   If there is no X-server present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not strictly
   required, so you can safely ignore the errors and continue.

@@ -0,0 +1,424 @@

==============================================
Install StarlingX Kubernetes on Virtual AIO-DX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 virtual All-in-one Duplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_duplex_environ`, the controller-0 virtual server
'duplex-controller-0' was started by the :command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the appropriate
installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

  virsh console duplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username/password of "sysadmin"/"sysadmin".
   When logging in for the first time, you will be forced to change the password:

   ::

     Login: sysadmin
     Password:
     Changing password for sysadmin.
     (current) UNIX Password: sysadmin
     New Password:
     (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook.
   Configure a temporary OAM address, link, and default route:

   ::

     export CONTROLLER0_OAM_CIDR=10.10.10.3/24
     export DEFAULT_OAM_GATEWAY=10.10.10.1
     sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
     sudo ip link set up dev enp7s1
     sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the ``default.yml`` file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

    source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host

#. Configure NTP servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G huge pages are not supported in the virtual environment and there is no
      virtual NIC supporting SR-IOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      export COMPUTE=controller-0
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list
      # Capture the host's port and interface lists for parsing below.
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
      # Look up the PCI addresses, port UUIDs, and port names of the two
      # data NICs, then map the port names to interface UUIDs.
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
      DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

      # Create the data networks and assign the data interfaces to them.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

      system host-disk-list controller-0
      system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
      system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. include:: aio_simplex_install_kubernetes.rst
   :start-after: incl-config-controller-0-openstack-specific-aio-simplex-start:
   :end-before: incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

-------------------------------------
Install software on controller-1 node
-------------------------------------

#. On the host, power on the controller-1 virtual server, 'duplex-controller-1'. It will
   automatically attempt to network boot over the management network:

   ::

      virsh start duplex-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console duplex-controller-1

   As the controller-1 VM boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

    system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | None         | None        | locked         | disabled    | offline      |
    +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

#. Wait for the software installation on controller-1 to complete, controller-1 to
   reboot, and controller-1 to show as locked/disabled/online in 'system host-list'.
   This can take 5-10 minutes, depending on the performance of the host machine.

   ::

    system host-list
    +----+--------------+-------------+----------------+-------------+--------------+
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | controller-1 | controller  | locked         | disabled    | online       |
    +----+--------------+-------------+----------------+-------------+--------------+

----------------------
 | 
			
		||||
Configure controller-1
 | 
			
		||||
----------------------
 | 
			
		||||
 | 
			
		||||
On virtual controller-0:
 | 
			
		||||
 | 
			
		||||
#. Configure the OAM and MGMT interfaces of controller-1 and specify the
 | 
			
		||||
   attached networks. Note that the MGMT interface is partially set up
 | 
			
		||||
   automatically by the network install procedure.
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      OAM_IF=enp7s1
 | 
			
		||||
      system host-if-modify controller-1 $OAM_IF -c platform
 | 
			
		||||
      system interface-network-assign controller-1 $OAM_IF oam
 | 
			
		||||
      system interface-network-assign controller-1 mgmt0 cluster-host
 | 
			
		||||
 | 
			
		||||
#. Configure data interfaces for controller-1.
 | 
			
		||||
 | 
			
		||||
   .. important::
 | 
			
		||||
 | 
			
		||||
      **This step is required only if the StarlingX OpenStack application
 | 
			
		||||
      (stx-openstack) will be installed.**
 | 
			
		||||
 | 
			
		||||
      1G Huge Pages are not supported in the virtual environment and there is no
 | 
			
		||||
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
 | 
			
		||||
      applicable in the virtual environment for the Kubernetes-only scenario.
 | 
			
		||||
 | 
			
		||||
   For OpenStack only:
 | 
			
		||||
 | 
			
		||||
   ::
 | 
			
		||||
 | 
			
		||||
      DATA0IF=eth1000
 | 
			
		||||
      DATA1IF=eth1001
 | 
			
		||||
      export COMPUTE=controller-1
 | 
			
		||||
      PHYSNET0='physnet0'
 | 
			
		||||
      PHYSNET1='physnet1'
 | 
			
		||||
      SPL=/tmp/tmp-system-port-list
 | 
			
		||||
      SPIL=/tmp/tmp-system-host-if-list
 | 
			
		||||
      system host-port-list ${COMPUTE} --nowrap > ${SPL}
 | 
			
		||||
      system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
 | 
			
		||||
      DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
 | 
			
		||||
      DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
 | 
			
		||||
      DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
 | 
			
		||||
      DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA1PORTNAME=$(cat  $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
 | 
			
		||||
      DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
 | 
			
		||||
      DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
 | 
			
		||||
 | 
			
		||||
      system datanetwork-add ${PHYSNET0} vlan
 | 
			
		||||
      system datanetwork-add ${PHYSNET1} vlan
 | 
			
		||||
 | 
			
		||||
      system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
 | 
			
		||||
      system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
 | 
			
		||||
      system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
 | 
			
		||||
      system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
 | 
			
		||||
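
   To confirm the assignments (for example), list controller-1's interfaces and
   data network attachments:

   ::

      system host-if-list -a controller-1
      system interface-datanetwork-list controller-1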

#. Add an OSD on controller-1 for Ceph:

   ::

    echo ">>> Add OSDs to primary tier"
    system host-disk-list controller-1
    system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
    system host-stor-list controller-1

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

    system host-label-assign controller-1 openstack-control-plane=enabled
    system host-label-assign controller-1 openstack-compute-node=enabled
    system host-label-assign controller-1 openvswitch=enabled
    system host-label-assign controller-1 sriov=enabled

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks:

   ::

      export COMPUTE=controller-1

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${COMPUTE} nova-local
      system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
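
   As a quick check (for example), verify that the volume group and physical
   volume were created:

   ::

      system host-lvg-list controller-1
      system host-pv-list controller-1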

-------------------
Unlock controller-1
-------------------

Unlock virtual controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,21 @@

============================================
Virtual All-in-one Simplex Installation R4.0
============================================

--------
Overview
--------

.. include:: ../desc_aio_simplex.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   aio_simplex_environ
   aio_simplex_install_kubernetes

@@ -0,0 +1,52 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R4.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual All-in-one Simplex deployment
configuration.

#. Prepare virtual environment.

   Set up the virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * simplex-controller-0

   The following command will start/virtually power on:

   * The 'simplex-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c simplex -i ./bootimage.iso

   If there is no X-server present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not absolutely
   required; you can safely ignore such errors and continue.

@@ -0,0 +1,285 @@

==============================================
Install StarlingX Kubernetes on Virtual AIO-SX
==============================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 virtual All-in-one Simplex** deployment configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`aio_simplex_environ`, the controller-0 virtual server
'simplex-controller-0' was started by the :command:`setup_configuration.sh`
command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

  virsh console simplex-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

    Login: sysadmin
    Password:
    Changing password for sysadmin.
    (current) UNIX Password: sysadmin
    New Password:
    (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

    export CONTROLLER0_OAM_CIDR=10.10.10.3/24
    export DEFAULT_OAM_GATEWAY=10.10.10.1
    sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
    sudo ip link set up dev enp7s1
    sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
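
   As a sanity check (assuming ICMP egress is permitted on your network), verify
   external connectivity before bootstrapping:

   ::

    ping -c 3 8.8.8.8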

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: simplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.
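
   Optionally, sanity-check the override file's YAML syntax before running the
   playbook (a sketch; assumes PyYAML is available, which Ansible itself
   requires):

   ::

      python -c 'import yaml; yaml.safe_load(open("localhost.yml"))' && echo "localhost.yml parses OK"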

#. Run the Ansible bootstrap playbook:

   ::

    ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

----------------------
Configure controller-0
----------------------

On virtual controller-0:

#. Acquire admin credentials:

   ::

     source /etc/platform/openrc

#. Configure the OAM interface of controller-0 and specify the attached network
   as "oam". Use the OAM port name, for example eth0, that is applicable to your
   deployment environment:

   ::

     OAM_IF=enp7s1
     system host-if-modify controller-0 $OAM_IF -c platform
     system interface-network-assign controller-0 $OAM_IF oam
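
   To verify the interface-to-network assignment (for example):

   ::

     system interface-network-list controller-0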

#. Configure NTP Servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance's clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP in this step.

   ::

    system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
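
   You can confirm the configured NTP servers afterwards (for example):

   ::

    system ntp-show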

#. Configure data interfaces for controller-0.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

    DATA0IF=eth1000
    DATA1IF=eth1001
    export COMPUTE=controller-0
    PHYSNET0='physnet0'
    PHYSNET1='physnet1'
    SPL=/tmp/tmp-system-port-list
    SPIL=/tmp/tmp-system-host-if-list
    system host-port-list ${COMPUTE} --nowrap > ${SPL}
    system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
    DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
    DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
    DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
    DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
    DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
    DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
    DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
    DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')

    system datanetwork-add ${PHYSNET0} vlan
    system datanetwork-add ${PHYSNET1} vlan

    system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
    system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
    system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
    system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}

#. Add an OSD on controller-0 for Ceph:

   ::

    system host-disk-list controller-0
    system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
    system host-stor-list controller-0

*************************************
OpenStack-specific host configuration
*************************************

.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

     system host-label-assign controller-0 openstack-control-plane=enabled
     system host-label-assign controller-0 openstack-compute-node=enabled
     system host-label-assign controller-0 openvswitch=enabled
     system host-label-assign controller-0 sriov=enabled
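
   You can list the assigned labels to confirm (for example):

   ::

     system host-label-list controller-0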

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS is. Therefore, simply use the default OVS vSwitch
   here.

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks:

   ::

     export COMPUTE=controller-0

     echo ">>> Getting root disk info"
     ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
     ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
     echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

     echo ">>>> Configuring nova-local"
     NOVA_SIZE=34
     NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
     NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
     system host-lvg-add ${COMPUTE} nova-local
     system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
     sleep 2
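
   To confirm the partition and volume group (for example):

   ::

     system host-disk-partition-list controller-0
     system host-lvg-list controller-0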

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,21 @@

==========================================================
Virtual Standard with Controller Storage Installation R4.0
==========================================================

--------
Overview
--------

.. include:: ../desc_controller_storage.txt

.. include:: ../ipv6_note.txt

------------
Installation
------------

.. toctree::
   :maxdepth: 1

   controller_storage_environ
   controller_storage_install_kubernetes

@@ -0,0 +1,56 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R4.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual Standard with Controller Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definition for:

   * controllerstorage-controller-0
   * controllerstorage-controller-1
   * controllerstorage-worker-0
   * controllerstorage-worker-1

   The following command will start/virtually power on:

   * The 'controllerstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso

   If there is no X-server present, errors will occur and the X-based GUI for the
   virt-manager application will not start. The virt-manager GUI is not absolutely
   required; you can safely ignore such errors and continue.
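
   To confirm that the virtual servers were defined and that controller-0 is
   running (for example):

   ::

     virsh list --all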

@@ -0,0 +1,551 @@

========================================================================
Install StarlingX Kubernetes on Virtual Standard with Controller Storage
========================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 virtual Standard with Controller Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`controller_storage_environ`, the controller-0 virtual
server 'controllerstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

  virsh console controllerstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. incl-bootstrap-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Log in using the username / password of "sysadmin" / "sysadmin".
   When logging in for the first time, you will be forced to change the password.

   ::

      Login: sysadmin
      Password:
      Changing password for sysadmin.
      (current) UNIX Password: sysadmin
      New Password:
      (repeat) New Password:

#. External connectivity is required to run the Ansible bootstrap playbook:

   ::

      export CONTROLLER0_OAM_CIDR=10.10.10.3/24
      export DEFAULT_OAM_GATEWAY=10.10.10.1
      sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
      sudo ip link set up dev enp7s1
      sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1

#. Specify user configuration overrides for the Ansible bootstrap playbook.

   Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible
   configuration are:

   ``/etc/ansible/hosts``
      The default Ansible inventory file. Contains a single host: localhost.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
      The Ansible bootstrap playbook.

   ``/usr/share/ansible/stx-ansible/playbooks/bootstrap/host_vars/default.yml``
      The default configuration values for the bootstrap playbook.

   ``sysadmin home directory ($HOME)``
      The default location where Ansible looks for and imports user
      configuration override files for hosts. For example: ``$HOME/<hostname>.yml``.

   .. include:: ../ansible_install_time_only.txt

   Specify the user configuration override file for the Ansible bootstrap
   playbook using one of the following methods:

   * Copy the default.yml file listed above to ``$HOME/localhost.yml`` and edit
     the configurable values as desired (use the commented instructions in
     the file).

   or

   * Create the minimal user configuration override file as shown in the example
     below:

     ::

        cd ~
        cat <<EOF > localhost.yml
        system_mode: duplex

        dns_servers:
          - 8.8.8.8
          - 8.8.4.4

        external_oam_subnet: 10.10.10.0/24
        external_oam_gateway_address: 10.10.10.1
        external_oam_floating_address: 10.10.10.2
        external_oam_node_0_address: 10.10.10.3
        external_oam_node_1_address: 10.10.10.4

        admin_username: admin
        admin_password: <sysadmin-password>
        ansible_become_pass: <sysadmin-password>
        EOF

   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:

   ::

      ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

   Wait for the Ansible bootstrap playbook to complete.
   This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. incl-config-controller-0-virt-controller-storage-start:

On virtual controller-0:

#. Acquire admin credentials:

   ::

      source /etc/platform/openrc

#. Configure the OAM and MGMT interfaces of controller-0 and specify the
   attached networks:

   ::

      OAM_IF=enp7s1
      MGMT_IF=enp7s2
      system host-if-modify controller-0 lo -c none
      IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
      for UUID in $IFNET_UUIDS; do
          system interface-network-remove ${UUID}
      done
      system host-if-modify controller-0 $OAM_IF -c platform
      system interface-network-assign controller-0 $OAM_IF oam
      system host-if-modify controller-0 $MGMT_IF -c platform
      system interface-network-assign controller-0 $MGMT_IF mgmt
      system interface-network-assign controller-0 $MGMT_IF cluster-host
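
   The loop above removes the interface-network assignments that bootstrap
   initially placed on the loopback interface, so that the mgmt and
   cluster-host networks can be reassigned to the physical interfaces. To
   verify the resulting assignments (for example):

   ::

      system interface-network-list controller-0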

#. Configure NTP Servers for network time synchronization:

   .. note::

      In a virtual environment, this can sometimes cause Ceph clock skew alarms.
      Also, the virtual instance clock is synchronized with the host clock,
      so it is not absolutely required to configure NTP here.

   ::

      system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

    system host-label-assign controller-0 openstack-control-plane=enabled

#. **For OpenStack only:** A vSwitch is required.

   The default vSwitch is containerized OVS that is packaged with the
   stx-openstack manifest/helm-charts. StarlingX provides the option to use
   OVS-DPDK on the host; however, OVS-DPDK is NOT supported in the virtual
   environment, only OVS is. Therefore, simply use the default OVS vSwitch
   here.

.. incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 in order to bring it into service:

::

    system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
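
If desired, a small polling loop (a sketch) can wait until controller-0 reports
available before you continue:

::

  source /etc/platform/openrc
  until system host-list | grep controller-0 | grep -q available; do sleep 10; done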

--------------------------------------------------
Install software on controller-1 and compute nodes
--------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'controllerstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start controllerstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console controllerstorage-controller-1

   As controller-1 VM boots, a message appears on its console instructing you to
   configure the personality of the node.

#. On the console of virtual controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. On virtual controller-0, using the host id, set the personality of this host
   to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates the install of software on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the personality
   for 'controllerstorage-worker-0' and 'controllerstorage-worker-1'. Set the
   personality to 'worker' and assign a unique hostname for each.

   For example, start 'controllerstorage-worker-0' from the host:

   ::

      virsh start controllerstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality and
   hostname:

   ::

      system host-update 3 personality=worker hostname=compute-0

   Repeat for 'controllerstorage-worker-1'. On the host:

   ::

      virsh start controllerstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality and
   hostname:

   ::

      system host-update 4 personality=worker hostname=compute-1

#. Wait for the software installation on controller-1, compute-0, and compute-1 to
   complete, for all virtual servers to reboot, and for all to show as
   locked/disabled/online in 'system host-list'.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | compute-0    | compute     | locked         | disabled    | online       |
      | 4  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+

----------------------
Configure controller-1
----------------------

.. incl-config-controller-1-virt-controller-storage-start:

Configure the OAM and MGMT interfaces of virtual controller-1 and specify the
attached networks. Note that the MGMT interface is partially set up by the
network install procedure.

::

  OAM_IF=enp7s1
  system host-if-modify controller-1 $OAM_IF -c platform
  system interface-network-assign controller-1 $OAM_IF oam
  system interface-network-assign controller-1 mgmt0 cluster-host

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

**For OpenStack only:** Assign OpenStack host labels to controller-1 in support
of installing the stx-openstack manifest/helm-charts later:

::

  system host-label-assign controller-1 openstack-control-plane=enabled

.. incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. incl-unlock-controller-1-virt-controller-storage-start:

Unlock virtual controller-1 in order to bring it into service:

::

  system host-unlock controller-1

Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure compute nodes
-----------------------

On virtual controller-0:

#. Add the third Ceph monitor to compute-0:

   (The first two Ceph monitors are automatically assigned to controller-0 and
   controller-1.)

   ::

      system ceph-mon-add compute-0

#. Wait for the compute node monitor to complete configuration:

   ::

      system ceph-mon-list
      +--------------------------------------+-------+--------------+------------+------+
      | uuid                                 | ceph_ | hostname     | state      | task |
      |                                      | mon_g |              |            |      |
      |                                      | ib    |              |            |      |
      +--------------------------------------+-------+--------------+------------+------+
      | 64176b6c-e284-4485-bb2a-115dee215279 | 20    | controller-1 | configured | None |
      | a9ca151b-7f2c-4551-8167-035d49e2df8c | 20    | controller-0 | configured | None |
      | f76bc385-190c-4d9a-aa0f-107346a9907b | 20    | compute-0    | configured | None |
      +--------------------------------------+-------+--------------+------------+------+

#. Assign the cluster-host network to the MGMT interface for the compute nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

      # Configure the datanetworks in sysinv, prior to referencing them
      # in the 'system host-if-modify' command.
      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done
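
   To confirm the data interface configuration on both compute nodes (for
   example):

   ::

      for COMPUTE in compute-0 compute-1; do
        system interface-datanetwork-list ${COMPUTE}
      done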

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks:

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done

--------------------
Unlock compute nodes
--------------------

.. incl-unlock-compute-nodes-virt-controller-storage-start:

Unlock virtual compute nodes to bring them into service:

::

  for COMPUTE in compute-0 compute-1; do
     system host-unlock $COMPUTE
  done

The compute nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.

.. incl-unlock-compute-nodes-virt-controller-storage-end:

----------------------------
Add Ceph OSDs to controllers
----------------------------

On virtual controller-0:

#. Add OSDs to controller-0:

   ::

      HOST=controller-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST

#. Add OSDs to controller-1:

   ::

      HOST=controller-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
         while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
      done

      system host-stor-list $HOST
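
   With OSDs configured on both controllers, you can check overall Ceph cluster
   health (for example; assuming the ceph CLI is available on the controller):

   ::

      ceph -s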
 | 
			
		||||
----------
 | 
			
		||||
Next steps
 | 
			
		||||
----------
 | 
			
		||||
 | 
			
		||||
.. include:: ../kubernetes_install_next.txt
 | 
			
		||||
@@ -0,0 +1,21 @@
 | 
			
		||||
=========================================================
 | 
			
		||||
Virtual Standard with Dedicated Storage Installation R4.0
 | 
			
		||||
=========================================================
 | 
			
		||||
 | 
			
		||||
--------
 | 
			
		||||
Overview
 | 
			
		||||
--------
 | 
			
		||||
 | 
			
		||||
.. include:: ../desc_dedicated_storage.txt
 | 
			
		||||
 | 
			
		||||
.. include:: ../ipv6_note.txt
 | 
			
		||||
 | 
			
		||||
------------
 | 
			
		||||
Installation
 | 
			
		||||
------------
 | 
			
		||||
 | 
			
		||||
.. toctree::
 | 
			
		||||
   :maxdepth: 1
 | 
			
		||||
 | 
			
		||||
   dedicated_storage_environ
 | 
			
		||||
   dedicated_storage_install_kubernetes
 | 
			
		||||
@@ -0,0 +1,58 @@

============================
Prepare Host and Environment
============================

This section describes how to prepare the physical host and virtual environment
for a **StarlingX R4.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

------------------------------------
Physical host requirements and setup
------------------------------------

.. include:: physical_host_req.txt

---------------------------------------
Prepare virtual environment and servers
---------------------------------------

The following steps explain how to prepare the virtual environment and servers
on a physical host for a StarlingX R4.0 virtual Standard with Dedicated Storage
deployment configuration.

#. Prepare virtual environment.

   Set up virtual platform networks for virtual deployment:

   ::

     bash setup_network.sh

#. Prepare virtual servers.

   Create the XML definitions for the virtual servers required by this
   configuration option. This will create the XML virtual server definitions for:

   * dedicatedstorage-controller-0
   * dedicatedstorage-controller-1
   * dedicatedstorage-storage-0
   * dedicatedstorage-storage-1
   * dedicatedstorage-worker-0
   * dedicatedstorage-worker-1

   The following command will start/virtually power on:

   * The 'dedicatedstorage-controller-0' virtual server
   * The X-based graphical virt-manager application

   ::

     bash setup_configuration.sh -c dedicatedstorage -i ./bootimage.iso

   If there is no X-server present, errors will occur and the X-based GUI for
   the virt-manager application will not start. The virt-manager GUI is not
   absolutely required; you can safely ignore these errors and continue. A
   quick way to confirm the environment is shown below.
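
To confirm that the platform networks and virtual servers were created, you can
query libvirt directly. This is a minimal optional check, assuming the
libvirt-based environment created by the setup scripts above; the exact network
names depend on your tools repository version:

::

  # List the virtual platform networks created by setup_network.sh
  virsh net-list --all

  # List the virtual servers; dedicatedstorage-controller-0 should be running
  virsh list --all
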
@@ -0,0 +1,390 @@

=======================================================================
Install StarlingX Kubernetes on Virtual Standard with Dedicated Storage
=======================================================================

This section describes the steps to install the StarlingX Kubernetes platform
on a **StarlingX R4.0 virtual Standard with Dedicated Storage** deployment
configuration.

.. contents::
   :local:
   :depth: 1

--------------------------------
Install software on controller-0
--------------------------------

In the last step of :doc:`dedicated_storage_environ`, the controller-0 virtual
server 'dedicatedstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.

On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.

.. note::

   When entering the console, it is very easy to miss the first installer menu
   selection. Use ESC to navigate to previous menus, to ensure you are at the
   first installer menu.

::

  virsh console dedicatedstorage-controller-0

Make the following menu selections in the installer:

#. First menu: Select 'Standard Controller Configuration'
#. Second menu: Select 'Serial Console'
#. Third menu: Select 'Standard Security Profile'

Wait for the non-interactive install of software to complete and for the server
to reboot. This can take 5-10 minutes, depending on the performance of the host
machine.

--------------------------------
Bootstrap system on controller-0
--------------------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-bootstrap-controller-0-virt-controller-storage-start:
   :end-before: incl-bootstrap-controller-0-virt-controller-storage-end:

----------------------
Configure controller-0
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-0-virt-controller-storage-start:
   :end-before: incl-config-controller-0-virt-controller-storage-end:

-------------------
Unlock controller-0
-------------------

Unlock virtual controller-0 in order to bring it into service:

::

  system host-unlock controller-0

Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
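
If you prefer not to watch the console, you can poll :command:`system
host-list` until controller-0 is back in service. This is an optional
convenience sketch, not part of the formal procedure; note that the system API
runs on controller-0 itself and will be unreachable while it reboots, which the
loop simply rides out:

::

  # Poll until controller-0 reports availability 'available' again
  while ! system host-list | grep controller-0 | grep -q available; do
     sleep 30
  done
  system host-list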

------------------------------------------------------------------
Install software on controller-1, storage nodes, and compute nodes
------------------------------------------------------------------

#. On the host, power on the controller-1 virtual server,
   'dedicatedstorage-controller-1'. It will automatically attempt to network
   boot over the management network:

   ::

      virsh start dedicatedstorage-controller-1

#. Attach to the console of virtual controller-1:

   ::

      virsh console dedicatedstorage-controller-1

#. As controller-1 VM boots, a message appears on its console instructing you
   to configure the personality of the node.

#. On the console of controller-0, list hosts to see the newly discovered
   controller-1 host (hostname=None):

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | None         | None        | locked         | disabled    | offline      |
      +----+--------------+-------------+----------------+-------------+--------------+

#. Using the host id, set the personality of this host to 'controller':

   ::

      system host-update 2 personality=controller

   This initiates software installation on controller-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'dedicatedstorage-storage-0' and 'dedicatedstorage-storage-1'.
   Set the personality to 'storage' and assign a unique hostname for each.

   For example, start 'dedicatedstorage-storage-0' from the host:

   ::

      virsh start dedicatedstorage-storage-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 3 personality=storage

   Repeat for 'dedicatedstorage-storage-1'. On the host:

   ::

      virsh start dedicatedstorage-storage-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality:

   ::

      system host-update 4 personality=storage

   This initiates software installation on storage-0 and storage-1.
   This can take 5-10 minutes, depending on the performance of the host machine.

#. While waiting on the previous step to complete, start up and set the
   personality for 'dedicatedstorage-worker-0' and 'dedicatedstorage-worker-1'.
   Set the personality to 'worker' and assign a unique hostname for each.

   For example, start 'dedicatedstorage-worker-0' from the host:

   ::

      virsh start dedicatedstorage-worker-0

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality and
   hostname:

   ::

      system host-update 5 personality=worker hostname=compute-0

   Repeat for 'dedicatedstorage-worker-1'. On the host:

   ::

      virsh start dedicatedstorage-worker-1

   Wait for the new host (hostname=None) to be discovered by checking
   'system host-list' on virtual controller-0, then set its personality and
   hostname:

   ::

      system host-update 6 personality=worker hostname=compute-1

   This initiates software installation on compute-0 and compute-1.

#. Wait for the software installation on controller-1, storage-0, storage-1,
   compute-0, and compute-1 to complete, for all virtual servers to reboot, and
   for all to show as locked/disabled/online in 'system host-list'. A polling
   sketch for these waits is shown after this procedure.

   ::

      system host-list
      +----+--------------+-------------+----------------+-------------+--------------+
      | id | hostname     | personality | administrative | operational | availability |
      +----+--------------+-------------+----------------+-------------+--------------+
      | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
      | 2  | controller-1 | controller  | locked         | disabled    | online       |
      | 3  | storage-0    | storage     | locked         | disabled    | online       |
      | 4  | storage-1    | storage     | locked         | disabled    | online       |
      | 5  | compute-0    | compute     | locked         | disabled    | online       |
      | 6  | compute-1    | compute     | locked         | disabled    | online       |
      +----+--------------+-------------+----------------+-------------+--------------+
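
The discovery and install waits above can be scripted. The following is an
optional convenience sketch, not part of the formal procedure, run on virtual
controller-0:

::

  # Wait for a newly booted virtual server to be discovered (hostname=None)
  while ! system host-list | grep -q None; do
     sleep 10
  done

  # After setting personalities, wait until no host still shows 'offline',
  # then inspect the final states
  while system host-list | grep -q offline; do
     sleep 30
  done
  system host-list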

----------------------
Configure controller-1
----------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-config-controller-1-virt-controller-storage-start:
   :end-before: incl-config-controller-1-virt-controller-storage-end:

-------------------
Unlock controller-1
-------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-controller-1-virt-controller-storage-start:
   :end-before: incl-unlock-controller-1-virt-controller-storage-end:

-----------------------
Configure storage nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the storage nodes.

   Note that the MGMT interfaces are partially set up by the network install
   procedure.

   ::

      for STORAGE in storage-0 storage-1; do
         system interface-network-assign $STORAGE mgmt0 cluster-host
      done

#. Add OSDs to storage-0 (a sketch for waiting on OSD configuration is shown
   after these steps):

   ::

      HOST=storage-0
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
      done

      system host-stor-list $HOST

#. Add OSDs to storage-1:

   ::

      HOST=storage-1
      DISKS=$(system host-disk-list ${HOST})
      TIERS=$(system storage-tier-list ceph_cluster)
      OSDs="/dev/sdb"
      for OSD in $OSDs; do
         system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
      done

      system host-stor-list $HOST
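
The :command:`host-stor-add` command can return while an OSD is still in the
'configuring' state. Before unlocking the storage nodes, you can wait for
configuration to complete using the same style of loop used elsewhere in this
guide; this sketch reuses the ${HOST} and $OSDs variables set in the preceding
step:

::

  # Wait until each OSD on ${HOST} is no longer in the 'configuring' state
  for OSD in $OSDs; do
     while system host-stor-list ${HOST} | grep ${OSD} | grep -q configuring; do
        sleep 1
     done
  done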

--------------------
Unlock storage nodes
--------------------

Unlock virtual storage nodes in order to bring them into service:

::

  for STORAGE in storage-0 storage-1; do
     system host-unlock $STORAGE
  done

The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
host machine.

-----------------------
Configure compute nodes
-----------------------

On virtual controller-0:

#. Assign the cluster-host network to the MGMT interface for the compute nodes.

   Note that the MGMT interfaces are partially set up automatically by the
   network install procedure.

   ::

      for COMPUTE in compute-0 compute-1; do
         system interface-network-assign $COMPUTE mgmt0 cluster-host
      done

#. Configure data interfaces for compute nodes.

   .. important::

      **This step is required only if the StarlingX OpenStack application
      (stx-openstack) will be installed.**

      1G Huge Pages are not supported in the virtual environment and there is no
      virtual NIC supporting SRIOV. For that reason, data interfaces are not
      applicable in the virtual environment for the Kubernetes-only scenario.

   For OpenStack only:

   ::

      DATA0IF=eth1000
      DATA1IF=eth1001
      PHYSNET0='physnet0'
      PHYSNET1='physnet1'
      SPL=/tmp/tmp-system-port-list
      SPIL=/tmp/tmp-system-host-if-list

   Configure the datanetworks in sysinv, prior to referencing them in the
   :command:`system host-if-modify` command.

   ::

      system datanetwork-add ${PHYSNET0} vlan
      system datanetwork-add ${PHYSNET1} vlan

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring interface for: $COMPUTE"
        set -ex
        system host-port-list ${COMPUTE} --nowrap > ${SPL}
        system host-if-list -a ${COMPUTE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${COMPUTE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${COMPUTE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${COMPUTE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${COMPUTE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
      done

*************************************
OpenStack-specific host configuration
*************************************

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to the compute nodes in
   support of installing the stx-openstack manifest/helm-charts later:

   ::

      for NODE in compute-0 compute-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
      done

#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks. A
   verification sketch follows these steps:

   ::

      for COMPUTE in compute-0 compute-1; do
        echo "Configuring Nova local for: $COMPUTE"
        ROOT_DISK=$(system host-show ${COMPUTE} | grep rootfs | awk '{print $4}')
        ROOT_DISK_UUID=$(system host-disk-list ${COMPUTE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
        PARTITION_SIZE=10
        NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${COMPUTE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
        NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
        system host-lvg-add ${COMPUTE} nova-local
        system host-pv-add ${COMPUTE} nova-local ${NOVA_PARTITION_UUID}
      done
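
To confirm the volume group and physical volume were created, you can list them
per host. This is an optional check, assuming the standard sysinv listing
commands are available:

::

  # Verify the nova-local volume group and its physical volume on each node
  for COMPUTE in compute-0 compute-1; do
     system host-lvg-list ${COMPUTE}
     system host-pv-list ${COMPUTE}
  done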

--------------------
Unlock compute nodes
--------------------

.. include:: controller_storage_install_kubernetes.rst
   :start-after: incl-unlock-compute-nodes-virt-controller-storage-start:
   :end-before: incl-unlock-compute-nodes-virt-controller-storage-end:

----------
Next steps
----------

.. include:: ../kubernetes_install_next.txt

@@ -0,0 +1,75 @@

The following sections describe system requirements and host setup for a
workstation hosting virtual machine(s) where StarlingX will be deployed.

*********************
Hardware requirements
*********************

The host system should have at least:

* **Processor:** x86_64 (the only supported architecture), with hardware
  virtualization extensions enabled in the BIOS (see the quick check after
  this list)

* **Cores:** 8

* **Memory:** 32GB RAM

* **Hard Disk:** 500GB HDD

* **Network:** One network adapter with active Internet connection
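
A quick way to confirm that the host meets the processor requirement is to
check for virtualization extensions from a shell. This is a generic Linux
check, not specific to StarlingX:

::

  # A non-zero count means VT-x (vmx) or AMD-V (svm) is advertised by the CPU
  grep -cE 'vmx|svm' /proc/cpuinfo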

*********************
Software requirements
*********************

The host system should have at least:

* A workstation computer with Ubuntu 16.04 LTS 64-bit

All other required packages will be installed by scripts in the StarlingX
tools repository.

**********
Host setup
**********

Set up the host with the following steps:

#. Update OS:

   ::

     apt-get update

#. Clone the StarlingX tools repository:

   ::

     apt-get install -y git
     cd $HOME
     git clone https://opendev.org/starlingx/tools.git

#. Install required packages:

   ::

     cd $HOME/tools/deployment/libvirt/
     bash install_packages.sh
     apt install -y apparmor-profiles
     apt-get install -y ufw
     ufw disable
     ufw status

   .. note::

      On Ubuntu 16.04, if the apparmor-profile modules were installed as shown
      in the example above, you must reboot the server to complete the
      installation of the apparmor-profile modules.

#. Get the StarlingX ISO. This can be from a private StarlingX build or from
   the public Cengn StarlingX mirror, as shown below:

   ::

     wget http://mirror.starlingx.cengn.ca/mirror/starlingx/release/2.0.0/centos/outputs/iso/bootimage.iso