Remove spurious escapes (r8,dsR8)
This change addresses a long-standing issue in rST documentation imported from
XML. That import process added backslash escapes in front of various
characters, the three most common being '(', ')', and '_'. These instances are
removed.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Id43a9337ffcd505ccbdf072d7b29afdb5d2c997e
commit f125a8b892
parent c153b65ab3
@@ -6,8 +6,8 @@
 appropriate settings.

 In particular, if |IGMP| snooping is enabled on |ToR| switches, then a
-device acting as an |IGMP| querier is required on the network \(on the same
-|VLAN|\) to prevent nodes from being dropped from the multicast group. The
+device acting as an |IGMP| querier is required on the network (on the same
+|VLAN|) to prevent nodes from being dropped from the multicast group. The
 |IGMP| querier periodically sends |IGMP| queries to all nodes on the
 network, and each node sends an |IGMP| join or report in response. Without
 an |IGMP| querier, the nodes do not periodically send |IGMP| join messages
@@ -1,8 +1,8 @@

 .. begin-redfish-vms

-For subclouds with servers that support Redfish Virtual Media Service \(version
-1.2 or higher\), you can use the Central Cloud's CLI to install the ISO and
+For subclouds with servers that support Redfish Virtual Media Service (version
+1.2 or higher), you can use the Central Cloud's CLI to install the ISO and
 bootstrap subclouds from the Central Cloud. For more information, see
 :ref:`Installing a Subcloud Using Redfish Platform Management Service
 <installing-a-subcloud-using-redfish-platform-management-service>`.
@@ -20,7 +20,7 @@ more letters, as follows:
 A slash-separated list of letters is used when the alarm can be triggered with
 one of several severity levels.

-An asterisk \(\*\) indicates the management-affecting severity, if any. A
+An asterisk (\*) indicates the management-affecting severity, if any. A
 management-affecting alarm is one that cannot be ignored at the indicated
 severity level or higher by using relaxed alarm rules during an orchestrated
 patch or upgrade operation.
@@ -23,7 +23,7 @@ one or more letters, as follows:
 A slash-separated list of letters is used when the alarm can be triggered with
 one of several severity levels.

-An asterisk \(\*\) indicates the management-affecting severity, if any. A
+An asterisk (\*) indicates the management-affecting severity, if any. A
 management-affecting alarm is one that cannot be ignored at the indicated
 severity level or higher by using relaxed alarm rules during an orchestrated
 patch or upgrade operation.
@@ -75,7 +75,7 @@ commands to manage containerized applications provided as part of |prod|.
 | updated_at | 2022-06-21T03:13:01.051293+00:00 |
 +---------------+----------------------------------+

-- Use the following command to upload application Helm chart\(s\) and
+- Use the following command to upload application Helm chart\(s) and
 manifest.

 .. code-block:: none
@@ -24,7 +24,7 @@ least the following |SANs|: ``DNS:registry.local``, ``DNS:registry.central``,
 IP Address:<oam-floating-ip-address>, IP Address:<mgmt-floating-ip-address>.
 Use the :command:`system addrpool-list` command to get the |OAM| floating IP
 Address and management floating IP Address for your system. You can add any
-additional |DNS| entry\(s\) that you have set up for your |OAM| floating IP
+additional |DNS| entry\(s) that you have set up for your |OAM| floating IP
 Address.

 .. note::
@@ -42,7 +42,7 @@ an expired or soon to expire certificate.
 .. rubric:: |prereq|

 Obtain an intermediate or Root |CA|-signed certificate and key from a trusted
-intermediate or Root Certificate Authority \(|CA|\). Refer to the documentation
+intermediate or Root Certificate Authority (|CA|). Refer to the documentation
 for the external Root |CA| that you are using, on how to create public
 certificate and private key pairs, signed by an intermediate or Root |CA|, for
 HTTPS.
@@ -54,7 +54,7 @@ using openssl <create-certificates-locally-using-openssl>` to create an
 Intermediate or test Root |CA| certificate and key, and use it to sign test
 certificates.

-Put the Privacy Enhanced Mail \(PEM\) encoded versions of the certificate and
+Put the Privacy Enhanced Mail (PEM) encoded versions of the certificate and
 key in a single file, and copy the file to the controller host.

 Also obtain the certificate of the intermediate or Root CA that signed the
@@ -87,7 +87,7 @@ information, see, :ref:`Display Certificates Installed on a System <utility-scri
 #. Update the Docker registry certificate using the
 :command:`certificate-install` command.

-Set the mode (``-m`` or ``--mode``) parameter to docker\_registry.
+Set the mode (``-m`` or ``--mode``) parameter to docker_registry.

 .. code-block:: none

@@ -60,7 +60,7 @@ see :ref:`Kubernetes CPU Manager Policies <kubernetes-cpu-manager-policies>`.
 When using the static CPU manager policy before increasing the number of
 platform CPUs or changing isolated CPUs to application CPUs on a host, ensure
 that no pods on the host are making use of any isolated CPUs that will be
-affected. Otherwise, the pod\(s\) will transition to a Topology Affinity Error
+affected. Otherwise, the pod\(s) will transition to a Topology Affinity Error
 state. Although not strictly necessary, the simplest way to do this on systems
 other than |AIO-SX| is to administratively lock the host, causing all the
 pods to be restarted on an alternate host, before changing CPU assigned
@@ -73,9 +73,9 @@ Kubernetes will report a new **windriver.com/isolcpus** resource for each
 worker node. This corresponds to the application-isolated CPUs. Pods in the
 **Best-effort** or **Burstable** |QoS| class may specify some number of
 **windriver.com/isolcpus** resources and the pod will be scheduled on a host
-\(and possibly |NUMA| node depending on topology manager policy\) with
+\(and possibly |NUMA| node depending on topology manager policy) with
 sufficient application-isolated cores available, and the container requesting
-the resource will be affined \(and restricted\) to those CPUs via cgroups.
+the resource will be affined (and restricted) to those CPUs via cgroups.

 Pods in the Guaranteed |QoS| class should not specify **windriver.com/isolcpus**
 resources as they will be allocated but not used. If there are multiple
@@ -15,9 +15,9 @@ For example:

 $ docker login registry.local:9001 -u <keystoneUserName> -p <keystonePassword>

-An authorized administrator \('admin' and 'sysinv'\) can perform any Docker
-action. Regular users can only interact with their own repositories \(i.e.
-registry.local:9001/<keystoneUserName>/\). Any authenticated user can pull from
+An authorized administrator ('admin' and 'sysinv') can perform any Docker
+action. Regular users can only interact with their own repositories (i.e.
+registry.local:9001/<keystoneUserName>/). Any authenticated user can pull from
 the following list of public images:

 .. _kubernetes-admin-tutorials-authentication-and-authorization-d383e50:
@@ -22,8 +22,8 @@ end-users' Kubernetes applications. |prod| recommends to install a Helm v3
 client on a remote workstation, so that non-admin (and admin) end-users can
 manage their Kubernetes applications remotely.

-Upon system installation, local Helm repositories \(containing |prod-long|
-packages\) are created and added to the Helm repo list.
+Upon system installation, local Helm repositories (containing |prod-long|
+packages) are created and added to the Helm repo list.

 Use the following command to list these local Helm repositories:

@@ -34,8 +34,8 @@ Use the following command to list these local Helm repositories:
 starlingx `http://127.0.0.1:8080/helm_charts/starlingx`
 stx-platform `http://127.0.0.1:8080/helm_charts/stx-platform`

-Where the `stx-platform` repo holds helm charts of StarlingX Applications \(see
-next section\) of the |prod| platform itself, while the `starlingx` repo holds
+Where the `stx-platform` repo holds helm charts of StarlingX Applications (see
+next section) of the |prod| platform itself, while the `starlingx` repo holds
 helm charts of optional StarlingX applications, such as Openstack. The admin
 user can add charts to these local repos and regenerate the index to use these
 charts, and add new remote repositories to the list of known repos.
@@ -10,7 +10,7 @@ Use the |prod| system application commands to manage containerized application
 deployment from the command-line.

 |prod| application management provides a wrapper around FluxCD and Kubernetes
-Helm \(see `https://github.com/helm/helm <https://github.com/helm/helm>`__\)
+Helm (see `https://github.com/helm/helm <https://github.com/helm/helm>`__)
 for managing containerized applications. FluxCD is a tool for managing multiple
 Helm charts with dependencies by centralizing all configurations in a single
 FluxCD YAML definition and providing life-cycle hooks for all Helm releases.
@@ -29,15 +29,15 @@ Static policy customizations
 ----------------------------

 - Pods in the **kube-system** namespace are affined to platform cores
-only. Other pod containers \(hosted applications\) are restricted to
+only. Other pod containers (hosted applications) are restricted to
 running on either the application or isolated cores. CFS quota
 throttling for Guaranteed QoS pods is disabled.

 - When using the static policy, improved performance can be achieved if
 you also use the Isolated CPU behavior as described at :ref:`Isolating CPU Cores to Enhance Application Performance <isolating-cpu-cores-to-enhance-application-performance>`.

-- For Kubernetes pods with a **Guaranteed** QoS \(see `https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ <https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/>`__
-for background information\), CFS quota throttling is disabled as it
+- For Kubernetes pods with a **Guaranteed** QoS (see `https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/ <https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/>`__
+for background information), CFS quota throttling is disabled as it
 causes performance degradation.

 - Kubernetes pods are prevented by default from running on CPUs with an
@@ -56,13 +56,13 @@ The backup contains details as listed below:
 All platform configuration data and files required to fully restore the
 system to a working state following the platform restore procedure.

-- \(Optional\) Any end user container images in **registry.local**; that
+- (Optional) Any end user container images in **registry.local**; that
 is, any images other than |org| system and application images.
 |prod| system and application images are repulled from their
 original source, external registries during the restore procedure.

 - Home directory 'sysadmin' user, and all |LDAP| user accounts
-\(item=/etc\)
+(item=/etc)

 - Patching and package repositories:

@@ -7,8 +7,8 @@
 Restore Platform System Data and Storage
 ========================================

-You can perform a system restore \(controllers, workers, including or excluding
-storage nodes\) of a |prod| cluster from a previous system backup and bring it
+You can perform a system restore (controllers, workers, including or excluding
+storage nodes) of a |prod| cluster from a previous system backup and bring it
 back to the operational state it was when the backup procedure took place.

 .. rubric:: |context|
@@ -31,14 +31,14 @@ details on the backup.
 the backup was made. You cannot use this backup file to restore the system
 to different hardware.

-To restore the backup, use the same version of the boot image \(ISO\) that
+To restore the backup, use the same version of the boot image (ISO) that
 was used at the time of the original installation.

 The |prod| restore supports the following optional modes:

 .. _restoring-starlingx-system-data-and-storage-ol-tw4-kvc-4jb:

-- To keep the Ceph cluster data intact \(false - default option\), use the
+- To keep the Ceph cluster data intact (false - default option), use the
 following parameter, when passing the extra arguments to the Ansible Restore
 playbook command:

@@ -46,7 +46,7 @@ The |prod| restore supports the following optional modes:

 wipe_ceph_osds=false

-- To wipe the Ceph cluster entirely \(true\), where the Ceph cluster will
+- To wipe the Ceph cluster entirely (true), where the Ceph cluster will
 need to be recreated, or if the Ceph partition was wiped somehow before or
 during reinstall, use the following parameter:

@@ -57,9 +57,9 @@ The |prod| restore supports the following optional modes:

 Restoring a |prod| cluster from a backup file is done by re-installing the
 ISO on controller-0, running the Ansible Restore Playbook, applying updates
-\(patches\), unlocking controller-0, and then powering on, and unlocking the
+\(patches), unlocking controller-0, and then powering on, and unlocking the
 remaining hosts, one host at a time, starting with the controllers, and then
-the storage hosts, ONLY if required, and lastly the compute \(worker\) hosts.
+the storage hosts, ONLY if required, and lastly the compute (worker) hosts.

 .. rubric:: |prereq|

@@ -92,7 +92,7 @@ conditions are in place:
 host manually for network boot immediately after powering it on.

 - If you are restoring a |prod-dc| subcloud first, ensure it is in
-an **unmanaged** state on the Central Cloud \(SystemController\) by using
+an **unmanaged** state on the Central Cloud (SystemController) by using
 the following commands:

 .. code-block:: none
@@ -293,8 +293,8 @@ conditions are in place:
 #. If :command:`wipe_ceph_osds` is set to **true**, reinstall the
 storage hosts.

-#. If :command:`wipe_ceph_osds` is set to **false** \(default
-option\), do not reinstall the storage hosts.
+#. If :command:`wipe_ceph_osds` is set to **false** (default
+option), do not reinstall the storage hosts.

 .. caution::
 Do not reinstall or power off the storage hosts if you want to
@@ -302,7 +302,7 @@ conditions are in place:
 will lead to data loss.

 #. Ensure that the Ceph cluster is healthy. Verify that the three Ceph
-monitors \(controller-0, controller-1, storage-0\) are running in
+monitors (controller-0, controller-1, storage-0) are running in
 quorum.

 .. code-block:: none
@@ -332,15 +332,15 @@ conditions are in place:

 If the message HEALTH_WARN appears, wait a few minutes and then try
 again. If the warning condition persists, consult the public
-documentation for troubleshooting Ceph monitors \(for example,
+documentation for troubleshooting Ceph monitors (for example,
 `http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshootin
 g-mon/
 <http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshootin
-g-mon/>`__\).
+g-mon/>`__).

-#. Restore the compute \(worker\) hosts, one at a time.
+#. Restore the compute (worker) hosts, one at a time.

-Restore the compute \(worker\) hosts following the same procedure used to
+Restore the compute (worker) hosts following the same procedure used to
 restore controller-1.

 #. Allow Calico and Coredns pods to be recovered by Kubernetes. They should
@@ -396,7 +396,7 @@ conditions are in place:
 are not included as part of the backup and restore procedures.

 - After restoring a |prod-dc| subcloud, you need to bring it back
-to the **managed** state on the Central Cloud \(SystemController\), by
+to the **managed** state on the Central Cloud (SystemController), by
 using the following commands:

 .. code-block:: none
@@ -39,7 +39,7 @@ and target it at controller-0.

 #. Provide either a customized Ansible hosts file specified using the ``-i``
 option, or use the default one in the Ansible configuration directory
-\(that is, /etc/ansible/hosts\).
+(that is, /etc/ansible/hosts).

 #. If using a customized file, change to the ``<br>`` directory created
 in the previous step.
@@ -106,7 +106,7 @@ and target it at controller-0.

 ~(keystone_admin)]$ ansible-playbook backup.yml --limit sm5 -i $HOME/br_test/hosts --ask-vault-pass -e "host_backup_dir=$HOME/br_test override_files_dir=$HOME/override_dir"

-The generated backup tar file can be found in <host\_backup\_dir>, that
+The generated backup tar file can be found in <host_backup_dir>, that
 is, /home/sysadmin, by default. You can overwrite it using the **-e**
 option on the command line or in an override file.

@@ -48,12 +48,12 @@ Other ``-e`` command line options:

 - **Optional**: You can select one of the following restore modes:

-- To keep the Ceph cluster data intact \(false - default option\), use the
+- To keep the Ceph cluster data intact (false - default option), use the
 following parameter:

 :command:`wipe_ceph_osds=false`

-- To wipe the Ceph cluster entirely \(true\), where the Ceph cluster will
+- To wipe the Ceph cluster entirely (true), where the Ceph cluster will
 need to be recreated, use the following parameter:

 :command:`wipe_ceph_osds=true`
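As an aside, a minimal sketch of how the ``wipe_ceph_osds`` option documented in the hunk above is typically passed on the restore command line, modeled on the backup playbook example earlier in this change; the playbook name and the omission of the other required extra-vars are assumptions for illustration only, not part of this commit:

   ~(keystone_admin)]$ ansible-playbook restore_platform.yml --ask-vault-pass -e "wipe_ceph_osds=false"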
@@ -133,7 +133,7 @@ Other ``-e`` command line options:

 .. rubric:: |postreq|

-After running restore\_platform.yml playbook, you can restore the local
+After running restore_platform.yml playbook, you can restore the local
 registry images.

 .. note::
@@ -31,7 +31,7 @@ In this method you can run Ansible Restore playbook and point to controller-0.

 #. Provide an inventory file, either a customized one that is specified
 using the ``-i`` option, or the default one that is in the Ansible
-configuration directory \(that is, /etc/ansible/hosts\). You must
+configuration directory (that is, /etc/ansible/hosts). You must
 specify the floating |OAM| IP of the controller host. For example, if the
 host name is |prefix|\_Cluster, the inventory file should have an entry
 called |prefix|\_Cluster.
@@ -54,12 +54,12 @@ In this method you can run Ansible Restore playbook and point to controller-0.

 where ``optional-extra-vars`` can be:

-- To keep Ceph data intact \(false - default option\), use the
+- To keep Ceph data intact (false - default option), use the
 following parameter:

 :command:`wipe_ceph_osds=false`

-- To start with an empty Ceph cluster \(true\), where the Ceph
+- To start with an empty Ceph cluster (true), where the Ceph
 cluster will need to be recreated, use the following parameter:

 :command:`wipe_ceph_osds=true`
@@ -103,7 +103,7 @@ In this method you can run Ansible Restore playbook and point to controller-0.
 on controller-0.

 - The :command:`ansible_remote_tmp` should be set to a new
-directory \(not required to create it ahead of time\) under
+directory (not required to create it ahead of time) under
 /home/sysadmin on controller-0 using the ``-e`` option on the command
 line.

@@ -17,12 +17,12 @@ of restoring the underlying platform.
 .. note::

 Data stored in Ceph such as Glance images, Cinder volumes or volume backups
-or Rados objects \(images stored in ceph\) are not backed up automatically.
+or Rados objects (images stored in ceph) are not backed up automatically.


 .. _back-up-openstack-ul-ohv-x3k-qmb:

-- To backup glance images use the image\_backup.sh script. For example:
+- To backup glance images use the ``image_backup.sh`` script. For example:

 .. code-block:: none

@@ -37,8 +37,8 @@ You can restore |prod-os| from a backup with or without Ceph.


 - Restore only |prod-os| system data. This option will not restore the
-Ceph data \(that is, it will not run commands like :command:`rbd
-import`\). This procedure will preserve any existing Ceph data at
+Ceph data (that is, it will not run commands like :command:`rbd
+import`). This procedure will preserve any existing Ceph data at
 restore-time.

 - Restore |prod-os| system data, Cinder volumes and Glance images. You'll
@@ -59,8 +59,8 @@ pci-passthrough interface.
 The name assigned to the data network.

 **<type>**
-The type of data network to be created \(**flat**, **vlan**, or
-**vxlan**\)
+The type of data network to be created (**flat**, **vlan**, or
+**vxlan**)

 .. note::
 **vxlan** is only applicable to |prod-os|.
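For illustration, the name and type fields described in the hunk above feed a data network creation command along these lines; the ``system datanetwork-add`` invocation and the sample values are assumptions for illustration, not part of this change:

   ~(keystone_admin)]$ system datanetwork-add data0 vlan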
@@ -86,7 +86,7 @@ pci-passthrough interface.

 For the |prod-os| application, after creating a data network of the VLAN or
 VXLAN type, you can assign one or more segmentation ranges consisting of a set
-of consecutive VLAN IDs \(for VLANs\) or VNIs \(for VXLANs\) using the
+of consecutive VLAN IDs (for VLANs) or VNIs (for VXLANs) using the
 :command:`openstack network segment range create` command. Segmentation ranges
 are required in order to set up project networks.

@@ -13,4 +13,4 @@ The command for performing the mapping has the format:

 .. code-block:: none

-~(keystone_admin)]$ system interface-datanetwork-assign <host\_name> <interface\_uuid> <datanetwork\_uuid>
+~(keystone_admin)]$ system interface-datanetwork-assign <host_name> <interface_uuid> <datanetwork_uuid>
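A hypothetical invocation of the mapping command shown above, with purely illustrative values for the host, interface UUID, and data network arguments:

   ~(keystone_admin)]$ system interface-datanetwork-assign worker-0 a7b8c9d0-1234-4e5f-8a9b-0c1d2e3f4a5b data0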
@@ -13,7 +13,7 @@ A data network represents a Layer 2 physical or virtual network, or set of
 virtual networks, used to provide the underlying network connectivity needed
 to support the application networks. Multiple data networks may be configured
 as required, and realized over the same or different physical networks. Access
-to external networks is typically \(although not necessarily\) granted to
+to external networks is typically (although not necessarily) granted to
 worker nodes using a data network. The extent of this connectivity, including
 access to the open internet, is application dependent.

@@ -84,6 +84,6 @@ Labels for Network Connections
 ------------------------------

 Network connections in the topology window may be labeled with the data
-interface name \(displayed above the connection line\) and LLDP neighbor
-information \(displayed below the connection line\). You can show or hide the
-labels using a button above the lists \(**Show Labels** or **Hide Labels**\).
+interface name (displayed above the connection line) and LLDP neighbor
+information (displayed below the connection line). You can show or hide the
+labels using a button above the lists (**Show Labels** or **Hide Labels**).
@@ -58,8 +58,8 @@ To make interface changes, you must lock the compute host first.

 .. image:: /shared/figures/datanet/jow1442607685238.png

-#. Enter the IPv4 or IPv6 address and netmask \(for example,
-192.168.1.3/24\), and then click **Create Address**.
+#. Enter the IPv4 or IPv6 address and netmask (for example,
+192.168.1.3/24), and then click **Create Address**.

 The new address is added to the **Address List**.

@@ -39,7 +39,7 @@ where
 is the default gateway

 **metric**
-is the cost of the route \(the number of hops\)
+is the cost of the route (the number of hops)

 To delete routes, use the following command.

@@ -64,7 +64,7 @@ You can use the CLI to add segmentation ranges to data networks.
 is the data network associated with the range.

 **network type**
-is the network type \(VLAN/VXLAN\) of the range.
+is the network type (VLAN/VXLAN) of the range.

 **minimum**
 is the minimum value of the segmentation range.
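As a sketch of how parameters like these are typically supplied to the openstack network segment range create command referenced earlier in this change; the flag names and sample values here are assumptions for illustration, not part of this commit:

   ~(keystone_admin)]$ openstack network segment range create --network-type vxlan --minimum 400 --maximum 499 --shared group0-range1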
@@ -28,7 +28,7 @@ where:
 **<interface name>**
 is the name of the interface

-**<mtu\_size>**
+**<mtu_size>**
 is the new |MTU| value

 For example:
@@ -7,7 +7,7 @@ Configure Data Interfaces for VXLANs
 ====================================

 For data interfaces attached to VXLAN-based data networks, endpoint IP
-addresses, static or dynamic from a IP Address pool\) and possibly IP Routes
+addresses, static or dynamic from a IP Address pool) and possibly IP Routes
 are additionally required on the host data interfaces.

 See :ref:`VXLAN Data Network Setup Completion
@@ -100,14 +100,14 @@ For each of the above procedures, configure the node interface specifying the
 The name or |UUID| of the Ethernet interface to use.

 **ip4\_mode**
-The mode for assigning IPv4 addresses to a data interface \(static or
-pool.\)
+The mode for assigning IPv4 addresses to a data interface (static or
+pool.)

 **ip6\_mode**
-The mode for assigning IPv6 addresses to a data interface \(static or
-pool.\)
+The mode for assigning IPv6 addresses to a data interface (static or
+pool.)

-**addr\_pool**
+**addr_pool**
 The name of an IPv4 or IPv6 address pool, for use with the pool mode
 of IP address assignment for data interfaces.

@@ -6,16 +6,16 @@
 Dynamic VXLAN
 =============

-|prod-os| supports dynamic mode \(learning\) VXLAN implementation that has each
+|prod-os| supports dynamic mode (learning) VXLAN implementation that has each
 vSwitch instance registered on the network for a particular IP multicast group,
 |MAC| addresses, and |VTEP| endpoints that are populated based on neutron
 configuration data.

-The IP multicast group, \(for example, 239.1.1.1\), is input when a new
+The IP multicast group, (for example, 239.1.1.1), is input when a new
 neutron data network is provisioned. The selection of the IP multicast group
 constraints flooding to only those nodes that have registered for the specified
-group. The IP multicast network can work in both a single subnet \(that is,
-local Layer2 environment\) or can span Layer3 segments in the customer network
+group. The IP multicast network can work in both a single subnet (that is,
+local Layer2 environment) or can span Layer3 segments in the customer network
 for more complex routing requirements but requires IP multicast enabled routers.

 .. only:: starlingx
@@ -70,7 +70,7 @@ segmentation ranges using the |CLI|.
 #. Create a VXLAN data network, see :ref:`Adding Data Networks
 <adding-data-networks-using-the-cli>`.

-#. Add segmentation ranges to dynamic |VXLAN| \(Multicast |VXLAN|\) data
+#. Add segmentation ranges to dynamic |VXLAN| (Multicast |VXLAN|) data
 networks, see :ref:`Adding Segmentation Ranges Using the CLI
 <adding-segmentation-ranges-using-the-cli>`.

@@ -59,7 +59,7 @@ To make interface changes, you must lock the compute node first.
 **ifname**
 is the name of the interface

-**ip\_address**
+**ip_address**
 is an IPv4 or IPv6 address

 **prefix**
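For illustration, fields such as these correspond to the arguments of an interface address assignment command along the following lines; the ``system host-addr-add`` invocation and sample values are assumptions, not part of this change:

   ~(keystone_admin)]$ system host-addr-add worker-0 enp0s9 192.168.1.3 24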
@@ -39,21 +39,21 @@ where:
 is a name used to select the pool during data interface setup

 **<network>**
-is the subnet and mask for the range \(for example, **192.168.1.0**\)
+is the subnet and mask for the range (for example, **192.168.1.0**)

 **<prefix>**
-is the subnet mask, expressed in network prefix length notation \(for
-example, **24**\)
+is the subnet mask, expressed in network prefix length notation (for
+example, **24**)

-**<assign\_order>**
-is the order in which to assign addresses from the pool \(random or
-sequential\). The default is random.
+**<assign_order>**
+is the order in which to assign addresses from the pool (random or
+sequential). The default is random.

-**<addr\_ranges>**
+**<addr_ranges>**
 is a set of IP address ranges to use for assignment, where the start
 and end IP address of each range is separated by a dash, and the ranges
-are separated by commas \(for example, **192.168.1.10-192.168.1.20,
-192.168.1.35-192.168.1.45**\). If no range is specified, the full range is
+are separated by commas (for example, **192.168.1.10-192.168.1.20,
+192.168.1.35-192.168.1.45**). If no range is specified, the full range is
 used.

 .. _managing-ip-address-pools-using-the-cli-section-N10109-N1001F-N10001:
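As a sketch of how these fields are typically combined into an address pool creation command; the ``system addrpool-add`` invocation, flag names, and sample values are assumptions for illustration, not part of this change:

   ~(keystone_admin)]$ system addrpool-add --order sequential --ranges 192.168.1.10-192.168.1.20,192.168.1.35-192.168.1.45 pool0 192.168.1.0 24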
@@ -51,7 +51,7 @@ To make interface changes, you must lock the compute node first.
 A name used for selecting the pool during data interface setup.

 **Network Address**
-The subnet for the range \(for example, **192.168.1.0/24**\).
+The subnet for the range (for example, **192.168.1.0/24**).

 **Allocation Order**
 The order for assigning addresses. You can select **Sequential** or
@@ -59,8 +59,8 @@ To make interface changes, you must lock the compute node first.

 **Address Range**
 One or more ranges, where the start and end IP address of each range
-is separated by a dash, and the ranges are separated by commas \(for
-example, **192.168.1.10-192.168.1.20, 192.168.1.35-192.168.1.45**\).
+is separated by a dash, and the ranges are separated by commas (for
+example, **192.168.1.10-192.168.1.20, 192.168.1.35-192.168.1.45**).
 If no range is specified, the full range is used.

 .. rubric:: |postreq|
@@ -6,11 +6,11 @@
 VXLAN Data Networks
 ===================

-Virtual eXtensible Local Area Networks \(|VXLANs|\) data networks are an
+Virtual eXtensible Local Area Networks (|VXLANs|) data networks are an
 alternative to |VLAN| data networks.

 A |VXLAN| data network is implemented over a range of |VXLAN| Network
-Identifiers \(|VNIs|.\) This is similar to the |VLAN| option, but allows
+Identifiers (|VNIs|.) This is similar to the |VLAN| option, but allows
 multiple data networks to be defined over the same physical network using
 unique |VNIs| defined in segmentation ranges.

@@ -9,12 +9,12 @@ Deployment Configurations
 A variety of |prod-long| deployment configuration options are supported.

 **All-in-one Simplex**
-A single physical server providing all three cloud functions \(controller,
-worker and storage\).
+A single physical server providing all three cloud functions (controller,
+worker and storage).

-**All-in-one Duplex \(up to 50 worker nodes\)**
+**All-in-one Duplex (up to 50 worker nodes)**
 Two HA-protected physical servers, both running all three cloud functions
-\(controller, worker and storage\), optionally with up to 50 worker nodes
+(controller, worker and storage), optionally with up to 50 worker nodes
 added to the cluster.

 **Standard with Storage Cluster on Controller Nodes**
@@ -23,7 +23,7 @@ A number of components are common to most |prod| deployment configurations.

 For standard with controller storage deployment configurations, the
 controller nodes/functions run a small-scale Ceph cluster using one or more
-disks \(|SATA|, |SAS|, |SSD| and/or |NVMe|\) as the ceph |OSDs|. This
+disks (|SATA|, |SAS|, |SSD| and/or |NVMe|) as the ceph |OSDs|. This
 cluster provides the storage backend for Kubernetes' |PVCs|.

 In most configurations, the controller nodes/functions are part of a two
@@ -35,8 +35,8 @@ A number of components are common to most |prod| deployment configurations.

 **Storage Node / Function**
 For Standard with Dedicated Storage deployment configurations, the storage
-nodes run a large scale Ceph cluster using disks \(|SATA|, |SAS|, |SSD| and
-/or |NVMe|\) across 2-9 storage nodes as Ceph |OSDs|. This provides the
+nodes run a large scale Ceph cluster using disks (|SATA|, |SAS|, |SSD| and
+/or |NVMe|) across 2-9 storage nodes as Ceph |OSDs|. This provides the
 storage backend for Kubernetes' |PVCs|.

 In most configurations the storage nodes/functions are part of a HA
@@ -50,20 +50,20 @@ A number of components are common to most |prod| deployment configurations.
 **L2 Switches and L2 Networks**
 A single physical switch may support multiple L2 networks.

-**Operations, Administration and Management (OAM) Network \(Controller Nodes Only\)**
+**Operations, Administration and Management (OAM) Network (Controller Nodes Only)**
 The network on which all external StarlingX platform APIs are exposed,
-including platform REST APIs \(Keystone, StarlingX, Kubernetes\), the
+including platform REST APIs (Keystone, StarlingX, Kubernetes), the
 Horizon Web interface, |SSH| and |SNMP|.

 This is typically a 1GE network.

-**Management Network \(All Nodes\)**
-A private network \(i.e. not connected externally\) used for internal
+**Management Network (All Nodes)**
+A private network (i.e. not connected externally) used for internal
 StarlingX monitoring and control, and container access to storage cluster.

 This is typically a 10GE network.

-**Cluster Host Network \(All Nodes\)**
+**Cluster Host Network (All Nodes)**
 The cluster host network is used for Kubernetes management and control, as
 well as private container networking. The |CNI| service, Calico, provides
 private tunneled networking between hosted containers on the cluster host
@@ -83,12 +83,12 @@ A number of components are common to most |prod| deployment configurations.

 Containers' network endpoints can be exposed externally with 'NodePort'
 Kubernetes services, exposing selected application containers' network
-ports on *all* interfaces \(e.g. external cluster host interfaces\) of
+ports on *all* interfaces (e.g. external cluster host interfaces) of
 both controller nodes and *all* worker nodes. This would typically be
 done either directly to the application containers service or through
 an ingress controller service. HA would be achieved through either an
 external HA load balancer across two or more worker nodes or simply
-using multiple records \(two or more destination worker node IPs\) for
+using multiple records (two or more destination worker node IPs) for
 the application's external DNS Entry.

 Containers' network endpoints can also be exposed through |BGP| within
@@ -111,13 +111,13 @@ A number of components are common to most |prod| deployment configurations.
 nodes. This is typically done either directly to the application
 containers service or through an ingress controller service. HA can be
 achieved through either an external HA load balancer across two or more
-worker nodes or simply using multiple records \(two or more destination
-worker node IP addresses\) for the application's external DNS Entry.
+worker nodes or simply using multiple records (two or more destination
+worker node IP addresses) for the application's external DNS Entry.

 The use of Container Networking Calico |BGP| to advertise containers'
 network endpoints is not available in this scenario.

-**Additional External Network\(s\) or Data Networks \(Worker & AIO Nodes Only\)**
+**Additional External Network\(s) or Data Networks (Worker & AIO Nodes Only)**
 Networks on which ingress controllers and/or hosted application containers
 expose their Kubernetes service, for example, through a NodePort service.
 Node interfaces to these networks are configured as platform class
@@ -128,13 +128,13 @@ A number of components are common to most |prod| deployment configurations.
 hosted application containers to have interfaces directly connected to the
 host's interface via pci-passthru or |SRIOV|.

-**IPMI Network \(All Nodes\)**
+**IPMI Network (All Nodes)**
 An optional network on which |IPMI| interfaces of all nodes are connected.

 The |IPMI| network must be L3/IP reachable from the controller's |OAM|
 interfaces.

-**PxeBoot Network \(All Nodes\)**
+**PxeBoot Network (All Nodes)**
 An *optional* network over which nodes net boot from controllers.

 By default, controllers network boot other nodes over the management
@@ -61,7 +61,7 @@ management and cluster host network.

 |org| recommends a 10GE shared management and cluster host network with
 |LAG| for direct connections. If the management
-network must be 1GE \(to support PXE booting\), then a separate 10GE
+network must be 1GE (to support PXE booting), then a separate 10GE
 cluster host network with |LAG| is also
 recommended. The use of |LAG| addresses failover
 considerations unique to peer-to-peer connections.
@@ -81,8 +81,8 @@ provide support for small scale deployments on the Intel Xeon D family of
 processors using a smaller memory and CPU footprint than the standard Simplex
 configuration.

-For low-cost or low-power applications with minimal performance demands \(40
-containers or fewer\), |prod| Simplex can be deployed on a server with a
+For low-cost or low-power applications with minimal performance demands (40
+containers or fewer), |prod| Simplex can be deployed on a server with a
 single Intel Xeon D class processor. The platform-reserved memory and the
 maximum number of worker threads are reduced by default, but can be
 reconfigured if required.
@@ -63,8 +63,8 @@ provide support for small scale deployments on the Intel Xeon D family of
 processors using a smaller memory and CPU footprint than the standard Simplex
 configuration.

-For low-cost or low-power applications with minimal performance demands \(40
-Containers or fewer\), |prod| Simplex can be deployed on a server with a
+For low-cost or low-power applications with minimal performance demands (40
+Containers or fewer), |prod| Simplex can be deployed on a server with a
 single Intel Xeon D class processor. The platform-reserved memory and the
 maximum number of worker threads are reduced by default, but can be
 reconfigured as required.
@@ -9,12 +9,12 @@ Deployment Options
 A variety of |prod-long| deployment configuration options are supported.

 **All-in-one Simplex**
-A single physical server providing all three cloud functions \(controller,
-worker and storage\).
+A single physical server providing all three cloud functions (controller,
+worker and storage).

-**All-in-one Duplex \(up to 50 worker nodes\)**
+**All-in-one Duplex (up to 50 worker nodes)**
 Two HA-protected physical servers, both running all three cloud functions
-\(controller, worker and storage\), optionally with up to 50 worker nodes
+(controller, worker and storage), optionally with up to 50 worker nodes
 added to the cluster.

 **Standard with Storage Cluster on Controller Nodes**
@@ -29,6 +29,6 @@ A variety of |prod-long| deployment configuration options are supported.
 information, see the :ref:`Storage
 <storage-configuration-storage-resources>` guide.

-All |prod| systems can use worker platforms \(worker hosts, or the worker
-function on a simplex or duplex system\) configured for either standard or
+All |prod| systems can use worker platforms (worker hosts, or the worker
+function on a simplex or duplex system) configured for either standard or
 low-latency worker function performance profiles.
@@ -7,7 +7,7 @@ Standard Configuration with Dedicated Storage
 =============================================

 Deployment of |prod| with dedicated storage nodes provides the highest capacity
-\(single region\), performance, and scalability.
+\(single region), performance, and scalability.

 .. image:: /shared/figures/deploy_install_guides/starlingx-deployment-options-dedicated-storage.png
 :width: 800
@@ -28,9 +28,9 @@ storage, and network interfaces can be scaled to meet requirements.
 Storage nodes provide a large scale Ceph cluster for the storage backend for
 Kubernetes |PVCs|. They are deployed in replication groups of either two or
 three for redundancy. For a system configured to use two storage hosts per
-replication group, a maximum of eight storage hosts \(four replication groups\)
+replication group, a maximum of eight storage hosts (four replication groups)
 are supported. For a system with three storage hosts per replication group, up
-to nine storage hosts \(three replication groups\) are supported. The system
+to nine storage hosts (three replication groups) are supported. The system
 provides redundancy and scalability through the number of Ceph |OSDs| installed
 in a storage node group, with more |OSDs| providing more capacity and better
 storage performance. The scalability and performance of the storage function is
@@ -26,7 +26,7 @@ You can add an arbitrary number of hosts using a single CLI command.

 ~[keystone_admin]$ system host-bulk-add <xml_file>

-where <xml\_file> is the name of the prepared XML file.
+where <xml_file> is the name of the prepared XML file.

 #. Power on the hosts to be added, if required.

@@ -81,7 +81,7 @@ scripting an initial setup.
 - storage

 **<subfunctions>**
-are the host personality subfunctions \(used only for a worker host\).
+are the host personality subfunctions (used only for a worker host).

 For a worker host, the only valid value is worker,lowlatency to enable
 a low-latency performance profile. For a standard performance profile,
@@ -95,53 +95,53 @@ scripting an initial setup.
 is a string describing the location of the host

 **<console>**
-is the output device to use for message display on the host \(for
-example, tty0\). The default is ttys0, 115200.
+is the output device to use for message display on the host (for
+example, tty0). The default is ttys0, 115200.

-**<install\_output>**
-is the format for console output on the host \(text or graphical\). The
+**<install_output>**
+is the format for console output on the host (text or graphical). The
 default is text.

 .. note::
 The graphical option currently has no effect. Text-based
 installation is used regardless of this setting.

-**<boot\_device>**
+**<boot_device>**
 is the host device for boot partition, relative to /dev. The default is
 sda.

-**<rootfs\_device>**
+**<rootfs_device>**
 is a logical volume cgts-vg/root-lv. The default is sda, it should be
 the same value as specified for the boot_device.

-**<mgmt\_mac>**
+**<mgmt_mac>**
 is the |MAC| address of the port connected to the internal management
 or |PXE| boot network.

-**<mgmt\_ip>**
+**<mgmt_ip>**
 is the IP address of the port connected to the internal management or
 |PXE| boot network, if static IP address allocation is used.

 .. note::
-The <mgmt\_ip> option is not used for a controller node.
+The <mgmt_ip> option is not used for a controller node.

-**<ttys\_dcd>**
+**<ttys_dcd>**
 is set to **True** to have any active console session automatically
 logged out when the serial console cable is disconnected, or **False**
 to disable this behavior. The server must support data carrier detect
 on the serial console port.

-**<bm\_type>**
+**<bm_type>**
 is the board management controller type. Use bmc.

-**<bm\_ip>**
-is the board management controller IP address \(used for external
-access to board management controllers over the |OAM| network\)
+**<bm_ip>**
+is the board management controller IP address (used for external
+access to board management controllers over the |OAM| network)

-**<bm\_username>**
+**<bm_username>**
 is the username for board management controller access

-**<bm\_password>**
+**<bm_password>**
 is the password for board management controller access

 For example:
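As a hedged illustration of how the elements above fit together (the ``<hosts>``/``<host>`` wrapper tags and every value below are assumptions for this sketch, not content from the change itself), a minimal bulk-add file and its use might look like:

.. code-block:: none

   <!-- hosts.xml: all values are illustrative only -->
   <hosts>
     <host>
       <personality>worker</personality>
       <subfunctions>worker,lowlatency</subfunctions>
       <mgmt_mac>08:00:27:aa:bb:cc</mgmt_mac>
       <mgmt_ip>192.168.204.50</mgmt_ip>
       <location>rack 2, slot 7</location>
       <console>ttyS0,115200</console>
       <install_output>text</install_output>
       <boot_device>sda</boot_device>
       <rootfs_device>sda</rootfs_device>
       <bm_type>bmc</bm_type>
       <bm_ip>10.10.10.50</bm_ip>
       <bm_username>admin</bm_username>
       <bm_password>secret</bm_password>
       <power_on/>
     </host>
   </hosts>

   ~[keystone_admin]$ system host-bulk-add hosts.xml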
@@ -34,30 +34,30 @@ valid values, refer to the CLI documentation.
 +------------------+------------------------------------------------------------------------------------------------------------+
 | subfunctions     | For a worker host, an optional element to enable a low-latency performance profile.                       |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| mgmt\_mac        | The MAC address of the management interface.                                                               |
+| mgmt_mac         | The MAC address of the management interface.                                                               |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| mgmt\_ip         | The IP address of the management interface.                                                                |
+| mgmt_ip          | The IP address of the management interface.                                                                |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| bm\_ip           | The IP address of the board management controller.                                                        |
+| bm_ip            | The IP address of the board management controller.                                                        |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| bm\_type         | The board management controller type.                                                                      |
+| bm_type          | The board management controller type.                                                                      |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| bm\_username     | The username for board management controller authentication.                                              |
+| bm_username      | The username for board management controller authentication.                                              |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| bm\_password     | The password for board management controller authentication.                                              |
+| bm_password      | The password for board management controller authentication.                                              |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| power\_on        | An empty element. If present, powers on the host automatically using the specified board management controller. |
+| power_on         | An empty element. If present, powers on the host automatically using the specified board management controller. |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| install\_output  | The display mode to use during installation \(text or graphical\). The default is **text**.               |
+| install_output   | The display mode to use during installation (text or graphical). The default is **text**.                 |
 |                  |                                                                                                            |
 |                  | .. note::                                                                                                  |
 |                  |    The graphical option currently has no effect. Text-based installation is used regardless of this setting. |
 +------------------+------------------------------------------------------------------------------------------------------------+
 | console          | If present, this element specifies the port, and if applicable the baud, for displaying messages. If the element is empty or not present, the default setting **ttyS0,115200** is used. |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| rootfs\_device   | The root filesystem is now a logical volume cgts-vg/root-lv. This value when shown should be the same value as the boot_device. |
+| rootfs_device    | The root filesystem is now a logical volume cgts-vg/root-lv. This value when shown should be the same value as the boot_device. |
 +------------------+------------------------------------------------------------------------------------------------------------+
-| boot\_device     | The device to use for the boot partition, relative to /dev.                                                |
+| boot_device      | The device to use for the boot partition, relative to /dev.                                                |
 +------------------+------------------------------------------------------------------------------------------------------------+
 | location         | A description of the host location.                                                                        |
 +------------------+------------------------------------------------------------------------------------------------------------+
@@ -16,8 +16,8 @@ host-bulk-export` command, and then use this file with the :command:`system
 host-bulk-add` command to re-create the system. If required, you can modify the
 file before using it.

-The configuration settings \(management |MAC| address, BM IP address, and so
-on\) for all nodes except **controller-0** are written to the file.
+The configuration settings (management |MAC| address, BM IP address, and so
+on) for all nodes except **controller-0** are written to the file.

 .. note::
 To ensure that the hosts are not powered on unexpectedly, the **power-on**
@@ -17,7 +17,7 @@ For a summary of changes that require system or host reinstallation, see
 <configuration-changes-requiring-re-installation>`.

 To reinstall an entire system, refer to the Installation Guide for your system
-type \(for example, Standard or All-in-one\).
+type (for example, Standard or All-in-one).

 .. note::
 To simplify system reinstallation, you can export and reuse an existing
@@ -29,8 +29,8 @@ To reinstall the software on a host using the Host Inventory controls, see
 |node-doc|: :ref:`Host Inventory <hosts-tab>`. In some cases, you must delete
 the host instead, and then re-add it using the standard host installation
 procedure. This applies if the system inventory record must be corrected to
-complete the configuration change \(for example, if the |MAC| address of the
-management interface has changed\).
+complete the configuration change (for example, if the |MAC| address of the
+management interface has changed).

 - :ref:`Reinstalling a System Using an Exported Host Configuration File
 <reinstalling-a-system-using-an-exported-host-configuration-file-r7>`
@@ -157,7 +157,7 @@ Example build command:
 --stream ${BUILD_STREAM}

 | This will produce a wheels tarball in your workspace:
-| ${MY\_WORKSPACE}/std/build-wheels-${OS}-${BUILD\_STREAM}/stx-${OS}-${BUILD\_STREAM}-wheels.tar
+| ${MY_WORKSPACE}/std/build-wheels-${OS}-${BUILD_STREAM}/stx-${OS}-${BUILD_STREAM}-wheels.tar

 ****************
 StarlingX wheels
@@ -168,7 +168,7 @@ the build. For CentOs, this means updating the package rpm specfile to
 build the wheel and package it in a -wheels package. The names of the
 wheels packages to be included in the tarball are listed in the
 wheels.inc files in the corresponding repo (ie.
-centos\_stable\_wheels.inc).
+centos_stable_wheels.inc).

 ---------------
 Building images
@@ -178,8 +178,8 @@ The StarlingX Docker images are built using a set of image directives
 files, with the base image and wheels tarball as input. The images are
 built by the build-stx-images.sh tool, in
 stx-root/build-tools/build-docker-images. The build-stx-images.sh tool
-will search the StarlingX repos for a corresponding docker\_images.inc
-file (ie. centos\_dev\_docker\_images.inc) which contains a list of
+will search the StarlingX repos for a corresponding docker_images.inc
+file (ie. centos_dev_docker_images.inc) which contains a list of
 subdirectories that contain the associated image directives files, which
 are processed and built.

@@ -286,15 +286,15 @@ Options supported by BUILDER=docker image directives files include:

 * LABEL: the image name
 * PROJECT: main project name
-* DOCKER\_REPO: main project source git repo
-* DOCKER\_REF: git branch or tag for main project source repo (default "master")
-* DOCKER\_PATCHES: list of patch files to apply to DOCKER\_REPO, relative to the local dir
-* DOCKER\_CONTEXT: path to build context source, relative to the local dir (default "docker")
-* DOCKER\_FILE: path to Dockerfile, relative to the local dir (default "docker/Dockerfile")
+* DOCKER_REPO: main project source git repo
+* DOCKER_REF: git branch or tag for main project source repo (default "master")
+* DOCKER_PATCHES: list of patch files to apply to DOCKER_REPO, relative to the local dir
+* DOCKER_CONTEXT: path to build context source, relative to the local dir (default "docker")
+* DOCKER_FILE: path to Dockerfile, relative to the local dir (default "docker/Dockerfile")

 .. note::

-DOCKER\_CONTEXT and DOCKER\_FILE are mutually exclusive to DOCKER\_REPO, DOCKER\_REF and DOCKER\_PATCHES.
+DOCKER_CONTEXT and DOCKER_FILE are mutually exclusive to DOCKER_REPO, DOCKER_REF and DOCKER_PATCHES.

 For an example of a BUILDER=docker image, see
 https://opendev.org/starlingx/oidc-auth-armada-app/src/branch/master/dex/centos/dex.stable_docker_image
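As a sketch only (the label, project name, repository URL, and patch name below are invented; the option names are the ones listed above), a BUILDER=docker image directives file generally takes this key=value form:

.. code-block:: none

   # Illustrative example; values are hypothetical
   BUILDER=docker
   LABEL=stx-example-service
   PROJECT=example-service
   DOCKER_REPO=https://opendev.org/example/example-service.git
   DOCKER_REF=master
   DOCKER_PATCHES=0001-example-build-fix.patch

Per the note above, such a file would use either DOCKER_REPO/DOCKER_REF/DOCKER_PATCHES or DOCKER_CONTEXT/DOCKER_FILE, not both.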
@@ -317,11 +317,11 @@ loci include:

 * LABEL: the image name
 * PROJECT: main project name
-* PROJECT\_REPO: main project source git repo
-* PROJECT\_REF: git branch or tag for main project source repo
-* PIP\_PACKAGES: list of python modules to be installed, beyond those
+* PROJECT_REPO: main project source git repo
+* PROJECT_REF: git branch or tag for main project source repo
+* PIP_PACKAGES: list of python modules to be installed, beyond those
 specified by project dependencies or requirements
-* DIST\_PACKAGES: additional packages to be installed (eg. RPMs from
+* DIST_PACKAGES: additional packages to be installed (eg. RPMs from
 repo, configured by base image)
 * PROFILES: bindep profiles supported by project to be installed (eg.
 apache)
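A comparable sketch for a loci-built image (the BUILDER value and all names, URLs, and package choices here are assumptions, shown only to illustrate how the options combine):

.. code-block:: none

   # Illustrative example; values are hypothetical
   BUILDER=loci
   LABEL=stx-example-api
   PROJECT=example-api
   PROJECT_REPO=https://opendev.org/example/example-api.git
   PROJECT_REF=master
   PIP_PACKAGES="example-plugin"
   DIST_PACKAGES="openssh-clients"
   PROFILES="apache"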
@@ -330,7 +330,7 @@ In addition, you can specify a bash command in the CUSTOMIZATION option,
 in order to do a modification on the loci-built image.

 Example:
-stx-upstream/openstack/python-nova/centos/stx-nova.dev\_docker\_image
+stx-upstream/openstack/python-nova/centos/stx-nova.dev_docker_image

 ::

@@ -347,7 +347,7 @@ In a case where the image is built without a main project source git
 repo, where the main project source is just coming from a wheel, you can
 set PROJECT to infra, and loci skips the git clone steps. For example,
 stx-nova-api-proxy:
-stx-nfv/nova-api-proxy/centos/stx-nova-api-proxy.dev\_docker\_image
+stx-nfv/nova-api-proxy/centos/stx-nova-api-proxy.dev_docker_image

 ::

@@ -466,7 +466,7 @@ of the entire image. The tool allows for updates via:
 Specifying Python module source
 *******************************

-The --module-src command-line option (or MODULE\_SRC in an update
+The --module-src command-line option (or MODULE_SRC in an update
 directives file) allows a designer to specify python module source from
 either a directory or git repository. If specifying a git repository,
 you can also specify a branch or tag to be fetched, as well as
@@ -483,9 +483,9 @@ Customization script

 You can optionally provide a customization script to make changes to the
 image that cannot be handled by updating software, using the --customize
-command-line option (or CUSTOMIZATION\_SCRIPT in an update directives
+command-line option (or CUSTOMIZATION_SCRIPT in an update directives
 file). You can also provide supporting files with the --extra
-command-line option (or EXTRA\_FILES in an update directives file),
+command-line option (or EXTRA_FILES in an update directives file),
 which will be accessible to the customization script in the
 /image-update/extras directory within the update container.

@@ -36,7 +36,7 @@ manages the following certificates:
 .. certificate-management-for-admin-rest--api-endpoints-ul-zdc-pmk-xnb:

 - **DC-AdminEp-Root-CA certificate**: This certificate expires in 1825 days
-\(approximately 5 years\). Renewal of this certificate starts 30 days prior
+(approximately 5 years). Renewal of this certificate starts 30 days prior
 to expiry.

 The Root |CA| certificate is renewed on the System Controller. When the
@@ -46,8 +46,8 @@ Ensure that all subclouds are managed and online.

 .. note::
 In a subcloud, if the |CLI| command returns an authentication error
-after you source the script /etc/platform/openrc, you can verify
-the password on the subcloud by using the :command:`env \| grep OS\_PASSWORD`
+after you source the script ``/etc/platform/openrc``, you can verify
+the password on the subcloud by using the :command:`env \| grep OS_PASSWORD`
 command . If it returns the old password, you will need to run the
 :command:`keyring set CGCS admin` command and provide the new admin
 password.
@@ -22,8 +22,8 @@ The following settings are applied by default:

 - alarm restriction type: relaxed

-- default instance action: migrate \(This parameter is only applicable to
-hosted application |VMs| with the |prefix|-openstack application.\)
+- default instance action: migrate (This parameter is only applicable to
+hosted application |VMs| with the |prefix|-openstack application.)


 To update the default values, use the :command:`dcmanager strategy-config
@@ -112,7 +112,7 @@ individual subclouds.
 migrate or stop-start — determines whether hosted application |VMs| are
 migrated or stopped and restarted when a worker host is upgraded

-**subcloud\_name**
+**subcloud_name**
 The name of the subcloud to use the custom strategy. If this omitted,
 the default upgrade strategy is updated.

@@ -148,12 +148,12 @@ controller for access by subclouds. For example:

 **--max-parallel-subclouds**
 Sets the maximum number of subclouds that can be upgraded in parallel
-\(default 20\). If this is not specified using the CLI, the values for
+(default 20). If this is not specified using the CLI, the values for
 max_parallel_subclouds defined for each subcloud group will be used by
 default.

 **--stop-on-failure**
-**false** \(default\) or **true** — determines whether upgrade
+**false** (default) or **true** — determines whether upgrade
 orchestration failure for a subcloud prevents application to subsequent
 subclouds.

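As a usage sketch only (the exact flag spellings, and whether ``--stop-on-failure`` takes a value, should be confirmed with ``dcmanager upgrade-strategy create --help``; the number 10 is arbitrary), a strategy using these options might be created with:

.. code-block:: none

   ~[keystone_admin]$ dcmanager upgrade-strategy create --max-parallel-subclouds 10 --stop-on-failure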
@@ -257,7 +257,7 @@ controller for access by subclouds. For example:

 .. note::
 After the *Kubernetes Version Upgrade Distributed Cloud Orchestration
-Strategy* has been applied \(or aborted\) it must be deleted before
+Strategy* has been applied (or aborted) it must be deleted before
 another Kubernetes Version Upgrade Distributed Cloud Orchestration
 strategy can be created. If a Kubernetes upgrade strategy application
 fails, you must address the issue that caused the failure, then delete
@@ -20,7 +20,7 @@ If the Subclouds are in a **Managed** state and if the patching sync status is
 .. rubric:: |context|

 Only one update strategy can exist at a time. The strategy controls how the
-subclouds are updated \(for example, serially or in parallel\).
+subclouds are updated (for example, serially or in parallel).

 To determine how the nodes on the Central Cloud's RegionOne and each subcloud
 are updated, the update strategy refers to separate configuration settings
@@ -6,8 +6,8 @@
 Customize the Update Configuration for Distributed Cloud Update Orchestration
 =============================================================================

-You can adjust how the nodes in each system \(Central Cloud's RegionOne and/or
-Subclouds\) are updated.
+You can adjust how the nodes in each system (Central Cloud's RegionOne and/or
+Subclouds) are updated.

 .. rubric:: |context|

@@ -161,14 +161,14 @@ device image updates, including |FPGA| updates.

 **max-parallel-subclouds**
 Sets the maximum number of subclouds that can be updated in parallel
-\(default 20\).
+(default 20).

 If this is not specified using the |CLI|, the values for
 :command:`max_parallel_subclouds` defined for each subcloud group
 will be used by default.

 **stop-on-failure**
-true or false \(default\) — determines whether update orchestration
+true or false (default) — determines whether update orchestration
 failure for a subcloud prevents application to subsequent subclouds.

 **group**
@@ -53,8 +53,8 @@ if using the CLI.
 .. note::

 Services in a Subcloud authenticate against their local Identity
-Provider only \(i.e. Keystone for StarlingX and Kubernetes Service
-Accounts for Kubernetes\). This allows the subcloud to not only be
+Provider only (i.e. Keystone for StarlingX and Kubernetes Service
+Accounts for Kubernetes). This allows the subcloud to not only be
 autonomous in the face of disruptions with the Central Region, but also
 allows the subcloud to improve service performance since authentication
 is localized within the subcloud.
@@ -64,7 +64,7 @@ if using the CLI.
 Each subcloud can be in a Managed or Unmanaged state.

 **Managed**
-When a subcloud is in the Managed state, it is updated \(synchronized\)
+When a subcloud is in the Managed state, it is updated (synchronized)
 immediately with configuration changes made at the System Controller.
 This is the normal operating state. Updates may be delayed slightly
 depending on network conditions.
@@ -42,7 +42,7 @@ following conditions:
 - The subclouds must use the Redfish platform management service if it is
 an |AIO-SX| subcloud.

-- Duplex \(|AIO-DX|/Standard\) upgrades are supported, and they do not
+- Duplex (|AIO-DX|/Standard) upgrades are supported, and they do not
 require remote install using Redfish.

 - Redfish |BMC| is required for orchestrated subcloud upgrades. The install
@@ -62,8 +62,8 @@ following conditions:
 :ref:`Installing a Subcloud Using Redfish Platform Management Service
 <installing-a-subcloud-using-redfish-platform-management-service>`.

-- All subclouds are clear of management-affecting alarms \(with the exception of the alarm upgrade
-in progress\).
+- All subclouds are clear of management-affecting alarms (with the exception of the alarm upgrade
+in progress).

 - All hosts of all subclouds must be unlocked, enabled, and available.

@@ -104,7 +104,7 @@ dcmanager CLI or the Horizon web interface. If you prefer to use Horizon, see
 After the System Controller upgrade is completed, wait for 10 minutes for
 the **load_sync_status** of all subclouds to be updated.

-To identify which subclouds are upgrade-current \(in-sync\), use the
+To identify which subclouds are upgrade-current (in-sync), use the
 :command:`subcloud list` command. For example:

 .. code-block:: none
@@ -184,14 +184,14 @@ dcmanager CLI or the Horizon web interface. If you prefer to use Horizon, see

 **max-parallel-subclouds**
 Sets the maximum number of subclouds that can be upgraded in parallel
-\(default 20\).
+(default 20).

 If this is not specified using the CLI, the values for
 :command:`max_parallel_subclouds` defined for each subcloud group
 will be used by default.

 **stop-on-failure**
-**true**\(default\) or **false**— determines whether upgrade
+**true**\(default) or **false**— determines whether upgrade
 orchestration failure for a subcloud prevents application to subsequent
 subclouds.

@@ -7,8 +7,8 @@
 Install a Subcloud Using Redfish Platform Management Service
 ============================================================

-For subclouds with servers that support Redfish Virtual Media Service \(version
-1.2 or higher\), you can use the Central Cloud's CLI to install the ISO and
+For subclouds with servers that support Redfish Virtual Media Service (version
+1.2 or higher), you can use the Central Cloud's CLI to install the ISO and
 bootstrap the subclouds from the Central Cloud.


@@ -47,9 +47,9 @@ subcloud, the subcloud installation has these phases:
 :command:`load-import` command to allow the import into the
 System Controller ``/opt/dc-vault/loads``. The purpose of this is to allow
 Redfish install of subclouds referencing a single full copy of the
-``bootimage.iso`` at ``/opt/dc-vault/loads``. \(Previously, the full
+``bootimage.iso`` at ``/opt/dc-vault/loads``. (Previously, the full
 ``bootimage.iso`` was duplicated for each :command:`subcloud add`
-command\).
+command).

 .. note::

@@ -56,7 +56,7 @@ subcloud, the subcloud installation process has two phases:

 - In order to be able to deploy subclouds from either controller, all local
 files that are referenced in the ``bootstrap.yml`` file must exist on both
-controllers \(for example, ``/home/sysadmin/docker-registry-ca-cert.pem``\).
+controllers (for example, ``/home/sysadmin/docker-registry-ca-cert.pem``).

 .. rubric:: |proc|

@@ -74,8 +74,8 @@ subcloud, the subcloud installation process has two phases:
 :start-after: begin-ref-1
 :end-before: end-ref-1

-#. Update the ISO image to modify installation boot parameters \(if
-required\), automatically select boot menu options and add a kickstart file
+#. Update the ISO image to modify installation boot parameters (if
+required), automatically select boot menu options and add a kickstart file
 to automatically perform configurations such as configuring the initial IP
 Interface for bootstrapping.

@@ -258,7 +258,7 @@ subcloud, the subcloud installation process has two phases:
 deployment by monitoring the following log files on the active controller
 in the Central Cloud.

-/var/log/dcmanager/ansible/<subcloud\_name>\_playbook.output.log
+/var/log/dcmanager/ansible/<subcloud_name>\_playbook.output.log

 For example:

@@ -15,8 +15,8 @@ Platform Management Service.

 .. note::

-Each subcloud must be on a separate management subnet \(different from the
-System Controller and from any other subclouds\).
+Each subcloud must be on a separate management subnet (different from the
+System Controller and from any other subclouds).


 .. _installing-and-provisioning-a-subcloud-section-orn-jkf-t4b:
@@ -18,17 +18,17 @@ a different subcloud group, if required. To create a subcloud group, see,
 For example, while creating a strategy, if several subclouds can be upgraded or
 updated in parallel, they can be grouped together in a subcloud group that
 supports parallel upgrades or updates. In this case, the
-:command:`max\_parallel\_subclouds`, and :command:`subcloud\_apply\_type` are
+:command:`max_parallel_subclouds`, and :command:`subcloud_apply_type` are
 **not** specified when the strategy is created, so that the settings in the
 subcloud group are used.

 Alternatively, if several subclouds should be upgraded or updated individually,
 they can be grouped together in a subcloud group that supports serial updates.
-In this case, the :command:`max\_parallel\_subclouds`,
-and:command:`subcloud\_apply\_type` are **not** specified when creating the
+In this case, the :command:`max_parallel_subclouds`,
+and:command:`subcloud_apply_type` are **not** specified when creating the
 strategy, and the subcloud group settings for
-:command:`max\_parallel\_subclouds` \(not applicable\), and the
-:command:`subcloud\_apply\_type` \(serial\) associated with that subcloud group
+:command:`max_parallel_subclouds` (not applicable), and the
+:command:`subcloud_apply_type` (serial) associated with that subcloud group
 are used.

 For more information on creating a strategy for orchestration upgrades, updates
@@ -44,7 +44,7 @@ or firmware updates, see:
 Upgrade Orchestration Process Using the CLI
 <distributed-upgrade-orchestration-process-using-the-cli>`.

-- To create an update \(patch\) orchestration strategy use the
+- To create an update (patch) orchestration strategy use the
 :command:`dcmanager patch-strategy create` command.

 .. xbooklink For more information see,
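A hedged usage sketch of that command (the option and value shown are assumptions based on the generic orchestration options described earlier; confirm against the CLI help):

.. code-block:: none

   ~[keystone_admin]$ dcmanager patch-strategy create --max-parallel-subclouds 5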
@@ -21,7 +21,7 @@ subcloud groups, is the order they are processed by orchestration.
 #. All subclouds in the first, second, and third group, etc.

 #. Subclouds from different groups will never be included in the same stage of
-the strategy to ensure they are not upgraded, updated \(patched\) at the
+the strategy to ensure they are not upgraded, updated (patched) at the
 same time.


@@ -23,7 +23,7 @@ Controller using the rehoming playbook.
 and the subcloud's controllers before rehoming the subcloud.

 Use the following procedure to enable subcloud rehoming and to update the new
-subcloud configuration \(networking parameters, passwords, etc.\) to be
+subcloud configuration (networking parameters, passwords, etc.) to be
 compatible with the new System Controller.

 .. rubric:: |context|
@@ -63,7 +63,7 @@ There are six phases for Rehoming a subcloud:
 .. rubric:: |prereq|

 - Ensure that the subcloud management subnet, oam_floating_address,
-oam_node_0_address and oam_node_1_address \(if applicable\) does not overlap
+oam_node_0_address and oam_node_1_address (if applicable) does not overlap
 addresses already being used by the new System Controller or any of its
 subclouds.

@@ -130,7 +130,7 @@ There are six phases for Rehoming a subcloud:
 You will need to specify the old and the new password.

 #. For an |AIO-DX| subcloud, ensure that the active controller is
-controller-0. Perform a host-swact of the active controller \(controller-1\)
+controller-0. Perform a host-swact of the active controller (controller-1)
 to make controller-0 active.

 .. code-block:: none
@@ -9,7 +9,7 @@ Reinstall a Subcloud with Redfish Platform Management Service
 =============================================================

 For subclouds with servers that support Redfish Virtual Media Service
-\(version 1.2 or higher\), you can use the Central cloud's CLI to reinstall
+\(version 1.2 or higher), you can use the Central cloud's CLI to reinstall
 the ISO and bootstrap subclouds from the Central cloud.

 .. caution::
@ -33,12 +33,12 @@ subclouds.
|
|||||||
|
|
||||||
The Patch State indicates whether the patch is available,
|
The Patch State indicates whether the patch is available,
|
||||||
partially-applied or applied. Applied indicates that the update has
|
partially-applied or applied. Applied indicates that the update has
|
||||||
been installed on all hosts of the cloud \(SystemController in this
|
been installed on all hosts of the cloud (SystemController in this
|
||||||
case\).
|
case).
|
||||||
|
|
||||||
#. Check the Update Sync Status of the subclouds.
|
#. Check the Update Sync Status of the subclouds.
|
||||||
|
|
||||||
Update \(or Patch\) Sync Status is part of the overall Sync status of a
|
Update (or Patch) Sync Status is part of the overall Sync status of a
|
||||||
subcloud. To review the synchronization status of subclouds, see
|
subcloud. To review the synchronization status of subclouds, see
|
||||||
:ref:`Monitoring Subclouds Using Horizon
|
:ref:`Monitoring Subclouds Using Horizon
|
||||||
<monitoring-subclouds-using-horizon>`.
|
<monitoring-subclouds-using-horizon>`.
|
||||||
|
@ -37,9 +37,9 @@ Distributed Cloud Using Horizon
|
|||||||
|
|
||||||
The **Patch State** column indicates whether the Patch is available,
|
The **Patch State** column indicates whether the Patch is available,
|
||||||
partially-applied or applied. **Applied** indicates that the update has
|
partially-applied or applied. **Applied** indicates that the update has
|
||||||
been installed on all hosts of the cloud \(SystemController in this case\).
|
been installed on all hosts of the cloud (SystemController in this case).
|
||||||
|
|
||||||
- To identify which subclouds are update-current \(**in-sync**\), use the
|
- To identify which subclouds are update-current (**in-sync**), use the
|
||||||
|
|
||||||
.. code-block:: none
|
.. code-block:: none
|
||||||
|
|
||||||
|
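The truncated code block above normally lists subclouds together with their synchronization status; a typical invocation is:

.. code-block:: none

   ~(keystone_admin)$ dcmanager subcloud list    # the sync column reports in-sync / out-of-sync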
@ -13,10 +13,10 @@ Synchronizations can be delayed slightly, depending on network traffic
conditions and the amount of information to be synchronized.

|prod| synchronizes configuration for selected attributes of system-wide
-configurations \(see :ref:`Table 1
+configurations (see :ref:`Table 1
-<shared-configurations-shared-sys-configs>`\) and synchronizes configuration
+<shared-configurations-shared-sys-configs>`) and synchronizes configuration
-for resources of the Keystone Identity Service \(see :ref:`Table 2
+for resources of the Keystone Identity Service (see :ref:`Table 2
-<shared-configurations-shared-keystone-configs>`\).
+<shared-configurations-shared-keystone-configs>`).


.. _shared-configurations-shared-sys-configs:

@ -20,7 +20,7 @@ and to collect alarms from the subcloud.

The subcloud is synchronized when it is first connected to the |prod-dc| and
set to managed. A backup audit and synchronization is run at regular intervals
-\(every ten minutes\) for subclouds in the Managed state, synchronizing them to
+\(every ten minutes) for subclouds in the Managed state, synchronizing them to
the System Controller. You can view the synchronization status for individual
subclouds on the Cloud Overview page from **Distributed Cloud Admin** \>
**Cloud Overview**.

@ -6,11 +6,11 @@
Update Management for Distributed Cloud
=======================================

-You can apply software updates \(also known as 'patches'\) to the Central Cloud
+You can apply software updates (also known as 'patches') to the Central Cloud
and subclouds from the System Controller.

A central update repository on the Central Cloud is introduced for |prod-dc|.
-This is used to store all updates \(patches\) so that unmanaged subclouds can
+This is used to store all updates (patches) so that unmanaged subclouds can
be synchronized with any required updates when they are brought into a managed
state.


@ -43,18 +43,18 @@ available:
parallel, or serially.

If this is not specified using the |CLI|, the values for
-:command:`subcloud\_update\_type` defined for each subcloud group will be
+:command:`subcloud_update_type` defined for each subcloud group will be
used by default.

**maximum parallel subclouds**
-Sets the maximum number of subclouds that can be updated in parallel \(default 20\).
+Sets the maximum number of subclouds that can be updated in parallel (default 20).

If this is not specified using the |CLI|, the values for
-:command:`max\_parallel\_subclouds` defined for each subcloud group will be
+:command:`max_parallel_subclouds` defined for each subcloud group will be
used by default.

**stop on failure**
-true \(default\) or false — determines whether update orchestration failure
+true (default) or false — determines whether update orchestration failure
for a subcloud prevents application to subsequent subclouds.


@ -224,7 +224,7 @@ individual subclouds.
.. note::

Since re-location is not possible on a single-node |prod| Simplex system,
-you must change the configuration to set default\_instance\_action to
+you must change the configuration to set default_instance_action to
stop-start.

.. _update-orchestration-of-central-clouds-regionone-and-subclouds-using-the-cli-ul-xfb-bfz-fdb:

@ -307,7 +307,7 @@ individual subclouds.
migrate or stop-start — determines whether hosted application VMs are
migrated or stopped and restarted when a worker host is upgraded.

-**subcloud\_name**
+**subcloud_name**
The name of the subcloud to use the custom strategy. If this omitted,
the default update strategy is updated.


@ -33,7 +33,7 @@ Follow the steps below to manually upgrade the System Controller:
:start-after: license-begin
:end-before: license-end

-#. Transfer iso and signature files to controller-0 \(active controller\) and import the load.
+#. Transfer iso and signature files to controller-0 (active controller) and import the load.

.. code-block:: none

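The import command itself is truncated by this hunk; assuming the standard load-import syntax, a typical invocation with illustrative file names is:

.. code-block:: none

   ~(keystone_admin)$ system load-import bootimage.iso bootimage.sig    # illustrative file names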
@ -158,7 +158,7 @@ Follow the steps below to manually upgrade the System Controller:

- State entered after :command:`system upgrade-start` completes.

-- Release <nn.nn> system data \(for example, postgres databases\) has
+- Release <nn.nn> system data (for example, postgres databases) has
been exported to be used in the upgrade.

As part of the upgrade, the upgrade process checks the health of the system

@ -366,10 +366,10 @@ Follow the steps below to manually upgrade the System Controller:
you can safely unlock the node.

After upgrading a storage node, but before unlocking, there are Ceph
-synchronization alarms \(that appear to be making progress in
+synchronization alarms (that appear to be making progress in
-synching\), and there are infrastructure network interface alarms
+synching), and there are infrastructure network interface alarms
-\(since the infrastructure network interface configuration has not been
+(since the infrastructure network interface configuration has not been
-applied to the storage node yet, as it has not been unlocked\).
+applied to the storage node yet, as it has not been unlocked).

Unlock the node as soon as the upgraded storage node comes online.


@ -447,8 +447,8 @@ Follow the steps below to manually upgrade the System Controller:
+--------------+--------------------------------------+

During the running of the :command:`upgrade-activate` command, new
-configurations are applied to the controller. 250.001 \(**hostname
+configurations are applied to the controller. 250.001 (**hostname
-Configuration is out-of-date**\) alarms are raised and are cleared as the
+Configuration is out-of-date**) alarms are raised and are cleared as the
configuration is applied. The upgrade state goes from **activating** to
**activation-complete** once this is done.


@ -6,7 +6,7 @@
Upload and Applying Updates to SystemController Using Horizon
=============================================================

-You can upload and apply updates \(patches\) to the SystemController in order
+You can upload and apply updates (patches) to the SystemController in order
to update the central update repository, from the Horizon Web interface.

.. rubric:: |context|

@ -30,7 +30,7 @@ and Applying Updates to SystemController Using the CLI
#. On the **Patches** tab, click **Upload Patches**.

In the **Upload Patches** dialog box, click **Browse** to select updates
-\(patches\) for upload.
+(patches) for upload.

.. image:: figures/cah1525101473925.png


@ -21,7 +21,7 @@ patches for the SystemController is provided below.
For standard |prod| updating procedures, see the |updates-doc|:
:ref:`software-updates-and-upgrades-software-updates` guide.

-For SystemController of |prod-dc| \(and the central update repository\), you
+For SystemController of |prod-dc| (and the central update repository), you
must include the additional |CLI| parameter ``--os-region-name`` with the value
SystemController when using |CLI| :command:`sw-patch` commands.

@ -67,7 +67,7 @@ SystemController when using |CLI| :command:`sw-patch` commands.
You may receive a warning about the update already being imported. This
is expected and occurs if the update was uploaded locally to the system
controller. The warning will only occur for patches that were applied
-to controller-0 \(system controller\) before it was first unlocked.
+to controller-0 (system controller) before it was first unlocked.

#. Confirm that the newly uploaded patches have a status of **available**.

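For reference, uploading and querying updates against the central repository typically looks like the following; the patch file name is illustrative and the exact placement of ``--os-region-name`` should be confirmed against ``sw-patch --help``:

.. code-block:: none

   ~(keystone_admin)$ sw-patch --os-region-name SystemController upload /home/sysadmin/PATCH_0001.patch    # illustrative patch file
   ~(keystone_admin)$ sw-patch --os-region-name SystemController query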
@ -530,7 +530,7 @@ traps will not be generated.

#. Modify your |SNMP| Helm chart values file (for example, ``user_conf.yaml``)
by adding the line "trap-server-port: [new port]" as shown in the example
-below \("30162" is the new port in this example\).
+below ("30162" is the new port in this example).

.. code-block:: none

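The values-file line referenced above would look like this, using the port number quoted in the text:

.. code-block:: none

   trap-server-port: 30162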
@ -27,7 +27,7 @@ system application.
About SNMP Support
------------------

-Support for Simple Network Management Protocol \(SNMP\) is implemented as follows:
+Support for Simple Network Management Protocol (SNMP) is implemented as follows:

.. _snmp-overview-ul-bjv-cjd-cp:


@ -56,7 +56,7 @@ For information on enabling SNMP support, see
.. _snmp-overview-section-N10099-N1001F-N10001:

-----------------------
-SNMPv2-MIB \(RFC 3418\)
+SNMPv2-MIB (RFC 3418)
-----------------------

Support for the basic standard MIB for SNMP entities is limited to the System

@ -86,17 +86,17 @@ Wind River Enterprise MIBs
|prod| supports the Wind River Enterprise Registration and Alarm MIBs.

**Enterprise Registration MIB, wrsEnterpriseReg.mib**
-Defines the Wind River Systems \(WRS\) hierarchy underneath the
+Defines the Wind River Systems (WRS) hierarchy underneath the
-**iso\(1\).org\(3\).dod\(6\).internet\(1\).private\(4\).enterprise\(1\)**.
+**iso\(1).org\(3).dod\(6).internet\(1).private\(4).enterprise\(1)**.
This hierarchy is administered as follows:

-- **.wrs\(731\)**, the IANA-registered enterprise code for Wind River
+- **.wrs\(731)**, the IANA-registered enterprise code for Wind River
Systems

-- **.wrs\(731\).wrsCommon\(1\).wrs<Module\>\(1-...\)**,
+- **.wrs\(731).wrsCommon\(1).wrs<Module\>\(1-...)**,
defined in wrsCommon<Module\>.mib.

-- **.wrs\(731\).wrsProduct\(2-...\)**, defined in wrs<Product\>.mib.
+- **.wrs\(731).wrsProduct\(2-...)**, defined in wrs<Product\>.mib.

**Alarm MIB, wrsAlarmMib.mib**
Defines the common TRAP and ALARM MIBs for |org| products.

@ -37,8 +37,8 @@ unnecessary alarms.
<alarm-id>: **Alarm ID not found: <alarm-id\>**.

If the specified number of Alarm IDs is greater than 1, and at least 1 is
-wrong, then the suppress command is not applied \(none of the specified
+wrong, then the suppress command is not applied (none of the specified
-Alarm IDs are suppressed\).
+Alarm IDs are suppressed).

.. note::
Suppressing an Alarm will result in the system NOT notifying the

@ -36,7 +36,7 @@ customer logs are mapped into the 'Message' trap.
MIBs. See :ref:`SNMP Overview <snmp-overview>` for details.

For Critical, Major, Minor, Warning, and Message traps, all variables in the
-active alarm table are included as varbinds \(variable bindings\), where each
+active alarm table are included as varbinds (variable bindings), where each
varbind is a pair of fields consisting of an object identifier and a value
for the object.


@ -38,7 +38,7 @@ Collect Tool Caveats and Usage
- For |prod| Standard systems, use the following commands:


-- For a small deployment \(less than two worker nodes\):
+- For a small deployment (less than two worker nodes):

.. code-block:: none


@ -32,7 +32,7 @@ If you need to reactivate a suppressed alarm, you can do so using the CLI.
``--uuid``
includes the alarm type UUIDs in the output.

-Alarm type\(s\) with the specified <alarm-id\(s\)> will be unsuppressed.
+Alarm type\(s) with the specified <alarm-id\(s)> will be unsuppressed.

You can unsuppress all currently suppressed alarms using the following command:

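The unsuppress-all command itself is not shown in this hunk; it is believed to be the fault-management command below, but treat the exact name as an assumption and confirm with ``fm help``:

.. code-block:: none

   ~(keystone_admin)$ fm event-unsuppress-all    # assumed command name; confirm with 'fm help'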
@ -109,7 +109,7 @@ To review detailed information about a specific alarm instance, see

This option indicates that all active alarms should be displayed,
including suppressed alarms. Suppressed alarms are displayed with
-their Alarm ID set to S<\(alarm-id\)>.
+their Alarm ID set to S<\(alarm-id)>.

**--uuid**
The ``--uuid`` option on the :command:`fm alarm-list` command lists the

@ -41,16 +41,16 @@ You can view detailed information to help troubleshoot an alarm.
| uuid | 4ab5698a-19cb-4c17-bd63-302173fef62c |
+------------------------+-------------------------------------------------+

-The pair of attributes **\(alarm\_id, entity\_instance\_id\)** uniquely
+The pair of attributes **\(alarm_id, entity_instance_id)** uniquely
identifies an active alarm:

-**alarm\_id**
+**alarm_id**
An ID identifying the particular alarm condition. Note that there are
some alarm conditions, such as *administratively locked*, that can be
raised by more than one entity-instance-id.

-**entity\_instance\_id**
+**entity_instance_id**
Type and instance information of the object raising the alarm. A
-period-separated list of \(key, value\) pairs, representing the
+period-separated list of (key, value) pairs, representing the
containment structure of the overall entity instance. This structure
is used for processing hierarchical clearing of alarms.
@ -33,7 +33,7 @@ You can use CLI commands to work with historical alarms and logs in the event lo
Optional arguments:

``-q QUERY, --query QUERY``
-\- key\[op\]data\_type::value; list. data\_type is optional, but if
+\- key\[op\]data_type::value; list. data_type is optional, but if
supplied must be string, integer, float, or boolean.

``-l NUMBER, --limit NUMBER``
@ -45,7 +45,7 @@ You can use CLI commands to work with historical alarms and logs in the event lo
``--logs``
Show customer logs only.

-``--include\_suppress``
+``--include_suppress``
Show suppressed alarms as well as unsuppressed alarms.

``--uuid``
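Combining the options described above, a typical event-log query looks like the following; the alarm ID and limit are illustrative values:

.. code-block:: none

   ~(keystone_admin)$ fm event-list -q 'alarm_id=100.104' -l 10 --uuid --include_suppress    # illustrative query values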
@ -49,7 +49,7 @@ The following prerequisites are required before the integration:
:start-after: prereq-begin
:end-before: prereq-end

-- The cloud is configured with a node that supports the Subordinate mode \(Secondary mode\).
+- The cloud is configured with a node that supports the Subordinate mode (Secondary mode).

- The cloud is labeled with **ptp-registration=true**, and **ptp-notification=true**.


@ -96,8 +96,8 @@ Config drive
------------

|prod-os| can be configured to use a special-purpose configuration
-drive \(abbreviated config drive\) to store metadata \(including
+drive (abbreviated config drive) to store metadata (including
-injected files\). Metadata is written to the drive, which is attached
+injected files). Metadata is written to the drive, which is attached
to the instance when it boots. The instance can retrieve information
normally available through the metadata service by reading from the
mounted drive.

@ -25,7 +25,7 @@ hosting compute or |AIO|-controller node, do the following in the |VM|:

~(keystone_admin)$ modprobe kvm_ptp

-#. Update the reference clock in the chrony config file \(/etc/chrony.conf\):
+#. Update the reference clock in the chrony config file (/etc/chrony.conf):

.. code-block:: none

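The chrony snippet is truncated by this hunk; a PHC reference-clock entry generally takes the form below, where the PTP device path is an assumption for this sketch:

.. code-block:: none

   refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0    # /dev/ptp0 is an assumed device path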
@ -103,7 +103,7 @@ Where:

- OpenStack Horizon - dashboard

-.. - Telemetry \(OPTIONAL\)
+.. - Telemetry (OPTIONAL)

- Panko - Event storage


@ -12,9 +12,9 @@ maintenance purposes.
On a controller node, the state transition only succeeds if there are no
services running in active mode on the host.

-On a worker node \(or |AIO| Controller\), the state
+On a worker node (or |AIO| Controller), the state
transition only succeeds if all currently running containers
-\(hosted applications\) on the host can be re-located on alternative worker
+\(hosted applications) on the host can be re-located on alternative worker
nodes or |AIO| Controller. Re-location of containers is
initiated automatically by |prod| as soon as the state transition is requested.
For containers, a live re-location of the container to another host is

@ -14,8 +14,8 @@ maintenance purposes.
On a controller node, the state transition only succeeds if there are no
services running in active mode on the host.

-On a worker node \(or |AIO|\), the state transition only succeeds if all
+On a worker node (or |AIO|), the state transition only succeeds if all
-currently running containers \(hosted applications\) on the host can be
+currently running containers (hosted applications) on the host can be
re-located on alternative worker nodes or |AIO| Controller. Re-location of
containers is initiated automatically by |prod| as soon as the state transition
is requested. For containers, a live re-location of the container to

@ -35,7 +35,7 @@ order. Observe the following tips:
Typically, this is the disk associated with /dev/sda and is as
defined in /proc/cmdline when the load is booted.

-#. The |NIC| on the boot interface \(such as management or |PXE| network\).
+#. The |NIC| on the boot interface (such as management or |PXE| network).

- Set the BIOS boot options to ensure a failsafe boot, if available. For
example, rotating through available boot interfaces, watchdog timer on

@ -9,7 +9,7 @@ Swact Controllers Using Horizon
Swacting initiates a switch of the active/standby roles between two
controllers.

-Swact is an abbreviated form of the term Switch Active \(host\). When
+Swact is an abbreviated form of the term Switch Active (host). When
selected, this option forces the other controller to become the active one in
the HA cluster. This means that all active system services on this controller
move to standby operation, and that the corresponding services on the other

@ -11,7 +11,7 @@ controllers.

.. rubric:: |context|

-Swact is an abbreviated form of the term Switch Active \(host\). When
+Swact is an abbreviated form of the term Switch Active (host). When
selected, this option forces the other controller to become the active one
in the HA cluster. This means that all active system services on this
controller move to standby operation, and that the corresponding services

@ -23,7 +23,7 @@ successive reboot attempts is limited; for more information,
see :ref:`reboot-limits-for-host-unlock-d9a26854590a`.

.. note::
-On a |prod| Simplex system, any containers \(hosted applications\) that
+On a |prod| Simplex system, any containers (hosted applications) that
were stopped by the preceding **Lock Host** operation are started
automatically.


@ -23,7 +23,7 @@ number of successive reboot attempts is limited; for more information,
see :ref:`reboot-limits-for-host-unlock-d9a26854590a`.

.. note::
-On a |prod| Simplex system, any containers \(hosted applications\)
+On a |prod| Simplex system, any containers (hosted applications)
that were stopped by the preceding :command:`host-lock` operation are
started automatically.


@ -12,10 +12,10 @@ The hyper-threading status is controlled by the BIOS settings of the host.
Some applications may benefit from hyperthreading. For applications that
require deterministic performance, it is recommended to run with
hyperthreading disabled. If hyperthreading is enabled, the application
-\(either running on bare metal or in a container\) must check the CPU
+(either running on bare metal or in a container) must check the CPU
topology for the CPUs and affine tasks appropriately to HT siblings. For
example, "/proc/cpuinfo" and
-"/sys/devices/system/cpu/cpuX/topology/thread\_siblings\*" can be used to
+"/sys/devices/system/cpu/cpuX/topology/thread_siblings\*" can be used to
identify HT siblings of the same core.


@ -39,7 +39,7 @@ The hyper-threading status is controlled by the BIOS settings of the host.
.. note::
Changes to the host's BIOS must be made while it is locked and it
must not be subsequently unlocked until it comes back online
-\(locked-disabled-online\) and the updated Hyperthreading settings
+(locked-disabled-online) and the updated Hyperthreading settings
are available in the inventory.

#. Boot the host in BIOS mode.

@ -62,7 +62,7 @@ them more CPU cores from the Horizon Web interface.
increased flexibility for high-performance configurations. For
example, you can dedicate certain |NUMA| nodes for platform
use such that other |NUMA| nodes that service IRQ requests are left
-available for the containers \(hosted applications\) that require
+available for the containers (hosted applications) that require
high-performance IRQ servicing.

.. note::

@ -46,17 +46,17 @@ see :ref:`The Life Cycle of a Host <the-life-cycle-of-a-host-93640aa2b707>`.

The following service parameters control the boot timeout interval.

-**worker\_boot\_timeout**
+**worker_boot_timeout**
The time in seconds to allow for a worker or storage host to boot
-\(720–1800 seconds\). The default value is 720 seconds \(12 minutes\).
+(720–1800 seconds). The default value is 720 seconds (12 minutes).

.. note::
This parameter also applies to storage nodes.

-**controller\_boot\_timeout**
+**controller_boot_timeout**
The time in seconds to allow for a controller host to boot
-\(1200-1800 seconds\). The default value is 1200 seconds
+(1200-1800 seconds). The default value is 1200 seconds
-\(20 minutes\).
+(20 minutes).

For example, to change the boot timeout for the worker and storage
hosts to 840 seconds:
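The example command is cut off here; following the service-parameter pattern used elsewhere in these pages, it would be roughly:

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-modify platform maintenance worker_boot_timeout=840    # value taken from the surrounding example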
@ -10,13 +10,13 @@ You can adjust the heartbeat interval, as well as the thresholds for missed
heartbeat challenges that cause a host to be moved to the **Degraded** or
**Failed** state.

-The settings apply to all hosts \(controller, worker, and storage\). For more
+The settings apply to all hosts (controller, worker, and storage). For more
information about host states,
see :ref:`The Life Cycle of a Host <the-life-cycle-of-a-host-93640aa2b707>`.

.. note::
-The heartbeat\_degrade threshold must not exceed the
+The heartbeat_degrade threshold must not exceed the
-heartbeat\_failure\_threshold.
+heartbeat_failure_threshold.

.. rubric:: |proc|


@ -51,19 +51,19 @@ see :ref:`The Life Cycle of a Host <the-life-cycle-of-a-host-93640aa2b707>`.
response thresholds for moving a host to the **Degraded** or **Failed**
state.

-**heartbeat\_period**
+**heartbeat_period**
The time in milliseconds between heartbeat challenges from the
-controller to the other hosts \(100–1000 ms\). The default is
+controller to the other hosts (100–1000 ms). The default is
100 ms.

-**heartbeat\_degrade\_threshold**
+**heartbeat_degrade_threshold**
The number of consecutive missing responses to heartbeat challenges
-before a host is moved into the **Degraded** state \(4–100\). The
+before a host is moved into the **Degraded** state (4–100). The
default is six consecutive missing responses.

-**heartbeat\_failure\_threshold**
+**heartbeat_failure_threshold**
The number of consecutive missing responses to heartbeat challenges
-before a host is moved into the **Failed** state \(10–100\). The
+before a host is moved into the **Failed** state (10–100). The
default is 10 consecutive missing responses.

For example, to change the heartbeat failure threshold for all hosts to
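The sentence above is truncated by the hunk; the corresponding command follows the same service-parameter pattern, for example with an illustrative value of 20:

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-modify platform maintenance heartbeat_failure_threshold=20    # 20 is an illustrative value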
@ -6,7 +6,7 @@
Configure Heartbeat Failure Action
==================================

-You can configure **heartbeat\_failure\_action** while performing network
+You can configure **heartbeat_failure_action** while performing network
related maintenance activities that may interrupt inter-host communications.

You can configure service parameters to change the heartbeat failure behavior

@ -50,7 +50,7 @@ immediately in the event of a persistent loss of maintenance heartbeat.
~(keystone_admin)$ system service-parameter-modify <platform maintenance heartbeat_failure_action>=ignore Action must be one of 'fail', 'degrade', 'alarm' or 'none'

The following service parameters control the
-**heartbeat\_failure\_action** and accepts one of the four possible
+**heartbeat_failure_action** and accepts one of the four possible
actions.

**fail**

@ -121,11 +121,11 @@ The heartbeat alarms, such as Management Network can be viewed. For example:
.. note::
In the event of a single host heartbeat failure, maintenance will attempt
to reboot, and if unreachable, will also attempt to reset the host in order
-to expedite failed host recovery (if |LAG| Network is provisioned\).
+to expedite failed host recovery (if |LAG| Network is provisioned).

.. warning::
To maintain a system with High Fault Detection and Availability the
-**heartbeat\_failure\_action** should always be reverted back to **fail**
+**heartbeat_failure_action** should always be reverted back to **fail**
once network maintenance activities are completed. This action applies to
all hosts and if a heartbeat failure occurs while any action other than
**fail** is selected, maintenance will not take action to recover the

@ -143,5 +143,5 @@ The heartbeat alarms, such as Management Network can be viewed. For example:

.. rubric:: |postreq|

-Always revert the **heartbeat\_failure\_action** to **fail** once network
+Always revert the **heartbeat_failure_action** to **fail** once network
maintenance activities are complete.

@ -11,19 +11,19 @@ maintenance heartbeat failures of more than one host, and gracefully
recovers the hosts once the heartbeat is re-established.

You can configure multi-node failure avoidance for recovery of failing hosts,
-**mnfa\_threshold** \(default is 2, range is specified from 2 to 100\), and
+**mnfa_threshold** (default is 2, range is specified from 2 to 100), and
the number of seconds the heartbeat can fail in this group of hosts,
-**mnfa\_timeout** \(default is no-timeout, value of 0, or from 100 to 86400
+**mnfa_timeout** (default is no-timeout, value of 0, or from 100 to 86400
-secs=1 day\), before the hosts are declared failed, or are required to be
+secs=1 day), before the hosts are declared failed, or are required to be
forced to reboot/reset. If the value is set outside the range, a warning is
displayed.

Multi-Node Failure Avoidance is based on four or more back to back heartbeat
-pulse misses for a **mnfa\_threshold** or higher number of hosts within a
+pulse misses for a **mnfa_threshold** or higher number of hosts within a
full heartbeat loss window. For example, given the default heartbeat period
-of 100 msec and the **heartbeat\_failure\_threshold** of 10; if maintenance
+of 100 msec and the **heartbeat_failure_threshold** of 10; if maintenance
-sees **mnfa\_threshold** or more hosts missing four or more back to back
+sees **mnfa_threshold** or more hosts missing four or more back to back
-heartbeat responses within one second \( 100 msec times 10 \), then
+heartbeat responses within one second ( 100 msec times 10 ), then
Multi-Node Failure Avoidance is activated for those hosts. Any additional
hosts failing heartbeat while |MNFA| is active are added to the |MNFA| pool.


@ -31,23 +31,23 @@ In Horizon, |MNFA| displays heartbeat failing hosts in the
**unlocked-enabled-degraded** state, and displays a status of “Graceful
Recovery Wait” while maintenance waits for heartbeat to that host to recover.
This degraded state and host status is true only for the **fail** and
-**degrade** **heartbeat\_failure\_action** selections. For information on
+**degrade** **heartbeat_failure_action** selections. For information on
viewing heartbeat-failing hosts from Horizon, see :ref:`Hosts Tab <hosts-tab>`.

Hosts whose heartbeat recovers, after ten back to back heartbeat responses,
are removed from the |MNFA| pool with state and status returned to what it was
-prior to the event. Once the |MNFA| pool size drops below **mnfa\_threshold**,
+prior to the event. Once the |MNFA| pool size drops below **mnfa_threshold**,
-then the remaining hosts have 6 seconds \(100 msec times 10 plus 5 second grace
+then the remaining hosts have 6 seconds (100 msec times 10 plus 5 second grace
-period\) to recover before the selected **heartbeat\_failure\_action** is taken
+period) to recover before the selected **heartbeat_failure_action** is taken
-against the hosts. With the **mnfa\_threshold** of two that would only be one
+against the hosts. With the **mnfa_threshold** of two that would only be one
-host \(or for 3 that could be 2\). If late recovering hosts recover, and if
+host (or for 3 that could be 2). If late recovering hosts recover, and if
their uptime shows that they had rebooted, then they are tested and brought
back into service like the others. Otherwise they are fully re-enabled
through reboot/reset.

-In |MNFA| recovery, where the **heartbeat\_failure\_action** is **fail** and
+In |MNFA| recovery, where the **heartbeat_failure_action** is **fail** and
the hosts do not reboot during the loss of heartbeat. It is possible that
-maintenance will force a reboot of the **mnfa\_threshold-1** late recovering
+maintenance will force a reboot of the **mnfa_threshold-1** late recovering
hosts upon eventual recovery, as at that point they are treated as individual
heartbeat failures.


@ -56,7 +56,7 @@ heartbeat failures.
.. _configuring-multi-node-failure-avoidance-steps-m4h-j3h-gfb:

#. Use the :command:`system service-parameter-modify` command to specify the
-new **mnfa\_threshold** and **mnfa\_timeout** setting. Changing this to
+new **mnfa_threshold** and **mnfa_timeout** setting. Changing this to
an invalid value, results in a semantic check error similar to the
following:


@ -65,8 +65,8 @@ heartbeat failures.
~(keystone_admin)$ system service-parameter-modify platform maintenance mnfa_threshold=<1>
Parameter 'mnfa_threshold' must be between 2 and 100

-The **mnfa\_timeout** accepts a value of 0 indicating no-timeout or
+The **mnfa_timeout** accepts a value of 0 indicating no-timeout or
-from 100 to 86400 secs \(1 day\). For example:
+from 100 to 86400 secs (1 day). For example:

.. code-block:: none

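The example block is truncated here; following the threshold command shown above, it would be roughly as follows, with 600 as an illustrative value:

.. code-block:: none

   ~(keystone_admin)$ system service-parameter-modify platform maintenance mnfa_timeout=600    # 600 is an illustrative value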
@ -88,7 +88,7 @@ heartbeat failures.

.. note::
Multi-Node Failure Avoidance is never activated as it does not apply
-if the **heartbeat\_failure\_action** is set to **alarm** or **none**.
+if the **heartbeat_failure_action** is set to **alarm** or **none**.

For more information, see :ref:`Configure Heartbeat Failure Action
<configuring-heartbeat-failure-action>`.


@ -50,8 +50,8 @@ enables the ACC100/ACC200 device.
~(keystone_admin)$ system host-label-assign controller-0 kube-topology-mgr-policy=restricted

#. Modify the CPU core assignments for controller-0 to have 12
-application-isolated physical cores \(24 virtual cores if hyper-threading
+application-isolated physical cores (24 virtual cores if hyper-threading
-is supported and enabled on the processor\) on processor 0. Your specific
+is supported and enabled on the processor) on processor 0. Your specific
application(s) may need more or less cores.


@ -10,7 +10,7 @@ To initiate device image updates on a host, use the
:command:`host-device-image-update` command.

.. note::
-Modifying or deleting device labels for devices on the host \(if any\)
+Modifying or deleting device labels for devices on the host (if any)
is not allowed while the update is in progress.

The command syntax is:

@ -33,16 +33,16 @@ displayed in the list of host devices by running the following command:
+-------+----------+--------+--------+--------+------------+-------------+----------------------+-----------+---------|

To enable the |FEC| device for |SRIOV| interfaces, it must be modified in order
-to set the number of virtual functions \(VF\), and the appropriate userspace
+to set the number of virtual functions (VF), and the appropriate userspace
-drivers for the physical function \(PF\), and VFs.
+drivers for the physical function (PF), and VFs.

The following PF and VF drivers are supported:

.. _n3000-fpga-forward-error-correction-ul-klj-2zh-bmb:

-- PF driver: igb\_uio
+- PF driver: igb_uio

-- VF driver: igb\_uio, vfio
+- VF driver: igb_uio, vfio

For example, run the following commands:


@ -23,7 +23,7 @@ The following procedure shows an example of launching a container image with
$ source /etc/platform/openrc ~(keystone_admin)$

#. Create a pod.yml file that requests 16 ACC100/ACC200 VFs
-\(i.e. intel.com/intel_acc100_fec: '16'\)
+(i.e. intel.com/intel_acc100_fec: '16')

.. code-block:: none

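The pod manifest itself is truncated in this hunk; a minimal sketch of a pod requesting the 16 VFs named above could look like the following, where the pod name, container name, and image are placeholder values:

.. code-block:: none

   apiVersion: v1
   kind: Pod
   metadata:
     name: acc100-test-pod                              # placeholder pod name
   spec:
     containers:
     - name: acc100-app                                 # placeholder container name
       image: registry.local:9001/acc100-app:latest     # placeholder image
       command: ["sleep", "infinity"]
       resources:
         requests:
           intel.com/intel_acc100_fec: '16'
         limits:
           intel.com/intel_acc100_fec: '16'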
Some files were not shown because too many files have changed in this diff.