Dist. Cloud edits (r6, dsr6)

Copy edits for typos, markup and other technical issues.
Fix label in :ref:
Fix gerund mismatch.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Ie6dd03f0af3ff9d7ace7efe0f61479dfee7dc1ba
Ron Stone 2022-08-18 06:34:53 -04:00 committed by Juanita-Balaraj
parent 126131ce63
commit d7a2a00182
15 changed files with 67 additions and 72 deletions

View File

@ -29,7 +29,3 @@ Update Orchestration.
subcloud, use the **Host Inventory** page on the subcloud.
-.. rubric:: |result|
-.. procedure results here

View File

@ -11,10 +11,10 @@ be added to the 'Default' group, unless a different subcloud group has been
specified.
A subcloud can be moved to a different subcloud group using the
-'dcmanager subcloud update' command for the group attribute. A subcloud group
-cannot be deleted if it contains any subclouds. Removing a subcloud from a
-subcloud group is done by moving the subcloud back to the 'Default' subcloud
-group.
+:command:`dcmanager subcloud update` command for the group attribute. A
+subcloud group cannot be deleted if it contains any subclouds. Removing a
+subcloud from a subcloud group is done by moving the subcloud back to the
+'Default' subcloud group.
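For illustration, moving a subcloud into another group might look like the following; the ``--group`` option name is an assumption and should be confirmed against ``dcmanager subcloud update --help``, and the subcloud and group names are placeholders.

.. code-block:: none

   ~(keystone_admin)]$ dcmanager subcloud update subcloud1 --group group1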
.. rubric:: |context|

View File

@ -31,8 +31,8 @@ subcloud, the subcloud installation has these phases:
After a successful remote installation of a subcloud in a Distributed Cloud
system, a subsequent remote reinstallation fails because of an existing ssh
-key entry in the /root/.ssh/known_hosts on the System Controller. In this
-case, delete the host key entry, if present, from /root/.ssh/known_hosts
+key entry in the ``/root/.ssh/known_hosts`` on the System Controller. In this
+case, delete the host key entry, if present, from ``/root/.ssh/known_hosts``
on the System Controller before doing reinstallations.
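As a sketch of clearing such a stale host key on the System Controller (the subcloud address is a placeholder):

.. code-block:: none

   # Remove the stale known_hosts entry for the subcloud being reinstalled
   ~(keystone_admin)]$ sudo ssh-keygen -R <subcloud-bootstrap-ip> -f /root/.ssh/known_hosts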
.. rubric:: |prereq|
@ -40,14 +40,14 @@ subcloud, the subcloud installation has these phases:
.. _installing-a-subcloud-using-redfish-platform-management-service-ul-g5j-3f3-qjb:
- The docker **rvmc** image needs to be added to the System Controller
-bootstrap override file, docker.io/starlingx/rvmc:stx.5.0-v1.0.0.
+bootstrap override file, ``docker.io/starlingx/rvmc:stx.5.0-v1.0.0``.
- A new system CLI option ``--active`` is added to the
:command:`load-import` command to allow the import into the
-System Controller /opt/dc-vault/loads. The purpose of this is to allow
+System Controller ``/opt/dc-vault/loads``. The purpose of this is to allow
Redfish install of subclouds referencing a single full copy of the
-**bootimage.iso** at /opt/dc-vault/loads. \(Previously, the full
-**bootimage.iso** was duplicated for each :command:`subcloud add`
+``bootimage.iso`` at ``/opt/dc-vault/loads``. \(Previously, the full
+``bootimage.iso`` was duplicated for each :command:`subcloud add`
command\).
.. note::
@ -65,8 +65,8 @@ subcloud, the subcloud installation has these phases:
~(keystone_admin)]$ system --os-region-name SystemController load-import --active |installer-image-name|.iso |installer-image-name|.sig
In order to be able to deploy subclouds from either controller, all local
-files that are referenced in the **bootstrap.yml** file must exist on both
-controllers \(for example, /home/sysadmin/docker-registry-ca-cert.pem\).
+files that are referenced in the ``bootstrap.yml`` file must exist on both
+controllers (for example, ``/home/sysadmin/docker-registry-ca-cert.pem``).
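A minimal sketch of keeping such a referenced file in place on both controllers (the file name is taken from the example above; the copy step itself is an assumption):

.. code-block:: none

   ~(keystone_admin)]$ scp /home/sysadmin/docker-registry-ca-cert.pem sysadmin@controller-1:/home/sysadmin/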
.. _increase-subcloud-platform-backup-size:
@ -89,8 +89,8 @@ subcloud reinstall using the following commands:
~(keystone_admin)]$ dcmanager subcloud update --install-values <install-values-yaml-file><subcloud-name>
For a new subcloud deployment, use the :command:`dcmanager subcloud add`
-command with the install-values.yaml file containing the desired
-**persistent_size** value.
+command with the ``install-values.yaml`` file containing the desired
+``persistent_size`` value.
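For illustration only, a fragment of an ``install-values.yaml`` carrying ``persistent_size``, and a hypothetical :command:`dcmanager subcloud add` invocation; the other keys, file names, and the value shown are placeholders rather than recommendations.

.. code-block:: none

   # install-values.yaml (fragment; values are placeholders)
   rootfs_device: /dev/sda
   boot_device: /dev/sda
   persistent_size: 40000

   # Additional required options omitted for brevity
   ~(keystone_admin)]$ dcmanager subcloud add --bootstrap-values bootstrap.yml \
   --install-values install-values.yaml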
.. rubric:: |proc|
@ -116,7 +116,7 @@ command with the install-values.yaml file containing the desired
:start-after: begin-ref-1
:end-before: end-ref-1
-#. Create the install-values.yaml file and use the content to pass the file
+#. Create the ``install-values.yaml`` file and use the content to pass the file
into the :command:`dcmanager subcloud add` command, using the
``--install-values`` command option.
@ -309,7 +309,7 @@ command with the install-values.yaml file containing the desired
**Pre-Install**
This status indicates that the ISO for the subcloud is being updated by
the Central Cloud with the boot menu parameters, and kickstart
-configuration as specified in the install-values.yaml file.
+configuration as specified in the ``install-values.yaml`` file.
**Installing**
This status indicates that the subcloud's ISO is being installed from
@ -386,7 +386,7 @@ command with the install-values.yaml file containing the desired
{SECRET_UUID} | awk '{print $2}''
openstack secret get ${SECRET_REF} --payload -f value
-The secret payload should be, "username: sysinv password:<password>". If
+The secret payload should be, ``username: sysinv password:<password>``. If
the secret payload is, "username: admin password:<password>", see,
:ref:`Updating Docker Registry Credentials on a Subcloud
<updating-docker-registry-credentials-on-a-subcloud>` for more information.

View File

@ -22,7 +22,7 @@ subcloud, the subcloud installation process has two phases:
.. _installing-a-subcloud-without-redfish-platform-management-service-ul-fmx-jpl-mkb:
- Installing the ISO on controller-0; this is done locally at the subcloud by
-using either, a bootable USB device, or a local PXE boot server
+using either, a bootable USB device, or a local |PXE| boot server
- Executing the :command:`dcmanager subcloud add` command in the Central
Cloud that uses Ansible to bootstrap |prod-long| on controller-0 in
@ -33,8 +33,8 @@ subcloud, the subcloud installation process has two phases:
After a successful remote installation of a subcloud in a Distributed Cloud
system, a subsequent remote reinstallation fails because of an existing ssh
-key entry in the /root/.ssh/known\_hosts on the System Controller. In this
-case, delete the host key entry, if present, from /root/.ssh/known\_hosts
+key entry in the ``/root/.ssh/known_hosts`` on the System Controller. In this
+case, delete the host key entry, if present, from ``/root/.ssh/known_hosts``
on the System Controller before doing reinstallations.
.. rubric:: |prereq|
@ -50,7 +50,7 @@ subcloud, the subcloud installation process has two phases:
- You must have downloaded ``update-iso.sh`` from |dnload-loc|.
- In order to be able to deploy subclouds from either controller, all local
-files that are referenced in the **bootstrap.yml** file must exist on both
+files that are referenced in the ``bootstrap.yml`` file must exist on both
controllers \(for example, ``/home/sysadmin/docker-registry-ca-cert.pem``\).
.. rubric:: |proc|
@ -62,7 +62,7 @@ subcloud, the subcloud installation process has two phases:
The servers require connectivity to a gateway router that provides IP
routing between the subcloud management subnet and the System
-Controller management subnet, and between the subcloud OAM subnet and
+Controller management subnet, and between the subcloud |OAM| subnet and
the System Controller subnet.
.. include:: /_includes/installing-a-subcloud-without-redfish-platform-management-service.rest
@ -110,9 +110,9 @@ subcloud, the subcloud installation process has two phases:
Specify boot menu timeout, in seconds
-The following example ks-addon.cfg, used with the -a option, sets up an
-initial IP interface at boot time by defining a |VLAN| on an Ethernet
-interface and has it use DHCP to request an IP address:
+The following example ``ks-addon.cfg`` file, used with the -a option,
+sets up an initial IP interface at boot time by defining a |VLAN| on
+an Ethernet interface and has it use |DHCP| to request an IP address:
.. code-block:: none
@ -197,7 +197,7 @@ subcloud, the subcloud installation process has two phases:
type: docker
-Where <sysinv\_password\> can be found by running the following command
+Where <sysinv_password\> can be found by running the following command
as 'sysadmin' on the Central Cloud:
.. code-block:: none
@ -206,8 +206,8 @@ subcloud, the subcloud installation process has two phases:
This configuration uses the local registry on your central cloud. If you
prefer to use the default external registries, make the following
-substitutions for the **docker\_registries** and
-**additional\_local\_registry\_images** sections of the file.
+substitutions for the ``docker_registries`` and
+``additional_local_registry_images`` sections of the file.
.. code-block:: none
@ -223,10 +223,10 @@ subcloud, the subcloud installation process has two phases:
#. You can use the Central Cloud's local registry to pull images on subclouds.
The Central Cloud's local registry's HTTPS certificate must have the
-Central Cloud's |OAM| IP, **registry.local** and **registry.central** in the
+Central Cloud's |OAM| IP, ``registry.local`` and ``registry.central`` in the
certificate's |SAN| list. For example, a valid certificate contains a |SAN|
list **"DNS.1: registry.local DNS.2: registry.central IP.1: <floating
management\> IP.2: <floating OAM\>"**.
list ``"DNS.1: registry.local DNS.2: registry.central IP.1: <floating
management\> IP.2: <floating OAM\>"``.
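One generic way to confirm that a certificate's |SAN| list contains these entries is with openssl; the file name here is a placeholder and this check is illustrative only.

.. code-block:: none

   $ openssl x509 -in registry-cert.crt -noout -text | grep -A1 "Subject Alternative Name"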
If required, run the following command on the Central Cloud prior to
bootstrapping the subcloud to install the new certificate for the Central

View File

@ -34,19 +34,19 @@ modifications noted above and below.
You will also need to make the following modifications:
- when creating the user configuration overrides for the Ansible bootstrap
-playbook in /home/sysadmin/localhost.yml
+playbook in ``/home/sysadmin/localhost.yml``
-- Add the parameters shown in bold below to your /home/sysadmin/localhost.yml
+- Add the parameters shown in bold below to your ``/home/sysadmin/localhost.yml``
Ansible bootstrap override file to indicate that this cloud will play
the role of the Central Cloud / System Controller.
- restrict the range of addresses for the management network (using
-management_start_address and management_end_address, as shown below) to
+``management_start_address`` and ``management_end_address``, as shown below) to
exclude the IP addresses reserved for gateway routers that provide routing
to the subclouds' management subnets.
- Also, include the container images shown in bold below in
-additional\_local\_registry\_images, required for support of subcloud
+``additional_local_registry_images``, required for support of subcloud
installs with the Redfish Platform Management Service, and subcloud installs
using a Ceph storage backend.
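A hedged sketch of such overrides is shown below; the addresses are placeholders, the ``distributed_cloud_role`` key name is an assumption about the bootstrap override, and the image reference simply reuses the **rvmc** example mentioned earlier.

.. code-block:: none

   # /home/sysadmin/localhost.yml (fragment; values are placeholders)
   distributed_cloud_role: systemcontroller
   management_start_address: 192.168.204.2
   management_end_address: 192.168.204.50
   additional_local_registry_images:
     - docker.io/starlingx/rvmc:stx.5.0-v1.0.0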

View File

@ -128,12 +128,12 @@ fails, delete subclouds, and monitor or change the managed status of subclouds.
- To reconfigure a subcloud, if deployment fails, use the :command:`subcloud reconfig` command.
.. note::
-You can enter the sysadmin password to avoid being prompted for the password.
+You can enter the ``sysadmin`` password to avoid being prompted for the password.
.. code-block:: none
~(keystone_admin)]$ dcmanager subcloud reconfig <subcloud-id/name> --deploy-config \
-<<filepath>> --sysadmin-password <<password>>
+<filepath> --sysadmin-password <<password>>
where ``--deploy-config`` must reference the deployment configuration file.

View File

@ -58,11 +58,11 @@ There are six phases for Rehoming a subcloud:
- Ensure that the subcloud has been backed up, in case something goes wrong
and a subcloud system recovery is required.
-- Transfer the yaml file that was used to bootstrap the subcloud prior to
+- Transfer the ``yaml`` file that was used to bootstrap the subcloud prior to
rehoming, to the new System Controller. This data is required for rehoming.
- If the subcloud can be remotely installed via Redfish Virtual Media service,
-transfer the yaml file that contains the install data for this subcloud,
+transfer the ``yaml`` file that contains the install data for this subcloud,
and use this install data in the new System Controller, via the
``--install-values`` option, when running the remote subcloud reinstall,
upgrade or restore commands.
@ -83,7 +83,7 @@ There are six phases for Rehoming a subcloud:
#. Ensure that the subcloud's bootstrap values file is available on the new
System Controller. If required, in the subcloud's bootstrap values file
-update the **systemcontroller_gateway_address** entry to point to the
+update the ``systemcontroller_gateway_address`` entry to point to the
appropriate network gateway for the new System Controller to communicate
with the subcloud.
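For illustration, the relevant bootstrap values entry might look like this (the address is a placeholder):

.. code-block:: none

   systemcontroller_gateway_address: 192.168.204.101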

View File

@ -147,7 +147,7 @@ Executing the dcmanager subcloud reinstall command in the Central Cloud:
bootstrapping by monitoring the following log file on the active
controller in the Central cloud:
-- /var/log/dcmanager/ansible/subcloud1_playbook_output.log
+- ``/var/log/dcmanager/ansible/subcloud1_playbook_output.log``
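A simple way to follow that log while the reinstall runs (the subcloud name in the file name follows the pattern shown above):

.. code-block:: none

   ~(keystone_admin)]$ tail -f /var/log/dcmanager/ansible/subcloud1_playbook_output.log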
#. After the subcloud is successfully reinstalled and bootstrapped, run the
subcloud reconfig command to complete the process. The subcloud

View File

@ -41,7 +41,7 @@ phases:
bootstrap or deployment.
- The platform backup tar file is already on the subcloud in
-/opt/platform-backup directory or has been transferred to the
+``/opt/platform-backup`` directory or has been transferred to the
SystemController.
- The subcloud install values have been saved in the **dcmanager** database
@ -49,7 +49,7 @@ phases:
.. rubric:: |proc|
-#. Create the restore_values.yaml file which will be passed to the
+#. Create the ``restore_values.yaml`` file which will be passed to the
:command:`dcmanager subcloud restore` command using the ``--restore-values``
option. This file contains parameters that will be used during the platform
restore phase. Minimally, the **backup_filename** parameter, indicating the
@ -95,7 +95,7 @@ phases:
+----+-----------+------------+--------------+---------------+---------+
#. In case of a failure, check the Ansible log for the corresponding subcloud
-under /var/log/dcmanager/ansible directory.
+under ``/var/log/dcmanager/ansible`` directory.
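Pulling the restore steps above together, a minimal illustrative ``restore_values.yaml`` and invocation might look like the following; the backup file name, subcloud name, and the use of ``--sysadmin-password`` are assumptions.

.. code-block:: none

   # restore_values.yaml (fragment; values are placeholders)
   backup_filename: subcloud1_platform_backup.tgz

   ~(keystone_admin)]$ dcmanager subcloud restore --restore-values /home/sysadmin/restore_values.yaml \
   --sysadmin-password <password> subcloud1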
#. When the subcloud deploy status changes to "complete", the controller-0
is ready to be unlocked. Log into the controller-0 of the subcloud using

View File

@ -84,9 +84,9 @@ Container_image sets:
cb1b51f019c612178f14df6f03131a18 container-image1.tar.gz
db6c0ded6eb7bc2807edf8c345d4fe97 container-image2.tar.gz
-----------------------------------------------------
-Creating the Prestaged ISO with gen-prestaged-iso.sh
-----------------------------------------------------
+--------------------------------------------------
+Create the Prestaged ISO with gen-prestaged-iso.sh
+--------------------------------------------------
You can prepare and manually prestage the Install Bundle or use the
``gen-prestaged-iso.sh`` tool to create a self-installing prestaging ISO image.
@ -231,7 +231,7 @@ Use the ``--images`` option to specify the path/filename to a container image
to be installed on the subcloud.
Use the ``--param`` option to specify the rootfs device and boot device to
-install the prestaging image. The tool defaults to /dev/sda directory. Use this
+install the prestaging image. The tool defaults to ``/dev/sda directory``. Use this
option to override the default storage device the prestaging image is to be
installed.
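A hypothetical invocation combining these options is sketched below; only ``--images`` and ``--param`` are named above, and the parameter names and device value are assumptions.

.. code-block:: none

   # Illustrative only; check the tool's help output for the exact option names
   $ ./gen-prestaged-iso.sh --images container-image1.tar.gz \
   --param rootfs_device=/dev/nvme0n1 --param boot_device=/dev/nvme0n1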

View File

@ -13,7 +13,7 @@ You can access the Horizon Web interface for individual subclouds from the Syste
The System Controller page includes a menu for selecting subclouds or regions.
When you select a subcloud from this menu, the view changes to show the Horizon
interface for the subcloud. You can use this to provision and manage the
-subcloud hosts and networks, just as you would for any |prod| system
+subcloud hosts and networks, just as you would for any |prod| system.
.. rubric:: |proc|

View File

@ -30,7 +30,7 @@ subcloud is synchronized immediately when it is changed to the **Managed**
state.
Configuration changes made from the System Controller, and i.e. by specifying
-the --os-region-name option as **SystemController** are synchronized
+the ``--os-region-name`` option as ``SystemController`` are synchronized
immediately. For example, to add an |SNMP| trap destination and immediately
synchronize this configuration change to all subclouds in the **Managed**
state, use the following command:

View File

@ -13,20 +13,20 @@ This makes access to registry.central independent of changes to the Distributed
Cloud's Keystone admin user password.
Use the following procedure to update the install registry credentials on the
-subcloud to the sysinv service credentials of the System Controller.
+subcloud to the ``sysinv`` service credentials of the System Controller.
.. rubric:: |proc|
.. _updating-docker-registry-credentials-on-a-subcloud-steps-ywx-wyt-kmb:
-#. On the System Controller, get the password for the sysinv services.
+#. On the System Controller, get the password for the ``sysinv`` services.
.. code-block:: none
$ keyring get sysinv services
#. On each subcloud, run the following script to update the Docker registry
-credentials to sysinv:
+credentials to ``sysinv``:
.. code-block:: none

View File

@ -132,7 +132,7 @@ Follow the steps below to manually upgrade the System Controller:
- State entered after :command:`system upgrade-start` completes.
-- Release nn.nn system data \(for example, postgres databases\) has
+- Release <nn.nn> system data \(for example, postgres databases\) has
been exported to be used in the upgrade.
As part of the upgrade, the upgrade process checks the health of the system
@ -190,8 +190,8 @@ Follow the steps below to manually upgrade the System Controller:
- State entered when controller-1 upgrade is complete.
-- System data has been successfully migrated from release nn.nn
-to release nn.nn.
+- System data has been successfully migrated from release <nn.nn>
+to release <nn.nn>.
where *nn.nn* in the update file name is the |prod| release number.
@ -250,8 +250,8 @@ Follow the steps below to manually upgrade the System Controller:
If it transitions to **unlocked-disabled-failed**, check the issue
before proceeding to the next step. The alarms may indicate a
configuration error. Check the result of the configuration logs on
-controller-1, \(for example, Error logs in
-controller1:/var/log/puppet\).
+controller-1, (for example, Error logs in
+controller1:``/var/log/puppet``).
#. Run the :command:`system application-list`, and :command:`system
host-upgrade-list` commands to view the current progress.
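For reference, the progress checks named in this step are plain |CLI| calls:

.. code-block:: none

   ~(keystone_admin)]$ system application-list
   ~(keystone_admin)]$ system host-upgrade-list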
@ -301,7 +301,7 @@ Follow the steps below to manually upgrade the System Controller:
- upgrading-hosts:
-- State entered when both controllers are running release nn.nn
+- State entered when both controllers are running release <nn.nn>
software.

View File

@ -2,9 +2,9 @@
.. clv1558615616705
.. _uploading-and-applying-updates-to-systemcontroller-using-the-cli:
-=============================================================
-Upload and Applying Updates to SystemController Using the CLI
-=============================================================
+==========================================================
+Upload and Apply Updates to SystemController Using the CLI
+==========================================================
You can upload and apply updates to the SystemController in order to update the
central update repository, from the CLI using the standard update procedures
@ -18,9 +18,8 @@ If you prefer, you can use the Horizon Web interface. For more information, see
the specific procedure for incrementally uploading and applying one or more
patches for the SystemController is provided below.
-For standard |prod| updating procedures, see the
-.. xbooklink :ref:`|updates-doc| <software-updates-and-upgrades-software-updates>` guide.
+For standard |prod| updating procedures, see the |updates-doc|:
+:ref:`software-updates-and-upgrades-software-updates` guide.
For SystemController of |prod-dc| \(and the central update repository\), you
must include the additional |CLI| parameter ``--os-region-name`` with the value
@ -55,9 +54,9 @@ SystemController when using |CLI| :command:`sw-patch` commands.
#. Log in as the **sysadmin** user.
-#. Copy all patches to be uploaded and applied to /home/sysadmin/patches/.
+#. Copy all patches to be uploaded and applied to ``/home/sysadmin/patches/``.
-#. Upload all patches placed in /home/sysadmin/patches/ to the storage area.
+#. Upload all patches placed in ``/home/sysadmin/patches/`` to the storage area.
.. code-block:: none