Editorial Updates for all Upstream / Downstream Guides
Acted on Greg's comments on Deployment Configurations Guide, Patch 1.

https://review.opendev.org/c/starlingx/docs/+/791820

Signed-off-by: egoncalv <elisamaraaoki.goncalves@windriver.com>
Change-Id: I4d641529974dc15670e2fb1e3afa0aa0277c467b
@@ -117,7 +117,7 @@ A number of components are common to most |prod| deployment configurations.
         The use of Container Networking Calico |BGP| to advertise containers'
         network endpoints is not available in this scenario.

-**Additional External Network\(s\) \(Worker & AIO Nodes Only\)**
+**Additional External Network\(s\) or Data Networks \(Worker & AIO Nodes Only\)**
     Networks on which ingress controllers and/or hosted application containers
     expose their Kubernetes service, for example, through a NodePort service.
     Node interfaces to these networks are configured as platform class
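The "NodePort service" referenced in the updated definition above is standard
Kubernetes. A minimal sketch of such a service follows; the name ``demo-app``,
the selector, and all port numbers are illustrative, not taken from the guide:

.. code-block:: yaml

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-app            # illustrative name
    spec:
      type: NodePort            # expose the service on every node's IP
      selector:
        app: demo-app           # matches the hosted application's pods
      ports:
        - port: 80              # in-cluster service port
          targetPort: 8080      # container port
          nodePort: 31080       # port opened on the node interfaces

With a service like this, the application is reachable at ``<node-ip>:31080``
over the additional external network(s) described in the definition.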
@@ -13,10 +13,6 @@ controller nodes instead of using dedicated storage nodes.
 .. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-controller-storage.png
    :width: 800

-.. note::
-    Physical L2 switches are not shown in the deployment diagram in subsequent
-    chapters. Only the L2 networks they support are shown.
-
 See :ref:`Common Components <common-components>` for a description of common
 components of this deployment configuration.
@@ -25,11 +21,14 @@ cluster managing up to 200 worker nodes. The limit on the size of the worker
 node pool is due to the performance and latency characteristics of the small
 integrated Ceph cluster on the controller+storage nodes.

-This configuration uses dedicated physical disks configured on each
-controller+storage node as Ceph |OSDs|. The
-primary disk is used by the platform for system purposes and subsequent disks
+This configuration optionally uses dedicated physical disks configured on each
+controller+storage node as Ceph |OSDs|. The typical solution requires one
+primary disk used by the platform for system purposes and subsequent disks
 are used for Ceph |OSDs|.

+Optionally, instead of using an internal Ceph cluster across controllers, you
+can configure an external Netapp Trident storage backend.
+
 On worker nodes, the primary disk is used for system requirements and for
 container local ephemeral storage.
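As background for the "subsequent disks are used for Ceph |OSDs|" wording, a
free physical disk is assigned to Ceph through the StarlingX ``system`` CLI.
A minimal sketch, with the host name and disk UUID as placeholders:

.. code-block:: none

    # List the physical disks on the host and note the UUID of a free
    # disk (one that is not the primary system disk).
    system host-disk-list controller-0

    # Assign that disk to the internal Ceph cluster as an OSD.
    system host-stor-add controller-0 osd <disk-uuid>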
@@ -27,14 +27,17 @@ cloud processing / storage power is required.
 HA services run on the controller function across the two physical servers in
 either Active/Active or Active/Standby mode.

-The storage function is provided by a small-scale two node Ceph cluster using
-one or more disks/|OSDs| from each server, and
-provides the backend for Kubernetes' |PVCs|.
+The optional storage function is provided by a small-scale two node Ceph
+cluster using one or more disks/|OSDs| from each server, and provides the
+backend for Kubernetes' |PVCs|.

-The solution requires two or more disks per server; one for system
+The typical solution requires two or more disks per server; one for system
 requirements and container ephemeral storage, and one or more for Ceph
 |OSDs|.

+Optionally, instead of using an internal Ceph cluster across servers, you can
+configure an external Netapp Trident storage backend.
+
 Hosted application containers are scheduled on both worker functions.

 In the event of an overall server hardware fault:
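The |PVCs| backed by this Ceph cluster are requested with an ordinary
Kubernetes manifest. A minimal sketch; the claim name is illustrative, and
``general`` is assumed here as the Ceph-backed storage class name, which may
differ on a given system:

.. code-block:: yaml

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-claim              # illustrative name
    spec:
      accessModes:
        - ReadWriteOnce             # mounted read-write by one node at a time
      resources:
        requests:
          storage: 1Gi              # backed by the internal Ceph cluster
      storageClassName: general     # assumed Ceph-backed storage class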
@@ -14,8 +14,9 @@ non-redundant host.
    :width: 800

 .. note::
-    Physical L2 switches are not shown in the deployment diagram in subsequent
-    chapters. Only the L2 networks they support are shown.
+    Physical L2 switches are not shown in this deployment diagram and in
+    subsequent deployment diagrams. Only the L2 networks they support are
+    shown.

 See :ref:`Common Components <common-components>` for a description of common
 components of this deployment configuration.
@@ -29,12 +30,15 @@ Typically, this solution is used where only a small amount of cloud processing
 / storage power is required, and protection against overall server hardware
 faults is either not required or done at a higher level.

-Ceph is deployed in this configuration using one or more disks for |OSDs|, and
+Optionally, Ceph is deployed in this configuration using one or more disks for |OSDs|, and
 provides the backend for Kubernetes' |PVCs|.

-The solution requires two or more disks, one for system requirements and
+Typically, the solution requires two or more disks, one for system requirements and
 container ephemeral storage, and one or more for Ceph |OSDs|.

+Optionally, instead of using an internal Ceph cluster on the server, you can
+configure an external Netapp Trident storage backend.
+
 .. xreflink .. note::
     A storage backend is not configured by default. You can use either
     internal Ceph or an external Netapp Trident backend. For more information,
@@ -25,21 +25,10 @@ A variety of |prod-long| deployment configuration options are supported.
     A two node HA controller node cluster with a 2-9 node Ceph storage
     cluster, managing up to 200 worker nodes.

-    .. note::
-        A storage backend is not configured by default. You can use either
-        internal Ceph or an external Netapp Trident backend.
-
-.. xreflink        For more
-        information, see the :ref:`Storage
-        <storage-configuration-storage-resources>` guide.
-
+    The Ceph storage backend is configured by default.
+
 All |prod| systems can use worker platforms \(worker hosts, or the worker
 function on a simplex or duplex system\) configured for either standard or
-low-latency performance.
-
-.. seealso::
-
-   :ref:`Worker Function Performance Profiles
-   <worker-function-performance-profiles>`
+low-latency worker function performance profiles.
@@ -14,6 +14,6 @@ Deployment Configurations
    common-components
    deployment-config-optionsall-in-one-simplex-configuration
    deployment-config-options-all-in-one-duplex-configuration
-   standard-configuration-with-dedicated-storage
+   deployment-and-configuration-options-standard-configuration-with-controller-storage
    standard-configuration-with-dedicated-storage
    worker-function-performance-profiles
@@ -12,10 +12,6 @@ Deployment of |prod| with dedicated storage nodes provides the highest capacity
 .. image:: /deploy_install_guides/r5_release/figures/starlingx-deployment-options-dedicated-storage.png
    :width: 800

-.. note::
-    Physical L2 switches are not shown in the deployment diagram in subsequent
-    chapters. Only the L2 networks they realize are shown.
-
 See :ref:`Common Components <common-components>` for a description of common
 components of this deployment configuration.
@@ -42,9 +38,8 @@ affected by the |OSD| size and speed, optional |SSD| or |NVMe| Ceph journals,
 CPU cores and speeds, memory, disk controllers, and networking. |OSDs| can be
 grouped into storage tiers according to their performance characteristics.

-.. note::
-    A storage backend is not configured by default. You can use either
-    internal Ceph or an external Netapp Trident backend.
+Alternatively, instead of configuring Storage Nodes, you can configure an
+external Netapp Trident storage backend.

 .. xreflink    For more information,
     see the :ref:`|stor-doc| <storage-configuration-storage-resources>` guide.
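For the Netapp Trident alternative referenced throughout these changes, a
Trident backend is typically described in a JSON file and registered with
``tridentctl``. A minimal sketch for an ONTAP NAS backend; the backend name,
LIF addresses, SVM, and credentials are all placeholders:

.. code-block:: json

    {
      "version": 1,
      "storageDriverName": "ontap-nas",
      "backendName": "nas-backend",
      "managementLIF": "10.10.10.1",
      "dataLIF": "10.10.10.2",
      "svm": "svm_nfs",
      "username": "admin",
      "password": "<password>"
    }

The file is registered with, for example,
``tridentctl create backend -f backend.json -n trident``.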