Openstack planning

Updates for patchset 2 review comments
Changed link depth of main Planning index and added some narrative guidance
Added planning/openstack as sibling of planning/kubernetes
Related additions to abbrevs.txt
Added max-workers substitution to accommodate StarlingX/vendor variants

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Ibff9af74ab3f2c00958eff0e33c91465f1dab6b4
Signed-off-by: Stone <ronald.stone@windriver.com>
Ron Stone 2021-01-20 13:37:25 -05:00 committed by Stone
parent ebdf63ec68
commit 3143d86b69
100 changed files with 4517 additions and 2454 deletions

@@ -0,0 +1 @@
.. [#f1] See :ref:`Data Network Planning <data-network-planning>` for more information.

@@ -0,0 +1,4 @@
.. vxlan-begin
- To minimize flooding of multicast packets, |IGMP| and |MLD| snooping is
recommended on all Layer 2 switches.

@@ -0,0 +1,2 @@
.. unmodified-guests-virtio-begin
.. highest-performance-begin

4
doc/source/_vendor/vendor_strings.txt Normal file → Executable file
@@ -44,3 +44,7 @@
To insert no spaces, use "replace:: \"
.. |s| replace:: \
.. product capabilities
.. |max-workers| replace:: 99

@@ -15,7 +15,7 @@ Bare Metal
Worker
A node within a |prod| edge cloud that is dedicated to running application
workloads. There can be 0 to 99 worker nodes in a |prod| edge cloud.
workloads. There can be 0 to |max-workers| worker nodes in a |prod| edge cloud.
- Runs virtual switch for realizing virtual networks.
- Provides L3 routing and NET services.

1
doc/source/planning/.vscode/settings.json vendored Normal file → Executable file
@@ -1,3 +1,2 @@
{
"restructuredtext.confPath": "/mnt/c/Users/rstone/Desktop/upstream/planning/docs/doc/source"
}

0
doc/source/planning/figures/eag1565612501060.png Normal file → Executable file (image unchanged, 25 KiB)

0
doc/source/planning/figures/fqq1554387160841.png Normal file → Executable file (image unchanged, 54 KiB)

0
doc/source/planning/figures/gnc1565626763250.jpeg Normal file → Executable file (image unchanged, 108 KiB)

0
doc/source/planning/figures/jow1404333560781.png Normal file → Executable file (image unchanged, 105 KiB)

0
doc/source/planning/figures/jow1438030468959.png Normal file → Executable file (image unchanged, 9.2 KiB)

0
doc/source/planning/figures/jrh1581365123827.png Normal file → Executable file (image unchanged, 43 KiB)

0
doc/source/planning/figures/noc1581364555316.png Normal file → Executable file (image unchanged, 33 KiB)

0
doc/source/planning/figures/rld1581365711865.png Normal file → Executable file (image unchanged, 42 KiB)

0
doc/source/planning/figures/rsn1565611176484.png Normal file → Executable file (image unchanged, 28 KiB)

0
doc/source/planning/figures/sye1565216249447.png Normal file → Executable file (image unchanged, 21 KiB)

0
doc/source/planning/figures/uac1581365928043.png Normal file → Executable file (image unchanged, 44 KiB)

0
doc/source/planning/figures/vzz1565620523528.png Normal file → Executable file (image unchanged, 42 KiB)

0
doc/source/planning/figures/xjf1565612136985.png Normal file → Executable file (image unchanged, 15 KiB)

0
doc/source/planning/figures/zpk1486667625575.png Normal file → Executable file (image unchanged, 22 KiB)

117
doc/source/planning/index.rst Normal file → Executable file
@@ -11,112 +11,27 @@ Planning
Kubernetes
----------
************
Introduction
************
|prod| platform planning helps ensure that the requirements of your containers,
and the requirements of your cloud administration and operations teams can be
met. It ensures proper integration of a |prod| into the target data center or
telecom office, and helps you plan up front for future cloud growth.
Planning your |prod| installation is a prerequisite for further |prod-os|
installation planning.
.. toctree::
:maxdepth: 1
:maxdepth: 2
kubernetes/overview-of-starlingx-planning
****************
Network planning
****************
.. toctree::
:maxdepth: 1
kubernetes/network-requirements
kubernetes/networks-for-a-simplex-system
kubernetes/networks-for-a-duplex-system
kubernetes/networks-for-a-system-with-controller-storage
kubernetes/networks-for-a-system-with-dedicated-storage
kubernetes/network-requirements-ip-support
kubernetes/network-planning-the-pxe-boot-network
kubernetes/the-cluster-host-network
kubernetes/the-storage-network
Internal management network
***************************
.. toctree::
:maxdepth: 1
kubernetes/the-internal-management-network
kubernetes/internal-management-network-planning
kubernetes/multicast-subnets-for-the-management-network
OAM network
***********
.. toctree::
:maxdepth: 1
kubernetes/about-the-oam-network
kubernetes/oam-network-planning
kubernetes/dns-and-ntp-servers
kubernetes/network-planning-firewall-options
L2 access switches
******************
.. toctree::
:maxdepth: 1
kubernetes/l2-access-switches
kubernetes/redundant-top-of-rack-switch-deployment-considerations
Ethernet interfaces
*******************
.. toctree::
:maxdepth: 1
kubernetes/about-ethernet-interfaces
kubernetes/network-planning-ethernet-interface-configuration
kubernetes/the-ethernet-mtu
kubernetes/shared-vlan-or-multi-netted-ethernet-interfaces
****************
Storage planning
****************
.. toctree::
:maxdepth: 1
kubernetes/storage-planning-storage-resources
kubernetes/storage-planning-storage-on-controller-hosts
kubernetes/storage-planning-storage-on-worker-hosts
kubernetes/storage-planning-storage-on-storage-hosts
kubernetes/external-netapp-trident-storage
*****************
Security planning
*****************
.. toctree::
:maxdepth: 1
kubernetes/security-planning-uefi-secure-boot-planning
kubernetes/tpm-planning
**********************************
Installation and resource planning
**********************************
.. toctree::
:maxdepth: 1
kubernetes/installation-and-resource-planning-https-access-planning
kubernetes/starlingx-hardware-requirements
kubernetes/verified-commercial-hardware
kubernetes/starlingx-boot-sequence-considerations
kubernetes/hard-drive-options
kubernetes/controller-disk-configurations-for-all-in-one-systems
kubernetes/index
---------
OpenStack
---------
Coming soon.
|prod-os| is installed as an application in a deployed |prod| environment and
requires additional network, storage, security and resource planning.
.. toctree::
:maxdepth: 2
openstack/index

0
doc/source/planning/kubernetes/dns-and-ntp-servers.rst Normal file → Executable file
0
doc/source/planning/kubernetes/hard-drive-options.rst Normal file → Executable file
@@ -0,0 +1,109 @@
.. _planning_kubernetes_index:
----------
Kubernetes
----------
************
Introduction
************
.. toctree::
:maxdepth: 1
overview-of-starlingx-planning
****************
Network planning
****************
.. toctree::
:maxdepth: 1
network-requirements
networks-for-a-simplex-system
networks-for-a-duplex-system
networks-for-a-system-with-controller-storage
networks-for-a-system-with-dedicated-storage
network-requirements-ip-support
network-planning-the-pxe-boot-network
the-cluster-host-network
the-storage-network
Internal management network
***************************
.. toctree::
:maxdepth: 1
the-internal-management-network
internal-management-network-planning
multicast-subnets-for-the-management-network
OAM network
***********
.. toctree::
:maxdepth: 1
about-the-oam-network
oam-network-planning
dns-and-ntp-servers
network-planning-firewall-options
L2 access switches
******************
.. toctree::
:maxdepth: 1
l2-access-switches
redundant-top-of-rack-switch-deployment-considerations
Ethernet interfaces
*******************
.. toctree::
:maxdepth: 1
about-ethernet-interfaces
network-planning-ethernet-interface-configuration
the-ethernet-mtu
shared-vlan-or-multi-netted-ethernet-interfaces
****************
Storage planning
****************
.. toctree::
:maxdepth: 1
storage-planning-storage-resources
storage-planning-storage-on-controller-hosts
storage-planning-storage-on-worker-hosts
storage-planning-storage-on-storage-hosts
external-netapp-trident-storage
*****************
Security planning
*****************
.. toctree::
:maxdepth: 1
security-planning-uefi-secure-boot-planning
tpm-planning
**********************************
Installation and resource planning
**********************************
.. toctree::
:maxdepth: 1
installation-and-resource-planning-https-access-planning
starlingx-hardware-requirements
verified-commercial-hardware
starlingx-boot-sequence-considerations
hard-drive-options
controller-disk-configurations-for-all-in-one-systems

@@ -13,7 +13,7 @@ These include:
.. _installation-and-resource-planning-https-access-planning-d18e34:
.. contents::
.. contents:: |minitoc|
:local:
:depth: 1

@@ -2,9 +2,9 @@
.. lla1552670572043
.. _internal-management-network-planning:
====================================
Internal Management Network Planning
====================================
===============================================
Kubernetes Internal Management Network Planning
===============================================
The internal management network is a private network, visible only to the hosts
in the cluster.

6
doc/source/planning/kubernetes/l2-access-switches.rst Normal file → Executable file
@@ -2,9 +2,9 @@
.. kvt1552671101079
.. _l2-access-switches:
==================
L2 Access Switches
==================
=============================
Kubernetes L2 Access Switches
=============================
L2 access switches connect the |prod| hosts to the different networks. Proper
configuration of the access ports is necessary to ensure proper traffic flow.

@@ -2,9 +2,9 @@
.. qzw1552672165570
.. _security-planning-uefi-secure-boot-planning:
=========================
UEFI Secure Boot Planning
=========================
====================================
Kubernetes UEFI Secure Boot Planning
====================================
|UEFI| Secure Boot Planning allows you to authenticate modules before they are
allowed to execute.

@@ -8,7 +8,7 @@ System Hardware Requirements
|prod| has been tested to work with specific hardware configurations:
.. contents::
.. contents:: |minitoc|
:local:
:depth: 1

@@ -11,7 +11,7 @@ system configuration files, local Docker images, container's ephemeral
filesystems, the local Docker registry container image store, platform backup,
and the system backup operations.
.. contents:: In this section:
.. contents:: |minitoc|
:local:
:depth: 1

@@ -12,7 +12,7 @@ storage hosts if they are present.
The |prod| storage configuration is highly flexible. The specific configuration
depends on the type of system installed, and the requirements of the system.
.. contents:: In this section:
.. contents:: |minitoc|
:local:
:depth: 1

0
doc/source/planning/kubernetes/the-ethernet-mtu.rst Normal file → Executable file
0
doc/source/planning/kubernetes/the-storage-network.rst Normal file → Executable file
0
doc/source/planning/kubernetes/tpm-planning.rst Normal file → Executable file
@@ -2,9 +2,9 @@
.. svs1552672428539
.. _verified-commercial-hardware:
============================
Verified Commercial Hardware
============================
=======================================
Kubernetes Verified Commercial Hardware
=======================================
Verified and approved hardware components for use with |prod| are listed
here.

@@ -0,0 +1,112 @@
.. ixo1464634136835
.. _block-storage-for-virtual-machines:
==================================
Block Storage for Virtual Machines
==================================
Virtual machines use controller or storage host resources for root and
ephemeral disk storage.
.. _block-storage-for-virtual-machines-section-N10022-N1001F-N10001:
-------------------------
Root Disk Storage for VMs
-------------------------
You can allocate root disk storage for virtual machines using the following:
.. _block-storage-for-virtual-machines-ul-d1c-j5k-s5:
- Cinder volumes on controller hosts \(backed by small Ceph Cluster\) or
storage hosts \(backed by large-scale Ceph\).
- Ephemeral local storage on compute hosts, using image-based instance
backing.
- Ephemeral remote storage on controller hosts or storage hosts, backed by
Ceph.
The use of Cinder volumes or ephemeral storage is determined by the **Instance
Boot Source** setting when an instance is launched. Boot from volume results in
the use of a Cinder volume, while Boot from image results in the use of
ephemeral storage.
.. note::
On systems with one or more single-disk compute hosts configured with local
instance backing, the use of Boot from volume for all |VMs| is strongly
recommended. This helps prevent the use of local ephemeral storage on these
hosts.
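For example, the two boot sources map to CLI invocations along the following
lines (a hedged sketch only; the image, flavor, network, and volume names are
placeholders):

.. code-block:: none

   # Boot from image: the root disk is ephemeral, provided by the nova-local group
   openstack server create --image guest-image --flavor small --network tenant-net vm-ephemeral

   # Boot from volume: the root disk is a persistent Cinder volume
   openstack volume create --size 20 --image guest-image vm-root-vol
   openstack server create --volume vm-root-vol --flavor small --network tenant-net vm-volume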
On systems without dedicated storage hosts, Cinder-backed persistent storage
for virtual machines is provided using the small Ceph cluster on controller
disks.
On systems with dedicated hosts, Cinder storage is provided using Ceph-backed
|OSD| disks on high-availability and highly-scalable storage hosts.
.. _block-storage-for-virtual-machines-section-N100A2-N1001F-N10001:
---------------------------------------
Ephemeral and Swap Disk Storage for VMs
---------------------------------------
Storage for |VM| ephemeral and swap disks, and for ephemeral boot disks if the
|VM| is launched from an image rather than a volume, is provided using the
**nova-local** local volume group defined on compute hosts.
The **nova-local** group provides either local ephemeral storage using
|CoW|-image-backed storage resources on compute hosts, or remote ephemeral
storage, using Ceph-backed resources on storage hosts. You must configure the
storage backing type at installation before you can unlock a compute host. The
default type is image-backed local ephemeral storage. You can change the
configuration after installation.
.. xbooklink For more information, see |stor-doc|: :ref:`Working with Local Volume Groups <working-with-local-volume-groups>`.
.. caution::
On a compute node with a single disk, local ephemeral storage uses the root
disk. This can adversely affect the disk I/O performance of the host. To
avoid this, ensure that single-disk compute nodes use remote Ceph-backed
storage if available. If Ceph storage is not available on the system, or is
not used for one or more single-disk compute nodes, then you must ensure
that all VMs on the system are booted from Cinder volumes and do not use
ephemeral or swap disks.
On |prod-os| Simplex or Duplex systems that use a single disk, the same
consideration applies. Since the disk also provides Cinder support, adverse
effects on I/O performance can also be expected for VMs booted from Cinder
volumes.
The backing type is set individually for each host using the **Instance
Backing** parameter on the **nova-local** local volume group.
**Local CoW Image backed**
This provides local ephemeral storage using a |CoW| sparse-image-format
backend, to optimize launch and delete performance.
**Remote RAW Ceph storage backed**
This provides remote ephemeral storage using a Ceph backend on a system
with storage nodes, to optimize migration capabilities. Ceph backing uses a
Ceph storage pool configured from the storage host resources.
You can control whether a |VM| is instantiated with |CoW| or Ceph-backed
storage by setting a flavor extra specification.
.. xbooklink For more information, see OpenStack Configuration and Management: :ref:`Specifying the Storage Type for VM Ephemeral Disks <specifying-the-storage-type-for-vm-ephemeral-disks>`.
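The exact flavor extra specification key is release-dependent; as an
illustrative assumption only, a key such as
``aggregate_instance_extra_specs:storage`` could be set on a flavor as follows:

.. code-block:: none

   # Request remote (Ceph-backed) ephemeral storage for instances of this flavor
   openstack flavor set --property aggregate_instance_extra_specs:storage=remote my-flavor

   # Request local CoW-image-backed ephemeral storage instead
   openstack flavor set --property aggregate_instance_extra_specs:storage=local_image my-flavor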
.. _block-storage-for-virtual-machines-d29e17:
.. caution::
Unlike Cinder-based storage, ephemeral storage does not persist if the
instance is terminated or the compute node fails.
In addition, for local ephemeral storage, migration and resizing support
depends on the storage backing type specified for the instance, as well as
the boot source selected at launch.
The **nova-local** storage type affects migration behavior. Live migration is
not always supported for |VM| disks using local ephemeral storage.
.. xbooklink For more information, see :ref:`VM Storage Settings for Migration, Resize, or Evacuation <vm-storage-settings-for-migration-resize-or-evacuation>`.

@@ -0,0 +1,71 @@
.. jow1404333738594
.. _data-network-planning:
=====================
Data Network Planning
=====================
Data networks are the payload-carrying networks used implicitly by end users
when they move traffic over their project networks.
You can review details for existing data networks using OpenStack Horizon Web
interface or the CLI.
When planning data networks, you must consider the following guidelines:
.. _data-network-planning-ul-cmp-rl2-4n:
- From the point of view of the projects, all networking happens over the
project networks created by them, or by the **admin** user on their behalf.
Projects are not necessarily aware of the available data networks. In fact,
they cannot create project networks over data networks not already
accessible to them. For this reason, the system administrator must ensure
that proper communication mechanisms are in place for projects to request
access to specific data networks when required.
For example, a project may be interested in creating a new project network
with access to a specific network access device in the data center, such as
an access point for a wireless transport. In this case, the system
administrator must create a new project network on behalf of the project,
using a |VLAN| ID in the project's segmentation range that provides
connectivity to that network access point, as sketched in the example
following this list.
- Consider how different offerings of bandwidth, throughput commitments, and
class-of-service, can be used by your users. Having different data network
offerings available to your projects enables end users to diversify their
own portfolio of services. This in turn gives the |prod-os| administration
an opportunity to put different revenue models in place.
- For the IPv4 address plan, consider the following:
- Project networks attached to a public network, such as the Internet,
have to have external addresses assigned to them. You must therefore
plan for valid definitions of their IPv4 subnets and default gateways.
- As with the |OAM| network, you must ensure that suitable firewall
services are in place on any project network with a public address.
- Segmentation ranges may be owned by the administrator, a specific project,
or may be shared by all projects. With this ownership model:
- A base deployment scenario has each compute node using a single data
interface defined over a single data network. In this scenario, all
required project networks can be instantiated making use of the
available |VLANs| or |VNIs| in each corresponding segmentation range.
You may need more than one data network when the underlying physical
networks demand different |MTU| sizes, or when boundaries between data
networks are dictated by policy or other non-technical considerations.
- Segmentation ranges can be reserved and assigned on-demand without
having to lock and unlock the compute nodes. This facilitates
day-to-day operations which can be performed without any disruption to
the running environment.
- In some circumstances, data networks can be configured to support |VLAN|
Transparent mode on project networks. In this mode |VLAN| tagged packets
are encapsulated within a data network segment without removing or
modifying the guest |VLAN| tag\(s\).
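For example, to implement the first guideline above, the **admin** user could
create a |VLAN|-based project network and subnet on behalf of a project. This
is a sketch only; the data network name, segment ID, project, and subnet range
are placeholders:

.. code-block:: none

   # Create a project network over data network "physnet0" using VLAN ID 570
   openstack network create --project tenant1 \
       --provider-network-type vlan \
       --provider-physical-network physnet0 \
       --provider-segment 570 tenant1-net570

   # Add an IPv4 subnet with a default gateway for external connectivity
   openstack subnet create --network tenant1-net570 \
       --subnet-range 192.168.57.0/24 --gateway 192.168.57.1 tenant1-subnet570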

@@ -0,0 +1,94 @@
.. jow1423169316542
.. _ethernet-interface-configuration:
.. partner only ?
================================
Ethernet Interface Configuration
================================
You can review and modify the configuration for physical or virtual Ethernet
interfaces using the OpenStack Horizon Web interface or the CLI.
.. _ethernet-interface-configuration-section-N1001F-N1001C-N10001:
----------------------------
Physical Ethernet Interfaces
----------------------------
The physical Ethernet interfaces on |prod-os| nodes are configured to use the
following networks:
.. _ethernet-interface-configuration-ul-lk1-b4j-zq:
- the internal management network
- the internal cluster host network \(by default sharing the same L2
interface as the internal management network\)
- the external |OAM| network
- one or more data networks
A single interface can optionally be configured to support more than one
network using |VLAN| tagging \(see :ref:`Shared (VLAN or Multi-Netted) Ethernet
Interfaces
<network-planning-shared-vlan-or-multi-netted-ethernet-interfaces>`\).
.. _ethernet-interface-configuration-section-N10059-N1001C-N10001:
---------------------------
Virtual Ethernet Interfaces
---------------------------
The virtual Ethernet interfaces for guest |VMs| running on |prod-os| are
defined when an instance is launched. They connect the |VM| to project
networks, which are virtual networks defined over data networks, which in turn
are abstractions associated with physical interfaces assigned to physical
networks on the compute nodes.
The following virtual network interfaces are available:
.. _ethernet-interface-configuration-ul-amy-z5z-zs:
- |AVP|
- ne2k\_pci \(NE2000 Emulation\)
- pcnet \(AMD PCnet/|PCI| Emulation\)
- rtl8139 \(Realtek 8139 Emulation\)
- virtio \(VirtIO Network\)
- pci-passthrough \(|PCI| Passthrough Device\)
- pci-sriov \(|SRIOV| device\)
Unmodified guests can use Linux networking and virtio drivers. This provides a
mechanism to bring existing applications into the production environment
immediately.
.. xbooklink For more information about |AVP| drivers, see OpenStack VNF Integration: :ref:`Accelerated Virtual Interfaces <accelerated-virtual-interfaces>`.
|prod-os| incorporates |DPDK|-Accelerated Neutron Virtual Router L3 Forwarding
\(AVR\). Accelerated forwarding is used for directly attached project networks
and subnets, as well as for gateway, |SNAT| and floating IP functionality.
|prod-os| also supports direct guest access to |NICs| using |PCI| passthrough
or |SRIOV|, with enhanced |NUMA| scheduling options compared to standard
OpenStack. This offers very high performance, but because access is not managed
by |prod-os| or the vSwitch process, there is no support for live migration,
|prod-os|-provided |LAG|, host interface monitoring, |QoS|, or ACL. If |VLANs|
are used, they must be managed by the guests.
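As a hedged sketch of attaching a guest through |SRIOV| using the standard
Neutron vNIC types (the network, port, image, and flavor names are
placeholders):

.. code-block:: none

   # Create an SR-IOV (direct) port on a project network that maps to a data network;
   # use --vnic-type direct-physical for full PCI passthrough instead
   openstack port create --network tenant1-net570 --vnic-type direct sriov-port0

   # Launch the guest with the pre-created port attached
   openstack server create --image guest-image --flavor large \
       --nic port-id=<uuid-of-sriov-port0> vm-sriov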
For further performance improvements, |prod-os| supports direct access to
|PCI|-based hardware accelerators, such as the Coleto Creek encryption
accelerator from Intel. |prod-os| manages the allocation of |SRIOV| VFs to
|VMs|, and provides intelligent scheduling to optimize |NUMA| node affinity.
.. only:: partner
.. include:: ../../_includes/ethernet-interface-configuration.rest

@@ -0,0 +1,37 @@
.. jow1404333731990
.. _ethernet-interfaces:
===================
Ethernet Interfaces
===================
Ethernet interfaces, both physical and virtual, play a key role in the overall
performance of the virtualized network. Therefore, it is important to
understand the available interface types, their configuration options, and
their impact on network design.
.. _ethernet-interfaces-section-N1006F-N1001A-N10001:
-----------------------
About LAG/AE Interfaces
-----------------------
You can use |LAG| for Ethernet interfaces. |prod-os| supports up to four ports
in a |LAG| group.
Ethernet interfaces in a |LAG| group can be attached either to the same L2
switch, or to multiple switches in a redundant configuration. For more
information about L2 switch configurations, see :ref:`L2 Access Switches
<network-planning-l2-access-switches>`. For information about the different
|LAG| modes, see |node-doc|: :ref:`Link Aggregation Settings
<link-aggregation-settings>`.
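A hedged sketch of defining an aggregated (AE) interface with the ``system``
CLI is shown below; the host, interface, and port names are placeholders, and
option names can vary by release:

.. code-block:: none

   # Lock the host, create a two-port aggregated (ae) interface in balanced mode,
   # then unlock to apply the configuration
   system host-lock compute-0
   system host-if-add -a balanced compute-0 bond0 ae enp24s0f0 enp24s0f1
   system host-unlock compute-0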
.. seealso::
:ref:`Ethernet Interface Configuration <ethernet-interface-configuration>`
:ref:`The Ethernet MTU <the-ethernet-mtu>`
:ref:`Shared (VLAN or Multi-Netted) Ethernet Interfaces
<network-planning-shared-vlan-or-multi-netted-ethernet-interfaces>`

@@ -0,0 +1,277 @@
.. fnr1551900935447
.. _hardware-requirements:
=====================
Hardware Requirements
=====================
|prod-os| has been tested to work with specific hardware configurations.
If the minimum hardware requirements are not met, system performance cannot be
guaranteed.
See :ref:`StarlingX Hardware Requirements <starlingx-hardware-requirements>`
for the |prod-long| Hardware Requirements. In the table below, only the
Interface sections are modified for |prod-os|.
.. _hardware-requirements-section-N10044-N10024-N10001:
--------------------------------------
Controller, Compute, and Storage Hosts
--------------------------------------
.. _hardware-requirements-table-nvy-52x-p5:
.. table:: Table 1. Hardware Requirements — |prod-os| Standard Configuration
:widths: auto
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Requirement | Controller | Storage | Compute |
+===========================================================+=================================================================================================================================================================================================================================================+==============================================================================================+====================================================================================================================+
| Minimum Qty of Servers                                    | 2 \(required\)                                                                                                                                                                                                                                  | \(if Ceph storage used\)                                                                       | 2-100                                                                                                                |
| | | | |
|                                                           |                                                                                                                                                                                                                                                 | 2-8 \(for replication factor 2\)                                                               |                                                                                                                      |
| | | | |
|                                                           |                                                                                                                                                                                                                                                 | 3-9 \(for replication factor 3\)                                                               |                                                                                                                      |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Processor Class | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket |
| | |
| | |
| | |
| | |
| | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| | Platform: All cores | Platform: All cores | - Platform: 1x physical core \(2x logical cores if hyper-threading\), \(by default, configurable\) |
| | | | |
| | | | - vSwitch: 1x physical core / socket \(by default, configurable\) |
| | | | |
| | | | - Application: Remaining cores |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Memory | 64 GB | 64 GB | 32 GB |
| | | | |
| | Platform: All memory | Platform: All memory | - Platform: |
| | | | |
| | | | |
| | | | - Socket 0: 7GB \(by default, configurable\) |
| | | | |
| | | | - Socket 1: 1GB \(by default, configurable\) |
| | | | |
| | | | |
| | | | - vSwitch: 1GB / socket \(by default, configurable\) |
| | | | |
| | | | - Application: |
| | | | |
| | | | |
| | | | - Socket 0: Remaining memory |
| | | | |
| | | | - Socket 1: Remaining memory |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Minimum Primary Disk \(two-disk hardware RAID suggested\) | 500 GB - SSD or NVMe | 120 GB \(min. 10K RPM\) |
| | | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| | .. note:: |
| | Installation on software RAID is not supported. |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Additional Disks | 1 X 500 GB \(min 10K RPM\) | 500 GB \(min. 10K RPM\) for OSD storage | 500 GB \(min. 10K RPM\) — 1 or more |
| | | | |
| | \(not required for systems with dedicated storage nodes\) | one or more SSDs or NVMe drives \(recommended for Ceph journals\); min. 1024 MiB per journal | .. note:: |
| | | | Single-disk hosts are supported, but must not be used for local ephemeral storage |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Network Ports | \(Typical deployment\) |
| | |
| | |
| | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) | - Mgmt and Cluster Host: 2 x 10GE LAG \(shared interface\) |
| | | | |
| | - OAM: 2 x 1GE LAG | | - Data: 2 x LAG, DPDK-compatible \(see "Verified Commercial Hardware: NICs Verified for Data Interfaces" below\) |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Board Management Controller \(BMC\) | 1 \(required\) | 1 \(required\) | 1 \(required\) |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| USB Interface | 1 | not required |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Power Profile | Max Performance |
| | |
| | Min Proc Idle Power:No C States |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Boot Order | HD, PXE, USB | HD, PXE |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| BIOS Mode | BIOS or UEFI |
| | |
| | .. note:: |
| | UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a separate IPv4 network for PXE boot. For more information, see :ref:`The PXE Boot Network <the-pxe-boot-network>`. |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Intel Hyperthreading | Disabled or Enabled |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
| Intel Virtualization \(VTD, VTX\) | Disabled | Enabled |
+-----------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------+
.. _hardware-requirements-section-N102D0-N10024-N10001:
---------------------------------
Combined Controller-Compute Hosts
---------------------------------
Hardware requirements for a |prod-os| Simplex or Duplex configuration are
listed in the following table.
See :ref:`StarlingX Hardware Requirements <starlingx-hardware-requirements>`
for the |prod-long| Hardware Requirements. In the table below, only the
Interface sections are modified for |prod-os|.
.. _hardware-requirements-table-cb2-lfx-p5:
.. table:: Table 2. Hardware Requirements — |prod-os| Simplex or Duplex Configuration
:widths: auto
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Requirement | Controller + Compute |
| | |
| | \(Combined Server\) |
+===================================+=================================================================================================================================================================================================================================================+
| Minimum Qty of Servers | Simplex―1 |
| | |
| | Duplex―2 |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Processor Class | Dual-CPU Intel® Xeon® E5 26xx Family \(SandyBridge\) 8 cores/socket |
| | |
| | or |
| | |
| | Single-CPU Intel Xeon D-15xx Family, 8 cores \(low-power/low-cost option for Simplex deployments\) |
| | |
| | |
| | |
| | |
+ +-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | - Platform: 2x physical cores \(4x logical cores if hyper-threading\), \(by default, configurable\) |
| | |
| | - vSwitch: 1x physical core / socket \(by default, configurable\) |
| | |
| | - Application: Remaining cores |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Memory | 64 GB |
| | |
| | - Platform: |
| | |
| | |
| | - Socket 0: 7GB \(by default, configurable\) |
| | |
| | - Socket 1: 1GB \(by default, configurable\) |
| | |
| | |
| | - vSwitch: 1GB / socket \(by default, configurable\) |
| | |
| | - Application: |
| | |
| | |
| | - Socket 0: Remaining memory |
| | |
| | - Socket 1: Remaining memory |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Minimum Primary Disk | 500 GB - SSD or NVMe |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Additional Disks | - Single-disk system: N/A |
| | |
| | - Two-disk system: |
| | |
| | |
| | - 1 x 500 GB SSD or NVMe for Persistent Volume Claim storage |
| | |
| | |
| | - Three-disk system: |
| | |
| | |
| | - 1 x 500 GB \(min 10K RPM\) for Persistent Volume Claim storage |
| | |
| | - 1 or more x 500 GB \(min. 10K RPM\) for Container ephemeral disk storage |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Network Ports | \(Typical deployment.\) |
| | |
| | - Management and Cluster Host: 2 x 10GE LAG \(shared interface\) |
| | |
| | .. note:: |
| | Management ports are required for Duplex systems only |
| | |
| | - OAM: 2 x 1GE LAG |
| | |
| | - Data: 2 x LAG, DPDK-compatible \(see "Verified Commercial Hardware: NICs Verified for Data Interfaces" below\) |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| USB Interface | 1 |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Power Profile | Max Performance |
| | |
| | Min Proc Idle Power:No C States |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Boot Order | HD, PXE, USB |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| BIOS Mode | BIOS or UEFI |
| | |
| | .. note:: |
| | UEFI Secure Boot and UEFI PXE boot over IPv6 are not supported. On systems with an IPv6 management network, you can use a separate IPv4 network for PXE boot. For more information, see :ref:`The PXE Boot Network <the-pxe-boot-network>`. |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Intel Hyperthreading | Disabled or Enabled |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Intel Virtualization \(VTD, VTX\) | Enabled |
+-----------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _hardware-requirements-section-if-scenarios:
|row-alt-off|
---------------------------------
Interface Configuration Scenarios
---------------------------------
|prod-os| supports the use of consolidated interfaces for the management,
cluster host and |OAM| networks. Some typical configurations are shown in the
following table. For best performance, |org| recommends dedicated interfaces.
|LAG| is optional in all instances.
.. _hardware-requirements-table-if-scenarios:
.. table::
:widths: auto
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+
| Scenario | Controller | Storage | Compute |
+====================================================================+===============================+===============================+================================+
| | | | |
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+
| - Physical interfaces on servers limited to two pairs | 2x 10GE LAG: | 2x 10GE LAG: | 2x 10GE LAG: |
| | | | |
| - Estimated aggregate average VM storage traffic less than 5G | - Mgmt \(untagged\) | - Mgmt \(untagged\) | - Mgmt \(untagged\) |
| | | | |
| | - Cluster Host \(untagged\) | - Cluster Host \(untagged\) | - Cluster Host \(untagged\) |
| | | | |
| | | | |
| | 2x 1GE LAG: | | 2x 10GE LAG |
| | | | |
| | - OAM \(untagged\) | | - Data \(tagged\) |
| | | | |
| | | | |
| | | | \[ … more data interfaces … \] |
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+
| - No specific limit on number of physical interfaces | 2x 1GE LAG: | 2x 1GE LAG | 2x 1GE LAG |
| | | | |
| - Estimated aggregate average VM storage traffic greater than 5G | - Mgmt \(untagged\) | - Mgmt \(untagged\) | - Mgmt \(untagged\) |
| | | | |
| | | | |
| | 2x 1GE LAG: | 2x 1GE LAG: | 2x 1GE LAG: |
| | | | |
| | - OAM \(untagged\) | - OAM \(untagged\) | - OAM \(untagged\) |
| | | | |
| | | | |
| | | | 2x 10GE LAG: |
| | | | |
| | | | - Data \(tagged\) |
| | | | |
| | | | |
| | | | \[ … more data interfaces … \] |
+--------------------------------------------------------------------+-------------------------------+-------------------------------+--------------------------------+

@@ -0,0 +1,29 @@
.. iym1475074530218
.. _https-access-planning:
=====================
HTTPS Access Planning
=====================
You can enable secure HTTPS access for |prod-os|'s REST API endpoints or
OpenStack Horizon Web interface users.
.. note::
To enable HTTPS access for |prod-os|, you must enable HTTPS in the
underlying |prod-long| platform.
By default, |prod-os| provides HTTP access for remote connections. For improved
security, you can enable HTTPS access. When HTTPS access is enabled, HTTP
access is disabled.
When HTTPS is enabled for the first time on a |prod-os| system, a self-signed
certificate is automatically installed. In order to connect, remote clients
must be configured to accept the self-signed certificate without verifying it
\("insecure" mode\).
For secure-mode connections, a |CA|-signed certificate is required. The use of
a |CA|-signed certificate is strongly recommended.
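For remote CLI clients, the two modes typically look like the following (a
sketch; the certificate path is a placeholder):

.. code-block:: none

   # Self-signed certificate: skip verification ("insecure" mode)
   openstack --insecure endpoint list

   # CA-signed certificate: verify against the installed CA bundle
   openstack --os-cacert /etc/ssl/certs/cloud-ca.pem endpoint list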
You can update the certificate used by |prod-os| at any time after
installation.

@@ -0,0 +1,114 @@
---------
OpenStack
---------
================
Network planning
================
.. toctree::
:maxdepth: 1
network-planning-ip-support
the-pxe-boot-network
network-planning-the-internal-management-network
network-planning-the-cluster-host-network
the-oam-network
*************
Data networks
*************
.. toctree::
:maxdepth: 1
network-planning-data-networks
physical-network-planning
.. toctree::
:maxdepth: 1
network-planning-l2-access-switches
*******************
Ethernet interfaces
*******************
.. toctree::
:maxdepth: 1
ethernet-interfaces
ethernet-interface-configuration
the-ethernet-mtu
network-planning-shared-vlan-or-multi-netted-ethernet-interfaces
*************************
Virtual or cloud networks
*************************
.. toctree::
:maxdepth: 1
virtual-or-cloud-networks
os-data-networks-overview
project-networks
Project network planning
************************
.. toctree::
:maxdepth: 1
project-network-planning
project-network-ip-address-management
subnet-details
internal-dns-resolution
data-network-planning
vxlans
vlan-aware-vms
VM network interface options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 1
vm-network-interface-options
port-security-extension
pci-passthrough-ethernet-interfaces
sr-iov-ethernet-interfaces
================
Storage planning
================
.. toctree::
:maxdepth: 1
storage-resources
storage-configuration-storage-on-hosts
block-storage-for-virtual-machines
vm-storage-settings-for-migration-resize-or-evacuation
=================
Security planning
=================
.. toctree::
:maxdepth: 1
uefi-secure-boot-planning
https-access-planning
==================================
Installation and resource planning
==================================
.. toctree::
:maxdepth: 1
hardware-requirements
installation-and-resource-planning-verified-commercial-hardware
installation-and-resource-planning-controller-disk-configurations-for-all-in-one-systems

@@ -0,0 +1,13 @@
.. gkz1516633358554
.. _installation-and-resource-planning-controller-disk-configurations-for-all-in-one-systems:
=====================================================
Controller Disk Configurations for All-in-one Systems
=====================================================
Verified |AIO| controller disk configurations are discussed in the |prod-long|
documentation.
See :ref:`Kubernetes Controller Disk Configurations for All-in-one Systems
<controller-disk-configurations-for-all-in-one-systems>` for details.

@@ -0,0 +1,189 @@
.. ikn1516739312384
.. _installation-and-resource-planning-verified-commercial-hardware:
======================================
OpenStack Verified Commercial Hardware
======================================
Verified and approved hardware components for use with |prod-os| are listed
here.
.. _installation-and-resource-planning-verified-commercial-hardware-verified-components:
.. table:: Table 1. Verified Components
:widths: 100, 200
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Component | Approved Hardware |
+==========================================================+=========================================================================================================================================================================================================================================================================================================================================================================================================================================+
| Hardware Platforms | - Hewlett Packard Enterprise |
| | |
| | |
| | - HPE ProLiant DL360p Gen8 Server |
| | |
| | - HPE ProLiant DL360p Gen9 Server |
| | |
| | - HPE ProLiant DL360 Gen10 Server |
| | |
| | - HPE ProLiant DL380p Gen8 Server |
| | |
| | - HPE ProLiant DL380p Gen9 Server |
| | |
| | - HPE ProLiant ML350 Gen10 Server |
| | |
| | - c7000 Enclosure with HPE ProLiant BL460 Gen9 Server |
| | |
| | .. caution:: |
| | LAG support is dependent on the switch cards deployed with the c7000 enclosure. To determine whether LAG can be configured, consult the switch card documentation. |
| | |
| | |
| | - Dell |
| | |
| | |
| | - Dell PowerEdge R430 |
| | |
| | - Dell PowerEdge R630 |
| | |
| | - Dell PowerEdge R640 |
| | |
| | - Dell PowerEdge R720 |
| | |
| | - Dell PowerEdge R730 |
| | |
| | - Dell PowerEdge R740 |
| | |
| | |
| | - Kontron Symkloud MS2920 |
| | |
| | .. note:: |
| | The Kontron platform does not support power ON/OFF or reset through the BMC interface on Cloud Platform. As a result, it is not possible for the system to properly fence a node in the event of a management network isolation event. In order to mitigate this, hosted application auto recovery needs to be disabled. |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Supported Reference Platforms | - Intel Iron Pass |
| | |
| | - Intel Canoe Pass |
| | |
| | - Intel Grizzly Pass |
| | |
| | - Intel Wildcat Pass |
| | |
| | - Intel Wolf Pass |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Disk Controllers | - Dell |
| | |
| | |
| | - PERC H310 Mini |
| | |
| | - PERC H730 Mini |
| | |
| | - PERC H740P |
| | |
| | - PERC H330 |
| | |
| | - PERC HBA330 |
| | |
| | |
| | |
| | - HPE Smart Array |
| | |
| | |
| | - P440ar |
| | |
| | - P420i |
| | |
| | - P408i-a |
| | |
| | - P816i-a |
| | |
| | |
| | - LSI 2308 |
| | |
| | - LSI 3008 |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for PXE Boot, Management, and OAM Networks | - Intel I210 \(Springville\) 1G |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |
| | - Intel 82599 \(Niantic\) 10G |
| | |
| | - Intel X540 10G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10G |
| | |
| | - Intel X722 \(Fortville\) 10G |
| | |
| | - Emulex XE102 10G |
| | |
| | - Broadcom BCM5719 1G |
| | |
| | - Broadcom BCM57810 10G |
| | |
| | - Mellanox MT27710 Family \(ConnectX-4 Lx\) 10G/25G |
| | |
| | - Mellanox MT27700 Family \(ConnectX-4\) 40G |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| NICs Verified for Data Interfaces [#f1]_ | The following NICs are supported: |
| | |
| | - Intel I350 \(Powerville\) 1G |
| | |
| | - Intel 82599 \(Niantic\) 10G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10 G |
| | |
| | - Intel X552 \(Xeon-D\) 10G |
| | |
| | - Mellanox Technologies |
| | |
| | |
| | - MT27710 Family \(ConnectX-4\) 10G/25G |
| | |
| | - MT27700 Family \(ConnectX-4\) 40G |
| | |
| | |
| | |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PCI passthrough or PCI SR-IOV NICs | - Intel 82599 \(Niantic\) 10 G |
| | |
| | - Intel X710/XL710 \(Fortville\) 10G |
| | |
| | - Mellanox Technologies |
| | |
| | |
| | - MT27500 Family \(ConnectX-3\) 10G \(support for PCI passthrough only\) |
| | |
| | .. note:: |
| | Support for Mellanox CX3 is deprecated in Release 5 and scheduled for removal in Release 6 |
| | |
| | - MT27710 Family \(ConnectX-4\) 10G/25G |
| | |
| | - MT27700 Family \(ConnectX-4\) 40G |
| | |
| | |
| | .. note:: |
| | For a Mellanox CX3 using PCI passthrough or a CX4 using PCI passthrough or SR-IOV, SR-IOV must be enabled in the CX3/CX4 firmware. For more information, see `How To Configure SR-IOV for ConnectX-3 with KVM (Ethernet): Enable SR-IOV on the Firmware <https://community.mellanox.com/docs/DOC-2365#jive_content_id_I_Enable_SRIOV_on_the_Firmware>`__. |
| | |
| | |
| | .. note:: |
| | The maximum number of VFs per hosted application instance, across all PCI devices, is 32. |
| | |
| | For example, a hardware encryption hosted application can be launched with virtio interfaces and 32 QAT VFs. However, a hardware encryption hosted application with an SR-IOV network interface \(with 1 VF\) can only be launched with 31 VFs. |
| | |
| | .. note:: |
| | Dual-use configuration \(PCI passthrough or PCI SR-IOV on the same interface\) is supported for Fortville NICs only. |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| PCI SR-IOV Hardware Accelerators | - Intel AV-ICE02 VPN Acceleration Card, based on the Intel Coleto Creek 8925/8950, and C62x device with QuickAssist Technology. |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| GPUs Verified for PCI Passthrough | - NVIDIA Corporation: VGA compatible controller - GM204GL \(Tesla M60 rev a1\) |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Board Management Controllers | - HPE iLO3 |
| | |
| | - HPE iLO4 |
| | |
| | - Quanta |
+----------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. include:: ../../_includes/installation-and-resource-planning-verified-commercial-hardware.rest
.. seealso::
:ref:`Kubernetes Verified Commercial Hardware <verified-commercial-hardware>`

View File

@ -0,0 +1,23 @@
.. kss1491241946903
.. _internal-dns-resolution:
=======================
Internal DNS Resolution
=======================
|prod-os| supports internal DNS resolution for instances. When this feature
is enabled, instances are automatically assigned hostnames, and an internal
DNS server is maintained which associates the hostnames with IP addresses.
The ability to use hostnames is based on the OpenStack DNS integration
capability. For more about this capability, see
`https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html
<https://docs.openstack.org/mitaka/networking-guide/config-dns-int.html>`__.
When internal DNS resolution is enabled on a |prod-os| system, the Neutron
service maintains an internal DNS server with a hostname-IP address pair for
each instance. The hostnames are derived automatically from the names assigned
to the instances when they are launched, providing |PQDNs|. They can be
concatenated with a domain name defined for the Neutron service to form fully
|FQDNs|.
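As an illustrative sketch only \(the network name and domain are assumptions,
and the Neutron DNS integration extension is assumed to be enabled\), the DNS
domain is assigned on the project network and instance names become hostnames:

.. code-block:: none

   # Assign a DNS domain to an existing project network (name is illustrative)
   openstack network set --dns-domain example.org. tenant-net

   # Launch an instance; its name becomes its internal DNS hostname
   openstack server create --flavor m1.small --image guest-image \
     --network tenant-net vm-one

   # Other instances on tenant-net can then resolve vm-one.example.org.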

View File

@ -0,0 +1,17 @@
.. etp1466026368950
.. _network-planning-data-networks:
=============
Data Networks
=============
The physical Ethernet interfaces on |prod-os| nodes can be configured to use
one or more data networks.
For more information, see, |prod-os| Configuration and Management: :ref:`Data
Networks and Data Network Interfaces <data-networks-overview>`.
.. seealso::
:ref:`Physical Network Planning <physical-network-planning>`

View File

@ -0,0 +1,37 @@
.. ekg1551898490562
.. _network-planning-ip-support:
==========
IP Support
==========
|prod-os| supports IPv4 and IPv6 versions for various networks.
All networks must be a single address family, either IPv4 or IPv6, with the
exception of the |PXE| boot network which must always use IPv4. The following
table lists IPv4 and IPv6 support for different networks:
.. _network-planning-ip-support-table-xqy-3cj-4cb:
.. table:: Table 1. IPv4 and IPv6 Support
:widths: auto
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | IPv4 Support | IPv6 Support | Comment |
+====================================================================+==============+==============+==================================================================================================================================================================================================================================================+
| PXE boot | Y | N | If present, the PXE boot network is used for PXE booting of new hosts \(instead of using the internal management network\), and therefore must be untagged. It is limited to IPv4, since the Platform does not support IPv6 booting. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Internal Management | Y | Y | |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| OAM | Y | Y | |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Cluster Host | Y | Y | The cluster host network supports IPv4 or IPv6 addressing. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| VXLAN Data Network | Y | Y | .. note:: |
| | | | Flat and VLAN Data networks are L2 functions, so IPv4 or IPv6 can be used, if required. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Project Networks, Routers, MetaData Server, SNAT, and Floating IPs | Y | Y\* | - DHCP and Routing support for IPv6. |
| | | | |
| | | | - No IPv6 support for access to MetaData Server, SNAT or Floating IPs. |
+--------------------------------------------------------------------+--------------+--------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

View File

@ -0,0 +1,50 @@
.. jow1404333739778
.. _network-planning-l2-access-switches:
============================
OpenStack L2 Access Switches
============================
L2 access switches connect the |prod-os| hosts to the different networks.
Proper configuration of the access ports is necessary to ensure proper traffic
flow.
One or more L2 switches can be used to connect the |prod-os| hosts to the
different networks. When sharing a single L2 switch you must ensure proper
isolation of the network traffic. Here is an example of how to configure a
shared L2 switch:
.. _network-planning-l2-access-switches-ul-obf-dyr-4n:
- one port-based |VLAN| for the internal management network and cluster host
network
- one port-based |VLAN| for the |OAM| network
- one or more sets of |VLANs| for data networks. For example:
- one set of |VLANs| with good |QoS| for bronze projects
- one set of |VLANs| with better |QoS| for silver projects
- one set of |VLANs| with the best |QoS| for gold projects
When using multiple L2 switches, there are several deployment possibilities.
Here are some examples:
.. _network-planning-l2-access-switches-ul-qmd-wyr-4n:
- A single L2 switch for the internal management, cluster host, and |OAM|
networks. Port- or |MAC|-based network isolation is mandatory.
- One or more L2 switches, not necessarily inter-connected, with one L2
switch per data network.
- Redundant L2 switches to support link aggregation, using either a failover
model, or |VPC| for more robust redundancy.
See :ref:`Kubernetes Platform Planning <l2-access-switches>` for
additional considerations related to L2 switches.

View File

@ -0,0 +1,40 @@
.. jow1423173019489
.. _network-planning-shared-vlan-or-multi-netted-ethernet-interfaces:
===================================================
Shared \(VLAN or Multi-Netted\) Ethernet Interfaces
===================================================
The management, cluster host, |OAM|, and physical networks can share Ethernet
or aggregated Ethernet interfaces using |VLAN| tagging or IP Multi-Netting.
The |OAM|, internal management, and cluster host networks can use |VLAN|
tagging or IP Multi-Netting, allowing them to share an Ethernet or aggregated
Ethernet interface with other networks. The one restriction is that if the
internal management network is implemented as a |VLAN|-tagged network, then it
must be on the physical interface used for |PXE| booting.
The following arrangements are possible:
.. _network-planning-shared-vlan-or-multi-netted-ethernet-interfaces-ul-y5k-zg2-zq:
- One interface for the internal management network and internal cluster host
network using multi-netting, another interface for |OAM| \(on which
container workloads are exposed externally\) and one or more additional
interfaces for data networks. This is the default configuration.
- One interface for the internal management network, another interface for
the |OAM| network, a third interface for the cluster host network, and one
or more additional interfaces for data networks.
- One interface for the internal management network, with the |OAM| and
cluster host networks also implemented on the interface using |VLAN|
tagging, and additional interfaces for data networks.
For some typical interface scenarios, see :ref:`Managed Kubernetes Cluster
Hardware Requirements <hardware-requirements>`.
For more information about configuring |VLAN| interfaces, see |node-doc|:
:ref:`Configure VLAN Interfaces Using the CLI
<configuring-vlan-interfaces-using-the-cli>`.

View File

@ -0,0 +1,20 @@
.. nzw1555338241460
.. _network-planning-the-cluster-host-network:
========================
The Cluster Host Network
========================
The cluster host network provides the physical network required for Kubernetes
container networking in support of the containerized OpenStack control plane
traffic.
All nodes in the cluster must be attached to the cluster host network.
In the |prod-os| scenario, this network is considered internal and by default
shares an L2 network / interface with the management network \(although it can be
configured on a separate interface, if required\). External access to the
Containerized OpenStack Service Endpoints is through a deployed nginx ingress
controller using host networking to expose itself on ports 80/443
\(http/https\) on the |OAM| Floating IP.

View File

@ -0,0 +1,46 @@
.. wib1463582694200
.. _network-planning-the-internal-management-network:
===============================
The Internal Management Network
===============================
The internal management network must be implemented as a single, dedicated,
Layer 2 broadcast domain for the exclusive use of each |prod-os| cluster.
Sharing of this network by more than one |prod-os| cluster is not supported.
The internal management network is also used for disk IO traffic to and from
the Ceph storage cluster.
If required, the internal management network can be configured as a
|VLAN|-tagged network. In this case, a separate IPv4 |PXE| boot network must be
implemented as the untagged network on the same physical interface. This
configuration must also be used if the management network must support IPv6.
During the |prod-os| software installation process, several network services
such as |BOOTP|, |DHCP|, and |PXE|, are expected to run over the internal
management network. These services are used to bring up the different hosts to
an operational state. It is therefore mandatory that this network be
operational and available in advance, to ensure a successful installation.
On each host, the internal management network can be implemented using a 1 Gb
or 10 Gb Ethernet port. Requirements for this port are that:
.. _network-planning-the-internal-management-network-ul-uh1-pqs-hp:
- it must be capable of |PXE|-booting
- it can be used by the motherboard as a primary boot device
.. note::
This network is not used with Simplex systems.
.. note::
|OSDs| bind to all addresses but communicate on the same network as the
monitors. Because monitors and |OSDs| must communicate, if the monitors are
on the management network, the |OSDs| source address will also be on mgmt0.
See :ref:`Kubernetes Internal Management Network
<internal-management-network-planning>` for details.

View File

@ -0,0 +1,43 @@
.. wdq1463583173409
.. _os-planning-data-networks-overview:
========
Overview
========
Data networks model the Layer 2 networks to which the data, pci-sriov, and
pci-passthrough interfaces of nodes attach.
A Layer 2 physical or virtual network or set of virtual networks is used to
provide the underlying network connectivity needed to support the application
project networks. Multiple data networks may be configured as required, and
realized over the same or different physical networks. Access to external
networks is typically granted to the **openstack-compute** labeled worker nodes
using the data network. The extent of this connectivity, including access to
the open internet, is application dependent.
Data networks are created at the |prod| level.
.. _data-networks-overview-ul-yj1-dtq-3nb:
.. xbooklink VXLAN Data Networks are specific to |prod-os| application and are described in detail in :ref:`VXLAN Data Networks <vxlan-data-networks>` .
Segmentation ID ranges, for |VLAN| and |VXLAN| data networks, are defined
through OpenStack Neutron commands, see :ref:`Add Segmentation Ranges Using
the CLI <adding-segmentation-ranges-using-the-cli>`.
For details on creating data networks and assigning them to node interfaces,
see the |datanet-doc| documentation:
- :ref:`Add Data Networks Using the CLI <adding-data-networks-using-the-cli>`
- :ref:`Assign a Data Network to an Interface
<assigning-a-data-network-to-an-interface>`
- :ref:`Remove a Data Network Using the CLI
<removing-a-data-network-using-the-cli>`
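As a hedged sketch of the related CLI workflow \(the data network, host, and
interface names are illustrative assumptions\), a data network might be
created and assigned to a compute node data interface as follows:

.. code-block:: none

   # Create a VLAN-type data network
   system datanetwork-add physnet0 vlan

   # Configure a compute node interface as a data interface named data0
   system host-if-modify -n data0 -c data compute-0 enp24s0f0

   # Associate the data interface with the data network
   system interface-datanetwork-assign compute-0 data0 physnet0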
.. only:: partner
.. include:: ../../_includes/os-data-networks-overview.rest

View File

@ -0,0 +1,28 @@
.. osb1466081265288
.. _pci-passthrough-ethernet-interfaces:
===================================
PCI Passthrough Ethernet Interfaces
===================================
A passthrough Ethernet interface is a physical |PCI| Ethernet |NIC| on a
compute node to which a virtual machine is granted direct access.
This minimizes packet processing delays but at the same time demands special
operational considerations.
For all purposes, a |PCI| passthrough interface behaves as if it were
physically attached to the virtual machine. Therefore, any potential throughput
limitations coming from the virtualized environment, such as the ones
introduced by internal copying of data buffers, are eliminated. However, by
bypassing the virtualized environment, the use of |PCI| passthrough Ethernet
devices introduces several restrictions that you must take into consideration.
They include:
.. _pci-passthrough-ethernet-interfaces-ul-mjs-m52-tp:
- no support for |LAG|, |QoS|, |ACL|, or host interface monitoring
- no support for live migration
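A minimal sketch of attaching a passthrough interface to a |VM| follows; the
network, port, flavor, and image names are purely illustrative:

.. code-block:: none

   # Create a Neutron port that requests the entire physical NIC
   openstack port create --network physnet0-net --vnic-type direct-physical pt-port0

   # Boot the VM with the passthrough port attached
   openstack server create --flavor m1.large --image guest-image \
     --nic port-id=<pt-port0-uuid> vm-passthrough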

View File

@ -0,0 +1,10 @@
.. qrb1466026876949
.. _physical-network-planning:
=========================
Physical Network Planning
=========================
The data network is the backing network for the overlay project networks and
therefore has a direct impact on the networking performance of the guest.

View File

@ -0,0 +1,27 @@
.. hjx1519399837056
.. _port-security-extension:
=======================
Port Security Extension
=======================
|prod-os| supports the Neutron port security extension for disabling IP address
filtering at the project network or |VM| port level.
By default, IP address filtering is enabled on all ports and networks, subject
to security group settings. You can override the default IP address filtering
rules that apply to a |VM| by enabling the Neutron port security extension
driver, and then disabling port security on individual project networks or
ports. For example, you can configure a |VM| to allow packet routing by
enabling the port security extension driver, and then disabling port security
on the |VM| port used for routing.
Disabling port security on a network also disables |MAC| address filtering on the
network.
By default, the port security extension driver is disabled.
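Assuming the port security extension driver has been enabled, a minimal sketch
of disabling port security looks like the following; the network and port
names are illustrative:

.. code-block:: none

   # Disable IP and MAC address filtering on an entire project network
   openstack network set --disable-port-security tenant-net

   # Or disable it on a single VM port (for example, a port used for routing);
   # security groups must be removed from the port first
   openstack port set --no-security-group --disable-port-security vm-router-port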
.. only:: partner
.. include:: ../../_includes/port-security-extension.rest

View File

@ -0,0 +1,12 @@
.. ter1493141472996
.. _project-network-ip-address-management:
=====================================
Project Network IP Address Management
=====================================
For |VM| IP address assignment, |prod-os| supports both internal management
using project networks with subnets, and external management \(for example,
static addressing or an external |DHCP| server\) using project networks without
subnets.
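The following sketch contrasts the two approaches; the names and address
range are illustrative assumptions:

.. code-block:: none

   # Internal management: a project network with a Neutron-managed subnet
   openstack network create tenant-net
   openstack subnet create --network tenant-net \
     --subnet-range 192.168.10.0/24 tenant-subnet

   # External management: a project network without a subnet; addresses are
   # assigned statically or by an external DHCP server
   openstack network create tenant-net-ext-managed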

View File

@ -0,0 +1,13 @@
.. nhz1466029435409
.. _project-network-planning:
========================
Project Network Planning
========================
In addition to the standard features for OpenStack project networks, project
network planning on an |prod-os| system can take advantage of extended
capabilities offered by |prod-os|.
.. include:: ../../_includes/project-network-planning.rest

View File

@ -0,0 +1,88 @@
.. jow1404333739174
.. _project-networks:
================
Project Networks
================
Project networks are logical networking entities visible to project users, and
around which working network topologies are built.
Project networks need support from the physical layers to work as intended.
This means that the access L2 switches, data networks, and data interface
definitions on the compute nodes, must all be properly configured. In
particular, careful |prod-os| project network management planning
is required to achieve the proper configuration when using data networks of the
|VLAN| or |VXLAN| type.
For data networks of the |VLAN| type, consider the following guidelines:
.. _project-networks-ul-hqm-n2s-4n:
- All ports on the access L2 switches must be statically configured to
support all the |VLANs| defined on the data networks they provide access
to. The dynamic nature of the cloud might force the set of |VLANs| in use
by a particular L2 switch to change at any moment.
.. only:: partner
.. include:: ../../_includes/project-networks.rest
:start-after: vlan-begin
:end-before: vxlan-begin
- Configuring a project network to have access to external networks \(not
just providing local networking\) requires the following elements:
- A physical router, and the data network's access L2 switch, must be
part of the same Layer-2 network. Because this Layer 2 network uses a
unique |VLAN| ID, this means also that the router's port used in the
connection must be statically configured to support the corresponding
|VLAN| ID.
- The router must be configured to be part of the same IP subnet that the
project network is intending to use.
- When configuring the IP subnet, the project must use the router's port
IP address as its external gateway.
- The project network must have the external flag set. Only the **admin**
user can set this flag when the project network is created, as shown in
the sketch following this list.
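The following is a rough sketch of the admin workflow for an externally
reachable |VLAN| project network; the data network name, segmentation ID,
subnet, and gateway values are assumptions for illustration only:

.. code-block:: none

   # Create the external project network on a VLAN data network (admin only)
   openstack network create --provider-network-type vlan \
     --provider-physical-network physnet0 --provider-segment 100 \
     --external ext-net

   # Create its subnet, using the physical router port as the gateway
   openstack subnet create --network ext-net --subnet-range 10.10.10.0/24 \
     --gateway 10.10.10.1 ext-subnet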
For data networks of the |VXLAN| type, consider the following guidelines:
.. _project-networks-ul-gwl-5fh-hr:
- Layer 3 routers used to interconnect compute nodes must be
multicast-enabled, as required by the |VXLAN| protocol.
.. include:: ../../_includes/project-networks.rest
:start-after: vxlan-begin
- To support |IGMP| and |MLD| snooping, Layer 3 routers must be configured
for |IGMP| and |MLD| querying.
- To accommodate |VXLAN| encapsulation, the |MTU| values for Layer 2 switches
and compute node data interfaces must allow for additional headers. For
more information, see :ref:`The Ethernet MTU <the-ethernet-mtu>`.
- To participate in a |VXLAN| network, the data interfaces on the compute
nodes must be configured with IP addresses, and with route table entries
for the destination subnets or the local gateway. For more information, see
:ref:`Manage Data Interface Static IP Addresses Using the CLI
<managing-data-interface-static-ip-addresses-using-the-cli>`, and :ref:`Add
and Maintain Routes for a VXLAN Network
<adding-and-maintaining-routes-for-a-vxlan-network>`.
In some circumstances, project networks can be configured to use |VLAN|
Transparent mode, in which |VLAN| tagged packets from the guest are
encapsulated within a data network segment \(|VLAN|\) without removing or
modifying the guest |VLAN| tag.
Alternatively, guest |VLAN|-tagged traffic can be implemented using |prod-os|
support for OpenStack |VLAN| Aware |VMs|.
For more information about |VLAN| Aware |VMs|, see :ref:`VLAN Aware VMs
<vlan-aware-vms>` or consult the public OpenStack documentation at,
`http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html
<http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html>`__.

View File

@ -0,0 +1,29 @@
.. aqg1466081208315
.. _sr-iov-ethernet-interfaces:
==========================
SR-IOV Ethernet Interfaces
==========================
An |SRIOV| Ethernet interface is a physical |PCI| Ethernet |NIC| that
implements hardware-based virtualization mechanisms to expose multiple virtual
network interfaces that can be used by one or more virtual machines
simultaneously.
The PCI-SIG |SRIOV| specification defines a standardized mechanism to create
individual virtual Ethernet devices from a single physical Ethernet interface.
For each exposed virtual Ethernet device, formally referred to as a |VF|, the
|SRIOV| interface provides separate management memory space, work queues,
interrupt resources, and DMA streams, while utilizing common resources behind
the host interface. Each |VF| therefore has direct access to the hardware and can
be considered to be an independent Ethernet interface.
The following limitations apply to |SRIOV| interfaces:
.. _sr-iov-ethernet-interfaces-ul-mjs-m52-tp:
- no support for |LAG|, |QoS|, |ACL|, or host interface monitoring
- no support for live migration

View File

@ -0,0 +1,95 @@
.. bfh1466190844731
.. _storage-configuration-storage-by-host-type:
======================
Storage Configurations
======================
.. contents:: |minitoc|
:local:
:depth: 1
---------------------------
Storage on controller hosts
---------------------------
.. _storage-configuration-storage-on-controller-hosts:
The controllers provide storage for the |prod-os|'s OpenStack Controller
Services through a combination of local container ephemeral disk, |PVCs| backed
by Ceph and a containerized HA mariadb deployment.
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, they also provide persistent block storage for
persistent |VM| volumes \(Cinder\), storage for |VM| images \(Glance\), and
storage for |VM| remote ephemeral volumes \(Nova\). On All-in-One Simplex or
Duplex systems, the controllers also provide nova-local storage for ephemeral
|VM| volumes.
On systems configured for controller storage, the master/controller's root disk
is reserved for system use, and additional disks are required to support the
small Ceph cluster. On an All-in-One Simplex or Duplex system, you have the
option to partition the root disk for the nova-local storage \(to realize a
two-disk controller\) or use a third disk for nova-local storage.
.. _storage-configuration-storage-on-controller-hosts-section-N10031-N10024-N10001:
**************************************
Underlying platform filesystem storage
**************************************
To pass the disk-space checks, any replacement disks must be installed before
the allotments are changed.
.. _storage-configuration-storage-on-controller-hosts-section-N1010F-N1001F-N10001:
***************************************
Glance, Cinder, and remote Nova storage
***************************************
On systems configured for controller storage with a small Ceph cluster on the
master/controller nodes, this small Ceph cluster on the controller provides
Glance image storage, Cinder block storage, Cinder backup storage, and Nova
remote ephemeral block storage. For more information, see :ref:`Block Storage
for Virtual Machines <block-storage-for-virtual-machines>`.
.. _storage-configuration-storage-on-controller-hosts-section-N101BB-N10029-N10001:
******************
Nova-local storage
******************
Controllers on |prod-os| Simplex or Duplex systems incorporate the Compute
function, and therefore provide **nova-local** storage for ephemeral disks. On
other systems, **nova-local** storage is provided by compute hosts. For more
information about this type of storage, see :ref:`Storage on Compute Hosts
<storage-on-compute-hosts>` and :ref:`Block Storage for Virtual Machines
<block-storage-for-virtual-machines>`.
You can add a physical volume using the system :command:`host-pv-add` command.
.. xbooklink For more information, see Cloud Platform Storage Configuration: :ref:`Adding a Physical Volume <adding-a-physical-volume>`.
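As a hedged illustration \(the host name and disk selection are assumptions,
and the nova-local volume group is assumed to already exist\), a physical
volume might be added as follows:

.. code-block:: none

   # List available disks on the compute host and note the UUID of the spare disk
   system host-disk-list compute-0

   # Add the disk as a physical volume in the nova-local local volume group
   system host-pv-add compute-0 nova-local <disk-uuid>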
.. _storage-on-storage-hosts:
------------------------
Storage on storage hosts
------------------------
|prod-os| creates default Ceph storage pools for Glance images, Cinder volumes,
Cinder backups, and Nova ephemeral data.
.. xbooklink For more information, see the :ref:`Platform Storage Configuration <storage-configuration-storage-resources>` guide for details on configuring the internal Ceph cluster on either controller or storage hosts.
.. _storage-on-compute-hosts:
------------------------
Storage on compute hosts
------------------------
Compute-labelled worker hosts can provide ephemeral storage for |VM| disks.
.. note::
On All-in-One Simplex or Duplex systems, compute storage is provided using
resources on the combined host.

View File

@ -0,0 +1,124 @@
.. uvy1462906813562
.. _storage-resources:
=================
Storage Resources
=================
|prod-os| uses storage resources on the controller-labelled master hosts, the
compute-labeled worker hosts, and on storage hosts if they are present.
The storage configuration for |prod-os| is very flexible. The specific
configuration depends on the type of system installed, and the requirements of
the system.
.. _storage-resources-section-N1005C-N10029-N10001:
-----------------------------
Storage Services and Backends
-----------------------------
The figure below shows the storage options and backends for |prod-os|.
.. figure:: ../figures/zpk1486667625575.png
|prod-os| Storage Options and Backends
Each service can use different storage backends.
**Ceph**
This provides storage managed by the internal Ceph cluster. Depending on
the deployment configuration, the internal Ceph cluster is provided through
|OSDs| on OpenStack master / controller hosts or storage hosts.
.. _storage-resources-table-ajr-tlf-zbb:
.. table:: Table 1. Available Backends for Storage Services
:widths: auto
+---------+---------------------------------------------------------------+---------------------------------------------------------------+
| Service | Description | Available Backends |
+=========+===============================================================+===============================================================+
| Cinder | - persistent block storage | - Internal Ceph on master/controller hosts or storage hosts |
| | | |
| | - used for VM boot disk volumes | |
| | | |
|         | - used as additional disk volumes for VMs booted from images  |                                                               |
| | | |
| | - snapshots and persistent backups for volumes | |
+---------+---------------------------------------------------------------+---------------------------------------------------------------+
| Glance | - image file storage | - Internal Ceph on master/controller hosts or storage hosts |
| | | |
| | - used for VM boot disk images | |
+---------+---------------------------------------------------------------+---------------------------------------------------------------+
| Nova | - ephemeral object storage | - CoW-Image on Compute Nodes |
| | | |
| | - used for VM ephemeral disks | - Internal Ceph on master/controller hosts or storage hosts |
+---------+---------------------------------------------------------------+---------------------------------------------------------------+
.. _storage-resources-section-N10035-N10028-N10001:
--------------------
Uses of Disk Storage
--------------------
**Containerized OpenStack System**
The |prod-os| system containers use a combination of local container
ephemeral disk, |PVCs| backed by Ceph and a containerized HA mariadb
deployment for configuration and database files.
**VM Ephemeral Boot Disk Volumes \(that is, when booting from an image\)**
Virtual machines use local ephemeral disk storage on computes for Nova
ephemeral local boot disk volumes built from images. These virtual disk
volumes are created when the |VM| instances are launched. These virtual
volumes are destroyed when the |VM| instances are terminated.
**VM Persistent Boot Disk Volumes \(that is, when booting from Cinder Volumes\)**
Virtual machines can optionally use the Ceph-backed storage cluster for
backing Cinder boot disk volumes. This provides permanent storage for the
|VM| root disks, facilitating faster machine startup, but requiring more
storage resources. For |VMs| booted from images it provides additional
Cinder disk volumes for persistent storage.
**VM Additional Disks**
Virtual machines can optionally use local ephemeral disk storage on
computes for additional virtual disks, such as swap disks. These disks are
ephemeral; they are created when a |VM| instance is launched, and destroyed
when the |VM| instance is terminated.
**VM Block Storage backups**
Cinder volumes can be backed up for long term storage in a separate Ceph
pool.
.. _storage-resources-section-N100B3-N10028-N10001:
-----------------
Storage Locations
-----------------
In addition to the storage used by |prod-os| system containers, the following
storage locations may be used.
**Controller Hosts**
In the Standard with Controller Storage deployment option, one or more
disks can be used on controller hosts to provide a small Ceph-based cluster
for providing the storage backend for Cinder volumes, Cinder backups,
Glance images, and remote Nova ephemeral volumes.
**Compute Hosts**
One or more disks can be used on compute hosts to provide local Nova
ephemeral storage for virtual machines.
**Combined Controller-Compute Hosts**
One or more disks can be used on combined hosts in Simplex or Duplex
systems to provide local Nova Ephemeral Storage for virtual machines and a
small Ceph-backed storage cluster for backing Cinder, Glance, and Remote
Nova Ephemeral storage.
**Storage Hosts**
One or more disks are used on storage hosts to provide a large scale
Ceph-backed storage cluster for backing Cinder, Glance, and Remote Nova
Ephemeral storage. Storage hosts are used only on |prod-os| with Dedicated
Storage systems.

View File

@ -0,0 +1,51 @@
.. psa1412702861873
.. _subnet-details:
==============
Subnet Details
==============
You can adjust several options for project network subnets, including |DHCP|,
IP allocation pools, DNS server addresses, and host routes.
These options are available on the **Subnet Details** tab when you create or
edit a project network subnet.
.. note::
IP addresses on project network subnets are always managed internally. For
external address management, use project networks without subnets. For more
information, see :ref:`Project Network IP Address Management
<project-network-ip-address-management>`.
When creating a new IP subnet for a project network, you can specify the
following attributes:
**Enable DHCP**
When this attribute is enabled, a virtual |DHCP| server becomes available
when the subnet is created. It uses the \(MAC address, IP address\) pairs
registered in the Neutron database to offer IP addresses in response to
|DHCP| discovery requests broadcast on the subnet. |DHCP| discovery
requests from unknown |MAC| addresses are ignored.
When the |DHCP| attribute is disabled, all |DHCP| and DNS services, and all
static routes, if any, must be provisioned externally.
**Allocation Pools**
This a list attribute where each element in the list specifies an IP
address range, or address pool, in the subnet address space that can be
used for dynamic offering of IP addresses. By default, there is a single
allocation pool comprised of the entire subnet's IP address space, with the
exception of the default gateway's IP address.
An external, non-Neutron, |DHCP| server can be attached to a subnet to
support specific deployment needs as required. For example, it can be
configured to offer IP addresses on ranges outside the Neutron allocation
pools to service physical devices attached to the project network, such as
testing equipment and servers.
**DNS Name Servers**
You can specify the IP addresses of DNS servers to be used by hosts on the subnet.
**Host Routes**
You can use this attribute to specify static routes to be offered to hosts on
the subnet, identifying the routers to use for particular destinations.
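Equivalent settings can also be supplied from the CLI when the subnet is
created. The following sketch shows one possible combination; all names,
ranges, and addresses are illustrative assumptions:

.. code-block:: none

   openstack subnet create --network tenant-net \
     --subnet-range 192.168.10.0/24 --dhcp \
     --allocation-pool start=192.168.10.100,end=192.168.10.199 \
     --dns-nameserver 8.8.8.8 \
     --host-route destination=10.0.0.0/24,gateway=192.168.10.254 \
     tenant-subnet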

View File

@ -0,0 +1,106 @@
.. jow1404333732592
.. _os-planning-the-ethernet-mtu:
================
The Ethernet MTU
================
The |MTU| of an Ethernet frame is a configurable attribute in |prod-os|.
Changing its default size must be done in coordination with other network
elements on the Ethernet link.
In the context of |prod-os|, the |MTU| refers to the largest possible payload
on the Ethernet frame on a particular network link. The payload is enclosed by
the Ethernet header \(14 bytes\) and the CRC \(4 bytes\), resulting in an
Ethernet frame that is 18 bytes longer than the |MTU| size.
The original IEEE 802.3 specification defines a valid standard Ethernet frame
size to be from 64 to 1518 bytes, accommodating payloads ranging in size from
46 to 1500 bytes. Ethernet frames with a payload larger than 1500 bytes are
considered to be jumbo frames.
For a |VLAN| network, the frame also includes a 4-byte |VLAN| ID header,
resulting in a frame size 22 bytes longer than the |MTU| size.
For a |VXLAN| network, the frame is either 54 or 74 bytes longer, depending on
whether IPv4 or IPv6 protocol is used. This is because, in addition to the
Ethernet header and CRC, the payload is enclosed by an IP header \(20 bytes for
IPv4 or 40 bytes for IPv6\), a |UDP| header \(8 bytes\), and a |VXLAN| header
\(8 bytes\).
In |prod-os|, you can configure the |MTU| size for the following interfaces and
networks:
.. _the-ethernet-mtu-ul-qmn-yvn-m4:
- The management and |OAM| network interfaces on the controller. The |MTU|
size for these interfaces is set during initial installation; for more
information, see the |prod-os| installation guide for your system. To make
changes after installation, see |sysconf-doc|: :ref:`Change the MTU of an
OAM Interface <changing-the-mtu-of-an-oam-interface-using-the-cli>`.
- Data interfaces on compute nodes. For more information, see :ref:`Change
the MTU of a Data Interface <changing-the-mtu-of-a-data-interface>`.
- Data networks. For more information, see |datanet-doc|: :ref:`Data Networks
<data-network-management-data-networks>`.
In all cases, the default |MTU| size is 1500. The minimum value is 576, and the
maximum is 9216.
.. note::
You cannot change the |MTU| for a cluster-host interface. The default |MTU|
of 1500 must always be used.
Because data interfaces are defined over physical interfaces connecting to data
networks, it is important that you consider the implications of modifying the
default |MTU| size:
.. _the-ethernet-mtu-ul-hsq-2f4-m4:
- The |MTU| sizes for a data interface and the corresponding Ethernet
interface on the edge router or switch must be compatible. You must ensure
that each side of the link is configured to accept the maximum frame size
that can be delivered from the other side. For example, if the data
interface is configured with an |MTU| size of 9216 bytes, the corresponding
switch interface must be configured to accept a maximum frame size of 9238
bytes, assuming a |VLAN| tag is present.
The way switch interfaces are configured varies from one switch
manufacturer to another. In some cases you configure the |MTU| size
directly, while in some others you configure the maximum Ethernet frame
size instead. In the latter case, it is often unclear whether the frame
size includes |VLAN| headers or not. In any case, you must ensure that both
sides are configured to accept the expected maximum frame sizes.
- For a |VXLAN| network, the additional IP, |UDP|, and |VXLAN| headers are
invisible to the data interface, which expects a frame only 18 bytes larger
than the |MTU|. To accommodate the larger frames on a |VXLAN| network, you
must specify a larger nominal |MTU| on the data interface. For simplicity,
and to avoid issues with stacked |VLAN| tagging, some third party vendors
recommend rounding up by an additional 100 bytes for calculation purposes.
For example, to attach to a |VXLAN| data network with an |MTU| of 1500, a
data interface with an |MTU| of 1600 is recommended.
- A data network can only be associated with a compute node data interface
with an |MTU| of equal or greater value.
- The |MTU| size of a compute node data interface cannot be modified to be
less than the |MTU| size of any of its associated data networks.
- The |MTU| size of a data network is automatically propagated to new project
networks. Changes to the data network |MTU| are *not* propagated to
existing project networks.
- The Neutron L3 and |DHCP| agents automatically propagate the |MTU| size of
their networks to their Linux network interfaces.
- The Neutron |DHCP| agent makes the option interface-mtu available to any
|DHCP| client request from a virtual machine. The request response from the
server is the current interface's |MTU| size, which can then be used by the
client to adjust its own interface |MTU| size.
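A brief sketch of adjusting the |MTU| of a compute node data interface from
the CLI follows; the host name, interface name, and |MTU| value are
assumptions for illustration:

.. code-block:: none

   # Lock the host before modifying interface attributes
   system host-lock compute-0

   # Set a jumbo-frame MTU on the data interface
   system host-if-modify -m 9216 compute-0 data0

   # Unlock the host to apply the change
   system host-unlock compute-0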
.. .. only:: partner
.. include:: ../../_includes/the-ethernet-mtu.rest

View File

@ -0,0 +1,17 @@
.. znn1463582982634
.. _the-oam-network:
===============
The OAM Network
===============
This network provides external access to the OpenStack services' external API
endpoints and to the OpenStack Horizon web interface.
.. note::
You can enable secure HTTPS connectivity on the |OAM| network.
See :ref:`OAM Network Planning <oam-network-planning>` for additional services
that need to be available to the underlying |prod| through the |OAM| Network.

View File

@ -0,0 +1,24 @@
.. xms1466026140926
.. _the-pxe-boot-network:
====================
The PXE Boot Network
====================
You can set up a |PXE| Boot network for booting all nodes to allow a
non-standard management network configuration.
By default, the internal management network is used for |PXE| booting of new
hosts, and the |PXE| boot network is not required. However, there are scenarios where the
internal management network cannot be used for |PXE| booting of new hosts. For
example, if the internal management network needs to be on a |VLAN|-tagged
network for deployment reasons, or if it must support IPv6, you must configure
the optional untagged |PXE| boot network for |PXE| booting of new hosts using
IPv4.
.. note::
|prod| does not support IPv6 |PXE| Booting.
See :ref:`The PXE Boot Network <network-planning-the-pxe-boot-network>` for
details.

View File

@ -0,0 +1,12 @@
.. xft1580509778612
.. _uefi-secure-boot-planning:
=========================
UEFI Secure Boot Planning
=========================
You may want to plan for utilizing the supported |UEFI| secure boot feature.
See :ref:`Kubernetes UEFI Secure Boot Planning
<security-planning-uefi-secure-boot-planning>` for details.

View File

@ -0,0 +1,28 @@
.. yfz1466026434733
.. _virtual-or-cloud-networks:
=========================
Virtual or Cloud Networks
=========================
In addition to the physical networks used to connect the |prod-os| hosts,
|prod-os| uses virtual networks to support |VMs|.
Virtual networks, which include data networks and project networks, are defined
and implemented internally. They are connected to system hosts and to the
outside world using data \(physical\) networks attached to data interfaces on
compute nodes.
Each physical network supports one or more data networks, which may be implemented
as a flat, |VLAN|, or |VXLAN| network. The data networks in turn support
project networks, which are allocated for use by different projects and their
|VMs|, and which can be isolated from one another.
.. seealso::
:ref:`Overview <data-networks-overview>`
:ref:`Project Networks <project-networks>`
:ref:`Project Network Planning <project-network-planning>`

View File

@ -0,0 +1,19 @@
.. psa1428328539397
.. _vlan-aware-vms:
==============
VLAN Aware VMs
==============
|prod-os| supports OpenStack |VLAN| Aware |VMs| \(also known as port
trunking\), which adds |VLAN| support for |VM| interfaces.
For more information about |VLAN| Aware |VMs|, consult the public OpenStack
documentation at,
`http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html
<http://specs.openstack.org/openstack/neutron-specs/specs/newton/vlan-aware-vms.html>`__.
Alternatively, project networks can be configured to use |VLAN| Transparent
mode, in which |VLAN| tagged guest packets are encapsulated within a data
network segment without removing or modifying the guest |VLAN| tag.

View File

@ -0,0 +1,52 @@
.. jow1411482049845
.. _vm-network-interface-options:
============================
VM Network Interface Options
============================
|prod-os| supports a variety of standard and performance-optimized network
interface drivers in addition to the standard OpenStack choices.
.. _vm-network-interface-options-ul-mgc-xnp-nn:
- Unmodified guests can use Linux networking and virtio drivers. This
provides a mechanism to bring existing applications into the production
environment immediately.
.. only:: partner
.. include:: ../../_includes/vm-network-interface-options.rest
:start-after: unmodified-guests-virtio-begin
:end-before: highest-performance-begin
.. note::
The virtio devices on a |VM| cannot use vhost-user for enhanced
performance if any of the following is true:
- The |VM| is not backed by huge pages.
- The |VM| is backed by 4 KiB memory pages rather than huge pages.
- The |VM| is live-migrated from an older platform that does not
support vhost-user.
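Because vhost-user requires the |VM| to be backed by huge pages, a flavor
used for such guests typically carries the standard Nova extra spec shown in
this sketch; the flavor name is an illustrative assumption:

.. code-block:: none

   openstack flavor set m1.vm-dpdk --property hw:mem_page_size=large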
.. only:: partner
.. include:: ../../_includes/vm-network-interface-options.rest
:start-after: highest-performance-begin
.. xbooklink For more information about |AVP| drivers, see OpenStack VNF Integration: :ref:`Accelerated Virtual Interfaces <accelerated-virtual-interfaces>`.
.. seealso::
:ref:`Port Security Extension <port-security-extension>`
:ref:`PCI Passthrough Ethernet Interfaces
<pci-passthrough-ethernet-interfaces>`
:ref:`SR-IOV Ethernet Interfaces <sr-iov-ethernet-interfaces>`
.. xpartnerlink :ref:`MAC Address Filtering on Virtual Interfaces
<mac-address-filtering-on-virtual-interfaces>`

View File

@ -0,0 +1,58 @@
.. ksh1464711502906
.. _vm-storage-settings-for-migration-resize-or-evacuation:
========================================================
VM Storage Settings for Migration, Resize, or Evacuation
========================================================
The migration, resize, or evacuation behavior for an instance depends on the
type of ephemeral storage used.
.. note::
Live migration behavior can also be affected by flavor extra
specifications, image metadata, or instance metadata.
The following table summarizes the boot and local storage configurations needed
to support various behaviors.
.. _vm-storage-settings-for-migration-resize-or-evacuation-table-wmf-qdh-v5:
.. table::
:widths: auto
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| Instance Boot Type and Ephemeral and Swap Disks from flavor | Local Storage Backing | Live Migration with Block Migration | Live Migration w/o Block Migration | Cold Migration | Local Disk Resize | Evacuation |
+============================================================================+=======================+=====================================+====================================+================+===================+==========================+
| From Cinder Volume \(no local disks\) | N/A | N | Y | Y | N/A | Y |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Cinder Volume \(w/ remote Ephemeral and/or Swap\) | N/A | N | Y | Y | N/A | Y |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Cinder Volume \(w/ local Ephemeral and/or Swap\) | CoW | Y | Y \(CLI only\) | Y | Y | Y |
| | | | | | | |
| | | | | | | Ephemeral/Swap data loss |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Glance Image \(all flavor disks are local\) | CoW | Y | Y \(CLI Only\) | Y | Y | Y |
| | | | | | | |
| | | | | | | Local disk data loss |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
| From Glance Image \(all flavor disks are local + attached Cinder Volumes\) | CoW | Y | Y \(CLI only\) | Y | Y | Y |
| | | | | | | |
| | | | | | | Local disk data loss |
+----------------------------------------------------------------------------+-----------------------+-------------------------------------+------------------------------------+----------------+-------------------+--------------------------+
In addition to the behavior summarized in the table, system-initiated cold
migration \(for example, when locking a host\) and evacuation restrictions may
apply if a |VM| with a large root disk size exists on the host. For a Local
|CoW| Image Backed \(local\_image\) storage type, the VIM can cold migrate or
evacuate |VMs| with disk sizes of up to 60 GB.
.. note::
The criteria for live migration are independent of disk size.
.. note::
The **Local Storage Backing** is a consideration only for instances that
use local ephemeral or swap disks.
The boot configuration for an instance is determined by the **Instance Boot
Source** selected at launch.
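For reference, the two boot sources in the table map to launch commands
roughly like the following hedged sketch; the flavor, image, volume, and
network names are assumptions:

.. code-block:: none

   # Boot from a Glance image (local ephemeral root disk)
   openstack server create --flavor m1.small --image guest-image \
     --network tenant-net vm-from-image

   # Boot from a Cinder volume (persistent root disk, no local root disk)
   openstack server create --flavor m1.small --volume vol-boot \
     --network tenant-net vm-from-volume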

View File

@ -0,0 +1,18 @@
.. ovi1474997555122
.. _vxlans:
======
VXLANs
======
You can use |VXLANs| to connect |VM| instances across non-contiguous Layer 2
segments \(that is, Layer 2 segments connected by one or more Layer 3
routers\).
A |VXLAN| is a Layer 2 overlay network scheme on a Layer 3 network
infrastructure. Packets originating from |VMs| and destined for other |VMs| are
encapsulated with IP, |UDP|, and |VXLAN| headers and sent as Layer 3 packets.
The IP addresses of the source and destination compute nodes are included in
the headers.
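A rough sketch of the related admin steps follows; all names and ID ranges
are illustrative assumptions, and additional |VXLAN| attributes \(such as a
multicast group\) may be required by the platform:

.. code-block:: none

   # Create a VXLAN-type data network at the platform level
   system datanetwork-add physnet-vx vxlan

   # Define a shared VXLAN segmentation ID range for projects to draw from
   openstack network segment range create --network-type vxlan \
     --minimum 10000 --maximum 10999 --shared vxlan-range-1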

7
doc/source/shared/abbrevs.txt Normal file → Executable file
View File

@ -22,7 +22,7 @@
.. |CSK| replace:: :abbr:`CSK (Code Signing Key)`
.. |CSKs| replace:: :abbr:`CSKs (Code Signing Keys)`
.. |CVE| replace:: :abbr:`CVE (Common Vulnerabilities and Exposures)`
.. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protoco)`
.. |DHCP| replace:: :abbr:`DHCP (Dynamic Host Configuration Protocol)`
.. |DPDK| replace:: :abbr:`DPDK (Data Plane Development Kit)`
.. |DRBD| replace:: :abbr:`DRBD (Distributed Replicated Block Device)`
.. |DSCP| replace:: :abbr:`DSCP (Differentiated Services Code Point)`
@ -30,6 +30,7 @@
.. |FEC| replace:: :abbr:`FEC (Forward Error Correction)`
.. |FPGA| replace:: :abbr:`FPGA (Field Programmable Gate Array)`
.. |FQDN| replace:: :abbr:`FQDN (Fully Qualified Domain Name)`
.. |FQDNs| replace:: :abbr:`FQDNs (Fully Qualified Domain Names)`
.. |GNP| replace:: :abbr:`GNP (Global Network Policy)`
.. |IGMP| replace:: :abbr:`IGMP (Internet Group Management Protocol)`
.. |IoT| replace:: :abbr:`IoT (Internet of Things)`
@ -59,6 +60,8 @@
.. |PDU| replace:: :abbr:`PDU (Packet Data Unit)`
.. |PF| replace:: :abbr:`PF (Physical Function)`
.. |PHB| replace:: :abbr:`PHB (Per-Hop Behavior)`
.. |PQDN| replace:: :abbr:`PQDN (Partially Qualified Domain Name)`
.. |PQDNs| replace:: :abbr:`PQDNs (Partially Qualified Domain Names)`
.. |PTP| replace:: :abbr:`PTP (Precision Time Protocol)`
.. |PVC| replace:: :abbr:`PVC (Persistent Volume Claim)`
.. |PVCs| replace:: :abbr:`PVCs (Persistent Volume Claims)`
@ -93,6 +96,8 @@
.. |VMs| replace:: :abbr:`VMs (Virtual Machines)`
.. |VNC| replace:: :abbr:`VNC (Virtual Network Computing)`
.. |VPC| replace:: :abbr:`VPC (Virtual Port Channel)`
.. |VNI| replace:: :abbr:`VNI (VXLAN Network Interface)`
.. |VNIs| replace:: :abbr:`VNIs (VXLAN Network Interfaces)`
.. |VXLAN| replace:: :abbr:`VXLAN (Virtual eXtensible Local Area Network)`
.. |VXLANs| replace:: :abbr:`VXLANs (Virtual eXtensible Local Area Networks)`
.. |XML| replace:: :abbr:`XML (eXtensible Markup Language)`