Merge "[networking] Replace "tenant" with "project""

This commit is contained in:
Jenkins 2016-11-07 14:48:47 +00:00 committed by Gerrit Code Review
commit 3a6ccc2fd1
10 changed files with 60 additions and 59 deletions


@ -9,7 +9,7 @@ for controlling the allocation of addresses to subnets, address scopes show
where addresses can be routed between networks, preventing the use of
overlapping addresses in any two subnets. Because all addresses allocated in
the address scope do not overlap, neutron routers do not NAT between your
-tenants' network and your external network. As long as the addresses within
+projects' network and your external network. As long as the addresses within
an address scope match, the Networking service performs simple routing
between networks.
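The no-overlap guarantee above can be pictured with a quick check (the prefixes are placeholders, not addresses from this guide):

```python
import ipaddress

# Hypothetical subnets allocated from one address scope: allocations
# within a scope never overlap, so routers can forward between them
# without NAT.
subnet_a = ipaddress.ip_network("203.0.113.0/26")
subnet_b = ipaddress.ip_network("203.0.113.64/26")

assert not subnet_a.overlaps(subnet_b)  # distinct allocations, plain routing works
```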
@ -286,7 +286,7 @@ route straight to an external network without NAT.
| | 917f9360-a840-45c1-83a1-2a093bd7b376 |
+-------------------------+--------------------------------------+
-#. Connect a router to each of the tenant subnets that have been created, for
+#. Connect a router to each of the project subnets that have been created, for
example, using a router called ``router1``:
.. code-block:: console


@ -107,7 +107,7 @@ users can get their auto-allocated network topology as follows:
+-----------+--------------------------------------+
Operators (and users with admin role) can get the auto-allocated
-topology for a tenant by specifying the tenant ID:
+topology for a project by specifying the project ID:
.. code-block:: console


@ -23,7 +23,7 @@ Not in scope
Things not in the scope of this document include:
-* Single stack IPv6 tenant networking
+* Single stack IPv6 project networking
* OpenStack control communication between servers and services over an IPv6
network.
* Connection to the OpenStack APIs via an IPv6 transport network
@ -174,8 +174,8 @@ ipv6_ra_mode and ipv6_address_mode combinations
-
- *Invalid combination.*
-Tenant network considerations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Project network considerations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dataplane
---------
@ -198,7 +198,7 @@ There are four methods for a subnet to get its ``cidr`` in OpenStack:
#. Referencing a subnet pool during subnet creation
In the future, different techniques could be used to allocate subnets
-to tenants:
+to projects:
#. Using a PD client to request a prefix for a subnet from a PD server
#. Use of an external IPAM module to allocate the subnet
@ -231,7 +231,7 @@ relay and DHCPv6 address and optional information for their networks
or this can be delegated to external routers and services based on the
drivers that are in use. There are two neutron subnet attributes -
``ipv6_ra_mode`` and ``ipv6_address_mode`` that determine how IPv6
-addressing and network information is provided to tenant instances:
+addressing and network information is provided to project instances:
* ``ipv6_ra_mode``: Determines who sends RA.
* ``ipv6_address_mode``: Determines how instances obtain IPv6 address,
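When the address mode relies on SLAAC, an instance derives its address from the advertised prefix and its MAC address via the modified EUI-64 procedure (RFC 4291). A rough Python sketch, with an illustrative prefix and MAC (not values from this guide):

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive a SLAAC-style IPv6 address from a /64 prefix and a MAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                             # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF:FE in the middle
    net = ipaddress.ip_network(prefix)
    return net.network_address + int.from_bytes(bytes(iid), "big")

print(eui64_address("2001:db8::/64", "52:54:00:12:34:56"))
# → 2001:db8::5054:ff:fe12:3456
```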
@ -326,16 +326,16 @@ separate IPv4 internal router interface for the IPv4 subnet. On the other
hand, external router ports are allowed to have a dual-stack configuration
with both an IPv4 and an IPv6 address assigned to them.
-Neutron tenant networks that are assigned Global Unicast Address (GUA) prefixes
-and addresses don't require NAT on the neutron router external gateway port to
-access the outside world. As a consequence of the lack of NAT the external
-router port doesn't require a GUA to send and receive to the external networks.
-This implies a GUA IPv6 subnet prefix is not necessarily needed for the neutron
-external network. By default, an IPv6 LLA associated with the external gateway
-port can be used for routing purposes. To handle this scenario, the
-implementation of router-gateway-set API in neutron has been modified so
-that an IPv6 subnet is not required for the external network that is
-associated with the neutron router. The LLA address of the upstream router
+Neutron project networks that are assigned Global Unicast Address (GUA)
+prefixes and addresses don't require NAT on the neutron router external gateway
+port to access the outside world. As a consequence of the lack of NAT the
+external router port doesn't require a GUA to send and receive to the external
+networks. This implies a GUA IPv6 subnet prefix is not necessarily needed for
+the neutron external network. By default, an IPv6 LLA associated with the
+external gateway port can be used for routing purposes. To handle this
+scenario, the implementation of router-gateway-set API in neutron has been
+modified so that an IPv6 subnet is not required for the external network that
+is associated with the neutron router. The LLA address of the upstream router
can be learned in two ways.
#. In the absence of an upstream RA support, ``ipv6_gateway`` flag can be set
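The ``ipv6_gateway`` flag mentioned above is an L3 agent option; a minimal sketch of such a configuration (the address is a placeholder for the upstream router's LLA on the external interface):

```ini
# /etc/neutron/l3_agent.ini (illustrative)
[DEFAULT]
# Link-local address of the upstream router on the external network.
ipv6_gateway = fe80::1
```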
@ -353,7 +353,7 @@ gateway for the subnet.
.. note::
-It should be possible for tenants to communicate with each other
+It should be possible for projects to communicate with each other
on an isolated network (a network without a router port) using LLA
with little to no participation on the part of OpenStack. The authors
of this section have not proven that to be true for all scenarios.
@ -409,8 +409,8 @@ NAT & Floating IPs
At the current time OpenStack Networking does not provide any facility
to support any flavor of NAT with IPv6. Unlike IPv4 there is no
current embedded support for floating IPs with IPv6. It is assumed
-that IPv6 addressing amongst the tenants uses GUAs with no
-overlap across the tenants.
+that IPv6 addressing amongst the projects uses GUAs with no
+overlap across the projects.
Security considerations
~~~~~~~~~~~~~~~~~~~~~~~
@ -437,7 +437,7 @@ OpenStack control & management network considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As of the Kilo release, considerable effort has gone into ensuring
-the tenant network can handle dual stack IPv6 and IPv4 transport
+the project network can handle dual stack IPv6 and IPv4 transport
across the variety of configurations described above. The OpenStack control
network can be run in a dual-stack configuration and OpenStack API
endpoints can be accessed via an IPv6 network. At this time, Open vSwitch
@ -452,7 +452,7 @@ delegation. This section describes the configuration and workflow steps
necessary to use IPv6 prefix delegation to provide automatic allocation of
subnet CIDRs. This allows you as the OpenStack administrator to rely on an
external (to the OpenStack Networking service) DHCPv6 server to manage your
-tenant network prefixes.
+project network prefixes.
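Enabling prefix delegation is a neutron server setting; a sketch of what it might look like (verify the option against your release's configuration reference):

```ini
# /etc/neutron/neutron.conf (illustrative)
[DEFAULT]
# Hand IPv6 subnet CIDR allocation to an external DHCPv6 PD server.
ipv6_pd_enabled = True
```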
.. note::


@ -19,7 +19,7 @@ distinguishes between the two kinds of drivers that can be configured:
Each available network type is managed by an ML2 type driver. Type drivers
maintain any needed type-specific network state. They validate the type
specific information for provider networks and are responsible for the
-allocation of a free segment in tenant networks.
+allocation of a free segment in project networks.
* Mechanism drivers
@ -162,7 +162,7 @@ More information about provider networks see
Project network types
^^^^^^^^^^^^^^^^^^^^^
-Project (tenant) networks provide connectivity to instances for a particular
+Project networks provide connectivity to instances for a particular
project. Regular (non-privileged) users can manage project networks
within the allocation that an administrator or operator defines for
them. For more information about project and provider networks, see
@ -177,26 +177,26 @@ server:
* VLAN
The administrator needs to configure the range of VLAN IDs that can be
-used for project (tenant) network allocation.
+used for project network allocation.
For more details, see the related section in the
`Configuration Reference <http://docs.openstack.org/newton/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-vlan-type-configuration-options>`__.
* GRE
The administrator needs to configure the range of tunnel IDs that can be
-used for project (tenant) network allocation.
+used for project network allocation.
For more details, see the related section in the
`Configuration Reference <http://docs.openstack.org/newton/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-gre-type-configuration-options>`__.
* VXLAN
The administrator needs to configure the range of VXLAN IDs that can be
-used for project (tenant) network allocation.
+used for project network allocation.
For more details, see the related section in the
`Configuration Reference <http://docs.openstack.org/newton/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-vxlan-type-configuration-options>`__.
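The segment ranges described in the list above all live in the ML2 plug-in configuration; a sketch of the relevant sections (``physnet1`` and the ranges are placeholders chosen by the administrator):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative)
[ml2_type_vlan]
network_vlan_ranges = physnet1:200:299

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
vni_ranges = 1:1000
```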
.. note::
-Flat networks for project (tenant) allocation are not supported. They only
+Flat networks for project allocation are not supported. They only
can exist as a provider network.
Mechanism drivers


@ -107,10 +107,10 @@ On compute nodes:
QoS currently works with ml2 only (SR-IOV, Open vSwitch, and linuxbridge
are drivers that are enabled for QoS in Mitaka release).
-Trusted tenants policy.json configuration
------------------------------------------
+Trusted projects policy.json configuration
+------------------------------------------
-If tenants are trusted to administer their own QoS policies in
+If projects are trusted to administer their own QoS policies in
your cloud, neutron's file ``policy.json`` can be modified to allow this.
Modify ``/etc/neutron/policy.json`` policy entries as follows:
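As a sketch, the QoS policy entries might be relaxed along these lines (the rule names are illustrative; check them against the ``policy.json`` shipped with your release):

```json
{
    "get_policy": "rule:regular_user",
    "create_policy": "rule:regular_user",
    "update_policy": "rule:regular_user",
    "delete_policy": "rule:regular_user"
}
```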
@ -147,10 +147,10 @@ User workflow
QoS policies are only created by admins with the default ``policy.json``.
Therefore, you should have the cloud operator set them up on
-behalf of the cloud tenants.
+behalf of the cloud projects.
-If tenants are trusted to create their own policies, check the trusted tenants
-``policy.json`` configuration section.
+If projects are trusted to create their own policies, check the trusted
+projects ``policy.json`` configuration section.
First, create a QoS policy and its bandwidth limit rule:
@ -285,11 +285,11 @@ network, or initially create the network attached to the policy.
Administrator enforcement
-------------------------
-Administrators are able to enforce policies on tenant ports or networks.
-As long as the policy is not shared, the tenant is not able to detach
+Administrators are able to enforce policies on project ports or networks.
+As long as the policy is not shared, the project is not able to detach
any policy attached to a network or port.
-If the policy is shared, the tenant is able to attach or detach such
+If the policy is shared, the project is able to attach or detach such
policy from its own ports and networks.


@ -82,7 +82,7 @@ VLAN is a networking technology that enables a single switch to act as
if it was multiple independent switches. Specifically, two hosts that
are connected to the same switch but on different VLANs do not see
each other's traffic. OpenStack is able to take advantage of VLANs to
-isolate the traffic of different tenants, even if the tenants happen
+isolate the traffic of different projects, even if the projects happen
to have instances running on the same compute host. Each VLAN has an
associated numerical ID, between 1 and 4095. We say "VLAN 15" to refer
to the VLAN with a numerical ID of 15.
@ -121,7 +121,7 @@ the VLAN IDs is called a *trunk port*. IEEE 802.1Q is the network standard
that describes how VLAN tags are encoded in Ethernet frames when trunking is
being used.
-Note that if you are using VLANs on your physical switches to implement tenant
+Note that if you are using VLANs on your physical switches to implement project
isolation in your OpenStack cloud, you must ensure that all of your
switchports are configured as trunk ports.
@ -129,7 +129,7 @@ It is important that you select a VLAN range not being used by your current
network infrastructure. For example, if you estimate that your cloud must
support a maximum of 100 projects, pick a VLAN range outside of that value,
such as VLAN 200-299. OpenStack, and all physical network infrastructure that
-handles tenant networks, must then support this VLAN range.
+handles project networks, must then support this VLAN range.
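The range-selection advice above can be sanity-checked mechanically; a small sketch with placeholder IDs:

```python
# Placeholder values: VLANs 1-100 already used by the physical network,
# VLANs 200-299 proposed for project network allocation.
infrastructure_vlans = set(range(1, 101))
project_vlans = set(range(200, 300))

assert not infrastructure_vlans & project_vlans        # ranges do not collide
assert all(1 <= vid <= 4095 for vid in project_vlans)  # valid 802.1Q IDs
```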
Trunking is used to connect between different switches. Each trunk uses a tag
to identify which VLAN is in use. This ensures that switches on the same VLAN


@ -50,12 +50,12 @@ and subnets and instruct other OpenStack services like Compute to attach
virtual devices to ports on these networks.
OpenStack Compute is a prominent consumer of OpenStack Networking to provide
connectivity for its instances.
-In particular, OpenStack Networking supports each tenant having multiple
-private networks and enables tenants to choose their own IP addressing scheme,
-even if those IP addresses overlap with those that other tenants use. There are
-two types of network, tenant and provider networks. It is possible to share any
-of these types of networks among tenants as part of the network creation
-process.
+In particular, OpenStack Networking supports each project having multiple
+private networks and enables projects to choose their own IP addressing scheme,
+even if those IP addresses overlap with those that other projects use. There
+are two types of network, project and provider networks. It is possible to
+share any of these types of networks among projects as part of the network
+creation process.
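The overlap tolerance described above can be pictured with placeholder prefixes: two projects may use identical CIDRs because their networks are isolated from one another.

```python
import ipaddress

# Two hypothetical projects picking the same private range: allowed,
# because project networks are isolated (e.g. by VLAN or tunnel ID).
project_a = ipaddress.ip_network("192.168.1.0/24")
project_b = ipaddress.ip_network("192.168.1.0/24")

assert project_a.overlaps(project_b)  # same addresses, no conflict across projects
```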
.. _intro-os-networking-provider:
@ -137,16 +137,17 @@ self-service networks and instances using them. Consider implementing one or
more high-availability features to increase redundancy and performance
of self-service networks.
-Users create tenant networks for connectivity within projects. By default, they
-are fully isolated and are not shared with other projects. OpenStack Networking
-supports the following types of network isolation and overlay technologies.
+Users create project networks for connectivity within projects. By default,
+they are fully isolated and are not shared with other projects. OpenStack
+Networking supports the following types of network isolation and overlay
+technologies.
Flat
All instances reside on the same network, which can also be shared
with the hosts. No VLAN tagging or other network segregation takes place.
VLAN
-Networking allows users to create multiple provider or tenant networks
+Networking allows users to create multiple provider or project networks
using VLAN IDs (802.1Q tagged) that correspond to VLANs present in the
physical network. This allows instances to communicate with each other
across the environment. They can also communicate with dedicated servers,
@ -157,8 +158,8 @@ GRE and VXLAN
VXLAN and GRE are encapsulation protocols that create overlay networks
to activate and control communication between compute instances. A
Networking router is required to allow traffic to flow outside of the
-GRE or VXLAN tenant network. A router is also required to connect
-directly-connected tenant networks with external networks, including the
+GRE or VXLAN project network. A router is also required to connect
+directly-connected project networks with external networks, including the
Internet. The router provides the ability to connect to instances directly
from an external network using floating IP addresses.
@ -170,7 +171,7 @@ Subnets
A block of IP addresses and associated configuration state. This
is also known as the native IPAM (IP Address Management) provided by the
-networking service for both tenant and provider networks.
+networking service for both project and provider networks.
Subnets are used to allocate IP addresses when new ports are created on a
network.
@ -178,7 +179,7 @@ Subnet pools
------------
End users normally can create subnets with any valid IP addresses without other
-restrictions. However, in some cases, it is useful for the admin or the tenant
+restrictions. However, in some cases, it is useful for the admin or the project
to pre-define a pool of addresses from which to create subnets with automatic
allocation.
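Conceptually, pool-based allocation carves fixed-size subnets out of one administrator-defined prefix; a toy illustration with a placeholder pool:

```python
import ipaddress

# A hypothetical pre-defined pool from which /26 subnets are allocated
# automatically as projects request them.
pool = ipaddress.ip_network("203.0.113.0/24")
allocations = list(pool.subnets(new_prefix=26))

print([str(s) for s in allocations])
# → ['203.0.113.0/26', '203.0.113.64/26', '203.0.113.128/26', '203.0.113.192/26']
```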


@ -33,7 +33,7 @@ components:
NIC on the VM into a particular network.
* OpenStack :term:`Dashboard (horizon)` is used by administrators
-and tenant users to create and manage network services through a web-based
+and project users to create and manage network services through a web-based
graphical interface.
.. note::


@ -8,7 +8,7 @@ Two networking models exist in OpenStack. The first is called legacy
networking (:term:`nova-network`) and it is a sub-process embedded in
the Compute project (nova). This model has some limitations, such as
creating complex network topologies, extending its back-end implementation
-to vendor-specific technologies, and providing tenant-specific networking
+to vendor-specific technologies, and providing project-specific networking
elements. These limitations are the main reasons the OpenStack
Networking (neutron) model was created.


@ -5,7 +5,7 @@ Resource purge
==============
The Networking service provides a purge mechanism to delete the
-following network resources for a project (tenant):
+following network resources for a project:
* Networks
* Subnets
@ -33,7 +33,7 @@ Usage
$ neutron purge PROJECT_ID
-Replace ``PROJECT_ID`` with the project (tenant) ID.
+Replace ``PROJECT_ID`` with the project ID.
The command provides output that includes a completion percentage and
the quantity of successful or unsuccessful network resource deletions.