Update Neutron documentation with `project`

Based on changes in Keystone, we try to unify naming across the projects.

Change-Id: I9fb3a9aa32b9834cb7bd7ae651c75939a0f5f687
Partially-Implements: blueprint keystone-v3
Dariusz Smigiel 2016-06-08 12:45:40 -05:00
parent bc83a0eb0f
commit 23c240724b
8 changed files with 42 additions and 42 deletions

View File

@@ -83,13 +83,13 @@ from tenancy.
 Prior to the Mitaka release, there was implicitly only a single 'shared'
 address scope. Arbitrary address overlap was allowed making it pretty much a
-"free for all". To make things seem somewhat sane, normal tenants are not able
-to use routers to cross-plug networks from different tenants and NAT was used
+"free for all". To make things seem somewhat sane, normal users are not able
+to use routers to cross-plug networks from different projects and NAT was used
 between internal networks and external networks. It was almost as if each
-tenant had a private address scope.
+project had a private address scope.
 The problem is that this model cannot support use cases where NAT is not
-desired or supported (e.g. IPv6) or we want to allow different tenants to
+desired or supported (e.g. IPv6) or we want to allow different projects to
 cross-plug their networks.
 An AddressScope covers only one address family. But, they work equally well

View File

@@ -75,7 +75,7 @@ Example response ::
             "used_ips": 3
         }
     ],
-    "tenant_id": "test-tenant",
+    "tenant_id": "test-project",
     "total_ips": 253,
     "used_ips": 3
 },
@@ -83,7 +83,7 @@ Example response ::
     "network_id": "47035bae-4f29-4fef-be2e-2941b72528a8",
     "network_name": "net2",
     "subnet_ip_availability": [],
-    "tenant_id": "test-tenant",
+    "tenant_id": "test-project",
     "total_ips": 0,
     "used_ips": 0
 },
@@ -100,7 +100,7 @@ Example response ::
             "used_ips": 2
         }
     ],
-    "tenant_id": "test-tenant",
+    "tenant_id": "test-project",
     "total_ips": 253,
     "used_ips": 2
 }
@@ -137,7 +137,7 @@ Example response ::
             "used_ips": 3
         }
     ],
-    "tenant_id": "test-tenant",
+    "tenant_id": "test-project",
     "total_ips": 253,
     "used_ips": 3
 }
@@ -154,7 +154,7 @@ This API currently supports the following query parameters:
 * **network_name**: Returns availability for network matching
   the provided name
 * **tenant_id**: Returns availability for all networks owned by the provided
-  tenant ID.
+  project ID.
 * **ip_version**: Filters network subnets by those supporting the supplied
   ip version. Values can be either 4 or 6.
@@ -162,8 +162,8 @@ Query filters can be combined to further narrow results and what is returned
 will match all criteria. When a parameter is specified more
 than once, it will return results that match both. Examples: ::
-    # Fetch IPv4 availability for a specific tenant uuid
-    GET /v2.0/network-ip-availabilities?ip_version=4&tenant_id=example-tenant-uuid
+    # Fetch IPv4 availability for a specific project uuid
+    GET /v2.0/network-ip-availabilities?ip_version=4&tenant_id=example-project-uuid
     # Fetch multiple networks by their ids
     GET /v2.0/network-ip-availabilities?network_id=uuid_sample_1&network_id=uuid_sample_2
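The same filtered query can be issued from Python; the following is a minimal sketch using the ``requests`` library, where the endpoint URL, token, and project UUID are placeholders::

    import requests

    NEUTRON_URL = 'http://controller:9696'   # placeholder endpoint
    TOKEN = 'keystone-token-here'            # placeholder token

    # IPv4 availability for a single project, as in the GET example above
    resp = requests.get(
        NEUTRON_URL + '/v2.0/network-ip-availabilities',
        headers={'X-Auth-Token': TOKEN},
        params={'ip_version': 4, 'tenant_id': 'example-project-uuid'})
    for net in resp.json()['network_ip_availabilities']:
        print(net['network_name'], net['used_ips'], '/', net['total_ips'])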

View File

@@ -29,7 +29,7 @@ connectivity for instances, along with bridges created in conjunction
 with OpenStack Nova for filtering.
 ovs-neutron-agent can be configured to use different networking technologies
-to create tenant isolation.
+to create project isolation.
 These technologies are implemented as ML2 type drivers which are used in
 conjunction with the OpenVSwitch mechanism driver.
@@ -72,7 +72,7 @@ Bridge Management
 In order to make the agent capable of handling more than one tunneling
 technology, to decouple the requirements of segmentation technology
-from tenant isolation, and to preserve backward compatibility for OVS
+from project isolation, and to preserve backward compatibility for OVS
 agents working without tunneling, the agent relies on a tunneling bridge,
 or br-tun, and the well known integration bridge, or br-int.
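As a rough sketch of that wiring (the agent creates these bridges and patch ports itself; the commands below only illustrate the resulting layout using standard Open vSwitch tooling)::

    ovs-vsctl --may-exist add-br br-int
    ovs-vsctl --may-exist add-br br-tun
    # connect the two bridges with a patch port pair
    ovs-vsctl add-port br-int patch-tun \
        -- set Interface patch-tun type=patch options:peer=patch-int
    ovs-vsctl add-port br-tun patch-int \
        -- set Interface patch-int type=patch options:peer=patch-tun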
@@ -82,7 +82,7 @@ externally). The VLAN id of this local VLAN is mapped to the physical
 networking details realizing that virtual network.
 For virtual networks realized as VXLAN/GRE tunnels, a Logical Switch
-(LS) identifier is used to differentiate tenant traffic on inter-HV
+(LS) identifier is used to differentiate project traffic on inter-HV
 tunnels. A mesh of tunnels is created to other Hypervisors in the
 cloud. These tunnels originate and terminate on the tunneling bridge
 of each hypervisor, leaving br-int unaffected. Port patching is done

View File

@@ -61,7 +61,7 @@ This step uses the ``neutron.policy.enforce`` routine. This routine raises
 REST API controllers catch this exception and return:
 * A 403 response code on a ``POST`` request or a ``PUT`` request for an
-  object owned by the tenant submitting the request;
+  object owned by the project submitting the request;
 * A 403 response for failures while authorizing API actions such as
   ``add_router_interface``;
 * A 404 response for ``DELETE``, ``GET`` and all other ``PUT`` requests.
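A minimal sketch of that mapping (an illustrative helper, not the actual controller code)::

    def authz_failure_status(method, requester_owns_object):
        """Map an authorization failure to the response codes listed above."""
        if method == 'POST':
            return 403
        if method == 'PUT' and requester_owns_object:
            return 403  # the project owns the object it tried to update
        return 404      # DELETE, GET, and PUT on objects owned by others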
@@ -177,7 +177,7 @@ OwnerCheck: Extended Checks for Resource Ownership
 This class is registered for rules matching the ``tenant_id`` keyword and
 overrides the generic check performed by oslo_policy in this case.
-It uses for those cases where neutron needs to check whether the tenant
+It is used for those cases where neutron needs to check whether the project
 submitting a request for a new resource owns the parent resource of the one
 being created. Current usages of ``OwnerCheck`` include, for instance,
 creating and updating a subnet.
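For instance, entries patterned on neutron's default policy.json (reproduced here for illustration) route through ``OwnerCheck``: the ``network:`` prefix in the rule tells it to look up the parent network and compare that network's owner with the project issuing the request::

    "admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
    "create_subnet": "rule:admin_or_network_owner"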
@@ -214,7 +214,7 @@ FieldCheck: Verify Resource Attributes
 This class is registered with the policy engine for rules matching the 'field'
 keyword, and provides a way to perform fine grained checks on resource
 attributes. For instance, using this class of rules it is possible to specify
-a rule for granting every tenant read access to shared resources.
+a rule for granting every project read access to shared resources.
 In policy.json, a FieldCheck rule is specified in the following way::
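As an illustration patterned on the default policy.json (the exact literal block is not shown in this excerpt), a field-based rule granting every project read access to shared networks can be written as::

    "shared": "field:networks:shared=true",
    "get_network": "rule:admin_or_owner or rule:shared"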
@@ -250,11 +250,11 @@ served by Neutron "core" and for the APIs served by the various Neutron
   policy engine for evaluation;
 * The ``tenant_id`` attribute is a fundamental one in Neutron API request
   authorization. The default policy, ``admin_or_owner``, uses it to validate
-  if a tenant owns the resource it is trying to operate on. To this aim,
+  if a project owns the resource it is trying to operate on. To this aim,
   if a resource without a tenant_id is created, it is important to ensure
   that ad-hoc authZ policies are specified for this resource.
 * There is still only one check which is hardcoded in Neutron's API layer:
-  the check to verify that a tenant owns the network on which it is creating
+  the check to verify that a project owns the network on which it is creating
   a port. This check is hardcoded and is always executed when creating a
   port, unless the network is shared. Unfortunately a solution for performing
   this check in an efficient way through the policy engine has not yet been
@@ -274,9 +274,9 @@ Notes
   AMQP channel. For all these requests a neutron admin context is built, and
   the plugins will process them as such.
 * For ``PUT`` and ``DELETE`` requests a 404 error is returned on request
-  authorization failures rather than a 403, unless the tenant submitting the
+  authorization failures rather than a 403, unless the project submitting the
   request owns the resource to update or delete. This is to avoid conditions
-  in which an API client might try and find out other tenants' resource
+  in which an API client might try and find out other projects' resource
   identifiers by sending out ``PUT`` and ``DELETE`` requests for random
   resource identifiers.
 * There is no way at the moment to specify an ``OR`` relationship between two

View File

@@ -29,10 +29,10 @@ The Neutron API exposes an extension for managing such quotas. Quota limits are
 enforced at the API layer, before the request is dispatched to the plugin.
 Default values for quota limits are specified in neutron.conf. Admin users
-can override those defaults values on a per-tenant basis. Limits are stored
-in the Neutron database; if no limit is found for a given resource and tenant,
+can override those default values on a per-project basis. Limits are stored
+in the Neutron database; if no limit is found for a given resource and project,
 then the default value for such resource is used.
-Configuration-based quota management, where every tenant gets the same quota
+Configuration-based quota management, where every project gets the same quota
 limit specified in the configuration file, has been deprecated as of the
 Liberty release.
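The configuration defaults live in the ``[quotas]`` section of neutron.conf; a typical snippet looks like the following (standard option names, illustrative values)::

    [quotas]
    quota_network = 10
    quota_subnet = 10
    quota_port = 50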
@@ -71,10 +71,10 @@ component handles quota enforcement. This API extension is loaded like any
 other extension. For this reason plugins must explicitly support it by including
 "quotas" in the support_extension_aliases attribute.
-In the Quota API simple CRUD operations are used for managing tenant quotas.
-Please note that the current behaviour when deleting a tenant quota is to reset
-quota limits for that tenant to configuration defaults. The API
-extension does not validate the tenant identifier with the identity service.
+In the Quota API simple CRUD operations are used for managing project quotas.
+Please note that the current behaviour when deleting a project quota is to reset
+quota limits for that project to configuration defaults. The API
+extension does not validate the project identifier with the identity service.
 Performing quota enforcement is the responsibility of the Quota Engine.
 RESTful API controllers, before sending a request to the plugin, try to obtain
@@ -84,7 +84,7 @@ operation to the plugin.
 For a reservation to be successful, the total amount of resources requested,
 plus the total amount of resources reserved, plus the total amount of resources
-already stored in the database should not exceed the tenant's quota limit.
+already stored in the database should not exceed the project's quota limit.
 Finally, both quota management and enforcement rely on a "quota driver" [#]_,
 whose task is basically to perform database operations.
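The reservation check above reduces to simple arithmetic; a minimal sketch (an illustrative function, not the engine's actual code)::

    def fits_quota(requested, reserved, used, limit):
        """True if reserving `requested` more units stays within `limit`."""
        if limit < 0:
            return True  # a negative limit conventionally means unlimited
        return requested + reserved + used <= limit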
@@ -99,25 +99,25 @@ controller class [#]_.
 This class does not implement the POST operation. List, get, update, and
 delete operations are implemented by the usual index, show, update and
 delete methods. These methods simply call into the quota driver for either
-fetching tenant quotas or updating them.
+fetching project quotas or updating them.
 The _update_attributes method is called only once in the controller lifetime.
 This method dynamically updates Neutron's resource attribute map [#]_ so that
 an attribute is added for every resource managed by the quota engine.
 Request authorisation is performed in this controller, and only 'admin' users
-are allowed to modify quotas for tenants. As the neutron policy engine is not
+are allowed to modify quotas for projects. As the neutron policy engine is not
 used, it is not possible to configure which users should be allowed to manage
 quotas using policy.json.
 The driver operations dealing with quota management are:
 * delete_tenant_quota, which simply removes all entries from the 'quotas'
-  table for a given tenant identifier;
-* update_quota_limit, which adds or updates an entry in the 'quotas' tenant for
-  a given tenant identifier and a given resource name;
-* _get_quotas, which fetches limits for a set of resource and a given tenant
+  table for a given project identifier;
+* update_quota_limit, which adds or updates an entry in the 'quotas' table
+  for a given project identifier and a given resource name;
+* _get_quotas, which fetches limits for a set of resources and a given project
   identifier;
-* _get_all_quotas, which behaves like _get_quotas, but for all tenants.
+* _get_all_quotas, which behaves like _get_quotas, but for all projects.
 Resource Usage Info
@@ -238,12 +238,12 @@ Reservations are committed or cancelled by respectively calling the
 commit_reservation and cancel_reservation methods in neutron.quota.QuotaEngine.
 Reservations are not perennial. Eternal reservations would eventually exhaust
-tenants' quotas because they would never be removed when an API worker crashes
+projects' quotas because they would never be removed when an API worker crashes
 whilst in the middle of an operation.
 Reservation expiration is currently set to 120 seconds, and is not
 configurable, not yet at least. Expired reservations are not counted when
 calculating resource usage. While creating a reservation, if any expired
-reservation is found, all expired reservation for that tenant and resource
+reservation is found, all expired reservations for that project and resource
 will be removed from the database, thus avoiding build-up of expired
 reservations.
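Putting the lifecycle together, a hedged sketch of how a caller might drive it; only commit_reservation and cancel_reservation are named above, so make_reservation, the module-level QUOTAS engine instance, and the argument shapes are assumptions::

    from neutron.quota import QUOTAS  # module-level QuotaEngine (assumed)

    reservation = QUOTAS.make_reservation(        # assumed signature
        context, project_id, {'port': 1}, plugin)
    try:
        create_the_port()                         # the actual API operation
        QUOTAS.commit_reservation(context, reservation.reservation_id)
    except Exception:
        QUOTAS.cancel_reservation(context, reservation.reservation_id)
        raise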

View File

@@ -30,7 +30,7 @@ https://wiki.openstack.org/wiki/Neutron/SecurityGroups
 API Extension
 -------------
-The API extension is the 'front' end portion of the code, which handles defining a `REST-ful API`_, which is used by tenants.
+The API extension is the 'front end' portion of the code, which handles defining a `REST-ful API`_ that is used by projects.
 .. _`REST-ful API`: https://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/securitygroup.py
@@ -46,7 +46,7 @@ The Security Group API extension adds a number of `methods to the database layer`
 Agent RPC
 ---------
-This portion of the code handles processing requests from tenants, after they have been stored in the database. It involves messaging all the L2 agents
+This portion of the code handles processing requests from projects, after they have been stored in the database. It involves messaging all the L2 agents
 running on the compute nodes, and modifying the IPTables rules on each hypervisor.

View File

@@ -32,7 +32,7 @@ services. Among those of special interest:
 #. neutron-server that provides API endpoints and serves as a single point of
    access to the database. It usually runs on nodes called Controllers.
 #. Layer2 agent that can utilize Open vSwitch, Linuxbridge or other vendor
-   specific technology to provide network segmentation and isolation for tenant
+   specific technology to provide network segmentation and isolation for project
    networks. The L2 agent should run on every node where it is deemed
    responsible for wiring and securing virtual interfaces (usually both Compute
    and Network nodes).

View File

@@ -199,7 +199,7 @@ following these suggestions:
   the 'why' is important, but can be omitted if obvious (not to you of course).
 * Pre-conditions: what is the initial state of your system? Please consider
   enumerating resources available in the system, if useful in diagnosing
-  the problem. Who are you? A regular tenant or a super-user? Are you
+  the problem. Who are you? A regular user or a super-user? Are you
   describing service-to-service interaction?
 * Step-by-step reproduction steps: these can be actual neutron client
   commands or raw API requests; grab the output if you think it is useful.