From 3a097cbbf76ac8a431234652a126ff2642e0b8d8 Mon Sep 17 00:00:00 2001 From: Diane Fleming Date: Wed, 18 Sep 2013 12:35:50 -0500 Subject: [PATCH] Corrected spelling, grammar, and formatting errors, and updated pom file to 1.10.0 Change-Id: I627fa3cb3a1135e8655b91d33a07131aef4cb9aa author: diane fleming --- doc/admin-guide-cloud/ch_networking.xml | 4371 ++++++++++------- doc/admin-guide-cloud/pom.xml | 2 +- doc/admin-guide-network/pom.xml | 2 +- doc/common/ch_resources.xml | 2 +- doc/config-reference/pom.xml | 2 +- doc/glossary/pom.xml | 2 +- doc/high-availability-guide/pom.xml | 2 +- doc/image-guide/pom.xml | 2 +- doc/install-guide/pom.xml | 2 +- doc/security-guide/pom.xml | 2 +- doc/training-guide/pom.xml | 2 +- doc/user-guide-admin/pom.xml | 2 +- ..._dashboard_admin_manage_projects_users.xml | 3 +- doc/user-guide/pom.xml | 2 +- 14 files changed, 2479 insertions(+), 1919 deletions(-) diff --git a/doc/admin-guide-cloud/ch_networking.xml b/doc/admin-guide-cloud/ch_networking.xml index 4da4b4c644..0b70ddbd8f 100644 --- a/doc/admin-guide-cloud/ch_networking.xml +++ b/doc/admin-guide-cloud/ch_networking.xml @@ -4,197 +4,220 @@ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_networking"> Networking - This chapter describes the high-level concepts and components of an OpenStack Networking - administration in a cloud system. + Learn OpenStack Networking concepts, architecture, and basic + and advanced neutron and nova command-line interface (CLI) + commands so that you can administer OpenStack Networking in a + cloud.
Introduction to Networking
-        The OpenStack Networking project was created to provide a rich API for defining network connectivity and addressing in the cloud. The OpenStack Networking project gives operators the ability to leverage different networking technologies to power their cloud networking.
+        The OpenStack Networking service, code-named neutron, provides an API for defining network connectivity and addressing in the cloud. The OpenStack Networking service enables operators to leverage different networking technologies to power their cloud networking.
+        The OpenStack Networking service also provides an API to configure and manage a variety of network services ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPsec VPN.
-        For a detailed description of the OpenStack Networking API abstractions and their attributes, see the OpenStack Networking API Guide (v2.0).
+        For a detailed description of the OpenStack Networking API abstractions and their attributes, see the OpenStack Networking API v2.0 Reference.
Networking API
-        Networking is a virtual network service that provides a powerful API to define the network connectivity and addressing used by devices from other services, such as OpenStack Compute.
+        Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing used by devices from other services, such as OpenStack Compute.
         The Compute API has a virtual server abstraction to describe computing resources. Similarly, the OpenStack Networking API has virtual network, subnet, and port abstractions to describe networking resources. In more detail:
         Network. An isolated L2 segment, analogous to VLAN in the physical networking world.
         Subnet. A block of v4 or v6 IP addresses and associated configuration state.
         Port. A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
         You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like OpenStack Compute to attach virtual devices to ports on these networks. In particular, OpenStack Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme (even if those IP addresses overlap with those used by other tenants). The OpenStack Networking service:
         Enables advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses.
-        Offers flexibility for the cloud administrator to customized network offerings.
+        Offers flexibility for the cloud administrator to customize network offerings.
-        Provides a mechanism that lets cloud administrators expose additional API capabilities through API extensions. Commonly, new capabilities are first introduced as an API extension, and over time become part of the core OpenStack Networking API.
+ + + Provides a mechanism that lets cloud + administrators expose additional API + capabilities through API extensions. At first, + new functionality is introduced as an API + extension. Over time, the functionality + becomes part of the core OpenStack Networking + API. + +
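To make these abstractions concrete, the following minimal sketch (all names and addresses are illustrative) creates a network, adds a subnet to it, and then creates a port with a fixed IP on that subnet, using the same neutron commands described later in this chapter:
$ neutron net-create net1
$ neutron subnet-create net1 10.0.0.0/24
$ neutron port-create --fixed-ip subnet_id=<subnet-id>,ip_address=10.0.0.5 net1
A port created this way can then be handed to another service, such as OpenStack Compute, which plugs a virtual NIC into it.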
- Plugin architecture + Plug-in architecture Enhancing traditional networking solutions to provide rich cloud networking is challenging. Traditional networking is not designed to scale to - cloud proportions nor to handle automatic configuration. - The original OpenStack Compute network implementation assumed a - very basic model of performing all isolation through - Linux VLANs and IP tables. OpenStack Networking introduces the - concept of a plugin, which is a back-end - implementation of the OpenStack Networking API. A plugin can use a - variety of technologies to implement the logical API - requests.  Some OpenStack Networking plugins might use basic Linux - VLANs and IP tables, while others might use more - advanced technologies, such as L2-in-L3 tunneling or - OpenFlow, to provide similar benefits. - The following plugins are currently included in the OpenStack Networking distribution: - - Big Switch Plugin (Floodlight REST Proxy). - http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin - - - - Brocade Plugin. - https://github.com/brocade/brocade - - - - Cisco. - http://wiki.openstack.org/cisco-neutron - - - - Cloudbase Hyper-V Plugin. - http://www.cloudbase.it/quantum-hyper-v-plugin/ - - - - Linux Bridge Plugin. - Documentation included in this guide and at - http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin -   - - - Mellanox Plugin. - https://wiki.openstack.org/wiki/Mellanox-Neutron/ - - - - Midonet Plugin. - - http://www.midokura.com/ - - - - NEC OpenFlow Plugin. - http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin - - + cloud proportions nor to handle automatic + configuration. + The original OpenStack Compute network + implementation assumed a very basic model of + performing all isolation through Linux VLANs and IP + tables. OpenStack Networking introduces the concept of + a plug-in, which is + a back-end implementation of the OpenStack Networking + API. A plug-in can use a variety of technologies to + implement the logical API requests. Some OpenStack + Networking plug-ins might use basic Linux VLANs and IP + tables, while others might use more advanced + technologies, such as L2-in-L3 tunneling or OpenFlow, + to provide similar benefits. + OpenStack Networking includes the following + plug-ins: + + + Big Switch Plug-in + (Floodlight REST Proxy). http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin + + + + Brocade + Plug-in. https://github.com/brocade/brocade + + + + Cisco. + http://wiki.openstack.org/cisco-neutron + + + + Cloudbase Hyper-V + Plug-in. http://www.cloudbase.it/quantum-hyper-v-plugin/ + + + + Linux Bridge + Plug-in. Documentation included + in this guide at http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin +   + + + Mellanox + Plug-in. + https://wiki.openstack.org/wiki/Mellanox-Neutron/ + + + + Midonet + Plug-in. + http://www.midokura.com/ + + + + NEC OpenFlow + Plug-in. http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin + + - - Nicira NVP Plugin. - Documentation include in this guide, - - NVP Product Overview , and - NVP Product Support. - - - - Open vSwitch Plugin. - Documentation included in this guide. - - - - PLUMgrid. - https://https://wiki.openstack.org/wiki/PLUMgrid-Neutron - - - - Ryu Plugin. - https://github.com/osrg/ryu/wiki/OpenStack - - - - - Plugins can have different properties for hardware requirements, features, performance, - scale, or operator tools. 
-        Because OpenStack Networking supports a large number of plugins, the cloud administrator is able to weigh different options and decide which networking technology is right for the deployment.
-        Not all OpenStack networking plugins are compatible with all possible OpenStack compute drivers:
+        Nicira NVP Plug-in. Documentation is included in this guide, NVP Product Overview, and NVP Product Support.
+        Open vSwitch Plug-in. Documentation included in this guide.
+        PLUMgrid. https://wiki.openstack.org/wiki/PLUMgrid-Neutron
+        Ryu Plug-in. https://github.com/osrg/ryu/wiki/OpenStack
+        Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because OpenStack Networking supports a large number of plug-ins, the cloud administrator is able to weigh different options and decide which networking technology is right for the deployment.
+        Not all OpenStack Networking plug-ins are compatible with all possible OpenStack Compute drivers:
Plugin Compatibility with OpenStack Compute Drivers Plug-in Compatibility with OpenStack Compute + Drivers
Libvirt (KVM/QEMU) XenServer VMware
Bigswitch / Floodlight Yes + + + +
Brocade Yes + + + +
Cisco Yes + + + +
Cloudbase Hyper-V + + Yes +
Linux Bridge Yes + + + +
Mellanox Yes + + + +
Midonet Yes + + + +
NEC OpenFlow Yes + + + +
Nicira NVP Yes Yes Yes + +
Open vSwitch Yes + + + +
Plumgrid Yes Yes + +
Ryu Yes + + + +
@@ -319,10 +342,9 @@
Networking architecture - This section describes the high-level components of an Networking deployment. Before - you deploy Networking, it is useful to understand the different components that make up - the solution, and how these components interact with each other and with other OpenStack - services. + Before you deploy Networking, it helps to understand the + Networking components and how these components interact + with each other and with other OpenStack services.
Overview
         Like other OpenStack services, a deployment of OpenStack Networking often involves deploying several processes on a variety of hosts.
-        The main process of the OpenStack Networking server is neutron-server, which is a Python daemon that exposes the OpenStack Networking API and passes user requests to the configured OpenStack Networking plugin for additional processing. Typically, the plugin requires access to a database for persistent storage (also similar to other OpenStack services).
+        The main process of the OpenStack Networking server is neutron-server, which is a Python daemon that exposes the OpenStack Networking API and passes user requests to the configured OpenStack Networking plug-in for additional processing. Typically, the plug-in requires access to a database for persistent storage (also similar to other OpenStack services).
         If your deployment uses a controller host to run centralized OpenStack Compute components, you can deploy the OpenStack Networking server on that same host. However, OpenStack Networking is entirely standalone and can be deployed on its own host as well. OpenStack Networking also includes additional agents that might be required, depending on your deployment:
+        plug-in agent (neutron-*-agent). Runs on each hypervisor to perform local vswitch configuration. The agent to be run will depend on which plug-in you are using, because some plug-ins do not actually require an agent.
+        dhcp agent (neutron-dhcp-agent). Provides DHCP services to tenant networks. This agent is the same for all plug-ins.
+        l3 agent (neutron-l3-agent). Provides L3/NAT forwarding to provide external network access for VMs on tenant networks. This agent is the same for all plug-ins.
+        These agents interact with the main neutron process through RPC (for example, rabbitmq or qpid) or through the standard OpenStack Networking API. Further:
+        Networking relies on the OpenStack Identity service (keystone) for the authentication and authorization of all API requests.
+        Compute (nova) interacts with OpenStack Networking through calls to its standard API. As part of creating a VM, the nova-compute service communicates with the OpenStack Networking API to plug each virtual NIC on the VM into a particular network.
+        The Dashboard (Horizon) integrates with the OpenStack Networking API, allowing administrators and tenant users to create and manage network services through the Dashboard GUI.
- - - The above agents interact with the main Neutron process through RPC (for example, - rabbitmq or qpid) or through the standard OpenStack Networking API. Further: - - - Networking relies on the OpenStack Identity service (keystone) for the - authentication and authorization of all API request.  - - - Compute (nova) interacts with OpenStack Networking through calls to - its standard API.  As part of creating a VM, the nova-compute service communicates with - the OpenStack Networking API to plug each virtual NIC on the VM into a - particular network.    - - The Dashboard (Horizon) integrates with the OpenStack Networking API, allowing administrators - and tenant users to create and manage network services through the - Dashboard GUI. - -   -
+ +
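Once these services are running, you can check which agents have registered themselves with neutron-server. This is a sketch that assumes your plug-in uses agents and that the agent management API extension is available in your deployment:
$ neutron agent-list
The output typically shows each dhcp, l3, and plug-in agent together with the host it runs on and its alive state.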
Place services on physical hosts - Like other OpenStack services, Networking provides cloud administrators with - significant flexibility in deciding which individual services should run on which - physical devices. At one extreme, all service daemons can be run on a single - physical host for evaluation purposes. At the other, each service could have its own - physical hosts and, in some cases, be replicated across multiple hosts for - redundancy. For more information, see OpenStack Configuration Reference. + Like other OpenStack services, Networking provides + cloud administrators with significant flexibility in + deciding which individual services should run on which + physical devices. At one extreme, all service daemons + can be run on a single physical host for evaluation + purposes. At the other, each service could have its + own physical hosts and, in some cases, be replicated + across multiple hosts for redundancy. For more + information, see the OpenStack Configuration + Reference. In this guide, we focus primarily on a standard architecture that includes a “cloud controller” host, a “network gateway” host, and a set of hypervisors for - running VMs.  The "cloud controller" and "network gateway" can be combined - in simple deployments. However, if you expect VMs to send significant amounts of - traffic to or from the Internet, a dedicated network gateway host is recommended - to avoid potential CPU contention between packet forwarding performed by - the neutron-l3-agent and other OpenStack services. + running VMs.  The "cloud controller" and "network + gateway" can be combined in simple deployments. + However, if you expect VMs to send significant amounts + of traffic to or from the Internet, a dedicated + network gateway host is recommended to avoid potential + CPU contention between packet forwarding performed by + the neutron-l3-agent and other + OpenStack services.
Network connectivity for physical hosts
-        A standard OpenStack Networking setup has up to four distinct physical data center networks:
+        A standard OpenStack Networking setup has one or more of the following distinct physical data center networks:
-        Management network. Used for internal communication between OpenStack Components. IP addresses on this network should be reachable only within the data center.
+        Management network. Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
-        Data network. Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plugin being used.
+        Data network. Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plug-in being used.
-        External network. Used to provide VMs with Internet access in some deployment scenarios. IP addresses on this network should be reachable by anyone on the Internet.
+        External network. Provides VMs with Internet access in some deployment scenarios. IP addresses on this network should be reachable by anyone on the Internet.
         API network. Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, because it is possible to create an external-network subnet that is allocated IP ranges that use less than the full range of IP addresses in an IP block.
-
- Use Networking +
+ Use Networking - You can use OpenStack Networking in the following ways: - - Expose the OpenStack Networking API to cloud tenants, - which enables them to build rich network topologies. - - - Have the cloud administrator, or an automated - administrative tool, create network connectivity on behalf of tenants. - - - - - - A tenant or cloud administrator can both perform the following procedures. - -
- Core Networking API features - After you install and run OpenStack Networking, tenants - and administrators can perform create-read-update-delete (CRUD) API - networking operations by using either the - neutron CLI tool or the API. - Like other OpenStack CLI tools, the neutron - tool is just a basic wrapper around the OpenStack Networking API. Any - operation that can be performed using the CLI has an equivalent API call - that can be performed programmatically. - - The CLI includes a number of options. For details, refer to the - OpenStack End User Guide. - -
- API abstractions - The OpenStack Networking v2.0 API provides control over both - L2 network topologies and the IP addresses used on those networks - (IP Address Management or IPAM). There is also an extension to - cover basic L3 forwarding and NAT, which provides capabilities - similar to nova-network. - - In the OpenStack Networking API: - - A 'Network' is an isolated L2 network segment - (similar to a VLAN), which forms the basis for - describing the L2 network topology available in an OpenStack - Networking deployment. - - A 'Subnet' associates a block of IP addresses - and other network configuration (for example, default gateways - or dns-servers) with an OpenStack Networking network. Each - subnet represents an IPv4 or IPv6 address block and, if needed, - each OpenStack Networking network can have multiple subnets. - - A 'Port' represents an attachment port to a L2 - OpenStack Networking network. When a port - is created on the network, by default it is allocated an - available fixed IP address out of one of the designated subnets - for each IP version (if one exists). When the port is destroyed, - its allocated addresses return to the pool of available IPs on - the subnet. Users of the OpenStack Networking API can either - choose a specific IP address from the block, or let OpenStack - Networking choose the first available IP address. - - - - The following table summarizes the attributes available for each - of the previous networking abstractions. For more operations about - API abstraction and operations, please refer to the - Networking API v2.0 Reference. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Network attributes
AttributeTypeDefault valueDescription
admin_state_upboolTrueAdministrative state of the network. If specified as - False (down), this network does not forward - packets. -
iduuid-strGeneratedUUID for this network.
namestringNoneHuman-readable name for this network; is not required - to be unique. -
sharedboolFalseSpecifies whether this network resource can - be accessed by any tenant. The default policy setting restricts - usage of this attribute to administrative users only. -
statusstringN/AIndicates whether this network is - currently operational.
subnetslist(uuid-str)Empty listList of subnets associated with this network. -
tenant_iduuid-strN/ATenant owner of the network. Only administrative users - can set the tenant identifier; this cannot be changed - using authorization policies. -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Subnet Attributes
AttributeTypeDefault ValueDescription
allocation_poolslist(dict)Every address in cidr, - excluding gateway_ip (if - configured). - List of cidr sub-ranges that are available for dynamic - allocation to ports. Syntax: - [ { "start":"10.0.0.2", - "end": "10.0.0.254"} ] -
cidrstringN/AIP range for this subnet, based on the IP version.
dns_nameserverslist(string)Empty listList of DNS name servers used by hosts in this subnet.
enable_dhcpboolTrueSpecifies whether DHCP is enabled for this subnet.
gateway_ipstringFirst address in cidr - Default gateway used by devices in this subnet.
host_routeslist(dict)Empty listRoutes that should be used by devices with - IPs from this subnet (not including local - subnet route).
iduuid-stringGeneratedUUID representing this subnet.
ip_versionint4IP version.
namestringNoneHuman-readable name for this subnet (might - not be unique). -
network_iduuid-stringN/ANetwork with which this subnet is associated.
tenant_iduuid-stringN/AOwner of network. Only administrative users - can set the tenant identifier; this cannot be changed - using authorization policies. -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Port attributes
AttributeTypeDefault ValueDescription
admin_state_upbooltrueAdministrative state of this port. If specified as False - (down), this port does not forward packets. -
device_idstringNoneIdentifies the device using this port (for example, a - virtual server's ID). -
device_ownerstringNoneIdentifies the entity using this port (for example, a - dhcp agent).
fixed_ipslist(dict)Automatically allocated from poolSpecifies IP addresses for this port; associates - the port with the subnets containing the listed IP - addresses. -
iduuid-stringGeneratedUUID for this port.
mac_addressstringGeneratedMac address to use on this port.
namestringNoneHuman-readable name for this port (might - not be unique). -
network_iduuid-stringN/ANetwork with which this port is associated. -
statusstringN/AIndicates whether the network is currently - operational. -
tenant_iduuid-stringN/AOwner of the network. Only administrative users - can set the tenant identifier; this cannot be changed - using authorization policies. -
You can use OpenStack Networking in the following ways:
         Expose the OpenStack Networking API to cloud tenants, which enables them to build rich network topologies.
         Have the cloud administrator, or an automated administrative tool, create network connectivity on behalf of tenants.
         Both tenants and cloud administrators can perform the following procedures.
+ Core Networking API features + After you install and run OpenStack Networking, + tenants and administrators can perform + create-read-update-delete (CRUD) API networking + operations by using either the + neutron CLI tool or the API. + Like other OpenStack CLI tools, the + neutron tool is just a basic + wrapper around the OpenStack Networking API. Any + operation that can be performed using the CLI has an + equivalent API call that can be performed + programmatically. + The CLI includes a number of options. For details, + refer to the OpenStack End User + Guide. +
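Because each CLI operation corresponds to an API call, you can issue the same request directly over HTTP. A minimal sketch, assuming neutron-server listens on its default port 9696 on a host named controller and that $TOKEN holds a valid Identity token:
$ curl -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks
This returns, in JSON, the same list of networks that neutron net-list prints as a table.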
API abstractions
         The OpenStack Networking v2.0 API provides control over both L2 network topologies and the IP addresses used on those networks (IP Address Management or IPAM). There is also an extension to cover basic L3 forwarding and NAT, which provides capabilities similar to nova-network.
         In the OpenStack Networking API:
         A 'Network' is an isolated L2 network segment (similar to a VLAN), which forms the basis for describing the L2 network topology available in an OpenStack Networking deployment.
         A 'Subnet' associates a block of IP addresses and other network configuration (for example, default gateways or dns-servers) with an OpenStack Networking network. Each subnet represents an IPv4 or IPv6 address block and, if needed, each OpenStack Networking network can have multiple subnets.
         A 'Port' represents an attachment port to an L2 OpenStack Networking network. When a port is created on the network, by default it is allocated an available fixed IP address out of one of the designated subnets for each IP version (if one exists). When the port is destroyed, its allocated addresses return to the pool of available IPs on the subnet. Users of the OpenStack Networking API can either choose a specific IP address from the block, or let OpenStack Networking choose the first available IP address.
         The following table summarizes the attributes available for each of these networking abstractions. For more information about API abstractions and operations, see the Networking API v2.0 Reference.
Network attributes
AttributeTypeDefault valueDescription
admin_state_upboolTrueAdministrative state of the network. + If specified as False (down), this + network does not forward packets. +
iduuid-strGeneratedUUID for this network.
namestringNoneHuman-readable name for this network; + is not required to be unique.
sharedboolFalseSpecifies whether this network + resource can be accessed by any + tenant. The default policy setting + restricts usage of this attribute to + administrative users only.
statusstringN/AIndicates whether this network is + currently operational.
subnetslist(uuid-str)Empty listList of subnets associated with this + network.
tenant_iduuid-strN/ATenant owner of the network. Only + administrative users can set the + tenant identifier; this cannot be + changed using authorization policies. +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Subnet Attributes
AttributeTypeDefault ValueDescription
allocation_poolslist(dict)Every address in + cidr, + excluding + gateway_ip + (if configured). List of cidr sub-ranges that are + available for dynamic allocation to + ports. Syntax: + [ { "start":"10.0.0.2", + "end": "10.0.0.254"} ] +
cidrstringN/AIP range for this subnet, based on the + IP version.
dns_nameserverslist(string)Empty listList of DNS name servers used by hosts + in this subnet.
enable_dhcpboolTrueSpecifies whether DHCP is enabled for + this subnet.
gateway_ipstringFirst address in + cidr + Default gateway used by devices in + this subnet.
host_routeslist(dict)Empty listRoutes that should be used by devices + with IPs from this subnet (not + including local subnet route).
iduuid-stringGeneratedUUID representing this subnet.
ip_versionint4IP version.
namestringNoneHuman-readable name for this subnet + (might not be unique).
network_iduuid-stringN/ANetwork with which this subnet is + associated.
tenant_iduuid-stringN/AOwner of network. Only administrative + users can set the tenant identifier; + this cannot be changed using + authorization policies.
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Port attributes
AttributeTypeDefault ValueDescription
admin_state_upbooltrueAdministrative state of this port. If + specified as False (down), this port + does not forward packets.
device_idstringNoneIdentifies the device using this port + (for example, a virtual server's ID). +
device_ownerstringNoneIdentifies the entity using this port + (for example, a dhcp agent).
fixed_ipslist(dict)Automatically allocated from poolSpecifies IP addresses for this port; + associates the port with the subnets + containing the listed IP addresses. +
iduuid-stringGeneratedUUID for this port.
mac_addressstringGeneratedMac address to use on this port.
namestringNoneHuman-readable name for this port + (might not be unique).
network_iduuid-stringN/ANetwork with which this port is + associated.
statusstringN/AIndicates whether the network is + currently operational.
tenant_iduuid-stringN/AOwner of the network. Only + administrative users can set the + tenant identifier; this cannot be + changed using authorization policies. +
+
+
+ Basic Networking operations + To learn about advanced capabilities that are + available through the neutron command-line + interface (CLI), read the networking section in + the OpenStack End User Guide. + The following table shows example neutron + commands that enable you to complete basic + Networking operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Basic Networking operations
OperationCommand
Creates a network.$ neutron net-create net1
Creates a subnet that is associated + with net1.$ neutron subnet-create net1 10.0.0.0/24
Lists ports for a specified + tenant.$ neutron port-list
Lists ports for a specified tenant and + displays the + id, + fixed_ips, + and + device_owner + columns.$ neutron port-list -c id -c fixed_ips -c device_owner +
Shows information for a specified + port.$ neutron port-show port-id
+ + The device_owner + field describes who owns the port. A port + whose device_owner + begins with: + + network is + created by OpenStack + Networking. + + + compute is + created by OpenStack Compute. + + + + + +
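For example, to list only the ports that the DHCP agent owns, you can filter on this field. This is a sketch that assumes the CLI passes --device_owner through to the API as a query filter; network:dhcp is typically the owner string used for DHCP ports:
$ neutron port-list --device_owner network:dhcp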
+
+ Administrative operations + The cloud administrator can perform any + neutron call on + behalf of tenants by specifying an OpenStack + Identity tenant_id in the + request, as follows: + $ neutron net-create --tenant-id=tenant-id network-name + For example: + $ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 + + To view all tenant IDs in OpenStack + Identity, run the following command as an + OpenStack Identity (keystone) admin + user: + $ keystone tenant-list + +
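Other create commands accept the same option, so an administrator can build a complete topology on a tenant's behalf. A sketch, reusing the illustrative tenant ID from the example above:
$ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1
$ neutron subnet-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 10.0.0.0/24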
+
+ Advanced Networking operations + The following table shows example neutron + commands that enable you to complete advanced + Networking operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Advanced Networking operations
OperationCommand
Creates a network that all tenants can + use.$ neutron net-create --shared public-net
Creates a subnet with a specified + gateway IP address.$ neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24
Creates a subnet that has no gateway + IP address.$ neutron subnet-create --no-gateway net1 10.0.0.0/24
Creates a subnet with DHCP + disabled.$ neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False
Creates a subnet with a specified set + of host routes.$ neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2
Creates a subnet with a specified set + of dns name servers.$ neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8
Displays all ports and IPs allocated + on a network.$ neutron port-list --network_id net-id
+
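Several of these options can be combined in one workflow. For example (values illustrative), to publish a shared network with an explicit gateway and DNS servers:
$ neutron net-create --shared public-net
$ neutron subnet-create --gateway 10.0.0.254 public-net 10.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8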
-
- Basic operations - Before going further, it is highly recommended that you first - read the few pages in the - OpenStack End User Guide that are specific to OpenStack - Networking. OpenStack Networking's CLI has some advanced - capabilities that are described only in that guide. - - The following table provides just a few examples of the - neutron tool usage. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Basic OpenStack Networking operations
ActionCommand
Create a network.$ neutron net-create net1
Create a subnet associated with net1.$ neutron subnet-create net1 10.0.0.0/24
List ports on a tenant.$ neutron port-list
List ports on a tenant, and display the id, fixed_ips, and - device_owner columns.$ neutron port-list -c id -c fixed_ips -c device_owner -
Display details of a particular port.$ neutron port-show port-id
- - - The device_owner field describes who owns the - port. A port whose device_owner begins with: - - "network:" is created by OpenStack - Networking. - "compute:" is created by OpenStack Compute. - - - - -
-
- Administrative operations - The cloud administrator can perform any neutron - call on behalf of tenants by specifying an OpenStack Identity tenant_id in the request, as follows: - - $ neutron net-create --tenant-id=tenant-id network-name - - For example: - - $ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 -To view all tenant IDs in OpenStack Identity, run the - following command as an OpenStack Identity (keystone) admin user: - - $ keystone tenant-list - -
-
- Advanced operations - The following table provides a few advanced examples of using the - neutron tool to create and display - networks, subnets, and ports. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Advanced OpenStack Networking operations
ActionCommand
Create a "shared" network (that is, a network that can be used by all tenants).$ neutron net-create --shared public-net
Create a subnet that has a specific gateway IP address.$ neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24
Create a subnet that has no gateway IP address.$ neutron subnet-create --no-gateway net1 10.0.0.0/24
Create a subnet in which DHCP is disabled.$ neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False
Create subnet with a specific set of host routes.$ neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2
Create subnet with a specific set of dns nameservers.$ neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8
Display all ports/IPs allocated on a network.$ neutron port-list --network_id net-id
-
-
-
- Use Compute with Networking -
- Basic operations - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Basic Compute/Networking operations
ActionCommand
Check available networks.$ neutron net-list
Boot a VM with a single NIC on a selected OpenStack Networking network.$ nova boot --image img --flavor flavor --nic net-id=net-id vm-name -
Search for all ports with a device_id corresponding to the OpenStack Compute instance UUID.$ neutron port-list --device_id=vm-id
Search for ports, but limit display to only the port's mac_address.$ neutron port-list -c mac_address --device_id=vm-id
Temporarily disable a port from sending traffic.$ neutron port-update port-id --admin_state_up=False
Delete a VM.$ nova delete --device_id=vm-id
- When you: - - Boot a Compute VM, a port on the network is - automatically created that corresponds to the VM Nic. You may - also need to configure security group rules to allow access to the VM. - Delete a Compute VM, the underlying OpenStack - Networking port is automatically deleted as well. - - -
-
- Advanced VM creation - - - - - - - - - - - - - - - - - - + + + + + + +
VM creation operations
ActionCommand
Boot a VM with multiple NICs.$ nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name
Boot a VM with a specific IP address: first create an OpenStack - Networking port with a specific IP address, then boot - a VM specifying a port-id rather than a - net-id.$ neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id +
+ Use Compute with Networking +
+ Basic Compute and Networking operations + The following table shows example neutron and + nova commands that enable you to complete basic + Compute and Networking operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Basic Compute/Networking + operations
ActionCommand
Checks available networks.$ neutron net-list
Boots a VM with a single NIC on a + selected OpenStack Networking + network.$ nova boot --image img --flavor flavor --nic net-id=net-id vm-name +
Searches for ports with a + device_id + that matches the OpenStack Compute + instance UUID. + The + device_id + can also be a logical router + ID. + $ neutron port-list --device_id=vm-id
Searches for ports, but shows only the + mac_address + for the port.$ neutron port-list --field mac_address --device_id=vm-id
Temporarily disables a port from + sending traffic.$ neutron port-update port-id --admin_state_up=False
+ + + + When you boot a Compute VM, a port + on the network is automatically + created that corresponds to the VM NIC + and is automatically associated with + the default security group. You can + configure security group rules to + enable users to access the VM. + + + When you delete a Compute VM, the + underlying OpenStack Networking port + is automatically deleted. + + + +
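A typical sequence (IDs illustrative) is to pick a network, boot a VM on it, and then find the port that was created for the instance:
$ neutron net-list
$ nova boot --image img --flavor flavor --nic net-id=net-id vm-name
$ neutron port-list --device_id=vm-id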
+
+ Advanced VM creation operations + The following table shows example nova and + neutron commands that enable you to complete + advanced VM creation operations: + + + + + + + + + + + + + + + + + + - - - - - - -
Advanced VM creation operations
OperationCommand
Boots a VM with multiple NICs.$ nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name
Boots a VM with a specific IP address. + First, create an OpenStack Networking + port with a specific IP address. Then, + boot a VM specifying a + port-id + rather than a + net-id.$ neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id $ nova boot --image img --flavor flavor --nic port-id=port-id vm-name -
Boot a VM that connects to all networks that are accessible to - the tenant who submits the request (without the - --nic option). - $ nova boot --image img --flavor flavor vm-name -
- OpenStack Networking does not currently support the v4-fixed-ip parameter of the --nic option for the nova command. - -
-
- Security groups (enabling ping and SSH on VMs) - You must configure security group rules depending on the type of - plugin you are using. If you are using a plugin that: - - - Implements Networking security groups, you can configure security group rules directly by - using neutron security-group-rule-create. The - following example allows ping and - ssh access to your VMs. - $ neutron security-group-rule-create --protocol icmp --direction ingress default +
Boots a VM that connects to all + networks that are accessible to the + tenant who submits the request + (without the + --nic + option). $ nova boot --image img --flavor flavor vm-name +
+ + OpenStack Networking does not currently + support the v4-fixed-ip + parameter of the --nic + option for the nova + command. + +
+
+ Security groups (enabling ping and SSH on + VMs) + You must configure security group rules + depending on the type of plug-in you are using. If + you are using a plug-in that: + + + Implements Networking security groups, + you can configure security group rules + directly by using neutron + security-group-rule-create. + The following example allows + ping and + ssh access to your + VMs. + $ neutron security-group-rule-create --protocol icmp --direction ingress default $ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default - - - Does not implement Networking security groups, you can configure security group rules - by using the nova secgroup-add-rule or - euca-authorize command. The following - nova commands allow ping - and ssh access to your VMs. - $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 + + + Does not implement Networking security + groups, you can configure security group + rules by using the nova + secgroup-add-rule or + euca-authorize + command. The following + nova commands + allow ping and + ssh access to your + VMs. + $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 - - - - If your plugin implements OpenStack Networking security groups, - you can also leverage Compute security groups by setting - security_group_api = neutron in - nova.conf. After setting this option, all Compute - security group commands are proxied to OpenStack Networking. - - -
-
-
-
- Advanced features through API extensions - This section discusses two API extensions implemented by - several plugins.  We include them in this guide as they - provide capabilities similar to what was available in - nova-network and are thus likely to be relevant to a large - portion of the OpenStack community.  -
- Provider networks - Provider networks allow cloud administrators to create - OpenStack Networking networks that map directly to - physical networks in the data center.  This is commonly - used to give tenants direct access to a "public" network - that can be used to reach the Internet.  It may also be - used to integrate with VLANs in the network that already - have a defined meaning (e.g., allow a VM from the - "marketing" department to be placed on the same VLAN as - bare-metal marketing hosts in the same data - center). - The provider extension allows administrators to - explicitly manage the relationship between OpenStack - Networking virtual networks and underlying physical - mechanisms such as VLANs and tunnels. When this extension - is supported, OpenStack Networking client users with - administrative privileges see additional provider - attributes on all virtual networks, and are able to - specify these attributes in order to create provider - networks. - The provider extension is supported by the openvswitch - and linuxbridge plugins. Configuration of these plugins - requires familiarity with this extension. -
- Terminology - A number of terms are used in the provider extension - and in the configuration of plugins supporting the - provider extension: - - virtual - network - An OpenStack - Networking L2 network (identified by a - UUID and optional name) whose ports can be - attached as vNICs to OpenStack Compute - instances and to various OpenStack - Networking agents. The openvswitch and - linuxbridge plugins each support several - different mechanisms to realize virtual - networks. - - physical - network - A network - connecting virtualization hosts (i.e. - OpenStack Compute nodes) with each other - and with other network resources. Each - physical network may support multiple - virtual networks. The provider extension - and the plugin configurations identify - physical networks using simple string - names. - - - tenant - network - A "normal" - virtual network created by/for a tenant. - The tenant is not aware of how that - network is physically realized. - - - provider - network - A virtual network - administratively created to map to a - specific network in the data center, - typically to enable direct access to - non-OpenStack resources on that network. - Tenants can be given access to provider - networks. - - - VLAN - network - A virtual network - realized as packets on a specific physical - network containing IEEE 802.1Q headers - with a specific VID field value. VLAN - networks sharing the same physical network - are isolated from each other at L2, and - can even have overlapping IP address - spaces. Each distinct physical network - supporting VLAN networks is treated as a - separate VLAN trunk, with a distinct space - of VID values. Valid VID values are 1 - through 4094. - - - flat - network - A virtual network - realized as packets on a specific physical - network containing no IEEE 802.1Q header. - Each physical network can realize at most - one flat network. - - - local - network - A virtual network - that allows communication within each - host, but not across a network. Local - networks are intended mainly for - single-node test scenarios, but may have - other uses. - - - GRE - network - A virtual network - realized as network packets encapsulated - using GRE. GRE networks are also referred - to as "tunnels". GRE tunnel packets are - routed by the host's IP routing table, so - GRE networks are not associated by - OpenStack Networking with specific - physical networks. - - - Both the openvswitch and linuxbridge plugins support - VLAN networks, flat networks, and local networks. Only - the openvswitch plugin currently supports GRE - networks, provided that the host's Linux kernel - supports the required Open vSwitch features. -
-
- Provider attributes - The provider extension extends the OpenStack - Networking network resource with the following three - additional attributes: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Provider Network Attributes
Attribute nameTypeDefault ValueDescription
provider:network_typeStringN/AThe physical mechanism by which the - virtual network is realized. Possible - values are "flat", "vlan", "local", and - "gre", corresponding to flat networks, - VLAN networks, local networks, and GRE - networks as defined above. All types of - provider networks can be created by - administrators, while tenant networks can - be realized as "vlan", "gre", or "local" - network types depending on plugin - configuration.
provider:physical_networkStringIf a physical network named "default" has - been configured, and if - provider:network_type is "flat" or "vlan", - then "default" is used.The name of the physical network over - which the virtual network is realized for - flat and VLAN networks. Not applicable to - the "local" or "gre" network types.
provider:segmentation_idIntegerN/AFor VLAN networks, the VLAN VID on the - physical network that realizes the virtual - network. Valid VLAN VIDs are 1 through - 4094. For GRE networks, the tunnel ID. - Valid tunnel IDs are any 32 bit unsigned - integer. Not applicable to the "flat" or - "local" network types.
- The provider attributes are returned by OpenStack - Networking API operations when the client is - authorized for the - extension:provider_network:view - action via the OpenStack Networking policy - configuration. The provider attributes are only - accepted for network API operations if the client is - authorized for the - extension:provider_network:set - action. The default OpenStack Networking API policy - configuration authorizes both actions for users with - the admin role. See for - details on policy configuration. -
-
- Provider API workflow - Show all attributes of a network, including provider - attributes when invoked with the admin role: - - $ neutron net-show <name or net-id> - - Create a local provider network (admin-only): - - $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type local - - Create a flat provider network (admin-only): - - $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type flat --provider:physical_network <phys-net-name> - - Create a VLAN provider network (admin-only): - - $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network <phys-net-name> --provider:segmentation_id <VID> - - Create a GRE provider network (admin-only): - - $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type gre --provider:segmentation_id <tunnel-id> - - When creating flat networks or VLAN networks, <phys-net-name> must be known - to the plugin. See the OpenStack Configuration - Reference for details on configuring network_vlan_ranges to - identify all physical networks. When creating VLAN networks, <VID> can - fall either within or outside any configured ranges of VLAN IDs from which - tenant networks are allocated. Similarly, when creating GRE networks, - <tunnel-id> can fall either within or outside any tunnel ID ranges from - which tenant networks are allocated. - Once provider networks have been created, subnets - can be allocated and they can be used similarly to - other virtual networks, subject to authorization - policy based on the specified - <tenant_id>. + + + If your plug-in implements OpenStack + Networking security groups, you can also + leverage Compute security groups by setting + security_group_api = + neutron in + nova.conf. After + setting this option, all Compute security + group commands are proxied to OpenStack + Networking. + +
-
- L3 Routing and NAT - Just like the core OpenStack Networking API provides abstract L2 network segments that - are decoupled from the technology used to implement the L2 network, OpenStack - Networking includes an API extension that provides abstract L3 routers that API - users can dynamically provision and configure. These OpenStack Networking routers - can connect multiple L2 OpenStack Networking networks, and can also provide a - "gateway" that connects one or more private L2 networks to a shared "external" - network (e.g., a public network for access to the Internet). See the OpenStack Configuration Reference for details on common models of - deploying Networking L3 routers. - The L3 router provides basic NAT capabilities on - "gateway" ports that uplink the router to external - networks. This router SNATs all traffic by default, and - supports "Floating IPs", which creates a static one-to-one - mapping from a public IP on the external network to a - private IP on one of the other subnets attached to the - router. This allows a tenant to selectively expose VMs on - private networks to other hosts on the external network - (and often to all hosts on the Internet). Floating IPs can - be allocated and then mapped from one OpenStack Networking - port to another, as needed. -
- L3 API abstractions - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Router
Attribute nameTypeDefault ValueDescription
iduuid-strgeneratedUUID for the router.
nameStringNoneHuman-readable name for the router. Might - not be unique.
admin_state_upBoolTrueThe administrative state of router. If - false (down), the router does not forward - packets.
statusStringN/AIndicates whether router is - currently operational.
tenant_iduuid-strN/AOwner of the router. Only admin users can - specify a tenant_id other than its own. -
external_gateway_infodict contain 'network_id' key-value - pairNullExternal network that this router connects - to for gateway services (e.g., NAT)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Floating IP
Attribute nameTypeDefault ValueDescription
iduuid-strgeneratedUUID for the floating IP.
floating_ip_addressstring (IP address)allocated by OpenStack NetworkingThe external network IP address available - to be mapped to an internal IP - address.
floating_network_iduuid-strN/AThe network indicating the set of - subnets from which the floating IP - should be allocated
router_iduuid-strN/ARead-only value indicating the router that - connects the external network to the - associated internal port, if a port is - associated.
port_iduuid-strNullIndicates the internal OpenStack - Networking port associated with the - external floating IP.
fixed_ip_addressstring (IP address)NullIndicates the IP address on the internal - port that is mapped to by the floating IP - (since an OpenStack Networking port might - have more than one IP address).
tenant_iduuid-strN/AOwner of the Floating IP. Only admin users - can specify a tenant_id other than its - own.
- +
Advanced features through API extensions
         Several plug-ins implement API extensions that provide capabilities similar to what was available in nova-network. These plug-ins are likely to be of interest to the OpenStack community.
+ Provider networks + Provider networks allow cloud administrators to + create OpenStack Networking networks that map directly + to physical networks in the data center.  This is + commonly used to give tenants direct access to a + public network that can be used to reach the + Internet.  It may also be used to integrate with VLANs + in the network that already have a defined meaning + (for example, allow a VM from the "marketing" + department to be placed on the same VLAN as bare-metal + marketing hosts in the same data center). + The provider extension allows administrators to + explicitly manage the relationship between OpenStack + Networking virtual networks and underlying physical + mechanisms such as VLANs and tunnels. When this + extension is supported, OpenStack Networking client + users with administrative privileges see additional + provider attributes on all virtual networks, and are + able to specify these attributes in order to create + provider networks. + The provider extension is supported by the + openvswitch and linuxbridge plug-ins. Configuration of + these plug-ins requires familiarity with this + extension. +
Terminology
         A number of terms are used in the provider extension and in the configuration of plug-ins supporting the provider extension:
         virtual network. An OpenStack Networking L2 network (identified by a UUID and optional name) whose ports can be attached as vNICs to OpenStack Compute instances and to various OpenStack Networking agents. The openvswitch and linuxbridge plug-ins each support several different mechanisms to realize virtual networks.
         physical network. A network connecting virtualization hosts (such as OpenStack Compute nodes) with each other and with other network resources. Each physical network may support multiple virtual networks. The provider extension and the plug-in configurations identify physical networks using simple string names.
         tenant network. A "normal" virtual network created by/for a tenant. The tenant is not aware of how that network is physically realized.
         provider network. A virtual network administratively created to map to a specific network in the data center, typically to enable direct access to non-OpenStack resources on that network. Tenants can be given access to provider networks.
         VLAN network. A virtual network realized as packets on a specific physical network containing IEEE 802.1Q headers with a specific VID field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094.
         flat network. A virtual network realized as packets on a specific physical network containing no IEEE 802.1Q header. Each physical network can realize at most one flat network.
         local network. A virtual network that allows communication within each host, but not across a network. Local networks are intended mainly for single-node test scenarios, but may have other uses.
         GRE network. A virtual network realized as network packets encapsulated using GRE. GRE networks are also referred to as "tunnels". GRE tunnel packets are routed by the host's IP routing table, so GRE networks are not associated by OpenStack Networking with specific physical networks.
         Both the openvswitch and linuxbridge plug-ins support VLAN networks, flat networks, and local networks. Only the openvswitch plug-in currently supports GRE networks, provided that the host's Linux kernel supports the required Open vSwitch features.
+
+ Provider attributes
The provider extension extends the OpenStack Networking network resource with the following three additional attributes:
Provider Network Attributes
Attribute name | Type | Default Value | Description
provider:network_type | String | N/A | The physical mechanism by which the virtual network is realized. Possible values are "flat", "vlan", "local", and "gre", corresponding to flat networks, VLAN networks, local networks, and GRE networks as defined above. All types of provider networks can be created by administrators, while tenant networks can be realized as "vlan", "gre", or "local" network types depending on plug-in configuration.
provider:physical_network | String | If a physical network named "default" has been configured, and if provider:network_type is "flat" or "vlan", then "default" is used. | The name of the physical network over which the virtual network is realized for flat and VLAN networks. Not applicable to the "local" or "gre" network types.
provider:segmentation_id | Integer | N/A | For VLAN networks, the VLAN VID on the physical network that realizes the virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the tunnel ID. Valid tunnel IDs are any 32-bit unsigned integer. Not applicable to the "flat" or "local" network types.
+ The provider attributes are returned by OpenStack Networking API operations when the client is authorized for the extension:provider_network:view action through the OpenStack Networking policy configuration. The provider attributes are accepted in network API operations only if the client is authorized for the extension:provider_network:set action. The default OpenStack Networking API policy configuration authorizes both actions for users with the admin role. See the Authentication and authorization section of this chapter for details on policy configuration.
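As an illustration, the default authorization described above corresponds to policy entries along these lines (a minimal sketch only, following the policy.json conventions shown later in this chapter; consult your deployment's policy file for the authoritative rules):
{
    "admin_only": [["role:admin"]],
    "extension:provider_network:view": [["rule:admin_only"]],
    "extension:provider_network:set": [["rule:admin_only"]]
}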
+
+ Provider Extension API operations + To use the provider extension with the default + policy settings, you must have the administrative + role. + The following table shows example neutron + commands that enable you to complete basic + provider extension API operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Basic provider extension API operations
Operation | Command
Shows all attributes of a network, including provider attributes. | $ neutron net-show <name or net-id>
Creates a local provider network. | $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type local
Creates a flat provider network. When you create flat networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details. | $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type flat --provider:physical_network <phys-net-name>
Creates a VLAN provider network. When you create VLAN networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details on configuring network_vlan_ranges to identify all physical networks. When you create VLAN networks, <VID> can fall either within or outside any configured ranges of VLAN IDs from which tenant networks are allocated. | $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network <phys-net-name> --provider:segmentation_id <VID>
Creates a GRE provider network. When you create GRE networks, <tunnel-id> can be either inside or outside any tunnel ID ranges from which tenant networks are allocated. After you create provider networks, you can allocate subnets, which you can use in the same way as other virtual networks, subject to authorization policy based on the specified <tenant_id>. | $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type gre --provider:segmentation_id <tunnel-id>
+
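Putting the table together, a typical admin workflow creates a provider network and then allocates a subnet on it (a sketch; the network name, physical network name physnet1, VID 1000, and CIDR are assumptions for illustration):
$ neutron net-create public-vlan --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000
$ neutron subnet-create public-vlan 203.0.113.0/24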
-
- Common L3 workflow - Create external networks (admin-only) - $ neutron net-create public --router:external=True +
+ L3 Routing and NAT
Just as the core OpenStack Networking API provides abstract L2 network segments that are decoupled from the technology used to implement the L2 network, OpenStack Networking includes an API extension that provides abstract L3 routers that API users can dynamically provision and configure. These OpenStack Networking routers can connect multiple L2 OpenStack Networking networks, and can also provide a "gateway" that connects one or more private L2 networks to a shared "external" network (for example, a public network for access to the Internet). See the OpenStack Configuration Reference for details on common models of deploying Networking L3 routers.
The L3 router provides basic NAT capabilities on "gateway" ports that uplink the router to external networks. This router SNATs all traffic by default, and supports "floating IPs", which create a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router. This allows a tenant to selectively expose VMs on private networks to other hosts on the external network (and often to all hosts on the Internet). Floating IPs can be allocated and then mapped from one OpenStack Networking port to another, as needed.
+ L3 API abstractions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Router
Attribute name | Type | Default Value | Description
id | uuid-str | generated | UUID for the router.
name | String | None | Human-readable name for the router. Might not be unique.
admin_state_up | Bool | True | The administrative state of the router. If false (down), the router does not forward packets.
status | String | N/A | Indicates whether the router is currently operational.
tenant_id | uuid-str | N/A | Owner of the router. Only admin users can specify a tenant_id other than its own.
external_gateway_info | dict containing a 'network_id' key-value pair | Null | External network that this router connects to for gateway services (for example, NAT).
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Floating IP
Attribute name | Type | Default Value | Description
id | uuid-str | generated | UUID for the floating IP.
floating_ip_address | string (IP address) | allocated by OpenStack Networking | The external network IP address available to be mapped to an internal IP address.
floating_network_id | uuid-str | N/A | The network indicating the set of subnets from which the floating IP should be allocated.
router_id | uuid-str | N/A | Read-only value indicating the router that connects the external network to the associated internal port, if a port is associated.
port_id | uuid-str | Null | Indicates the internal OpenStack Networking port associated with the external floating IP.
fixed_ip_address | string (IP address) | Null | Indicates the IP address on the internal port that is mapped to by the floating IP (since an OpenStack Networking port might have more than one IP address).
tenant_id | uuid-str | N/A | Owner of the floating IP. Only admin users can specify a tenant_id other than its own.
+ +
+
+ Basic L3 operations + External networks are visible to all users. + However, the default policy settings enable only + administrative users to create, update, and delete + external networks. + The following table shows example neutron + commands that enable you to complete basic L3 + operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Basic L3 operations
Operation | Command
Creates external networks. |
$ neutron net-create public --router:external=True
$ neutron subnet-create public 172.16.1.0/24
Lists external networks. | $ neutron net-list -- --router:external=True
Creates an internal-only router that connects to multiple L2 networks privately. |
$ neutron net-create net1
$ neutron subnet-create net1 10.0.0.0/24
$ neutron net-create net2
$ neutron subnet-create net2 10.0.1.0/24
$ neutron router-create router1
$ neutron router-interface-add router1 <subnet1-uuid>
$ neutron router-interface-add router1 <subnet2-uuid>
Connects a router to an external network, which enables that router to act as a NAT gateway for external connectivity. The router obtains an interface with the gateway_ip address of the subnet, and this interface is attached to a port on the L2 OpenStack Networking network associated with the subnet. The router also gets a gateway interface to the specified external network. This provides SNAT connectivity to the external network as well as support for floating IPs allocated on that external network. Commonly, an external network maps to a provider network. | $ neutron router-gateway-set router1 <ext-net-id>
Lists routers. | $ neutron router-list
Shows information for a specified router. | $ neutron router-show <router_id>
Shows all internal interfaces for a router. | $ neutron port-list -- --device_id=<router_id>
Identifies the port-id that represents the VM NIC to which the floating IP should map. This port must be on an OpenStack Networking subnet that is attached to a router uplinked to the external network used to create the floating IP. Conceptually, this is because the router must be able to perform the Destination NAT (DNAT) rewriting of packets from the floating IP address (chosen from a subnet on the external network) to the internal fixed IP (chosen from a private subnet that is "behind" the router). | $ neutron port-list -c id -c fixed_ips -- --device_id=<instance_id>
Creates a floating IP address and associates it with a port. |
$ neutron floatingip-create <ext-net-id>
$ neutron floatingip-associate <floatingip-id> <internal VM port-id>
Creates a floating IP address and associates it with a port, in a single step. | $ neutron floatingip-create --port_id <internal VM port-id> <ext-net-id>
Lists floating IPs. | $ neutron floatingip-list
Finds the floating IP for a specified VM port. | $ neutron floatingip-list -- --port_id=ZZZ
Disassociates a floating IP address. | $ neutron floatingip-disassociate <floatingip-id>
Deletes the floating IP address. | $ neutron floatingip-delete <floatingip-id>
Clears the gateway. | $ neutron router-gateway-clear router1
Removes the interfaces from the router. | $ neutron router-interface-delete router1 <subnet-id>
Deletes the router. | $ neutron router-delete router1
+
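To tear down this L3 configuration, run the last four operations in the table in that order: delete the floating IP, clear the gateway, remove the router interfaces, and delete the router. For example, for the router1 used above:
$ neutron floatingip-delete <floatingip-id>
$ neutron router-gateway-clear router1
$ neutron router-interface-delete router1 <subnet-id>
$ neutron router-delete router1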
-
-
- Security groups - Security groups and security group rules allows - administrators and tenants the ability to specify the type - of traffic and direction (ingress/egress) that is allowed - to pass through a port. A security group is a container - for security group rules. - When a port is created in OpenStack Networking it is - associated with a security group. If a security group is - not specified the port will be associated with a 'default' - security group. By default this group will drop all - ingress traffic and allow all egress. Rules can be added - to this group in order to change the behaviour. - If one desires to use the OpenStack Compute security group APIs and/or have OpenStack - Compute orchestrate the creation of new ports for instances on specific security groups, - additional configuration is needed. To enable this, one must configure the following - file /etc/nova/nova.conf and set the config option - security_group_api=neutron on every node running nova-compute and nova-api. After this - change is made restart nova-api and nova-compute in order to pick up this change. After - this change is made one will be able to use both the OpenStack Compute and OpenStack - Network security group API at the same time. - - - To use the OpenStack Compute security group - API with OpenStack Networking, the OpenStack Networking - plugin must implement the security group API. The - following plugins currently implement this: Nicira - NVP, Open vSwitch, Linux Bridge, NEC, and Ryu. - - You must configure the correct firewall driver in the - securitygroup section of the plugin/agent configuration file. - Some plugins and agents, such as Linux Bridge Agent and Open vSwitch Agent, use - the no-operation driver as the default, which results in non-working security - groups. - - When using the security group API through OpenStack - Compute, security groups are applied to all ports on - an instance. The reason for this is that OpenStack - Compute security group APIs are instances based and - not port based as OpenStack Networking. - - -
- Security Group API Abstractions - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Security Group Attributes
Attribute nameTypeDefault ValueDescription
iduuid-strgeneratedUUID for the security group.
nameStringNoneHuman-readable name for the security - group. Might not be unique. Cannot be - named default as that is automatically - created for a tenant.
descriptionStringNoneHuman-readable description of a security - group.
tenant_iduuid-strN/AOwner of the security group. Only admin - users can specify a tenant_id other than - their own.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Security Group Rules
Attribute nameTypeDefault ValueDescription
iduuid-strgeneratedUUID for the security group rule.
security_group_iduuid-str or Integerallocated by OpenStack NetworkingThe security group to associate rule - with.
directionStringN/AThe direction the traffic is allow - (ingress/egress) from a VM.
protocolStringNoneIP Protocol (icmp, tcp, udp, etc).
port_range_minIntegerNonePort at start of range
port_range_maxIntegerNonePort at end of range
ethertypeStringNoneethertype in L2 packet (IPv4, IPv6, - etc)
remote_ip_prefixstring (IP cidr)NoneCIDR for address range
remote_group_iduuid-str or Integerallocated by OpenStack Networking or - OpenStack ComputeSource security group to apply to - rule.
tenant_iduuid-strN/AOwner of the security group rule. Only - admin users can specify a tenant_id other - than its own.
- +
+ Security groups
Security groups and security group rules allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules.
When a port is created in OpenStack Networking, it is associated with a security group. If a security group is not specified, the port is associated with a 'default' security group. By default, this group drops all ingress traffic and allows all egress. You can add rules to this group to change its behavior.
To use the OpenStack Compute security group APIs or use OpenStack Compute to orchestrate the creation of ports for instances on specific security groups, you must complete additional configuration. In the /etc/nova/nova.conf file, set the security_group_api=neutron option on every node that runs nova-compute and nova-api. After you make this change, restart nova-api and nova-compute so that the change takes effect. You can then use both the OpenStack Compute and OpenStack Networking security group APIs at the same time.
Notes:
To use the OpenStack Compute security group API with OpenStack Networking, the OpenStack Networking plug-in must implement the security group API. The following plug-ins currently implement this: Nicira NVP, Open vSwitch, Linux Bridge, NEC, and Ryu.
You must configure the correct firewall driver in the securitygroup section of the plug-in/agent configuration file. Some plug-ins and agents, such as Linux Bridge Agent and Open vSwitch Agent, use the no-operation driver as the default, which results in non-working security groups.
When you use the security group API through OpenStack Compute, security groups are applied to all ports on an instance. This is because OpenStack Compute security group APIs are instance based rather than port based, as in OpenStack Networking.
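For example, the configuration changes described above might look like the following fragments (a sketch; the OVS hybrid driver shown is only one possible firewall driver, and the correct driver path depends on your plug-in and release):
# /etc/nova/nova.conf, on every node that runs nova-compute and nova-api
security_group_api=neutron

# securitygroup section of the plug-in/agent configuration file
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver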
+ Security Group API Abstractions + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Security Group Attributes
Attribute name | Type | Default Value | Description
id | uuid-str | generated | UUID for the security group.
name | String | None | Human-readable name for the security group. Might not be unique. Cannot be named default as that is automatically created for a tenant.
description | String | None | Human-readable description of a security group.
tenant_id | uuid-str | N/A | Owner of the security group. Only admin users can specify a tenant_id other than their own.
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Security Group Rules
Attribute name | Type | Default Value | Description
id | uuid-str | generated | UUID for the security group rule.
security_group_id | uuid-str or Integer | allocated by OpenStack Networking | The security group with which the rule is associated.
direction | String | N/A | The direction in which traffic is allowed (ingress/egress) from a VM.
protocol | String | None | IP protocol (icmp, tcp, udp, and so on).
port_range_min | Integer | None | Port at start of range.
port_range_max | Integer | None | Port at end of range.
ethertype | String | None | Ethertype in L2 packet (IPv4, IPv6, and so on).
remote_ip_prefix | string (IP CIDR) | None | CIDR for address range.
remote_group_id | uuid-str or Integer | allocated by OpenStack Networking or OpenStack Compute | Source security group to apply to the rule.
tenant_id | uuid-str | N/A | Owner of the security group rule. Only admin users can specify a tenant_id other than its own.
+
+
+ Basic security group operations + The following table shows example neutron + commands that enable you to complete basic + security group operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Basic security group operations
Operation | Command
Creates a security group for our web servers. | $ neutron security-group-create webservers --description "security group for webservers"
Lists security groups. | $ neutron security-group-list
Creates a security group rule to allow port 80 ingress. | $ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid>
Lists security group rules. | $ neutron security-group-rule-list
Deletes a security group rule. | $ neutron security-group-rule-delete <security_group_rule_uuid>
Deletes a security group. | $ neutron security-group-delete <security_group_uuid>
Creates a port and associates two security groups. | $ neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id>
Removes security groups from a port. | $ neutron port-update --no-security-groups <port_id>
+
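Building on the rule attributes above, a rule that admits SSH only from members of another security group combines the direction, protocol, port range, and remote group options (a sketch; the --remote-group-id option spelling is an assumption based on the neutron CLI conventions shown in the table):
$ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 22 --port_range_max 22 --remote-group-id <source_group_uuid> <security_group_uuid>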
-
- Common security group commands - Create a security group for our web servers: - $ neutron security-group-create webservers --description "security group for webservers" - Viewing security groups: - $ neutron security-group-list - Creating security group rule to allow port 80 - ingress: - $ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid> - List security group rules: - $ neutron security-group-rule-list - Delete a security group rule: - $ neutron security-group-rule-delete <security_group_rule_uuid> - Delete security group: - $ neutron security-group-delete <security_group_uuid> - Create a port and associated two security - groups: - $ neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id> - Remove security groups from a port: - $ neutron port-update --no-security-groups <port_id> -
-
-
- Load-Balancer-as-a-Service - - The Load-Balancer-as-a-Service API is an - API meant to provision and configure load balancers. - The Havana release offers a reference implementation that is based on - the HAProxy software load balancer. -
- Common Load-Balancer-as-a-Service workflow - Create a load balancer pool using specific provider: - $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name> - --provider is an optional argument; if not used, the pool is created with default provider for LBaaS service. - The default provider however should be configured in the [service_providers] section of neutron.conf file. - If no default provider is specified for LBaaS, the --provider option is mandatory for pool creation. - Associate two web servers with pool: - $ neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool -$ neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool - Create a health monitor which checks to make sure - our instances are still running on the specified - protocol-port: - $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 - Associate health monitor with pool: - $ neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool - Create a Virtual IP Address (VIP) that when accessed - via the load balancer will direct the requests to one - of the pool members: - $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool + Basic Load-Balancer-as-a-Service operations + + The Load-Balancer-as-a-Service (LBaaS) API + provisions and configures load balancers. The + Havana release offers a reference implementation + that is based on the HAProxy software load + balancer. + + The following table shows example neutron commands + that enable you to complete basic LBaaS + operations: + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Basic LBaaS operations
Operation | Command
Creates a load balancer pool by using a specific provider. --provider is an optional argument. If it is omitted, the pool is created with the default provider for the LBaaS service. You should configure the default provider in the [service_providers] section of the neutron.conf file. If no default provider is specified for LBaaS, the --provider option is required for pool creation. | $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name>
Associates two web servers with the pool. |
$ neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool
$ neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool
Creates a health monitor that checks whether the instances are still running on the specified protocol and port. | $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Associates a health monitor with the pool. | $ neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
Creates a virtual IP (VIP) address that, when accessed through the load balancer, directs the requests to one of the pool members. | $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool
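A hedged example of the [service_providers] entry that makes the HAProxy reference implementation the default LBaaS provider (the driver path is an assumption and varies by release; check your distribution's neutron.conf):
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default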
-
-
- Plugin specific extensions - - Each vendor may choose to implement additional API - extensions to the core API. This section describes the - extensions for each plugin. -
- Nicira NVP extensions - The Nicira NVP plugin Extensions -
- Nicira NVP QoS extension - The Nicira NVP QoS extension rate-limits network - ports to guarantee a specific amount of bandwidth - for each port. This extension, by default, is only - accessible by a tenant with an admin role but is - configurable through the - policy.json file. To use - this extension, create a queue and specify the - min/max bandwidth rates (kbps) and optionally set - the QoS Marking and DSCP value (if your network - fabric uses these values to make forwarding - decisions). Once created, you can associate a - queue with a network. Then, when ports are created - on that network they are automatically created and - associated with the specific queue size that was - associated with the network. Because one size - queue for a every port on a network may not be - optimal, a scaling factor from the nova flavor - 'rxtx_factor' is passed in from OpenStack Compute - when creating the port to scale the queue. - Lastly, if you want to set a specific baseline QoS policy for the amount of - bandwidth a single port can use (unless a network queue is specified with the - network a port is created on) a default queue can be created in neutron which - then causes ports created to be associated with a queue of that size times the - rxtx scaling factor. One thing to note is that after a network queue or default - queue is specified this will not add queues to ports previously created and will - only create queues for ports created thereafter. -
- Nicira NVP QoS API abstractions - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + +
Nicira NVP QoS Attributes
Attribute nameTypeDefault ValueDescription
iduuid-strgeneratedUUID for the QoS queue.
defaultBooleanFalse by defaultIf True ports will be created with +
+ Plug-in specific extensions + + Each vendor may choose to implement additional API + extensions to the core API. This section describes the + extensions for each plug-in. +
+ Nicira NVP extensions
This section describes the API extensions that the Nicira NVP plug-in provides.
+ Nicira NVP QoS extension
The Nicira NVP QoS extension rate-limits network ports to guarantee a specific amount of bandwidth for each port. By default, this extension is accessible only to a tenant with an admin role, but this is configurable through the policy.json file. To use this extension, create a queue and specify the min/max bandwidth rates (kbps) and, optionally, set the QoS marking and DSCP value (if your network fabric uses these values to make forwarding decisions). After you create a queue, you can associate it with a network. Ports created on that network are then automatically associated with a queue of the size that was associated with the network. Because one queue size for every port on a network might not be optimal, a scaling factor, the 'rxtx_factor' from the nova flavor, is passed in from OpenStack Compute when the port is created, to scale the queue.
Lastly, if you want to set a specific baseline QoS policy for the amount of bandwidth that a single port can use (unless a network queue is specified for the network on which the port is created), you can create a default queue in neutron; ports are then associated with a queue of that size times the rxtx scaling factor. Note that after a network or default queue is specified, queues are added to ports that are subsequently created but are not added to existing ports.
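For instance, because the queue is scaled by the flavor's rxtx_factor, an administrator could create a flavor whose instances get double-sized queues (a sketch; the flavor name, ID, and sizes are arbitrary):
$ nova flavor-create m1.qos 100 2048 20 1 --rxtx-factor 2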
+ Nicira NVP QoS API abstractions + + + + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - - -
Nicira NVP QoS Attributes
Attribute name | Type | Default Value | Description
id | uuid-str | generated | UUID for the QoS queue.
default | Boolean | False by default | If True, ports are created with this queue size unless the network port is created or associated with a queue at port creation time.
name | String | None | Name for QoS queue.
min | Integer | 0 | Minimum bandwidth rate (kbps).
max | Integer | N/A | Maximum bandwidth rate (kbps).
qos_marking | String | untrusted by default | Whether QoS marking should be trusted or untrusted.
dscp | Integer | 0 | DSCP marking value.
tenant_id | uuid-str | N/A | The owner of the QoS queue.
-
-
- Nicira NVP QoS walkthrough - Create QoS Queue (admin-only) - $ neutron queue-create--min 10 --max 1000 myqueue - Associate queue with a network - $ neutron net-create network --queue_id=<queue_id> - Create default system queue - $ neutron queue-create --default True --min 10 --max 2000 default - List QoS Queues: - $ neutron queue-list - Delete QoS Queue: - $ neutron queue-delete <queue_id or name>' +
+
+
+ Common Nicira NVP QoS operations
The following table shows example neutron commands that enable you to complete basic queue operations:
Basic Nicira NVP QoS operations
Operation | Command
Creates a QoS queue (admin-only). | $ neutron queue-create --min 10 --max 1000 myqueue
Associates a queue with a network. | $ neutron net-create network --queue_id=<queue_id>
Creates a default system queue. | $ neutron queue-create --default True --min 10 --max 2000 default
Lists QoS queues. | $ neutron queue-list
Deletes a QoS queue. | $ neutron queue-delete <queue_id or name>
+
-
-
- Advanced operational features -
- Logging settings - Networking components use Python logging module to do logging. Logging configuration - can be provided in neutron.conf or as command line options. - Command options will override ones in neutron.conf. - Two ways to specify the logging configuration for - OpenStack Networking components: - - - Provide logging settings in a logging configuration file. - Please see Python Logging HOWTO for logging configuration file. - - - Provide logging setting in neutron.conf -[DEFAULT] +
+ Advanced operational features +
+ Logging settings
Networking components use the Python logging module for logging. Logging configuration can be provided in neutron.conf or as command-line options; command-line options override the settings in neutron.conf.
To configure logging for OpenStack Networking components, either provide logging settings in a logging configuration file (see the Python Logging HOWTO for the configuration file format), or provide the logging settings in neutron.conf:
[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
@@ -1914,17 +2432,21 @@ neutron floatingip-associate <floatingip-id> <internal VM port-id>
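For example, to override the file settings at start-up, logging options can be passed on the command line (a sketch; the paths are assumptions for illustration):
$ neutron-server --config-file /etc/neutron/neutron.conf --debug --log-file /var/log/neutron/server.log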
-
- Notifications - Notifications can be sent when Networking resources such as network, subnet and port are created, updated or deleted. -
- Notification options - To support DHCP agent, rpc_notifier driver must be set. To set up the notification, - edit notification options in neutron.conf: -# ============ Notification System Options ===================== + + +
+
+ Notifications
Notifications can be sent when Networking resources, such as networks, subnets, and ports, are created, updated, or deleted.
+ Notification options
To support the DHCP agent, the rpc_notifier driver must be set. To set up the notification, edit the notification options in neutron.conf:
# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
@@ -1949,19 +2471,24 @@ notification_driver = neutron.openstack.common.notifier.rpc_notifier
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications
-
+
+
Setting cases -
- Logging and RPC - The options below will make OpenStack Networking server send notifications via - logging and RPC. The logging options are described in + Logging and RPC + The following options configure the + OpenStack Networking server to send + notifications through logging and RPC. The + logging options are described in OpenStack Configuration - Reference . RPC notifications will go to - 'notifications.info' queue bound to a topic exchange defined by - 'control_exchange' in neutron.conf. -# ============ Notification System Options ===================== + xmlns:html="http://www.w3.org/1999/xhtml" + >OpenStack Configuration + Reference . RPC notifications + go to 'notifications.info' queue bound to a + topic exchange defined by 'control_exchange' + in neutron.conf. + # ============ Notification System Options ===================== # Notifications can be sent when network/subnet/port are create, updated or deleted. # There are three methods of sending notifications: logging (via the @@ -1985,16 +2512,19 @@ default_notification_level = INFO # Defined in rpc_notifier for rpc way, can be comma separated values. # The actual topic names will be %s.%(default_notification_level)s -notification_topics = notifications - +notification_topics = notifications
-
- Multiple RPC topics - The options below will make OpenStack Networking server send notifications to - multiple RPC topics. RPC notifications will go to 'notifications_one.info' and - 'notifications_two.info' queues bound to a topic exchange defined by 'control_exchange' - in neutron.conf. -# ============ Notification System Options ===================== +
+ Multiple RPC topics + The following options configure the + OpenStack Networking server to send + notifications to multiple RPC topics. RPC + notifications go to 'notifications_one.info' + and 'notifications_two.info' queues bound to a + topic exchange defined by 'control_exchange' + in neutron.conf. + # ============ Notification System Options ===================== # Notifications can be sent when network/subnet/port are create, updated or deleted. # There are three methods of sending notifications: logging (via the @@ -2018,116 +2548,135 @@ default_notification_level = INFO # Defined in rpc_notifier for rpc way, can be comma separated values. # The actual topic names will be %s.%(default_notification_level)s -notification_topics = notifications_one,notifications_two - -
+notification_topics = notifications_one,notifications_two
+
+
-
Authentication and authorization
OpenStack Networking uses the OpenStack Identity service (project name keystone) as the default authentication service. When OpenStack Identity is enabled, users who submit requests to the OpenStack Networking service must provide an authentication token in the X-Auth-Token request header. You obtain the token by authenticating with the OpenStack Identity endpoint. For more information about authentication with OpenStack Identity, see the OpenStack Identity documentation. When OpenStack Identity is enabled, it is not mandatory to specify tenant_id for resources in create requests, because the tenant ID is derived from the authentication token.
The default authorization settings allow only administrative users to create resources on behalf of a different tenant. OpenStack Networking uses information received from OpenStack Identity to authorize user requests. OpenStack Networking handles two kinds of authorization policies:
Operation-based policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes.
Resource-based policies specify whether access to a specific resource is granted or not according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in OpenStack Networking might vary from deployment to deployment.
The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to distribution. Entries can be updated while the system is running, and no service restart is required. Every time the policy file is updated, the policies are automatically reloaded. Currently, the only way to update such policies is to edit the policy file.
In this section, the terms policy and rule refer to objects that are specified in the same way in the policy file; there are no syntax differences between a rule and a policy. A policy is something that is matched directly by the OpenStack Networking policy engine, whereas a rule is an element within a policy, which is evaluated. For instance, in create_subnet: [["admin_or_network_owner"]], create_subnet is a policy and admin_or_network_owner is a rule.
Policies are triggered by the OpenStack Networking policy engine whenever one of them matches an OpenStack Networking API operation or a specific attribute used in a given operation. For instance, the create_subnet policy is triggered every time a POST /v2.0/subnets request is sent to the OpenStack Networking server; on the other hand, create_network:shared is triggered every time the shared attribute is explicitly specified (and set to a value different from its default) in a POST /v2.0/networks request. Policies can also be related to specific API extensions; for instance, extension:provider_network:set is triggered if the attributes defined by the Provider Network extensions are specified in an API request.
An authorization policy can be composed of one or more rules.
If more rules are specified, policy evaluation succeeds if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Also, authorization rules are recursive: once a rule is matched, it can be resolved to another rule, until a terminal rule is reached.
The OpenStack Networking policy engine currently defines the following kinds of terminal rules:
Role-based rules evaluate successfully if the user who submits the request has the specified role. For instance, "role:admin" is successful if the user who submits the request is an administrator.
Field-based rules evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance, "field:networks:shared=True" is successful if the shared attribute of the network resource is set to true.
Generic rules compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluate successfully if the comparison is successful. For instance, "tenant_id:%(tenant_id)s" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user who submits the request.
The following is an extract from the default policy.json file:
{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
@@ -2152,25 +2701,30 @@ notification_topics = notifications_one,notifications_two
"delete_port": [["rule:admin_or_owner"]]
}
[1] is a rule that evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (that is, the tenant identifiers are equal).
[2] is the default policy, which is always evaluated if an API operation does not match any of the policies in policy.json.
[3] This policy evaluates successfully if either admin_or_owner or shared evaluates successfully.
[4] This policy restricts the ability to manipulate the shared attribute for a network to administrators only.
[5] This policy restricts the ability to manipulate the mac_address attribute for a port to administrators and the owner of the network to which the port is attached.
In some cases, some operations should be restricted to administrators only. The following example shows how to modify the policy file so that tenants can only define networks and see their resources, while all other operations can be performed only by administrative users:
{ "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [], "default": [["rule:admin_only"]],
@@ -2188,51 +2742,56 @@ notification_topics = notifications_one,notifications_two
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}
High Availability
High availability helps a Networking deployment withstand individual node failures. In general, you can run neutron-server and neutron-dhcp-agent in an active-active fashion. You can run the neutron-l3-agent service only as active/passive, which avoids IP conflicts with respect to gateway IP addresses.
OpenStack Networking High Availability with Pacemaker
You can run some OpenStack Networking services in a cluster (active/passive, or active/active for the OpenStack Networking server only) with Pacemaker.
Download the latest resource agents:
neutron-server: https://github.com/madkiss/openstack-resource-agents
neutron-dhcp-agent: https://github.com/madkiss/openstack-resource-agents
neutron-l3-agent: https://github.com/madkiss/openstack-resource-agents
For information about how to build a cluster, see the Pacemaker documentation.
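As an illustration, a Pacemaker primitive for neutron-server could be defined with the crm shell roughly as follows (a sketch only; the agent name and parameters are assumptions based on the resource agents linked above, so verify them against the agent's metadata):
primitive p_neutron-server ocf:openstack:neutron-server \
    params config="/etc/neutron/neutron.conf" \
    op monitor interval="30s" timeout="30s"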
- Plugin pagination and sorting support + Plug-in pagination and sorting support - - + diff --git a/doc/admin-guide-cloud/pom.xml b/doc/admin-guide-cloud/pom.xml index 23c8e2d067..f0b12b49f1 100644 --- a/doc/admin-guide-cloud/pom.xml +++ b/doc/admin-guide-cloud/pom.xml @@ -42,7 +42,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 goal1 diff --git a/doc/admin-guide-network/pom.xml b/doc/admin-guide-network/pom.xml index c7a1ff02ee..7f8f730c9f 100644 --- a/doc/admin-guide-network/pom.xml +++ b/doc/admin-guide-network/pom.xml @@ -26,7 +26,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 networking-admin-webhelp diff --git a/doc/common/ch_resources.xml b/doc/common/ch_resources.xml index 9c7570e119..6e4ac7cec8 100644 --- a/doc/common/ch_resources.xml +++ b/doc/common/ch_resources.xml @@ -6,7 +6,7 @@ xmlns:m="http://www.w3.org/1998/Math/MathML" xmlns:html="http://www.w3.org/1999/xhtml" xmlns:db="http://docbook.org/ns/docbook" version="5.0" - xml:id="resources" label=""> + xml:id="resources" label="A"> Resources For the available OpenStack documentation, see com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 goal1 diff --git a/doc/glossary/pom.xml b/doc/glossary/pom.xml index f3ff89128c..e4e7843138 100644 --- a/doc/glossary/pom.xml +++ b/doc/glossary/pom.xml @@ -55,7 +55,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 goal1 diff --git a/doc/high-availability-guide/pom.xml b/doc/high-availability-guide/pom.xml index 6dafeee442..3fd256686b 100644 --- a/doc/high-availability-guide/pom.xml +++ b/doc/high-availability-guide/pom.xml @@ -22,7 +22,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 generate-webhelp diff --git a/doc/image-guide/pom.xml b/doc/image-guide/pom.xml index 0e1cff7bf2..c37af6c81a 100644 --- a/doc/image-guide/pom.xml +++ b/doc/image-guide/pom.xml @@ -25,7 +25,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 generate-webhelp diff --git a/doc/install-guide/pom.xml b/doc/install-guide/pom.xml index 8d7233935a..ba66a441c2 100644 --- a/doc/install-guide/pom.xml +++ b/doc/install-guide/pom.xml @@ -27,7 +27,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 diff --git a/doc/security-guide/pom.xml b/doc/security-guide/pom.xml index a02e5f3210..24bed7b9fd 100644 --- a/doc/security-guide/pom.xml +++ b/doc/security-guide/pom.xml @@ -17,7 +17,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 security-guide diff --git a/doc/training-guide/pom.xml b/doc/training-guide/pom.xml index 017a776d6e..e88566c2d7 100644 --- a/doc/training-guide/pom.xml +++ b/doc/training-guide/pom.xml @@ -18,7 +18,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 training-guide diff --git a/doc/user-guide-admin/pom.xml b/doc/user-guide-admin/pom.xml index 84cc35eea7..1d5258dac6 100644 --- a/doc/user-guide-admin/pom.xml +++ b/doc/user-guide-admin/pom.xml @@ -20,7 +20,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0 diff --git a/doc/user-guide-admin/section_dashboard_admin_manage_projects_users.xml b/doc/user-guide-admin/section_dashboard_admin_manage_projects_users.xml index c49c0cdb71..36c2e62363 100644 --- a/doc/user-guide-admin/section_dashboard_admin_manage_projects_users.xml +++ b/doc/user-guide-admin/section_dashboard_admin_manage_projects_users.xml @@ -169,7 +169,8 @@ diff --git a/doc/user-guide/pom.xml b/doc/user-guide/pom.xml index 724443da9e..0888f38bb4 100644 --- a/doc/user-guide/pom.xml +++ b/doc/user-guide/pom.xml @@ -20,7 
+20,7 @@ com.rackspace.cloud.api clouddocs-maven-plugin - 1.9.2 + 1.10.0
Plug-ins that support native pagination and sorting
Plug-in | Support Native Pagination | Support Native Sorting