[cloud-admin-guide] Convert networking files to RST

Converting the following files:
1. section_networking_adv_features.xml
2. section_networking-config-identity.xml

Implements: blueprint reorganise-user-guides.

Change-Id: I8a4bf854246eb8282af31404315ced804dbcaf84
This commit is contained in:
asettle 2015-07-15 08:10:09 +10:00
parent b322243fb4
commit b9b274dfa0
4 changed files with 1125 additions and 4 deletions


@ -12,13 +12,11 @@ advanced ``neutron`` and ``nova`` command-line interface (CLI) commands.
networking_config-plugins.rst
networking_config-agents.rst
networking_arch.rst
networking_config-identity.rst
networking_adv-config.rst
networking_multi-dhcp-agents.rst
networking_use.rst
networking_adv-features.rst
networking_adv-operational-features.rst
networking_auth.rst
.. TODO (asettle)
networking_adv-features.rst
networking_config-identity.rst


@ -0,0 +1,839 @@
========================================
Advanced features through API extensions
========================================
Several plug-ins implement API extensions that provide capabilities
similar to what was available in nova-network. These plug-ins are likely
to be of interest to the OpenStack community.
Provider networks
~~~~~~~~~~~~~~~~~
Networks can be categorized as either tenant networks or provider
networks. Tenant networks are created by normal users and details about
how they are physically realized are hidden from those users. Provider
networks are created with administrative credentials, specifying the
details of how the network is physically realized, usually to match some
existing network in the data center.
Provider networks enable cloud administrators to create Networking
networks that map directly to the physical networks in the data center.
This is commonly used to give tenants direct access to a public network
that can be used to reach the Internet. It might also be used to
integrate with VLANs in the network that already have a defined meaning
(for example, enable a VM from the marketing department to be placed
on the same VLAN as bare-metal marketing hosts in the same data center).
The provider extension allows administrators to explicitly manage the
relationship between Networking virtual networks and underlying physical
mechanisms such as VLANs and tunnels. When this extension is supported,
Networking client users with administrative privileges see additional
provider attributes on all virtual networks and are able to specify
these attributes in order to create provider networks.
The provider extension is supported by the Open vSwitch and Linux Bridge
plug-ins. Configuration of these plug-ins requires familiarity with this
extension.
Terminology
-----------
A number of terms are used in the provider extension and in the
configuration of plug-ins supporting the provider extension:
**Provider extension terminology**
+----------------------+-----------------------------------------------------+
| Term | Description |
+======================+=====================================================+
| **virtual network** |A Networking L2 network (identified by a UUID and  |
| |optional name) whose ports can be attached as vNICs |
| |to Compute instances and to various Networking |
| |agents. The Open vSwitch and Linux Bridge plug-ins |
| |each support several different mechanisms to |
| |realize virtual networks. |
+----------------------+-----------------------------------------------------+
| **physical network** |A network connecting virtualization hosts (such as |
| |compute nodes) with each other and with other |
| |network resources. Each physical network might |
| |support multiple virtual networks. The provider |
| |extension and the plug-in configurations identify |
| |physical networks using simple string names. |
+----------------------+-----------------------------------------------------+
| **tenant network** |A virtual network that a tenant or an administrator |
| |creates. The physical details of the network are not |
| |exposed to the tenant. |
+----------------------+-----------------------------------------------------+
| **provider network** | A virtual network administratively created to map to|
| | a specific network in the data center, typically to |
| | enable direct access to non-OpenStack resources on |
| | that network. Tenants can be given access to |
| | provider networks. |
+----------------------+-----------------------------------------------------+
| **VLAN network** | A virtual network implemented as packets on a |
| | specific physical network containing IEEE 802.1Q |
| | headers with a specific VID field value. VLAN |
| | networks sharing the same physical network are |
| | isolated from each other at L2 and can even have |
| | overlapping IP address spaces. Each distinct |
| | physical network supporting VLAN networks is |
| | treated as a separate VLAN trunk, with a distinct |
| | space of VID values. Valid VID values are 1 |
| | through 4094. |
+----------------------+-----------------------------------------------------+
| **flat network** | A virtual network implemented as packets on a |
| | specific physical network containing no IEEE 802.1Q |
| | header. Each physical network can realize at most |
| | one flat network. |
+----------------------+-----------------------------------------------------+
| **local network** | A virtual network that allows communication within |
| | each host, but not across a network. Local networks |
| | are intended mainly for single-node test scenarios, |
| | but can have other uses. |
+----------------------+-----------------------------------------------------+
| **GRE network** | A virtual network implemented as network packets |
| | encapsulated using GRE. GRE networks are also |
| | referred to as *tunnels*. GRE tunnel packets are |
| | routed by the IP routing table for the host, so |
| | GRE networks are not associated by Networking with |
| | specific physical networks. |
+----------------------+-----------------------------------------------------+
| **Virtual Extensible | |
| LAN (VXLAN) network**| VXLAN is a proposed encapsulation protocol for |
| | running an overlay network on existing Layer 3 |
| | infrastructure. An overlay network is a virtual |
| | network that is built on top of existing network |
| | Layer 2 and Layer 3 technologies to support elastic |
| | compute architectures. |
+----------------------+-----------------------------------------------------+
The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks,
flat networks, and local networks. Only the ML2 and Open vSwitch
plug-ins currently support GRE and VXLAN networks, provided that the
required features exist in the host's Linux kernel, Open vSwitch, and
iproute2 packages.
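As a minimal sketch of the plug-in side (the physical network name
``physnet1`` and the VLAN range are illustrative values, not defaults),
an ML2 configuration that makes flat and VLAN provider networks
available might look like this:

.. code-block:: ini

   [ml2]
   # Network types that can be requested through the provider extension
   type_drivers = flat,vlan,gre,vxlan
   # Network type used for tenant networks
   tenant_network_types = vlan

   [ml2_type_flat]
   # Physical networks on which flat provider networks can be created
   flat_networks = physnet1

   [ml2_type_vlan]
   # Physical network name and the VLAN ID range available on it
   network_vlan_ranges = physnet1:1000:2999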
Provider attributes
-------------------
The provider extension extends the Networking network resource with
these attributes:
.. list-table:: **Provider network attributes**
:widths: 10 10 10 49
:header-rows: 1
* - Attribute name
- Type
- Default Value
- Description
* - provider:network_type
- String
- N/A
- The physical mechanism by which the virtual network is implemented.
Possible values are ``flat``, ``vlan``, ``local``, ``gre``, and
``vxlan``, corresponding to flat networks, VLAN networks, local
networks, GRE networks, and VXLAN networks as defined above.
All types of provider networks can be created by administrators,
while tenant networks can be implemented as ``vlan``, ``gre``,
``vxlan``, or ``local`` network types depending on plug-in
configuration.
* - provider:physical_network
- String
- If a physical network named "default" has been configured and
if provider:network_type is ``flat`` or ``vlan``, then "default"
is used.
- The name of the physical network over which the virtual network
is implemented for flat and VLAN networks. Not applicable to the
``local`` or ``gre`` network types.
* - provider:segmentation_id
- Integer
- N/A
- For VLAN networks, the VLAN VID on the physical network that
realizes the virtual network. Valid VLAN VIDs are 1 through 4094.
For GRE networks, the tunnel ID. Valid tunnel IDs are any 32-bit
unsigned integer. Not applicable to the ``flat`` or ``local``
network types.
To view or set provider extended attributes, a client must be authorized
for the ``extension:provider_network:view`` and
``extension:provider_network:set`` actions in the Networking policy
configuration. The default Networking configuration authorizes both
actions for users with the admin role. An authorized client or an
administrative user can view and set the provider extended attributes
through Networking API calls. See the section called
:ref:`Authentication and authorization` for details on policy configuration.
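For example, an administrator might create a shared provider network
that maps to an existing data center VLAN; the network name, physical
network name, and VLAN ID below are illustrative:

.. code-block:: console

   $ neutron net-create datacenter-vlan --shared \
     --provider:network_type vlan \
     --provider:physical_network physnet1 \
     --provider:segmentation_id 1000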
L3 routing and NAT
~~~~~~~~~~~~~~~~~~
The Networking API provides abstract L2 network segments that are
decoupled from the technology used to implement the L2 network.
Networking includes an API extension that provides abstract L3 routers
that API users can dynamically provision and configure. These Networking
routers can connect multiple L2 Networking networks and can also provide
a gateway that connects one or more private L2 networks to a shared
external network (for example, a public network for access to the
Internet). See the *OpenStack Configuration Reference* for details on
common models of deploying Networking L3 routers.
The L3 router provides basic NAT capabilities on gateway ports that
uplink the router to external networks. This router SNATs all traffic by
default and supports floating IPs, which create a static one-to-one
mapping from a public IP on the external network to a private IP on one
of the other subnets attached to the router. This allows a tenant to
selectively expose VMs on private networks to other hosts on the
external network (and often to all hosts on the Internet). You can
allocate and map floating IPs from one port to another, as needed.
Basic L3 operations
-------------------
External networks are visible to all users. However, the default policy
settings enable only administrative users to create, update, and delete
external networks.
This table shows example neutron commands that enable you to complete
basic L3 operations:
.. list-table:: **Basic L3 Operations**
:widths: 30 50
:header-rows: 1
* - Operation
- Command
* - Creates external networks.
- .. code-block:: console
$ neutron net-create public --router:external True
$ neutron subnet-create public 172.16.1.0/24
* - Lists external networks.
- .. code-block:: console
$ neutron net-list -- --router:external True
* - Creates an internal-only router that connects to multiple L2 networks privately.
- .. code-block:: console
$ neutron net-create net1
$ neutron subnet-create net1 10.0.0.0/24
$ neutron net-create net2
$ neutron subnet-create net2 10.0.1.0/24
$ neutron router-create router1
$ neutron router-interface-add router1 SUBNET1_UUID
$ neutron router-interface-add router1 SUBNET2_UUID
* - Connects a router to an external network, which enables that router to
act as a NAT gateway for external connectivity.
- .. code-block:: console
$ neutron router-gateway-set router1 EXT_NET_ID
The router obtains an interface with the gateway_ip address of the
subnet and this interface is attached to a port on the L2 Networking
network associated with the subnet. The router also gets a gateway
interface to the specified external network. This provides SNAT
connectivity to the external network as well as support for floating
IPs allocated on that external network. Commonly, an external network
maps to a network in the provider.
* - Lists routers.
- .. code-block:: console
$ neutron router-list
* - Shows information for a specified router.
- .. code-block:: console
$ neutron router-show ROUTER_ID
* - Shows all internal interfaces for a router.
- .. code-block:: console
$ neutron router-port-list ROUTER_ID
$ neutron router-port-list ROUTER_NAME
* - Identifies the PORT_ID that represents the VM NIC to which the floating
IP should map.
- .. code-block:: console
$ neutron port-list -c id -c fixed_ips -- --device_id INSTANCE_ID
This port must be on a Networking subnet that is attached to
a router uplinked to the external network used to create the floating
IP. Conceptually, this is because the router must be able to perform the
Destination NAT (DNAT) rewriting of packets from the floating IP address
(chosen from a subnet on the external network) to the internal fixed
IP (chosen from a private subnet that is behind the router).
* - Creates a floating IP address and associates it with a port.
- .. code-block:: console
$ neutron floatingip-create EXT_NET_ID
$ neutron floatingip-associate FLOATING_IP_ID INTERNAL_VM_PORT_ID
* - Creates a floating IP address and associates it with a port, in a single step.
- .. code-block:: console
$ neutron floatingip-create --port_id INTERNAL_VM_PORT_ID EXT_NET_ID
* - Lists floating IPs.
- .. code-block:: console
$ neutron floatingip-list
* - Finds floating IP for a specified VM port.
- .. code-block:: console
$ neutron floatingip-list -- --port_id PORT_ID
* - Disassociates a floating IP address.
- .. code-block:: console
$ neutron floatingip-disassociate FLOATING_IP_ID
* - Deletes the floating IP address.
- .. code-block:: console
$ neutron floatingip-delete FLOATING_IP_ID
* - Clears the gateway.
- .. code-block:: console
$ neutron router-gateway-clear router1
* - Removes the interfaces from the router.
- .. code-block:: console
$ neutron router-interface-delete router1 SUBNET_ID
* - Deletes the router.
- .. code-block:: console
$ neutron router-delete router1
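Taken together, these operations combine into a typical sequence that
gives a single VM external connectivity; the placeholders follow the
same conventions as the table:

.. code-block:: console

   $ neutron router-create router1
   $ neutron router-interface-add router1 SUBNET1_UUID
   $ neutron router-gateway-set router1 EXT_NET_ID
   $ neutron port-list -c id -c fixed_ips -- --device_id INSTANCE_ID
   $ neutron floatingip-create --port_id INTERNAL_VM_PORT_ID EXT_NET_ID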
Security groups
~~~~~~~~~~~~~~~
Security groups and security group rules allow administrators and
tenants to specify the type of traffic and its direction
(ingress/egress) that is allowed to pass through a port. A security
group is a container for security group rules.
When a port is created in Networking, it is associated with a security
group. If a security group is not specified, the port is associated with
a ``default`` security group. By default, this group drops all ingress
traffic and allows all egress traffic. Rules can be added to this group
to change the behavior.
To use the Compute security group APIs or use Compute to orchestrate the
creation of ports for instances on specific security groups, you must
complete additional configuration. You must configure the
:file:`/etc/nova/nova.conf` file and set the ``security_group_api=neutron``
option on every node that runs nova-compute and nova-api. After you make
this change, restart nova-api and nova-compute to pick it up.
Then, you can use both the Compute and OpenStack Network security group
APIs at the same time.
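For example, the relevant :file:`nova.conf` entry is:

.. code-block:: ini

   [DEFAULT]
   # Proxy all security group calls from Compute to Networking
   security_group_api = neutron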
.. note::
- To use the Compute security group API with Networking, the
Networking plug-in must implement the security group API. The
following plug-ins currently implement this: ML2, Open vSwitch,
Linux Bridge, NEC, and VMware NSX.
- You must configure the correct firewall driver in the
``securitygroup`` section of the plug-in/agent configuration
file. Some plug-ins and agents, such as the Linux Bridge agent and
Open vSwitch agent, use the no-operation driver as the default,
which results in non-working security groups (see the configuration
sketch after this note).
- When using the security group API through Compute, security
groups are applied to all ports on an instance. This is because
the Compute security group APIs are instance-based rather than
port-based, as they are in Networking.
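As a sketch, a working ``securitygroup`` section for the Open vSwitch
agent might look like the following; the hybrid iptables driver shown
here is one common choice, and the exact driver class depends on your
plug-in and release:

.. code-block:: ini

   [securitygroup]
   # Replace the no-operation default with an iptables-based driver
   firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
   enable_security_group = True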
Basic security group operations
-------------------------------
This table shows example neutron commands that enable you to complete
basic security group operations:
.. list-table:: **Basic security group operations**
:widths: 30 50
:header-rows: 1
* - Operation
- Command
* - Creates a security group for our web servers.
- .. code::
$ neutron security-group-create webservers --description "security group for webservers"
* - Lists security groups.
- .. code::
$ neutron security-group-list
* - Creates a security group rule to allow port 80 ingress.
- .. code::
$ neutron security-group-rule-create --direction ingress \
--protocol tcp --port_range_min 80 --port_range_max 80 SECURITY_GROUP_UUID
* - Lists security group rules.
- .. code::
$ neutron security-group-rule-list
* - Deletes a security group rule.
- .. code::
$ neutron security-group-rule-delete SECURITY_GROUP_RULE_UUID
* - Deletes a security group.
- .. code::
$ neutron security-group-delete SECURITY_GROUP_UUID
* - Creates a port and associates two security groups.
- .. code::
$ neutron port-create --security-group SECURITY_GROUP_ID1 --security-group SECURITY_GROUP_ID2 NETWORK_ID
* - Removes security groups from a port.
- .. code::
$ neutron port-update --no-security-groups PORT_ID
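As a usage sketch building on these operations, the following rules
allow ICMP from anywhere and SSH only from a management subnet; the
CIDR is illustrative:

.. code-block:: console

   $ neutron security-group-rule-create --direction ingress \
     --protocol icmp SECURITY_GROUP_UUID
   $ neutron security-group-rule-create --direction ingress \
     --protocol tcp --port_range_min 22 --port_range_max 22 \
     --remote-ip-prefix 192.0.2.0/24 SECURITY_GROUP_UUID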
Basic Load-Balancer-as-a-Service operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
The Load-Balancer-as-a-Service (LBaaS) API provisions and configures
load balancers. The reference implementation is based on the HAProxy
software load balancer.
This list shows example neutron commands that enable you to complete
basic LBaaS operations:
- Creates a load balancer pool by using a specific provider.
:option:`--provider` is an optional argument. If not used, the pool is
created with the default provider for the LBaaS service. You should
configure the default provider in the ``[service_providers]`` section
of the :file:`neutron.conf` file, as sketched after this list. If no
default provider is specified for LBaaS, the :option:`--provider`
parameter is required for pool creation.
.. code:: console
$ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \
--protocol HTTP --subnet-id SUBNET_UUID --provider PROVIDER_NAME
- Associates two web servers with pool.
.. code:: console
$ neutron lb-member-create --address WEBSERVER1_IP --protocol-port 80 mypool
$ neutron lb-member-create --address WEBSERVER2_IP --protocol-port 80 mypool
- Creates a health monitor that checks to make sure our instances are
still running on the specified protocol-port.
.. code:: console
$ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
- Associates a health monitor with pool.
.. code:: console
$ neutron lb-healthmonitor-associate HEALTHMONITOR_UUID mypool
- Creates a virtual IP (VIP) address that, when accessed through the
load balancer, directs the requests to one of the pool members.
.. code:: console
$ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP \
--subnet-id SUBNET_UUID mypool
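The ``[service_providers]`` entry in :file:`neutron.conf` mentioned
above might look like the following sketch; ``HAPROXY_DRIVER_CLASS`` is
a placeholder for the driver class path, which varies by release:

.. code-block:: ini

   [service_providers]
   # Format: service_type:name:driver_class[:default]
   service_provider = LOADBALANCER:Haproxy:HAPROXY_DRIVER_CLASS:default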
Plug-in specific extensions
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Each vendor can choose to implement additional API extensions to the
core API. This section describes the extensions for each plug-in.
VMware NSX extensions
---------------------
These sections explain NSX plug-in extensions.
VMware NSX QoS extension
^^^^^^^^^^^^^^^^^^^^^^^^
The VMware NSX QoS extension rate-limits network ports to guarantee a
specific amount of bandwidth for each port. This extension, by default,
is only accessible by a tenant with an admin role but is configurable
through the :file:`policy.json` file. To use this extension, create a queue
and specify the min/max bandwidth rates (kbps) and optionally set the
QoS Marking and DSCP value (if your network fabric uses these values to
make forwarding decisions). Once created, you can associate a queue with
a network. Then, when ports are created on that network, they are
automatically associated with the queue size that was associated with
the network. Because one queue size for every port
on a network might not be optimal, a scaling factor from the nova flavor
``rxtx_factor`` is passed in from Compute when creating the port to scale
the queue.
Lastly, if you want to set a specific baseline QoS policy for the amount
of bandwidth a single port can use (unless a network queue is specified
for the network a port is created on), you can create a default queue in
Networking, which then causes ports to be associated with a queue
of that size times the rxtx scaling factor. Note that after a network or
default queue is specified, queues are added to ports that are
subsequently created but are not added to existing ports.
Basic VMware NSX QoS operations
'''''''''''''''''''''''''''''''
This table shows example neutron commands that enable you to complete
basic queue operations:
.. list-table:: **Basic VMware NSX QoS operations**
:widths: 30 50
:header-rows: 1
* - Operation
- Command
* - Creates QoS queue (admin-only).
- .. code::
$ neutron queue-create --min 10 --max 1000 myqueue
* - Associates a queue with a network.
- .. code::
$ neutron net-create network --queue_id QUEUE_ID
* - Creates a default system queue.
- .. code::
$ neutron queue-create --default True --min 10 --max 2000 default
* - Lists QoS queues.
- .. code::
$ neutron queue-list
* - Deletes a QoS queue.
- .. code::
$ neutron queue-delete QUEUE_ID_OR_NAME
VMware NSX provider networks extension
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Provider networks can be implemented in different ways by the underlying
NSX platform.
The *FLAT* and *VLAN* network types use bridged transport connectors.
These network types enable the attachment of a large number of ports. To
handle the increased scale, the NSX plug-in can back a single OpenStack
Network with a chain of NSX logical switches. You can specify the
maximum number of ports on each logical switch in this chain with the
``max_lp_per_bridged_ls`` parameter, which has a default value of 5,000.
The recommended value for this parameter varies with the NSX version
running in the back-end, as shown in the following table.
**Recommended values for max_lp_per_bridged_ls**
+---------------+---------------------+
| NSX version | Recommended Value |
+===============+=====================+
| 2.x | 64 |
+---------------+---------------------+
| 3.0.x | 5,000 |
+---------------+---------------------+
| 3.1.x | 5,000 |
+---------------+---------------------+
| 3.2.x | 10,000 |
+---------------+---------------------+
In addition to these network types, the NSX plug-in also supports a
special *l3_ext* network type, which maps external networks to specific
NSX gateway services as discussed in the next section.
VMware NSX L3 extension
^^^^^^^^^^^^^^^^^^^^^^^
NSX exposes its L3 capabilities through gateway services which are
usually configured out of band from OpenStack. To use NSX with L3
capabilities, first create an L3 gateway service in the NSX Manager.
Next, in :file:`/etc/neutron/plugins/vmware/nsx.ini` set
``default_l3_gw_service_uuid`` to this value. By default, routers are
mapped to this gateway service.
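A minimal sketch of the corresponding :file:`nsx.ini` entry; the option
typically lives in the ``[DEFAULT]`` section, and the placeholder
stands for the UUID of the gateway service created in the NSX Manager:

.. code-block:: ini

   [DEFAULT]
   default_l3_gw_service_uuid = L3_GATEWAY_SERVICE_UUID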
VMware NSX L3 extension operations
''''''''''''''''''''''''''''''''''
Create external network and map it to a specific NSX gateway service:
.. code:: console
$ neutron net-create public --router:external True --provider:network_type l3_ext \
--provider:physical_network L3_GATEWAY_SERVICE_UUID
Terminate traffic on a specific VLAN from an NSX gateway service:
.. code:: console
$ neutron net-create public --router:external True --provider:network_type l3_ext \
--provider:physical_network L3_GATEWAY_SERVICE_UUID --provider:segmentation_id VLAN_ID
Operational status synchronization in the VMware NSX plug-in
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Starting with the Havana release, the VMware NSX plug-in provides an
asynchronous mechanism for retrieving the operational status for neutron
resources from the NSX back-end; this applies to *network*, *port*, and
*router* resources.
The back-end is polled periodically and the status for every resource is
retrieved; then the status in the Networking database is updated only
for the resources for which a status change occurred. As operational
status is now retrieved asynchronously, performance for ``GET``
operations is consistently improved.
Data to retrieve from the back-end are divided into chunks in order to
avoid expensive API requests; this is achieved by leveraging the
response paging capabilities of the NSX API. The minimum chunk size can
be specified using a configuration option; the actual chunk size is then
determined dynamically according to the total number of resources to
retrieve, the interval between two synchronization task runs, and the
minimum delay between two subsequent requests to the NSX back-end.
The operational status synchronization can be tuned or disabled using
the configuration options reported in this table; it is however worth
noting that the default values work fine in most cases.
.. list-table:: **Configuration options for tuning operational status synchronization in the NSX plug-in**
:widths: 10 10 10 10 35
:header-rows: 1
* - Option name
- Group
- Default value
- Type and constraints
- Notes
* - ``state_sync_interval``
- ``nsx_sync``
- 10 seconds
- Integer; no constraint.
- Interval in seconds between two runs of the synchronization task. If the
synchronization task takes more than ``state_sync_interval`` seconds to
execute, a new instance of the task is started as soon as the other is
completed. Setting the value for this option to 0 will disable the
synchronization task.
* - ``max_random_sync_delay``
- ``nsx_sync``
- 0 seconds
- Integer. Must not exceed ``min_sync_req_delay``
- When different from zero, a random delay between 0 and
``max_random_sync_delay`` will be added before processing the next
chunk.
* - ``min_sync_req_delay``
- ``nsx_sync``
- 1 second
- Integer. Must not exceed ``state_sync_interval``.
- The value of this option can be tuned according to the observed
load on the NSX controllers. Lower values will result in faster
synchronization, but might increase the load on the controller cluster.
* - ``min_chunk_size``
- ``nsx_sync``
- 500 resources
- Integer; no constraint.
- Minimum number of resources to retrieve from the back-end for each
synchronization chunk. The expected number of synchronization chunks
is given by the ratio between ``state_sync_interval`` and
``min_sync_req_delay``. The size of a chunk might increase if the
total number of resources is such that more than ``min_chunk_size``
resources must be fetched in one chunk with the current number of
chunks.
* - ``always_read_status``
- ``nsx_sync``
- False
- Boolean; no constraint.
- When this option is enabled, the operational status will always be
retrieved from the NSX back-end at every ``GET`` request. In this
case it is advisable to disable the synchronization task.
When running multiple OpenStack Networking server instances, the status
synchronization task should not run on every node; doing so sends
unnecessary traffic to the NSX back-end and performs unnecessary DB
operations. Set the ``state_sync_interval`` configuration option to a
non-zero value exclusively on a node designated for back-end status
synchronization.
The ``fields=status`` parameter in Networking API requests always
triggers an explicit query to the NSX back end, even when you enable
asynchronous state synchronization. For example, ``GET
/v2.0/networks/NET_ID?fields=status&fields=name``.
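For example, on the single neutron-server node designated to run
status synchronization, the defaults from the table translate into the
following sketch; on all other nodes, set ``state_sync_interval`` to 0:

.. code-block:: ini

   [nsx_sync]
   state_sync_interval = 10
   max_random_sync_delay = 0
   min_sync_req_delay = 1
   min_chunk_size = 500
   always_read_status = False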
Big Switch plug-in extensions
-----------------------------
This section explains the Big Switch neutron plug-in-specific extension.
Big Switch router rules
^^^^^^^^^^^^^^^^^^^^^^^
Big Switch allows router rules to be added to each tenant router. These
rules can be used to enforce routing policies such as denying traffic
between subnets or traffic to external networks. By enforcing these at
the router level, network segmentation policies can be enforced across
many VMs that have differing security groups.
Router rule attributes
''''''''''''''''''''''
Each tenant router has a set of router rules associated with it. Each
router rule has the attributes in this table. Router rules and their
attributes can be set using the :command:`neutron router-update` command,
through the horizon interface, or through the Networking API.
.. list-table:: **Big Switch Router rule attributes**
:widths: 10 10 10 35
:header-rows: 1
* - Attribute name
- Required
- Input type
- Description
* - source
- Yes
- A valid CIDR or one of the keywords 'any' or 'external'
- The network that a packet's source IP must match for the
rule to be applied.
* - destination
- Yes
- A valid CIDR or one of the keywords 'any' or 'external'
- The network that a packet's destination IP must match for the rule to
be applied.
* - action
- Yes
- 'permit' or 'deny'
- Determines whether the matched packets will be allowed to cross the
router.
* - nexthop
- No
- A plus-separated (+) list of next-hop IP addresses. For example,
``1.1.1.1+1.1.1.2``.
- Overrides the default virtual router used to handle traffic for packets
that match the rule.
Order of rule processing
''''''''''''''''''''''''
The order of router rules has no effect. Overlapping rules are evaluated
using longest prefix matching on the source and destination fields. The
source field is matched first, so it always takes precedence over
the destination field. In other words, longest prefix matching is used
on the destination field only if there are multiple matching rules with
the same source.
Big Switch router rules operations
''''''''''''''''''''''''''''''''''
Router rules are configured with a router update operation in OpenStack
Networking. The update overrides any previous rules so all rules must be
provided at the same time.
Update a router with rules to permit traffic by default but block
traffic from external networks to the 10.10.10.0/24 subnet:
.. code:: console
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
source=any,destination=any,action=permit \
source=external,destination=10.10.10.0/24,action=deny
Specify alternate next-hop addresses for a specific subnet:
.. code:: console
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
source=any,destination=any,action=permit \
source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
Block traffic between two subnets while allowing everything else:
.. code:: console
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
source=any,destination=any,action=permit \
source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
L3 metering
~~~~~~~~~~~
The L3 metering API extension enables administrators to configure IP
ranges and assign a specified label to them to be able to measure
traffic that goes through a virtual router.
The L3 metering extension is decoupled from the technology that
implements the measurement. Two abstractions have been added: the
metering label and the metering rule; a metering label can contain
metering rules. Because a metering label is associated with a tenant,
all virtual routers in this tenant are associated with this label.
Basic L3 metering operations
----------------------------
Only administrators can manage the L3 metering labels and rules.
This table shows example :command:`neutron` commands that enable you to
complete basic L3 metering operations:
.. list-table:: **Basic L3 metering operations**
:widths: 20 50
:header-rows: 1
* - Operation
- Command
* - Creates a metering label.
- .. code::
$ neutron meter-label-create LABEL1 --description "DESCRIPTION_LABEL1"
* - Lists metering labels.
- .. code::
$ neutron meter-label-list
* - Shows information for a specified label.
- .. code::
$ neutron meter-label-show LABEL_UUID
$ neutron meter-label-show LABEL1
* - Deletes a metering label.
- .. code::
$ neutron meter-label-delete LABEL_UUID
$ neutron meter-label-delete LABEL1
* - Creates a metering rule.
- .. code::
$ neutron meter-label-rule-create LABEL_UUID CIDR --direction DIRECTION
--excluded
For example:
.. code::
$ neutron meter-label-rule-create label1 10.0.0.0/24 --direction ingress
$ neutron meter-label-rule-create label1 20.0.0.0/24 --excluded
* - Lists all metering label rules.
- .. code::
$ neutron meter-label-rule-list
* - Shows information for a specified label rule.
- .. code::
$ neutron meter-label-rule-show RULE_UUID
* - Deletes a metering label rule.
- .. code::
$ neutron meter-label-rule-delete RULE_UUID
* - Lists the measured values for created metering label rules.
- .. code::
$ ceilometer sample-list -m SNMP_MEASUREMENT
For example:
.. code::
$ ceilometer sample-list -m hardware.network.bandwidth.bytes
$ ceilometer sample-list -m hardware.network.incoming.bytes
$ ceilometer sample-list -m hardware.network.outgoing.bytes
$ ceilometer sample-list -m hardware.network.outgoing.errors


@ -1,3 +1,5 @@
.. _Authentication and authorization:
================================
Authentication and authorization
================================


@ -0,0 +1,282 @@
=========================================
Configure Identity service for Networking
=========================================
**To configure the Identity service for use with Networking**
#. Create the ``get_id()`` function
The ``get_id()`` function stores the ID of created objects, and removes
the need to copy and paste object IDs in later steps:
a. Add the following function to your :file:`.bashrc` file:
.. code:: ini
function get_id () {
echo `"$@" | awk '/ id / { print $4 }'`
}
b. Source the :file:`.bashrc` file:
.. code:: console
$ source .bashrc
#. Create the Networking service entry
Networking must be available in the Compute service catalog. Create the
service:
.. code:: console
$ NEUTRON_SERVICE_ID=$(get_id keystone service-create --name neutron \
--type network --description 'OpenStack Networking Service')
#. Create the Networking service endpoint entry
The way that you create a Networking endpoint entry depends on whether
you are using the SQL or the template catalog driver:
- If you use the *SQL driver*, run the following command with the
specified region (``$REGION``), IP address of the Networking server
(``$IP``), and service ID (``$NEUTRON_SERVICE_ID``, obtained in the
previous step).
.. code:: console
$ keystone endpoint-create --region $REGION --service-id $NEUTRON_SERVICE_ID \
--publicurl "http://$IP:9696/" --adminurl "http://$IP:9696/" \
--internalurl "http://$IP:9696/"
For example:
.. code:: console
$ keystone endpoint-create --region myregion --service-id $NEUTRON_SERVICE_ID \
--publicurl "http://10.211.55.17:9696/" \
--adminurl "http://10.211.55.17:9696/" \
--internalurl "http://10.211.55.17:9696/"
- If you are using the *template driver*, specify the following
parameters in your Compute catalog template file
(:file:`default_catalog.templates`), along with the region (``$REGION``)
and IP address of the Networking server (``$IP``).
.. code:: bash
catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service
For example:
.. code:: bash
catalog.$REGION.network.publicURL = http://10.211.55.17:9696
catalog.$REGION.network.adminURL = http://10.211.55.17:9696
catalog.$REGION.network.internalURL = http://10.211.55.17:9696
catalog.$REGION.network.name = Network Service
#. Create the Networking service user
You must provide admin user credentials that Compute and some internal
Networking components can use to access the Networking API. Create a
special ``service`` tenant and a ``neutron`` user within this tenant,
and assign an ``admin`` role to this user.
a. Create the ``admin`` role:
.. code:: console
$ ADMIN_ROLE=$(get_id keystone role-create --name admin)
b. Create the ``neutron`` user:
.. code:: console
$ NEUTRON_USER=$(get_id keystone user-create --name neutron \
--pass "$NEUTRON_PASSWORD" --email demo@example.com --tenant-id service)
c. Create the ``service`` tenant:
.. code:: console
$ SERVICE_TENANT=$(get_id keystone tenant-create --name service \
--description "Services Tenant")
d. Establish the relationship among the tenant, user, and role:
.. code:: console
$ keystone user-role-add --user_id $NEUTRON_USER \
--role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT
For information about how to create service entries and users, see the
OpenStack Installation Guide for your distribution
(`docs.openstack.org <http://docs.openstack.org>`__).
Compute
~~~~~~~
If you use Networking, do not run the Compute nova-network service (as
you do in traditional Compute deployments). Instead, Compute delegates
most network-related decisions to Networking. Compute proxies
tenant-facing API calls to manage security groups and floating IPs to
Networking APIs. However, operator-facing tools such as nova-manage are
not proxied and should not be used.
.. warning::
When you configure networking, you must use this guide. Do not rely
on Compute networking documentation or past experience with Compute.
If a :command:`nova` command or configuration option related to networking
is not mentioned in this guide, the command is probably not
supported for use with Networking. In particular, you cannot use CLI
tools like ``nova-manage`` and ``nova`` to manage networks or IP
addressing, including both fixed and floating IPs, with Networking.
.. note::
Uninstall nova-network and reboot any physical nodes that have been
running nova-network before using them to run Networking.
Inadvertently running the nova-network process while using
Networking can cause problems, as can stale iptables rules pushed
down by previously running nova-network.
To ensure that Compute works properly with Networking (rather than the
legacy nova-network mechanism), you must adjust settings in the
:file:`nova.conf` configuration file.
Networking API and credential configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Each time you provision or de-provision a VM in Compute, nova-\*
services communicate with Networking using the standard API. For this to
happen, you must configure the following items in the :file:`nova.conf` file
(used by each nova-compute and nova-api instance).
.. list-table:: **nova.conf API and credential settings**
:widths: 20 50
:header-rows: 1
* - Item
- Configuration
* - ``[DEFAULT] network_api_class``
- Modify from the default to ``nova.network.neutronv2.api.API``, to
indicate that Networking should be used rather than the traditional
nova-network networking model.
* - ``[neutron] url``
- Update to the hostname/IP and port of the neutron-server instance
for this deployment.
* - ``[neutron] auth_strategy``
- Keep the default ``keystone`` value for all production deployments.
* - ``[neutron] admin_tenant_name``
- Update to the name of the service tenant created in the above section on
Identity configuration.
* - ``[neutron] admin_username``
- Update to the name of the user created in the above section on Identity
configuration.
* - ``[neutron] admin_password``
- Update to the password of the user created in the above section on
Identity configuration.
* - ``[neutron] admin_auth_url``
- Update to the Identity server IP and port. This is the Identity
(keystone) admin API server IP and port value, and not the Identity
service API IP and port.
Configure security groups
~~~~~~~~~~~~~~~~~~~~~~~~~
The Networking service provides security group functionality using a
mechanism that is more flexible and powerful than the security group
capabilities built into Compute. Therefore, if you use Networking, you
should always disable built-in security groups and proxy all security
group calls to the Networking API. If you do not, security policies
will conflict by being simultaneously applied by both services.
To proxy security groups to Networking, use the following configuration
values in :file:`nova.conf`:
**nova.conf security group settings**
+-----------------------+-----------------------------------------------------+
| Item | Configuration |
+=======================+=====================================================+
| ``firewall_driver`` | Update to ``nova.virt.firewall.NoopFirewallDriver``,|
| | so that nova-compute does not perform |
| | iptables-based filtering itself. |
+-----------------------+-----------------------------------------------------+
| ``security_group_api``| Update to ``neutron``, so that all security group |
| | requests are proxied to the Network Service. |
+-----------------------+-----------------------------------------------------+
Configure metadata
~~~~~~~~~~~~~~~~~~
The Compute service allows VMs to query metadata associated with a VM by
making a web request to a special 169.254.169.254 address. Networking
supports proxying those requests to nova-api, even when the requests are
made from isolated networks, or from multiple networks that use
overlapping IP addresses.
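For example, from inside a running instance the metadata is available
through plain HTTP requests such as:

.. code-block:: console

   $ curl http://169.254.169.254/openstack/latest/meta_data.json
   $ curl http://169.254.169.254/latest/meta-data/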
To enable proxying the requests, you must update the following fields in
the ``[neutron]`` section of :file:`nova.conf`.
**nova.conf metadata settings**
+---------------------------------+------------------------------------------+
| Item | Configuration |
+=================================+==========================================+
| ``service_metadata_proxy`` | Update to ``true``, otherwise nova-api |
| | will not properly respond to requests |
| | from the neutron-metadata-agent. |
+---------------------------------+------------------------------------------+
| ``metadata_proxy_shared_secret``| Update to a string "password" value. |
| | You must also configure the same value in|
| | the ``metadata_agent.ini`` file, to |
| | authenticate requests made for metadata. |
| | |
| | The default value of an empty string in |
| | both files will allow metadata to |
| | function, but will not be secure if any |
| | non-trusted entities have access to the |
| | metadata APIs exposed by nova-api. |
+---------------------------------+------------------------------------------+
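A matching sketch of the :file:`metadata_agent.ini` side; the secret
and IP address shown are the illustrative values used in the example
:file:`nova.conf` below:

.. code-block:: ini

   [DEFAULT]
   # Must match metadata_proxy_shared_secret in nova.conf
   metadata_proxy_shared_secret = foo
   # IP address on which nova-api serves the metadata API
   nova_metadata_ip = 192.168.1.2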
.. note::
As a precaution, even when using ``metadata_proxy_shared_secret``,
we recommend that you do not expose metadata using the same
nova-api instances that are used for tenants. Instead, you should
run a dedicated set of nova-api instances for metadata that are
available only on your management network. Whether a given nova-api
instance exposes metadata APIs is determined by the value of
``enabled_apis`` in its :file:`nova.conf`.
Example nova.conf (for nova-compute and nova-api)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example values for the above settings, assuming a cloud controller node
running Compute and Networking with an IP address of 192.168.1.2:
.. code-block:: ini
:linenos:
[DEFAULT]
security_group_api=neutron
network_api_class=nova.network.neutronv2.api.API
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[neutron]
url=http://192.168.1.2:9696
auth_strategy=keystone
admin_tenant_name=service
admin_username=neutron
admin_password=password
admin_auth_url=http://192.168.1.2:35357/v2.0
service_metadata_proxy=true
metadata_proxy_shared_secret=foo