[admin-guide] Fix rst markups for Networking files

Change-Id: Iedb2c675e8c4e448d6022f587609aa1a3172ddc7

parent a68ac6d95a
commit df7353484e
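The whole change applies two reStructuredText cleanups, visible in every hunk below: inline ``:file:`` roles become plain double-backtick literals, and ``.. code::`` (or bare ``::``) blocks become ``.. code-block::`` with an explicit language. A minimal before/after sketch of the pattern (hypothetical snippet, not taken from the changed files):

```rst
Before:

    Edit the :file:`neutron.conf` file:

    .. code:: ini

        service_plugins = metering

After:

    Edit the ``neutron.conf`` file:

    .. code-block:: ini

       service_plugins = metering
```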
@@ -5,7 +5,7 @@ Advanced configuration options
 This section describes advanced configuration options for various system
 components. For example, configuration options where the default works
 but that the user wants to customize options. After installing from
-packages, ``$NEUTRON_CONF_DIR`` is :file:`/etc/neutron`.
+packages, ``$NEUTRON_CONF_DIR`` is ``/etc/neutron``.
 
 L3 metering agent
 ~~~~~~~~~~~~~~~~~
@@ -14,10 +14,10 @@ You can run an L3 metering agent that enables layer-3 traffic metering.
 In general, you should launch the metering agent on all nodes that run
 the L3 agent:
 
-::
+.. code-block:: console
 
-   neutron-metering-agent --config-file NEUTRON_CONFIG_FILE
-   --config-file L3_METERING_CONFIG_FILE
+   $ neutron-metering-agent --config-file NEUTRON_CONFIG_FILE \
+     --config-file L3_METERING_CONFIG_FILE
 
 You must configure a driver that matches the plug-in that runs on the
 service. The driver adds metering to the routing interface.
@@ -46,12 +46,12 @@ namespaces configuration.
 
 .. note::
 
-    If the Linux installation does not support network namespaces, you
-    must disable network namespaces in the L3 metering configuration
-    file. The default value of the ``use_namespaces`` option is
-    ``True``.
+   If the Linux installation does not support network namespaces, you
+   must disable network namespaces in the L3 metering configuration
+   file. The default value of the ``use_namespaces`` option is
+   ``True``.
 
-.. code:: ini
+.. code-block:: ini
 
    use_namespaces = False
 
@@ -61,17 +61,17 @@ L3 metering driver
 You must configure any driver that implements the metering abstraction.
 Currently the only available implementation uses iptables for metering.
 
-.. code:: ini
+.. code-block:: ini
 
-    driver = neutron.services.metering.drivers.
-    iptables.iptables_driver.IptablesMeteringDriver
+   driver = neutron.services.metering.drivers.
+   iptables.iptables_driver.IptablesMeteringDriver
 
 L3 metering service driver
 --------------------------
 
 To enable L3 metering, you must set the following option in the
-:file:`neutron.conf` file on the host that runs neutron-server:
+``neutron.conf`` file on the host that runs ``neutron-server``:
 
-.. code:: ini
+.. code-block:: ini
 
-    service_plugins = metering
+   service_plugins = metering
@@ -3,7 +3,7 @@ Advanced features through API extensions
 ========================================
 
 Several plug-ins implement API extensions that provide capabilities
-similar to what was available in nova-network. These plug-ins are likely
+similar to what was available in ``nova-network``. These plug-ins are likely
 to be of interest to the OpenStack community.
 
 Provider networks
@@ -337,7 +337,7 @@ to change the behavior.
 To use the Compute security group APIs or use Compute to orchestrate the
 creation of ports for instances on specific security groups, you must
 complete additional configuration. You must configure the
-:file:`/etc/nova/nova.conf` file and set the ``security_group_api=neutron``
+``/etc/nova/nova.conf`` file and set the ``security_group_api=neutron``
 option on every node that runs nova-compute and nova-api. After you make
 this change, restart nova-api and nova-compute to pick up this change.
 Then, you can use both the Compute and OpenStack Network security group
@@ -374,43 +374,43 @@ basic security group operations:
    * - Operation
      - Command
    * - Creates a security group for our web servers.
-     - .. code::
+     - .. code-block:: console
 
          $ neutron security-group-create webservers --description "security group for webservers"
    * - Lists security groups.
-     - .. code::
+     - .. code-block:: console
 
          $ neutron security-group-list
    * - Creates a security group rule to allow port 80 ingress.
-     - .. code::
+     - .. code-block:: console
 
         $ neutron security-group-rule-create --direction ingress \
           --protocol tcp --port_range_min 80 --port_range_max 80 SECURITY_GROUP_UUID
    * - Lists security group rules.
-     - .. code::
+     - .. code-block:: console
 
         $ neutron security-group-rule-list
    * - Deletes a security group rule.
-     - .. code::
+     - .. code-block:: console
 
        $ neutron security-group-rule-delete SECURITY_GROUP_RULE_UUID
    * - Deletes a security group.
-     - .. code::
+     - .. code-block:: console
 
        $ neutron security-group-delete SECURITY_GROUP_UUID
    * - Creates a port and associates two security groups.
-     - .. code::
+     - .. code-block:: console
 
        $ neutron port-create --security-group SECURITY_GROUP_ID1 --security-group SECURITY_GROUP_ID2 NETWORK_ID
    * - Removes security groups from a port.
-     - .. code::
+     - .. code-block:: console
 
        $ neutron port-update --no-security-groups PORT_ID
 
 Basic Load-Balancer-as-a-Service operations
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. Note::
+.. note::
 
   The Load-Balancer-as-a-Service (LBaaS) API provisions and configures
  load balancers. The reference implementation is based on the HAProxy
@@ -423,42 +423,42 @@ basic LBaaS operations:
 
 :option:`--provider` is an optional argument. If not used, the pool is
 created with default provider for LBaaS service. You should configure
-the default provider in the ``[service_providers]`` section of
-:file:`neutron.conf` file. If no default provider is specified for LBaaS,
+the default provider in the ``[service_providers]`` section of the
+``neutron.conf`` file. If no default provider is specified for LBaaS,
 the :option:`--provider` parameter is required for pool creation.
 
-.. code:: console
+.. code-block:: console
 
-    $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool
-    --protocol HTTP --subnet-id SUBNET_UUID --provider PROVIDER_NAME
+   $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \
+     --protocol HTTP --subnet-id SUBNET_UUID --provider PROVIDER_NAME
 
 - Associates two web servers with pool.
 
-  .. code:: console
+  .. code-block:: console
 
-      $ neutron lb-member-create --address WEBSERVER1_IP --protocol-port 80 mypool
-      $ neutron lb-member-create --address WEBSERVER2_IP --protocol-port 80 mypool
+     $ neutron lb-member-create --address WEBSERVER1_IP --protocol-port 80 mypool
+     $ neutron lb-member-create --address WEBSERVER2_IP --protocol-port 80 mypool
 
 - Creates a health monitor that checks to make sure our instances are
   still running on the specified protocol-port.
 
-  .. code:: console
+  .. code-block:: console
 
-      $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
+     $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
 
 - Associates a health monitor with pool.
 
-  .. code:: console
+  .. code-block:: console
 
-      $ neutron lb-healthmonitor-associate HEALTHMONITOR_UUID mypool
+     $ neutron lb-healthmonitor-associate HEALTHMONITOR_UUID mypool
 
 - Creates a virtual IP (VIP) address that, when accessed through the
   load balancer, directs the requests to one of the pool members.
 
-  .. code:: console
+  .. code-block:: console
 
-      $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol
-      HTTP --subnet-id SUBNET_UUID mypool
+     $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol \
+       HTTP --subnet-id SUBNET_UUID mypool
 
 Plug-in specific extensions
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -477,7 +477,7 @@ VMware NSX QoS extension
 The VMware NSX QoS extension rate-limits network ports to guarantee a
 specific amount of bandwidth for each port. This extension, by default,
 is only accessible by a tenant with an admin role but is configurable
-through the :file:`policy.json` file. To use this extension, create a queue
+through the ``policy.json`` file. To use this extension, create a queue
 and specify the min/max bandwidth rates (kbps) and optionally set the
 QoS Marking and DSCP value (if your network fabric uses these values to
 make forwarding decisions). Once created, you can associate a queue with
@@ -509,25 +509,25 @@ basic queue operations:
    * - Operation
      - Command
    * - Creates QoS queue (admin-only).
-     - .. code::
+     - .. code-block:: console
 
        $ neutron queue-create --min 10 --max 1000 myqueue
    * - Associates a queue with a network.
-     - .. code::
+     - .. code-block:: console
 
       $ neutron net-create network --queue_id QUEUE_ID
    * - Creates a default system queue.
-     - .. code::
+     - .. code-block:: console
 
       $ neutron queue-create --default True --min 10 --max 2000 default
    * - Lists QoS queues.
-     - .. code::
+     - .. code-block:: console
 
       $ neutron queue-list
    * - Deletes a QoS queue.
-     - .. code::
+     - .. code-block:: console
 
-       $ neutron queue-delete QUEUE_ID_OR_NAME'
+       $ neutron queue-delete QUEUE_ID_OR_NAME
 
 VMware NSX provider networks extension
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -569,7 +569,7 @@ VMware NSX L3 extension
 NSX exposes its L3 capabilities through gateway services which are
 usually configured out of band from OpenStack. To use NSX with L3
 capabilities, first create an L3 gateway service in the NSX Manager.
-Next, in :file:`/etc/neutron/plugins/vmware/nsx.ini` set
+Next, in ``/etc/neutron/plugins/vmware/nsx.ini`` set
 ``default_l3_gw_service_uuid`` to this value. By default, routers are
 mapped to this gateway service.
 
@@ -578,17 +578,17 @@ VMware NSX L3 extension operations
 
 Create external network and map it to a specific NSX gateway service:
 
-.. code:: console
+.. code-block:: console
 
-    $ neutron net-create public --router:external True --provider:network_type l3_ext \
-    --provider:physical_network L3_GATEWAY_SERVICE_UUID
+   $ neutron net-create public --router:external True --provider:network_type l3_ext \
+     --provider:physical_network L3_GATEWAY_SERVICE_UUID
 
 Terminate traffic on a specific VLAN from a NSX gateway service:
 
-.. code:: console
+.. code-block:: console
 
-    $ neutron net-create public --router:external True --provider:network_type l3_ext \
-    --provider:physical_network L3_GATEWAY_SERVICE_UUID --provider:segmentation_id VLAN_ID
+   $ neutron net-create public --router:external True --provider:network_type l3_ext \
+     --provider:physical_network L3_GATEWAY_SERVICE_UUID --provider:segmentation_id VLAN_ID
 
 Operational status synchronization in the VMware NSX plug-in
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -751,27 +751,27 @@ provided at the same time.
 Update a router with rules to permit traffic by default but block
 traffic from external networks to the 10.10.10.0/24 subnet:
 
-.. code:: console
+.. code-block:: console
 
-    $ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
-    source=any,destination=any,action=permit \
-    source=external,destination=10.10.10.0/24,action=deny
+   $ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
+     source=any,destination=any,action=permit \
+     source=external,destination=10.10.10.0/24,action=deny
 
 Specify alternate next-hop addresses for a specific subnet:
 
-.. code:: console
+.. code-block:: console
 
-    $ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
-    source=any,destination=any,action=permit \
-    source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
+   $ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
+     source=any,destination=any,action=permit \
+     source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
 
 Block traffic between two subnets while allowing everything else:
 
-.. code:: console
+.. code-block:: console
 
-    $ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
-    source=any,destination=any,action=permit \
-    source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
+   $ neutron router-update ROUTER_UUID --router_rules type=dict list=true\
+     source=any,destination=any,action=permit \
+     source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
 
 L3 metering
 ~~~~~~~~~~~
@@ -801,56 +801,56 @@ complete basic L3 metering operations:
    * - Operation
      - Command
    * - Creates a metering label.
-     - .. code::
+     - .. code-block:: console
 
       $ neutron meter-label-create LABEL1 --description "DESCRIPTION_LABEL1"
    * - Lists metering labels.
-     - .. code::
+     - .. code-block:: console
 
      $ neutron meter-label-list
    * - Shows information for a specified label.
-     - .. code::
+     - .. code-block:: console
 
-       $ neutron meter-label-show LABEL_UUID
+       $ neutron meter-label-show LABEL1
    * - Deletes a metering label.
-     - .. code::
+     - .. code-block:: console
 
-       $ neutron meter-label-delete LABEL_UUID
+       $ neutron meter-label-delete LABEL1
   * - Creates a metering rule.
-     - .. code::
+     - .. code-block:: console
 
       $ neutron meter-label-rule-create LABEL_UUID CIDR --direction DIRECTION
      excluded
 
      For example:
 
-     .. code::
+     .. code-block:: console
 
      $ neutron meter-label-rule-create label1 10.0.0.0/24 --direction ingress
     $ neutron meter-label-rule-create label1 20.0.0.0/24 --excluded
 
   * - Lists metering all label rules.
-     - .. code::
+     - .. code-block:: console
 
     $ neutron meter-label-rule-list
   * - Shows information for a specified label rule.
-     - .. code::
+     - .. code-block:: console
 
    $ neutron meter-label-rule-show RULE_UUID
  * - Deletes a metering label rule.
-     - .. code::
+     - .. code-block:: console
 
   $ neutron meter-label-rule-delete RULE_UUID
 * - Lists the value of created metering label rules.
-     - .. code::
+     - .. code-block:: console
 
  $ ceilometer sample-list -m SNMP_MEASUREMENT
 
 For example:
 
-.. code::
+.. code-block:: console
 
 $ ceilometer sample-list -m hardware.network.bandwidth.bytes
 $ ceilometer sample-list -m hardware.network.incoming.bytes
 
@@ -6,8 +6,8 @@ Logging settings
 ~~~~~~~~~~~~~~~~
 
 Networking components use Python logging module to do logging. Logging
-configuration can be provided in :file:`neutron.conf` or as command-line
-options. Command options override ones in :file:`neutron.conf`.
+configuration can be provided in ``neutron.conf`` or as command-line
+options. Command options override ones in ``neutron.conf``.
 
 To configure logging for Networking components, use one of these
 methods:
@@ -18,7 +18,7 @@ methods:
   how-to <http://docs.python.org/howto/logging.html>`__ to learn more
   about logging.
 
-- Provide logging setting in :file:`neutron.conf`.
+- Provide logging setting in ``neutron.conf``.
 
   .. code-block:: ini
 
@@ -53,7 +53,7 @@ Notification options
 --------------------
 
 To support DHCP agent, rpc\_notifier driver must be set. To set up the
-notification, edit notification options in :file:`neutron.conf`:
+notification, edit notification options in ``neutron.conf``:
 
 .. code-block:: ini
 
@@ -75,7 +75,7 @@ These options configure the Networking server to send notifications
 through logging and RPC. The logging options are described in OpenStack
 Configuration Reference . RPC notifications go to ``notifications.info``
 queue bound to a topic exchange defined by ``control_exchange`` in
-:file:`neutron.conf`.
+``neutron.conf``.
 
 .. code-block:: ini
 
@@ -123,7 +123,7 @@ Multiple RPC topics
 These options configure the Networking server to send notifications to
 multiple RPC topics. RPC notifications go to ``notifications_one.info``
 and ``notifications_two.info`` queues bound to a topic exchange defined
-by ``control_exchange`` in :file:`neutron.conf`.
+by ``control_exchange`` in ``neutron.conf``.
 
 .. code-block:: ini
 
@@ -9,8 +9,8 @@ Overview
 ~~~~~~~~
 
 Networking is a standalone component in the OpenStack modular
-architecture. It's positioned alongside OpenStack components such as
-Compute, Image service, Identity, or dashboard. Like those
+architecture. It is positioned alongside OpenStack components such as
+Compute, Image service, Identity, or Dashboard. Like those
 components, a deployment of Networking often involves deploying several
 services to a variety of hosts.
 
@@ -57,7 +57,7 @@ ways:
   authentication and authorization of all API requests.
 
 - Compute (nova) interacts with Networking through calls to its
-  standard API. As part of creating a VM, the nova-compute service
+  standard API. As part of creating a VM, the ``nova-compute`` service
   communicates with the Networking API to plug each virtual NIC on the
   VM into a particular network.
 
@@ -72,7 +72,7 @@ OpenStack Networking uses the NSX plug-in to integrate with an existing
 VMware vCenter deployment. When installed on the network nodes, the NSX
 plug-in enables a NSX controller to centrally manage configuration
 settings and push them to managed network nodes. Network nodes are
-considered managed when they're added as hypervisors to the NSX
+considered managed when they are added as hypervisors to the NSX
 controller.
 
 The diagrams below depict some VMware NSX deployment examples. The first
@@ -82,11 +82,7 @@ Note the placement of the VMware NSX plug-in and the neutron-server
 service on the network node. The green arrow indicates the management
 relationship between the NSX controller and the network node.
 
-|VMware NSX deployment example - two Compute nodes|
-
-|VMware NSX deployment example - single Compute node|
+.. figure:: ../../common/figures/vmware_nsx_ex1.png
 
-.. |VMware NSX deployment example - two Compute nodes|
-   image:: ../../common/figures/vmware_nsx_ex1.png
-.. |VMware NSX deployment example - single Compute node|
-   image:: ../../common/figures/vmware_nsx_ex2.png
+.. figure:: ../../common/figures/vmware_nsx_ex2.png
@@ -31,7 +31,7 @@ Networking handles two kind of authorization policies:
 The actual authorization policies enforced in Networking might vary
 from deployment to deployment.
 
-The policy engine reads entries from the :file:`policy.json` file. The
+The policy engine reads entries from the ``policy.json`` file. The
 actual location of this file might vary from distribution to
 distribution. Entries can be updated while the system is running, and no
 service restart is required. Every time the policy file is updated, the
@@ -84,7 +84,7 @@ terminal rules:
   in the resource is equal to the tenant identifier of the user
   submitting the request.
 
-This extract is from the default :file:`policy.json` file:
+This extract is from the default ``policy.json`` file:
 
 - A rule that evaluates successfully if the current user is an
   administrator or the owner of the resource specified in the request
@@ -92,36 +92,36 @@ This extract is from the default :file:`policy.json` file:
 
 .. code-block:: json
 
-    {
-        "admin_or_owner": [
-            [
-                "role:admin"
-            ],
-            [
-                "tenant_id:%(tenant_id)s"
-            ]
-        ],
-        "admin_or_network_owner": [
-            [
-                "role:admin"
-            ],
-            [
-                "tenant_id:%(network_tenant_id)s"
-            ]
-        ],
-        "admin_only": [
-            [
-                "role:admin"
-            ]
-        ],
-        "regular_user": [],
-        "shared": [
-            [
-                "field:networks:shared=True"
-            ]
-        ],
-        "default": [
-            [
+   {
+      "admin_or_owner": [
+         [
+            "role:admin"
+         ],
+         [
+            "tenant_id:%(tenant_id)s"
+         ]
+      ],
+      "admin_or_network_owner": [
+         [
+            "role:admin"
+         ],
+         [
+            "tenant_id:%(network_tenant_id)s"
+         ]
+      ],
+      "admin_only": [
+         [
+            "role:admin"
+         ]
+      ],
+      "regular_user": [],
+      "shared": [
+         [
+            "field:networks:shared=True"
+         ]
+      ],
+      "default": [
+         [
 
 - The default policy that is always evaluated if an API operation does
   not match any of the policies in ``policy.json``.
@ -14,8 +14,8 @@ network.
|
||||
|
||||
This section shows you how to install and configure a subset of the
|
||||
available plug-ins, which might include the installation of switching
|
||||
software (for example, Open vSwitch) and as agents used to communicate
|
||||
with the neutron-server process running elsewhere in the data center.
|
||||
software (for example, ``Open vSwitch``) and as agents used to communicate
|
||||
with the ``neutron-server`` process running elsewhere in the data center.
|
||||
|
||||
Configure data-forwarding nodes
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
@ -27,13 +27,13 @@ If you use the NSX plug-in, you must also install Open vSwitch on each
|
||||
data-forwarding node. However, you do not need to install an additional
|
||||
agent on each node.
|
||||
|
||||
.. Warning::
|
||||
.. warning::
|
||||
|
||||
It is critical that you run an Open vSwitch version that is
|
||||
compatible with the current version of the NSX Controller software.
|
||||
Do not use the Open vSwitch version that is installed by default on
|
||||
Ubuntu. Instead, use the Open vSwitch version that is provided on
|
||||
the VMware support portal for your NSX Controller version.
|
||||
It is critical that you run an Open vSwitch version that is
|
||||
compatible with the current version of the NSX Controller software.
|
||||
Do not use the Open vSwitch version that is installed by default on
|
||||
Ubuntu. Instead, use the Open vSwitch version that is provided on
|
||||
the VMware support portal for your NSX Controller version.
|
||||
|
||||
**To set up each node for the NSX plug-in**
|
||||
|
||||
@ -66,24 +66,24 @@ addresses through DHCP.
|
||||
|
||||
#. Install the DHCP agent:
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
# apt-get install neutron-dhcp-agent
|
||||
# apt-get install neutron-dhcp-agent
|
||||
|
||||
#. Update any options in the :file:`/etc/neutron/dhcp_agent.ini` file
|
||||
#. Update any options in the ``/etc/neutron/dhcp_agent.ini`` file
|
||||
that depend on the plug-in in use. See the sub-sections.
|
||||
|
||||
.. Important::
|
||||
.. important::
|
||||
|
||||
If you reboot a node that runs the DHCP agent, you must run the
|
||||
:command:`neutron-ovs-cleanup` command before the neutron-dhcp-agent
|
||||
service starts.
|
||||
If you reboot a node that runs the DHCP agent, you must run the
|
||||
:command:`neutron-ovs-cleanup` command before the ``neutron-dhcp-agent``
|
||||
service starts.
|
||||
|
||||
On Red Hat, SUSE, and Ubuntu based systems, the
|
||||
``neutron-ovs-cleanup`` service runs the :command:`neutron-ovs-cleanup`
|
||||
command automatically. However, on Debian-based systems, you
|
||||
must manually run this command or write your own system script
|
||||
that runs on boot before the ``neutron-dhcp-agent`` service starts.
|
||||
On Red Hat, SUSE, and Ubuntu based systems, the
|
||||
``neutron-ovs-cleanup`` service runs the :command:`neutron-ovs-cleanup`
|
||||
command automatically. However, on Debian-based systems, you
|
||||
must manually run this command or write your own system script
|
||||
that runs on boot before the ``neutron-dhcp-agent`` service starts.
|
||||
|
||||
Networking dhcp-agent can use
|
||||
`dnsmasq <http://www.thekelleys.org.uk/dnsmasq/doc.html>`__ driver which
|
||||
@ -93,15 +93,15 @@ supports stateful and stateless DHCPv6 for subnets created with
|
||||
|
||||
For example:
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateful
|
||||
--ipv6_address_mode dhcpv6-stateful NETWORK CIDR
|
||||
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateful
|
||||
--ipv6_address_mode dhcpv6-stateful NETWORK CIDR
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateless
|
||||
--ipv6_address_mode dhcpv6-stateless NETWORK CIDR
|
||||
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateless
|
||||
--ipv6_address_mode dhcpv6-stateless NETWORK CIDR
|
||||
|
||||
If no dnsmasq process for subnet's network is launched, Networking will
|
||||
launch a new one on subnet's dhcp port in ``qdhcp-XXX`` namespace. If
|
||||
@ -111,9 +111,9 @@ configuration.
|
||||
Networking will update dnsmasq process and restart it when subnet gets
|
||||
updated.
|
||||
|
||||
.. Note::
|
||||
.. note::
|
||||
|
||||
For dhcp-agent to operate in IPv6 mode use at least dnsmasq v2.63.
|
||||
For dhcp-agent to operate in IPv6 mode use at least dnsmasq v2.63.
|
||||
|
||||
After a certain, configured timeframe, networks uncouple from DHCP
|
||||
agents when the agents are no longer in use. You can configure the DHCP
|
||||
@ -130,28 +130,28 @@ DHCP agent setup: OVS plug-in
|
||||
-----------------------------
|
||||
|
||||
These DHCP agent options are required in the
|
||||
:file:`/etc/neutron/dhcp_agent.ini` file for the OVS plug-in:
|
||||
``/etc/neutron/dhcp_agent.ini`` file for the OVS plug-in:
|
||||
|
||||
.. code:: bash
|
||||
.. code-block:: bash
|
||||
|
||||
[DEFAULT]
|
||||
enable_isolated_metadata = True
|
||||
use_namespaces = True
|
||||
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
|
||||
[DEFAULT]
|
||||
enable_isolated_metadata = True
|
||||
use_namespaces = True
|
||||
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
|
||||
|
||||
DHCP agent setup: NSX plug-in
|
||||
-----------------------------
|
||||
|
||||
These DHCP agent options are required in the
|
||||
:file:`/etc/neutron/dhcp_agent.ini` file for the NSX plug-in:
|
||||
``/etc/neutron/dhcp_agent.ini`` file for the NSX plug-in:
|
||||
|
||||
.. code:: bash
|
||||
.. code-block:: bash
|
||||
|
||||
[DEFAULT]
|
||||
enable_metadata_network = True
|
||||
enable_isolated_metadata = True
|
||||
use_namespaces = True
|
||||
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
|
||||
[DEFAULT]
|
||||
enable_metadata_network = True
|
||||
enable_isolated_metadata = True
|
||||
use_namespaces = True
|
||||
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
|
||||
|
||||
Configure L3 agent
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
@ -169,11 +169,11 @@ capabilities:
|
||||
`Floodlight <http://www.projectfloodlight.org/floodlight/>`__
|
||||
controller and the proprietary Big Switch controller.
|
||||
|
||||
.. Note::
|
||||
.. note::
|
||||
|
||||
Only the proprietary BigSwitch controller implements L3
|
||||
functionality. When using Floodlight as your OpenFlow controller,
|
||||
L3 functionality is not available.
|
||||
Only the proprietary BigSwitch controller implements L3
|
||||
functionality. When using Floodlight as your OpenFlow controller,
|
||||
L3 functionality is not available.
|
||||
|
||||
- IBM SDN-VE plug-in
|
||||
|
||||
@ -183,43 +183,43 @@ capabilities:
|
||||
|
||||
- PLUMgrid plug-in
|
||||
|
||||
.. Warning::
|
||||
.. warning::
|
||||
|
||||
Do not configure or use neutron-l3-agent if you use one of these
|
||||
plug-ins.
|
||||
Do not configure or use ``neutron-l3-agent`` if you use one of these
|
||||
plug-ins.
|
||||
|
||||
**To install the L3 agent for all other plug-ins**
|
||||
|
||||
#. Install the neutron-l3-agent binary on the network node:
|
||||
#. Install the ``neutron-l3-agent`` binary on the network node:
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
# apt-get install neutron-l3-agent
|
||||
# apt-get install neutron-l3-agent
|
||||
|
||||
#. To uplink the node that runs neutron-l3-agent to the external network,
|
||||
create a bridge named "br-ex" and attach the NIC for the external
|
||||
#. To uplink the node that runs ``neutron-l3-agent`` to the external network,
|
||||
create a bridge named ``br-ex`` and attach the NIC for the external
|
||||
network to this bridge.
|
||||
|
||||
For example, with Open vSwitch and NIC eth1 connected to the external
|
||||
network, run:
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
# ovs-vsctl add-br br-ex
|
||||
# ovs-vsctl add-port br-ex eth1
|
||||
# ovs-vsctl add-br br-ex
|
||||
# ovs-vsctl add-port br-ex eth1
|
||||
|
||||
Do not manually configure an IP address on the NIC connected to the
|
||||
external network for the node running neutron-l3-agent. Rather, you must
|
||||
have a range of IP addresses from the external network that can be used
|
||||
by OpenStack Networking for routers that uplink to the external network.
|
||||
This range must be large enough to have an IP address for each router in
|
||||
the deployment, as well as each floating IP.
|
||||
external network for the node running ``neutron-l3-agent``. Rather, you
|
||||
must have a range of IP addresses from the external network that can be
|
||||
used by OpenStack Networking for routers that uplink to the external
|
||||
network. This range must be large enough to have an IP address for each
|
||||
router in the deployment, as well as each floating IP.

#. The neutron-l3-agent uses the Linux IP stack and iptables to perform L3
#. The ``neutron-l3-agent`` uses the Linux IP stack and iptables to perform L3
forwarding and NAT. In order to support multiple routers with
potentially overlapping IP addresses, neutron-l3-agent defaults to using
Linux network namespaces to provide isolated forwarding contexts. As a
result, the IP addresses of routers are not visible simply by running
potentially overlapping IP addresses, ``neutron-l3-agent`` defaults to
using Linux network namespaces to provide isolated forwarding contexts.
As a result, the IP addresses of routers are not visible simply by running
the :command:`ip addr list` or :command:`ifconfig` command on the node.
Similarly, you cannot directly :command:`ping` fixed IPs.

@ -228,43 +228,43 @@ capabilities:
``qrouter-ROUTER_UUID``. These example commands run in the router
namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:

.. code:: console
.. code-block:: console

# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list

.. code:: console
.. code-block:: console

# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping FIXED_IP

.. Note::
.. note::

For iproute version 3.12.0 and above, networking namespaces
are configured to be deleted by default. This behavior can be
changed for both DHCP and L3 agents. The configuration files are
:file:`/etc/neutron/dhcp_agent.ini` and
:file:`/etc/neutron/l3_agent.ini` respectively.
For iproute version 3.12.0 and above, networking namespaces
are configured to be deleted by default. This behavior can be
changed for both DHCP and L3 agents. The configuration files are
``/etc/neutron/dhcp_agent.ini`` and
``/etc/neutron/l3_agent.ini`` respectively.

For DHCP namespace the configuration key:
``dhcp_delete_namespaces = True``. You can set it to False
in case namespaces cannot be deleted cleanly on the host running the
DHCP agent.
For DHCP namespace the configuration key:
``dhcp_delete_namespaces = True``. You can set it to ``False``
in case namespaces cannot be deleted cleanly on the host running the
DHCP agent.

For L3 namespace, the configuration key:
``router_delete_namespaces = True``. You can set it to ``False``
in case namespaces cannot be deleted cleanly on the host running the
L3 agent.
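
Assuming namespaces cannot be deleted cleanly on a given host, the two
overrides would look like this (each line goes in the respective file
named above):

.. code-block:: ini

   # /etc/neutron/dhcp_agent.ini
   dhcp_delete_namespaces = False

   # /etc/neutron/l3_agent.ini
   router_delete_namespaces = False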

.. Important::
.. important::

If you reboot a node that runs the L3 agent, you must run the
:command:`neutron-ovs-cleanup` command before the neutron-l3-agent
service starts.
If you reboot a node that runs the L3 agent, you must run the
:command:`neutron-ovs-cleanup` command before the ``neutron-l3-agent``
service starts.

On Red Hat, SUSE and Ubuntu based systems, the neutron-ovs-cleanup
service runs the :command:`neutron-ovs-cleanup` command
automatically. However, on Debian-based systems, you must manually
run this command or write your own system script that runs on boot
before the neutron-l3-agent service starts.
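
On a Debian-based system, the manual sequence at boot might look like
this (a sketch, assuming System V style service management):

.. code-block:: console

   # neutron-ovs-cleanup
   # service neutron-l3-agent start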

Configure metering agent
~~~~~~~~~~~~~~~~~~~~~~~~
@ -275,46 +275,46 @@ The Neutron Metering agent resides beside neutron-l3-agent.

#. Install the agent by running:

.. code:: console
.. code-block:: console

# apt-get install neutron-metering-agent

#. If you use one of the following plug-ins, you need to configure the
metering agent with these lines as well:

- An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:

.. code:: ini
.. code-block:: ini

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

- A plug-in that uses LinuxBridge:

.. code:: ini
.. code-block:: ini

interface_driver = neutron.agent.linux.interface.
BridgeInterfaceDriver

#. To use the reference implementation, you must set:

.. code:: ini
.. code-block:: ini

driver = neutron.services.metering.drivers.iptables.iptables_driver
.IptablesMeteringDriver

#. Set the ``service_plugins`` option in the :file:`/etc/neutron/neutron.conf`
file on the host that runs neutron-server:
#. Set the ``service_plugins`` option in the ``/etc/neutron/neutron.conf``
file on the host that runs ``neutron-server``:

.. code:: ini
.. code-block:: ini

service_plugins = metering

If this option is already defined, add ``metering`` to the list, using a
comma as separator. For example:

.. code:: ini
.. code-block:: ini

service_plugins = router,metering
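
After changing ``service_plugins``, the ``neutron-server`` service must
be restarted to pick up the new value, for example:

.. code-block:: console

   # service neutron-server restart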

Configure Load-Balancer-as-a-Service (LBaaS v2)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -326,70 +326,65 @@ For the back end, use either Octavia or Haproxy. This example uses Octavia.

#. Install Octavia using your distribution's package manager.

#. Edit the :file:`/etc/neutron/neutron_lbaas.conf` file and change
#. Edit the ``/etc/neutron/neutron_lbaas.conf`` file and change
the ``service_provider`` parameter to enable Octavia:

.. code:: ini
.. code-block:: ini

service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.
drivers.octavia.driver.OctaviaDriver:default

.. Warning::
.. warning::

The ``service_provider`` option is already
defined in the :file:`/usr/share/neutron/neutron-dist.conf` file on
Red Hat based systems. Do not define it in :file:`neutron_lbaas.conf`
otherwise the Networking services will fail to restart.
The ``service_provider`` option is already
defined in the ``/usr/share/neutron/neutron-dist.conf`` file on
Red Hat based systems. Do not define it in ``neutron_lbaas.conf``
otherwise the Networking services will fail to restart.

#. Edit the :file:`/etc/neutron/neutron.conf` file and add the
#. Edit the ``/etc/neutron/neutron.conf`` file and add the
``service_plugins`` parameter to enable the load-balancing plug-in:

.. code:: ini
.. code-block:: ini

device_driver = neutron_lbaas.services.loadbalancer.plugin.
LoadBalancerPluginv2

If this option is already defined, add the load-balancing plug-in to
the list using a comma as a separator. For example:

.. code:: ini
.. code-block:: ini

service_plugins = [already defined plugins],
neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

#. Create the required tables in the database:

.. code:: console
.. code-block:: console

# neutron-db-manage --service lbaas upgrade head

#. Restart the neutron-server service.
#. Restart the ``neutron-server`` service.

#. Enable load balancing in the Project section of the dashboard.

.. Warning::
.. warning::

Horizon panels are enabled only for LBaaSV1. LBaaSV2 panels are still
being developed.

Change the ``enable_lb`` option to ``True`` in the :file:`local_settings`
file (on Fedora, RHEL, and CentOS:
:file:`/etc/openstack-dashboard/local_settings`, on Ubuntu and Debian:
:file:`/etc/openstack-dashboard/local_settings.py`, and on openSUSE and
SLES:
:file:`/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings
.py`):
Change the ``enable_lb`` option to ``True`` in the ``local_settings.py``
file:

.. code:: python
.. code-block:: python

OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True,
...
}

Apply the settings by restarting the web server. You can now view the
Load Balancer management options in the Project view in the dashboard.
@ -407,25 +402,25 @@ hyper-v-virtualization-platform.html>`__.

#. Download the OpenStack Networking code from the repository:

.. code:: console
.. code-block:: console

> cd C:\OpenStack\
> git clone https://git.openstack.org/cgit/openstack/neutron

#. Install the OpenStack Networking Hyper-V Agent:

.. code:: console
.. code-block:: console

> cd C:\OpenStack\neutron\
> python setup.py install

#. Copy the :file:`policy.json` file:
#. Copy the ``policy.json`` file:

.. code:: console
.. code-block:: console

> xcopy C:\OpenStack\neutron\etc\policy.json C:\etc\

#. Create the :file:`C:\etc\neutron-hyperv-agent.conf` file and add the proper
#. Create the ``C:\etc\neutron-hyperv-agent.conf`` file and add the proper
configuration options and the `Hyper-V related
options <http://docs.openstack.org/liberty/config-reference/content/
networking-plugin-hyperv_agent.html>`__. Here is a sample config file:
@ -456,10 +451,10 @@ hyper-v-virtualization-platform.html>`__.

#. Start the OpenStack Networking Hyper-V agent:

.. code:: console
.. code-block:: console

> C:\Python27\Scripts\neutron-hyperv-agent.exe --config-file
C:\etc\neutron-hyperv-agent.conf

Basic operations on agents
~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -482,7 +477,8 @@ complete basic operations on agents:
| Update the admin status and description| |
| for a specified agent. The command can | |
| be used to enable and disable agents by| |
| using ``--admin-state-up`` parameter | |
| using :option:`--admin-state-up` | |
| parameter | |
| set to ``False`` or ``True``. | |
| | |
| | ``$ neutron agent-update --admin`` |

@ -9,17 +9,17 @@ Configure Identity service for Networking
The ``get_id()`` function stores the ID of created objects, and removes
the need to copy and paste object IDs in later steps:

a. Add the following function to your :file:`.bashrc` file:
a. Add the following function to your ``.bashrc`` file:

.. code:: ini
.. code-block:: ini

function get_id () {
echo `"$@" | awk '/ id / { print $4 }'`
}

b. Source the :file:`.bashrc` file:
b. Source the ``.bashrc`` file:

.. code:: console
.. code-block:: console

$ source .bashrc
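
To see what ``get_id`` actually extracts, you can exercise it against
fabricated table output in the same shape that keystone prints; the
``fake_keystone`` function below is purely illustrative:

```shell
# get_id, as defined above: print field 4 of the row whose second field is "id".
get_id () {
  echo `"$@" | awk '/ id / { print $4 }'`
}

# Stand-in for a keystone command that prints a property table (illustrative).
fake_keystone () {
  printf '+----------+----------------------------------+\n'
  printf '|   name   | neutron                          |\n'
  printf '|    id    | 4f9dd59bd2574fa0a120f5a931acfdf7 |\n'
  printf '+----------+----------------------------------+\n'
}

get_id fake_keystone   # prints 4f9dd59bd2574fa0a120f5a931acfdf7
```

The awk pattern ``/ id /`` matches only the row labeled ``id``, and
``$4`` is the value between the second and third ``|`` separators.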

@ -28,7 +28,7 @@ Configure Identity service for Networking
Networking must be available in the Compute service catalog. Create the
service:

.. code:: console
.. code-block:: console

$ NEUTRON_SERVICE_ID=$(get_id keystone service-create --name
neutron --type network --description 'OpenStack Networking Service')
@ -38,12 +38,12 @@ Configure Identity service for Networking
The way that you create a Networking endpoint entry depends on whether
you are using the SQL or the template catalog driver:

- If you are using the *SQL driver*, run the following command with the
- If you are using the ``SQL driver``, run the following command with the
specified region (``$REGION``), IP address of the Networking server
(``$IP``), and service ID (``$NEUTRON_SERVICE_ID``, obtained in the
previous step).

.. code:: console
.. code-block:: console

$ keystone endpoint-create --region $REGION --service-id
$NEUTRON_SERVICE_ID \
@ -52,7 +52,7 @@ Configure Identity service for Networking

For example:

.. code:: console
.. code-block:: console

$ keystone endpoint-create --region myregion --service-id
$NEUTRON_SERVICE_ID \
@ -60,12 +60,12 @@ Configure Identity service for Networking
"http://10.211.55.17:9696/" --internalurl
"http://10.211.55.17:9696/"

- If you are using the *template driver*, specify the following
- If you are using the ``template driver``, specify the following
parameters in your Compute catalog template file
(:file:`default_catalog.templates`), along with the region (``$REGION``)
(``default_catalog.templates``), along with the region (``$REGION``)
and IP address of the Networking server (``$IP``).

.. code:: bash
.. code-block:: bash

catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
@ -74,7 +74,7 @@ Configure Identity service for Networking

For example:

.. code:: bash
.. code-block:: bash

catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
@ -90,27 +90,27 @@ Configure Identity service for Networking

a. Create the ``admin`` role:

.. code:: console
.. code-block:: console

$ ADMIN_ROLE=$(get_id keystone role-create --name admin)

b. Create the ``neutron`` user:

.. code:: console
.. code-block:: console

$ NEUTRON_USER=$(get_id keystone user-create --name neutron\
--pass "$NEUTRON_PASSWORD" --email demo@example.com --tenant-id service)

c. Create the ``service`` tenant:

.. code:: console
.. code-block:: console

$ SERVICE_TENANT=$(get_id keystone tenant-create --name
service --description "Services Tenant")

d. Establish the relationship among the tenant, user, and role:

.. code:: console
.. code-block:: console

$ keystone user-role-add --user_id $NEUTRON_USER
--role_id $ADMIN_ROLE --tenant_id $SERVICE_TENANT

@ -122,11 +122,11 @@ OpenStack Installation Guide for your distribution
Compute
~~~~~~~

If you use Networking, do not run the Compute nova-network service (like
If you use Networking, do not run the Compute ``nova-network`` service (like
you do in traditional Compute deployments). Instead, Compute delegates
most network-related decisions to Networking. Compute proxies
tenant-facing API calls to manage security groups and floating IPs to
Networking APIs. However, operator-facing tools such as nova-manage, are
Networking APIs. However, operator-facing tools such as ``nova-manage``, are
not proxied and should not be used.

.. warning::
@ -141,23 +141,23 @@ not proxied and should not be used.

.. note::

Uninstall nova-network and reboot any physical nodes that have been
running nova-network before using them to run Networking.
Inadvertently running the nova-network process while using
Uninstall ``nova-network`` and reboot any physical nodes that have been
running ``nova-network`` before using them to run Networking.
Inadvertently running the ``nova-network`` process while using
Networking can cause problems, as can stale iptables rules pushed
down by previously running nova-network.
down by previously running ``nova-network``.

To ensure that Compute works properly with Networking (rather than the
legacy nova-network mechanism), you must adjust settings in the
:file:`nova.conf` configuration file.
legacy ``nova-network`` mechanism), you must adjust settings in the
``nova.conf`` configuration file.

Networking API and credential configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each time you provision or de-provision a VM in Compute, nova-\*
Each time you provision or de-provision a VM in Compute, ``nova-\*``
services communicate with Networking using the standard API. For this to
happen, you must configure the following items in the :file:`nova.conf` file
(used by each nova-compute and nova-api instance).
happen, you must configure the following items in the ``nova.conf`` file
(used by each ``nova-compute`` and ``nova-api`` instance).

.. list-table:: **nova.conf API and credential settings**
:widths: 20 50
@ -170,7 +170,7 @@ happen, you must configure the following items in the :file:`nova.conf` file
indicate that Networking should be used rather than the traditional
nova-network networking model.
* - ``[neutron] url``
- Update to the hostname/IP and port of the neutron-server instance
- Update to the host name/IP and port of the neutron-server instance
for this deployment.
* - ``[neutron] auth_strategy``
- Keep the default ``keystone`` value for all production deployments.
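
Taken together, these settings might appear in ``nova.conf`` as follows
(a sketch; the host name and port are illustrative, and the remaining
credential options from the table are omitted):

.. code-block:: ini

   [DEFAULT]
   network_api_class = nova.network.neutronv2.api.API

   [neutron]
   url = http://controller:9696
   auth_strategy = keystone
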
@ -199,7 +199,7 @@ group calls to the Networking API. If you do not, security policies
will conflict by being simultaneously applied by both services.

To proxy security groups to Networking, use the following configuration
values in :file:`nova.conf`:
values in the ``nova.conf`` file:

**nova.conf security group settings**
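
In releases of this era, the proxying typically amounts to these two
``nova.conf`` lines (shown as a sketch):

.. code-block:: ini

   security_group_api = neutron
   firewall_driver = nova.virt.firewall.NoopFirewallDriver

The Noop firewall driver keeps Compute from programming its own iptables
rules, so that security groups are enforced only by Networking.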

@ -224,7 +224,7 @@ made from isolated networks, or from multiple networks that use
overlapping IP addresses.

To enable proxying the requests, you must update the following fields in
``[neutron]`` section in :file:`nova.conf`.
the ``[neutron]`` section in the ``nova.conf`` file.

**nova.conf metadata settings**
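
A minimal sketch of those fields (the shared secret value is
illustrative and must match the one configured for the metadata agent):

.. code-block:: ini

   [neutron]
   service_metadata_proxy = True
   metadata_proxy_shared_secret = METADATA_SECRET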

@ -255,7 +255,7 @@ To enable proxying the requests, you must update the following fields in
run a dedicated set of nova-api instances for metadata that are
available only on your management network. Whether a given nova-api
instance exposes metadata APIs is determined by the value of
``enabled_apis`` in its :file:`nova.conf`.
``enabled_apis`` in its ``nova.conf``.

Example nova.conf (for nova-compute and nova-api)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -3,7 +3,7 @@ Plug-in configurations
======================

For configuration options, see `Networking configuration
options <http://docs.openstack.org/kilo/config-reference
options <http://docs.openstack.org/liberty/config-reference
/content/section_networking-options-reference.html>`__
in Configuration Reference. These sections explain how to configure
specific plug-ins.
@ -11,23 +11,23 @@ specific plug-ins.
Configure Big Switch (Floodlight REST Proxy) plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Edit the :file:`/etc/neutron/neutron.conf` file and add this line:
#. Edit the ``/etc/neutron/neutron.conf`` file and add this line:

.. code:: ini
.. code-block:: ini

core_plugin = bigswitch

#. In the :file:`/etc/neutron/neutron.conf` file, set the ``service_plugins``
#. In the ``/etc/neutron/neutron.conf`` file, set the ``service_plugins``
option:

::
.. code-block:: ini

service_plugins = neutron.plugins.bigswitch.l3_router_plugin.L3RestProxy

#. Edit the :file:`/etc/neutron/plugins/bigswitch/restproxy.ini` file for the
#. Edit the ``/etc/neutron/plugins/bigswitch/restproxy.ini`` file for the
plug-in and specify a comma-separated list of controller\_ip:port pairs:

.. code:: ini
.. code-block:: ini

server = CONTROLLER_IP:PORT

@ -38,11 +38,11 @@ Configure Big Switch (Floodlight REST Proxy) plug-in
index <http://docs.openstack.org>`__. (The link defaults to the Ubuntu
version.)

#. Restart neutron-server to apply the settings:
#. Restart the ``neutron-server`` service to apply the settings:

.. code:: console
.. code-block:: console

# service neutron-server restart

Configure Brocade plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~
@ -50,34 +50,34 @@ Configure Brocade plug-in
#. Install the Brocade-modified Python netconf client (ncclient) library,
which is available at https://github.com/brocade/ncclient:

.. code:: console
.. code-block:: console

$ git clone https://github.com/brocade/ncclient

#. As root, run this command:

.. code:: console
.. code-block:: console

# cd ncclient;python setup.py install

#. Edit the :file:`/etc/neutron/neutron.conf` file and set the following
#. Edit the ``/etc/neutron/neutron.conf`` file and set the following
option:

.. code:: ini
.. code-block:: ini

core_plugin = brocade

#. Edit the :file:`/etc/neutron/plugins/brocade/brocade.ini` file for the
#. Edit the ``/etc/neutron/plugins/brocade/brocade.ini`` file for the
Brocade plug-in and specify the admin user name, password, and IP
address of the Brocade switch:

.. code:: ini
.. code-block:: ini

[SWITCH]
username = ADMIN
password = PASSWORD
address = SWITCH_MGMT_IP_ADDRESS
ostype = NOS

For database configuration, see `Install Networking
Services <http://docs.openstack.org/liberty/install-guide-ubuntu/
@ -86,11 +86,11 @@ Configure Brocade plug-in
index <http://docs.openstack.org>`__. (The link defaults to the Ubuntu
version.)

#. Restart the neutron-server service to apply the settings:
#. Restart the ``neutron-server`` service to apply the settings:

.. code:: console
.. code-block:: console

# service neutron-server restart

Configure NSX-mh plug-in
~~~~~~~~~~~~~~~~~~~~~~~~
@ -100,27 +100,27 @@ formerly known as Nicira NVP.

#. Install the NSX plug-in:

.. code:: console
.. code-block:: console

# apt-get install neutron-plugin-vmware

#. Edit the :file:`/etc/neutron/neutron.conf` file and set this line:
#. Edit the ``/etc/neutron/neutron.conf`` file and set this line:

.. code:: ini
.. code-block:: ini

core_plugin = vmware

Example :file:`neutron.conf` file for NSX-mh integration:
Example ``neutron.conf`` file for NSX-mh integration:

.. code:: ini
.. code-block:: ini

core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True

#. To configure the NSX-mh controller cluster for OpenStack Networking,
locate the ``[default]`` section in the
:file:`/etc/neutron/plugins/vmware/nsx.ini` file and add the following
``/etc/neutron/plugins/vmware/nsx.ini`` file and add the following
entries:

- To establish and configure the connection with the controller cluster
@ -128,14 +128,14 @@ formerly known as Nicira NVP.
credentials, and optionally specify settings for HTTP timeouts,
redirects and retries in case of connection failures:

.. code:: ini
.. code-block:: ini

nsx_user = ADMIN_USER_NAME
nsx_password = NSX_USER_PASSWORD
http_timeout = HTTP_REQUEST_TIMEOUT # (seconds) default 75 seconds
retries = HTTP_REQUEST_RETRIES # default 2
redirects = HTTP_REQUEST_MAX_REDIRECTS # default 2
nsx_controllers = API_ENDPOINT_LIST # comma-separated list

To ensure correct operations, the ``nsx_user`` user must have
administrator credentials on the NSX-mh platform.

@ -157,22 +157,22 @@ formerly known as Nicira NVP.
Alternatively the transport zone identifier can be retrieved by querying
the NSX-mh API: ``/ws.v1/transport-zone``

.. code:: ini
.. code-block:: ini

default_tz_uuid = TRANSPORT_ZONE_UUID

- .. code:: ini
- .. code-block:: ini

default_l3_gw_service_uuid = GATEWAY_SERVICE_UUID

.. Warning::
.. warning::

Ubuntu packaging currently does not update the neutron init
script to point to the NSX-mh configuration file. Instead, you
must manually update :file:`/etc/default/neutron-server` to add this
must manually update ``/etc/default/neutron-server`` to add this
line:

.. code:: ini
.. code-block:: ini

NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini

@ -181,58 +181,58 @@ formerly known as Nicira NVP.
neutron-controller-install.html>`__
in the Installation Guide.

#. Restart neutron-server to apply settings:
#. Restart ``neutron-server`` to apply settings:

.. code:: console
.. code-block:: console

# service neutron-server restart

.. Warning::
.. warning::

The neutron NSX-mh plug-in does not implement initial
re-synchronization of Neutron resources. Therefore resources that
might already exist in the database when Neutron is switched to the
NSX-mh plug-in will not be created on the NSX-mh backend upon
restart.

Example :file:`nsx.ini` file:
Example ``nsx.ini`` file:

.. code:: ini
.. code-block:: ini

[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888

.. Note::
.. note::

To debug :file:`nsx.ini` configuration issues, run this command from the
host that runs neutron-server:

..code:: console
.. code-block:: console

# neutron-check-nsx-config PATH_TO_NSX.INI

This command tests whether neutron-server can log into all of the
NSX-mh controllers and the SQL server, and whether all UUID values
are correct.
This command tests whether ``neutron-server`` can log into all of the
NSX-mh controllers and the SQL server, and whether all UUID values
are correct.

Configure PLUMgrid plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~~

#. Edit the :file:`/etc/neutron/neutron.conf` file and set this line:
#. Edit the ``/etc/neutron/neutron.conf`` file and set this line:

.. code:: ini
.. code-block:: ini

core_plugin = plumgrid

#. Edit the [PLUMgridDirector] section in the
:file:`/etc/neutron/plugins/plumgrid/plumgrid.ini` file and specify the IP
``/etc/neutron/plugins/plumgrid/plumgrid.ini`` file and specify the IP
address, port, admin user name, and password of the PLUMgrid Director:

.. code:: ini
.. code-block:: ini

[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"

@ -245,8 +245,8 @@ Configure PLUMgrid plug-in
neutron-controller-install.html>`__
in the Installation Guide.

#. Restart the neutron-server service to apply the settings:
#. Restart the ``neutron-server`` service to apply the settings:

.. code:: console
.. code-block:: console

# service neutron-server restart

@ -14,12 +14,12 @@ For a detailed description of the Networking API abstractions and their
attributes, see the `OpenStack Networking API v2.0
Reference <http://developer.openstack.org/api-ref-networking-v2.html>`__.

.. Note::
.. note::

If you use the Networking service, do not run the Compute
nova-network service (like you do in traditional Compute deployments).
When you configure networking, see the Compute-related topics in this
Networking section.
If you use the Networking service, do not run the Compute
``nova-network`` service (like you do in traditional Compute deployments).
When you configure networking, see the Compute-related topics in this
Networking section.

Networking API
~~~~~~~~~~~~~~
@ -73,7 +73,7 @@ Configure SSL support for networking API
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack Networking supports SSL for the Networking API server. By
|
||||
default, SSL is disabled but you can enable it in the :file:`neutron.conf`
|
||||
default, SSL is disabled but you can enable it in the ``neutron.conf``
|
||||
file.
|
||||
|
||||
Set these options to configure SSL:
|
||||
@ -138,7 +138,7 @@ Least connections
|
||||
| | administration and scripting. Users perform |
|
||||
| | administrative management of load balancers |
|
||||
| | through either the CLI (``neutron``) or the |
|
||||
| | OpenStack dashboard. |
|
||||
| | OpenStack Dashboard. |
|
||||
+-------------------------+---------------------------------------------------+
|
||||
| **Connection limits** | Ingress traffic can be shaped with *connection |
|
||||
| | limits*. This feature allows workload control, |
|
||||
@ -164,63 +164,64 @@ policy and logical firewall instance per project.
Whereas security groups operate at the instance-level, FWaaS operates at
the perimeter to filter traffic at the neutron router.

.. Note::
.. note::

FWaaS is currently in technical preview; untested operation is not
recommended.
FWaaS is currently in technical preview; untested operation is not
recommended.

The example diagram illustrates the flow of ingress and egress traffic
for the VM2 instance:

|FWaaS architecture|
.. figure:: ../../common/figures/fwaas.png

**To enable FWaaS**

FWaaS management options are also available in the OpenStack dashboard.

#. Enable the FWaaS plug-in in the :file:`/etc/neutron/neutron.conf` file:
#. Enable the FWaaS plug-in in the ``/etc/neutron/neutron.conf`` file:

.. code-block:: ini

service_plugins = firewall
[service_providers]
...
service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_
firewall.OVSHybridIptablesFirewallDriver:default
service_plugins = firewall
[service_providers]
...
service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_
firewall.OVSHybridIptablesFirewallDriver:default

[fwaas]
driver = neutron_fwaas.services.firewall.drivers.linux.iptables_
fwaas.IptablesFwaasDriver
enabled = True
[fwaas]
driver = neutron_fwaas.services.firewall.drivers.linux.iptables_
fwaas.IptablesFwaasDriver
enabled = True

.. Note::
.. note::

On Ubuntu, modify the ``[fwaas]`` section in the
:file:`/etc/neutron/fwaas_driver.ini` file instead of
:file:`/etc/neutron/neutron.conf`.
On Ubuntu, modify the ``[fwaas]`` section in the
``/etc/neutron/fwaas_driver.ini`` file instead of
``/etc/neutron/neutron.conf``.

#. Create the required tables in the database:

.. code:: console
.. code-block:: console

# neutron-db-manage --service fwaas upgrade head
# neutron-db-manage --service fwaas upgrade head

#. Enable the option in the
:file:`/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py`
``/usr/share/openstack-dashboard/openstack_dashboard/
local/local_settings.py``
file, which is typically located on the controller node:

.. code:: ini
.. code-block:: ini

OPENSTACK_NEUTRON_NETWORK = {
...
'enable_firewall' = True,
...
}
OPENSTACK_NEUTRON_NETWORK = {
...
'enable_firewall' = True,
...
}

Apply the settings by restarting the web server.

#. Restart the neutron-l3-agent and neutron-server services to apply the
settings.
#. Restart the ``neutron-l3-agent`` and ``neutron-server`` services
to apply the settings.

**To configure Firewall-as-a-Service**

@ -229,20 +230,20 @@ create a firewall that applies the policy.

#. Create a firewall rule:

.. code:: console
.. code-block:: console

$ neutron firewall-rule-create --protocol {tcp|udp|icmp|any}
--destination-port PORT_RANGE --action {allow|deny}
$ neutron firewall-rule-create --protocol {tcp|udp|icmp|any}
--destination-port PORT_RANGE --action {allow|deny}

The Networking client requires a protocol value; if the rule is protocol
agnostic, you can use the ``any`` value.

#. Create a firewall policy:

.. code:: console
.. code-block:: console

$ neutron firewall-policy-create --firewall-rules
"FIREWALL_RULE_IDS_OR_NAMES" myfirewallpolicy
$ neutron firewall-policy-create --firewall-rules
"FIREWALL_RULE_IDS_OR_NAMES" myfirewallpolicy

Separate firewall rule IDs or names with spaces. The order in which you
specify the rules is important.
@ -260,22 +261,22 @@ create a firewall that applies the policy.
firewall-policy-create>`__
in the OpenStack Command-Line Interface Reference.

.. Note::
.. note::

FWaaS always adds a default ``deny all`` rule at the lowest precedence of
each policy. Consequently, a firewall policy with no rules blocks
all traffic by default.
FWaaS always adds a default ``deny all`` rule at the lowest precedence of
each policy. Consequently, a firewall policy with no rules blocks
all traffic by default.

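As an aside (not part of the commit above), the first-match-wins evaluation with an implicit lowest-precedence "deny all" rule that the FWaaS note describes can be sketched in Python; the rule and packet dictionaries here are hypothetical simplifications, not the neutron data model:

```python
# Illustrative sketch: ordered firewall rules evaluated first-match-wins,
# with an implicit "deny all" rule at the lowest precedence.

def first_match_action(rules, packet):
    """Return the action of the first rule matching the packet.

    ``rules`` is an ordered list of dicts such as
    {"protocol": "tcp", "port": 80, "action": "allow"}; a protocol of
    "any" matches every protocol, and a missing port matches every port.
    """
    for rule in rules:
        proto_ok = rule["protocol"] in ("any", packet["protocol"])
        port_ok = rule.get("port") is None or rule["port"] == packet["port"]
        if proto_ok and port_ok:
            return rule["action"]
    return "deny"  # implicit default deny-all at lowest precedence

policy = [{"protocol": "tcp", "port": 80, "action": "allow"}]
print(first_match_action(policy, {"protocol": "tcp", "port": 80}))  # allow
print(first_match_action(policy, {"protocol": "udp", "port": 53}))  # deny
print(first_match_action([], {"protocol": "tcp", "port": 22}))      # deny
```

An empty policy therefore denies everything, which is also why the order of the rules passed to ``firewall-policy-create`` matters.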
#. Create a firewall:

.. code:: console
.. code-block:: console

$ neutron firewall-create FIREWALL_POLICY_UUID
$ neutron firewall-create FIREWALL_POLICY_UUID

.. Note::
.. note::

The firewall remains in PENDING\_CREATE state until you create a
Networking router and attach an interface to it.
The firewall remains in PENDING\_CREATE state until you create a
Networking router and attach an interface to it.

**Allowed-address-pairs.**

@ -284,7 +285,7 @@ mac_address/ip_address(cidr) pairs that pass through a port regardless
of subnet. This enables the use of protocols such as VRRP, which floats
an IP address between two instances to enable fast data plane failover.

.. Note::
.. note::

Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins
support the allowed-address-pairs extension.
@ -293,20 +294,18 @@ an IP address between two instances to enable fast data plane failover.

- Create a port with a specified allowed address pair:

.. code:: console
.. code-block:: console

$ neutron port-create net1 --allowed-address-pairs type=dict
list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR

- Update a port by adding allowed address pairs:

.. code:: console
.. code-block:: console

$ neutron port-update PORT_UUID --allowed-address-pairs type=dict
list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR

.. |FWaaS architecture| image:: ../../common/figures/fwaas.png


Virtual-Private-Network-as-a-Service (VPNaaS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@ -6,29 +6,29 @@ This section describes how to use the agent management (alias agent) and
scheduler (alias agent_scheduler) extensions for DHCP agents
scalability and HA.

.. Note::
.. note::

Use the :command:`neutron ext-list` client command to check if these
extensions are enabled:
Use the :command:`neutron ext-list` client command to check if these
extensions are enabled:

.. code:: console
.. code-block:: console

$ neutron ext-list -c name -c alias
$ neutron ext-list -c name -c alias

+-----------------+--------------------------+
| alias | name |
+-----------------+--------------------------+
| agent_scheduler | Agent Schedulers |
| binding | Port Binding |
| quotas | Quota management support |
| agent | agent |
| provider | Provider Network |
| router | Neutron L3 Router |
| lbaas | Load Balancing service |
| extraroute | Neutron Extra Route |
+-----------------+--------------------------+
+-----------------+--------------------------+
| alias | name |
+-----------------+--------------------------+
| agent_scheduler | Agent Schedulers |
| binding | Port Binding |
| quotas | Quota management support |
| agent | agent |
| provider | Provider Network |
| router | Neutron L3 Router |
| lbaas | Load Balancing service |
| extraroute | Neutron Extra Route |
+-----------------+--------------------------+

|image0|
.. figure:: ../../common/figures/demo_multiple_dhcp_agents.png

There will be three hosts in the setup.

@ -55,78 +55,78 @@ Configuration

**controlnode: neutron server**

#. Neutron configuration file :file:`/etc/neutron/neutron.conf`:
#. Neutron configuration file ``/etc/neutron/neutron.conf``:

.. code-block:: ini

[DEFAULT]
core_plugin = linuxbridge
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5
[DEFAULT]
core_plugin = linuxbridge
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5

#. Update the plug-in configuration file
:file:`/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini`:
``/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini``:

.. code-block:: ini

[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0

**HostA and HostB: L2 agent**

#. Neutron configuration file :file:`/etc/neutron/neutron.conf`:
#. Neutron configuration file ``/etc/neutron/neutron.conf``:

.. code-block:: ini

[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA
[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA

#. Update the plug-in configuration file
:file:`/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini`:
``/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini``:

.. code-block:: ini

[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0

#. Update the nova configuration file :file:`/etc/nova/nova.conf`:
#. Update the nova configuration file ``/etc/nova/nova.conf``:

.. code-block:: ini

[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
firewall_driver=nova.virt.firewall.NoopFirewallDriver

[neutron]
admin_username=neutron
admin_password=servicepassword
admin_auth_url=http://controlnode:35357/v2.0/
auth_strategy=keystone
admin_tenant_name=servicetenant
url=http://100.1.1.10:9696/
[neutron]
admin_username=neutron
admin_password=servicepassword
admin_auth_url=http://controlnode:35357/v2.0/
auth_strategy=keystone
admin_tenant_name=servicetenant
url=http://100.1.1.10:9696/

**HostA and HostB: DHCP agent**

- Update the DHCP configuration file :file:`/etc/neutron/dhcp_agent.ini`:
- Update the DHCP configuration file ``/etc/neutron/dhcp_agent.ini``:

.. code:: ini
.. code-block:: ini

[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
@ -137,23 +137,23 @@ Commands in agent management and scheduler extensions
The following commands require the tenant running the command to have an
admin role.

.. Note::
.. note::

Ensure that the following environment variables are set. These are
used by the various clients to access the Identity service.
Ensure that the following environment variables are set. These are
used by the various clients to access the Identity service.

.. code:: bash
.. code-block:: bash

export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/
export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/

**Settings**

To experiment, you need VMs and a neutron network:

.. code:: console
.. code-block:: console

$ nova list

@ -180,22 +180,22 @@ neutron server when it starts up.

#. List all agents:

.. code:: console
.. code-block:: console

$ neutron agent-list
$ neutron agent-list

+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True |
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | True |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True |
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | True |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+

The output shows information for four agents. The ``alive`` field shows
``:-)`` if the agent reported its state within the period defined by the
``agent_down_time`` option in the :file:`neutron.conf` file. Otherwise the
``agent_down_time`` option in the ``neutron.conf`` file. Otherwise the
``alive`` is ``xxx``.

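As an illustration (not part of the commit above), the ``alive`` flag can be derived from ``heartbeat_timestamp`` and the ``agent_down_time`` option as sketched below; the ``is_alive`` helper is an assumption for this example, with the timestamp format taken from the ``agent-show`` output:

```python
# Illustrative sketch: an agent reads as ":-)" when its last heartbeat
# falls within agent_down_time seconds of the server clock, else "xxx".
from datetime import datetime, timedelta

def is_alive(heartbeat_timestamp, now, agent_down_time=5):
    """Both timestamps are on the neutron server, so the agents
    themselves do not need synchronized clocks for this check."""
    last = datetime.strptime(heartbeat_timestamp, "%Y-%m-%dT%H:%M:%S.%f")
    return now - last <= timedelta(seconds=agent_down_time)

now = datetime(2013, 3, 17, 1, 37, 25)
print(is_alive("2013-03-17T01:37:22.000000", now))  # True  -> ":-)"
print(is_alive("2013-03-16T01:59:45.000000", now))  # False -> "xxx"
```

The `agent_down_time = 5` default above matches the value set in the example ``neutron.conf`` earlier in this section.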
#. List the DHCP agents that host a specified network:
@ -207,63 +207,63 @@ neutron server when it starts up.

#. List DHCP agents that host a specified network:

.. code:: console
.. code-block:: console

$ neutron dhcp-agent-list-hosting-net net1
$ neutron dhcp-agent-list-hosting-net net1

+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+

#. List the networks hosted by a given DHCP agent:

This command is to show which networks a given dhcp agent is managing.

.. code:: console
.. code-block:: console

$ neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d
$ neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d

+------------------------+------+---------------------------------+
| id | name | subnets |
+------------------------+------+---------------------------------+
| 89dca1c6-c7d4-4f7a | | |
| -b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd |
| | | -8e45-d5cf646db9d1 10.0.1.0/24 |
+------------------------+------+---------------------------------+
+------------------------+------+---------------------------------+
| id | name | subnets |
+------------------------+------+---------------------------------+
| 89dca1c6-c7d4-4f7a | | |
| -b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd |
| | | -8e45-d5cf646db9d1 10.0.1.0/24 |
+------------------------+------+---------------------------------+

#. Show agent details.

The :command:`agent-show` command shows details for a specified agent:

.. code:: console
.. code-block:: console

$ neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d
$ neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d

+--------------------+---------------------------------------------------+
| Field | Value |
+--------------------+---------------------------------------------------+
| admin_state_up | True |
| agent_type | DHCP agent |
| alive | False |
| binary | neutron-dhcp-agent |
| configurations |{ |
| | "subnets": 1, |
| | "use_namespaces": true, |
| | "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",|
| | "networks": 1, |
| | "dhcp_lease_time": 120, |
| | "ports": 3 |
| |} |
| created_at | 2013-03-16T01:16:18.000000 |
| description | |
| heartbeat_timestamp| 2013-03-17T01:37:22.000000 |
| host | HostA |
| id | 58f4ce07-6789-4bb3-aa42-ed3779db2b03 |
| started_at | 2013-03-16T06:48:39.000000 |
| topic | dhcp_agent |
+--------------------+---------------------------------------------------+
+--------------------+---------------------------------------------------+
| Field | Value |
+--------------------+---------------------------------------------------+
| admin_state_up | True |
| agent_type | DHCP agent |
| alive | False |
| binary | neutron-dhcp-agent |
| configurations |{ |
| | "subnets": 1, |
| | "use_namespaces": true, |
| | "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",|
| | "networks": 1, |
| | "dhcp_lease_time": 120, |
| | "ports": 3 |
| |} |
| created_at | 2013-03-16T01:16:18.000000 |
| description | |
| heartbeat_timestamp| 2013-03-17T01:37:22.000000 |
| host | HostA |
| id | 58f4ce07-6789-4bb3-aa42-ed3779db2b03 |
| started_at | 2013-03-16T06:48:39.000000 |
| topic | dhcp_agent |
+--------------------+---------------------------------------------------+

In this output, ``heartbeat_timestamp`` is the time on the neutron
server. You do not need to synchronize all agents to this time for this
@ -274,30 +274,30 @@ neutron server when it starts up.
Different types of agents show different details. The following output
shows information for a Linux bridge agent:

.. code:: console
.. code-block:: console

$ neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695
$ neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695

+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| binary | neutron-linuxbridge-agent |
| configurations | { |
| | "physnet1": "eth0", |
| | "devices": "4" |
| | } |
| created_at | 2013-03-16T01:49:52.000000 |
| description | |
| disabled | False |
| group | agent |
| heartbeat_timestamp | 2013-03-16T01:59:45.000000 |
| host | HostB |
| id | ed96b856-ae0f-4d75-bb28-40a47ffd7695 |
| topic | N/A |
| started_at | 2013-03-16T06:48:39.000000 |
| type | Linux bridge agent |
+---------------------+--------------------------------------+
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| binary | neutron-linuxbridge-agent |
| configurations | { |
| | "physnet1": "eth0", |
| | "devices": "4" |
| | } |
| created_at | 2013-03-16T01:49:52.000000 |
| description | |
| disabled | False |
| group | agent |
| heartbeat_timestamp | 2013-03-16T01:59:45.000000 |
| host | HostB |
| id | ed96b856-ae0f-4d75-bb28-40a47ffd7695 |
| topic | N/A |
| started_at | 2013-03-16T06:48:39.000000 |
| type | Linux bridge agent |
+---------------------+--------------------------------------+

The output shows ``bridge-mapping`` and the number of virtual network
devices on this L2 agent.
@ -315,18 +315,18 @@ DHCP agent and remove one from it.
randomly. You can design more sophisticated scheduling algorithms in the
same way as nova-schedule later on.

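As an illustration (not part of the commit above), the random selection among eligible DHCP agents described in the preceding paragraph can be sketched as follows; the agent dictionaries mirror the ``agent-list`` columns but are hypothetical simplifications of the scheduler's inputs:

```python
# Illustrative sketch: pick one alive, admin-enabled DHCP agent at
# random for a newly created network, as the default scheduler does.
import random

def schedule_network(agents, seed=None):
    """Return one eligible DHCP agent chosen at random, or None."""
    eligible = [a for a in agents
                if a["alive"] and a["admin_state_up"]
                and a["agent_type"] == "DHCP agent"]
    if not eligible:
        return None
    return random.Random(seed).choice(eligible)

agents = [
    {"host": "HostA", "agent_type": "DHCP agent", "alive": True,
     "admin_state_up": True},
    {"host": "HostB", "agent_type": "DHCP agent", "alive": True,
     "admin_state_up": True},
    {"host": "HostA", "agent_type": "Linux bridge agent", "alive": True,
     "admin_state_up": True},
]
chosen = schedule_network(agents, seed=0)
print(chosen["host"] in ("HostA", "HostB"))  # True
```

A more sophisticated scheduler would only need to replace the random choice with a policy such as least-loaded, which is the kind of extension the text above alludes to.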
.. code:: console
.. code-block:: console

$ neutron net-create net2
$ neutron subnet-create net2 9.0.1.0/24 --name subnet2
$ neutron port-create net2
$ neutron dhcp-agent-list-hosting-net net2
$ neutron net-create net2
$ neutron subnet-create net2 9.0.1.0/24 --name subnet2
$ neutron port-create net2
$ neutron dhcp-agent-list-hosting-net net2

+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+

It is allocated to DHCP agent on HostA. If you want to validate the
behavior through the :command:`dnsmasq` command, you must create a subnet for
@ -337,18 +337,18 @@ DHCP agent and remove one from it.

To add another DHCP agent to host the network, run this command:

.. code:: console
.. code-block:: console

$ neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2
Added network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2
$ neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2
Added network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2

+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+

Both DHCP agents host the ``net2`` network.

@ -357,18 +357,18 @@ DHCP agent and remove one from it.
This command is the sibling command for the previous one. Remove
``net2`` from the DHCP agent for HostA:

.. code:: console
.. code-block:: console

$ neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d
net2
Removed network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2
$ neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d
net2
Removed network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2

+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+

You can see that only the DHCP agent for HostB is hosting the ``net2``
network.
@ -380,7 +380,7 @@ in turn to see if the VM can still get the desired IP.

#. Boot a VM on net2:
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron net-list
|
||||
|
||||
@ -395,38 +395,38 @@ in turn to see if the VM can still get the desired IP.
|
||||
| | | 65f68eedcaaa 9.0.1.0/24 |
|
||||
+-------------------------+------+-----------------------------+
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ nova boot --image tty --flavor 1 myserver4 \
|
||||
--nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a
|
||||
$ nova boot --image tty --flavor 1 myserver4 \
|
||||
--nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ nova list
|
||||
$ nova list
|
||||
|
||||
+-------------------------------------+----------+-------+---------------+
|
||||
| ID | Name | Status| Networks |
|
||||
+-------------------------------------+----------+-------+---------------+
|
||||
|c394fcd0-0baa-43ae-a793-201815c3e8ce |myserver1 |ACTIVE | net1=10.0.1.3 |
|
||||
|2d604e05-9a6c-4ddb-9082-8a1fbdcc797d |myserver2 |ACTIVE | net1=10.0.1.4 |
|
||||
|c7c0481c-3db8-4d7a-a948-60ce8211d585 |myserver3 |ACTIVE | net1=10.0.1.5 |
|
||||
|f62f4731-5591-46b1-9d74-f0c901de567f |myserver4 |ACTIVE | net2=9.0.1.2 |
|
||||
+-------------------------------------+----------+-------+---------------+
|
||||
+-------------------------------------+----------+-------+---------------+
|
||||
| ID | Name | Status| Networks |
|
||||
+-------------------------------------+----------+-------+---------------+
|
||||
|c394fcd0-0baa-43ae-a793-201815c3e8ce |myserver1 |ACTIVE | net1=10.0.1.3 |
|
||||
|2d604e05-9a6c-4ddb-9082-8a1fbdcc797d |myserver2 |ACTIVE | net1=10.0.1.4 |
|
||||
|c7c0481c-3db8-4d7a-a948-60ce8211d585 |myserver3 |ACTIVE | net1=10.0.1.5 |
|
||||
|f62f4731-5591-46b1-9d74-f0c901de567f |myserver4 |ACTIVE | net2=9.0.1.2 |
|
||||
+-------------------------------------+----------+-------+---------------+
|
||||
|
||||
#. Make sure both DHCP agents hosting 'net2':
|
||||
#. Make sure both DHCP agents hosting ``net2``:
|
||||
|
||||
Use the previous commands to assign the network to agents.
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron dhcp-agent-list-hosting-net net2
|
||||
$ neutron dhcp-agent-list-hosting-net net2
|
||||
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| id | host | admin_state_up | alive |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
|
||||
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| id | host | admin_state_up | alive |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
|
||||
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
|
||||
**Test the HA**
|
||||
|
||||
@ -455,7 +455,7 @@ Remove the resources on the agent before you delete the agent.
|
||||
|
||||
To run the following commands, you must stop the DHCP agent on HostA.
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577
|
||||
-9ec7-908f2d48622d
|
||||
@ -470,21 +470,20 @@ To run the following commands, you must stop the DHCP agent on HostA.
|
||||
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
|
||||
+--------------------------------------+--------------------+-------+-------+----------------+
|
||||
|
||||
.. code:: console
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron agent-delete a0c1c21c-d4f4-4577-9ec7-908f2d48622d
|
||||
Deleted agent: a0c1c21c-d4f4-4577-9ec7-908f2d48622d
|
||||
$ neutron agent-list
|
||||
$ neutron agent-delete a0c1c21c-d4f4-4577-9ec7-908f2d48622d
|
||||
Deleted agent: a0c1c21c-d4f4-4577-9ec7-908f2d48622d
|
||||
$ neutron agent-list
|
||||
|
||||
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-)   | True           |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-)   | True           |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent         | HostB | :-)   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+
After deletion, if you restart the DHCP agent, it appears on the agent
list again.
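The agent tables shown in this change can also be checked programmatically. A minimal sketch, assuming the standard neutron CLI ASCII-table output; the ``parse_agent_table`` and ``dead_agents`` helpers below are hypothetical, not part of python-neutronclient:

```python
# Hypothetical helpers: parse the ASCII table printed by `neutron agent-list`
# and report agents that are not alive.
def parse_agent_table(text):
    # Keep only the cell rows (border rows start with "+").
    lines = [l for l in text.splitlines() if l.startswith("|")]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[1:]:
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return rows

def dead_agents(rows):
    # neutron renders a live agent as ":-)" and a dead one as "xxx".
    return [r["id"] for r in rows if r["alive"] != ":-)"]

SAMPLE = """\
+----+-------+-------+
| id | host  | alive |
+----+-------+-------+
| a0 | HostA | :-)   |
| f2 | HostB | xxx   |
+----+-------+-------+
"""
```

Feeding the sample table through ``dead_agents`` flags only the ``xxx`` row, which is how an operator script could decide when to fail over a DHCP agent.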
.. |image0| image:: ../../common/figures/demo_multiple_dhcp_agents.png
@ -5,16 +5,16 @@ Use Networking
You can manage OpenStack Networking services by using the service
command. For example:

.. code:: console
.. code-block:: console

# service neutron-server stop
# service neutron-server status
# service neutron-server start
# service neutron-server restart
Log files are in the :file:`/var/log/neutron` directory.
Log files are in the ``/var/log/neutron`` directory.

Configuration files are in the :file:`/etc/neutron` directory.
Configuration files are in the ``/etc/neutron`` directory.

Cloud administrators and tenants can use OpenStack Networking to build
rich network topologies. Cloud administrators can create network
@ -78,14 +78,14 @@ basic network operations:

**Basic Networking operations**

.. Note::
.. note::

The ``device_owner`` field describes who owns the port. A port whose
``device_owner`` begins with:
- ``network`` is created by Networking.

- ``compute`` is created by Compute.

Administrative operations
-------------------------
@ -94,24 +94,24 @@ The cloud administrator can run any :command:`neutron` command on behalf of
tenants by specifying an Identity ``tenant_id`` in the command, as
follows:

.. code:: console
.. code-block:: console

$ neutron net-create --tenant-id TENANT_ID NETWORK_NAME
For example:

.. code:: console
.. code-block:: console

$ neutron net-create --tenant-id 5e4bbe24b67a4410bc4d9fae29ec394e net1
.. Note::
.. note::

To view all tenant IDs in Identity, run the following command as an
Identity service admin user:

.. code:: console
.. code-block:: console

$ keystone tenant-list
Advanced Networking operations
------------------------------
@ -215,17 +215,17 @@ complete basic VM networking operations:

**Basic Compute and Networking operations**

.. Note::
.. note::

The ``device_id`` can also be a logical router ID.
.. Note::
.. note::

- When you boot a Compute VM, a port on the network that
corresponds to the VM NIC is automatically created and associated
with the default security group. You can configure `security
group rules <#enabling_ping_and_ssh>`__ to enable users to access
the VM.
.. _Create and delete VMs:
- When you delete a Compute VM, the underlying Networking port is
@ -267,28 +267,28 @@ complete advanced VM creation operations:

**Advanced VM creation operations**

.. Note::
.. note::

Cloud images that distribution vendors offer usually have only one
active NIC configured. When you boot with multiple NICs, you must
configure additional interfaces on the image or the NICs are not
reachable.
The following Debian/Ubuntu-based example shows how to set up the
interfaces within the instance in the ``/etc/network/interfaces``
file. You must apply this configuration to the image.
.. code:: bash
.. code-block:: bash

# The loopback network interface
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp
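The interfaces file above follows a regular pattern, so a provisioning script can generate it for any NIC count. A minimal sketch; the ``render_interfaces`` helper is hypothetical, not part of any OpenStack tool:

```python
# Hypothetical helper: render an /etc/network/interfaces file that brings up
# the loopback device plus N DHCP-configured NICs, mirroring the
# Debian/Ubuntu example above.
def render_interfaces(nic_count):
    parts = [
        "# The loopback network interface",
        "auto lo",
        "iface lo inet loopback",
    ]
    for i in range(nic_count):
        # One stanza per NIC: eth0, eth1, ...
        parts += ["", "auto eth%d" % i, "iface eth%d inet dhcp" % i]
    return "\n".join(parts) + "\n"
```

Calling ``render_interfaces(2)`` reproduces the two-NIC layout shown above; the output would be written into the image at ``/etc/network/interfaces``.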
Enable ping and SSH on VMs (security groups)
--------------------------------------------

@ -300,30 +300,30 @@ you are using. If you are using a plug-in that:

group rules directly by using the :command:`neutron security-group-rule-create`
command. This example enables ``ping`` and ``ssh`` access to your VMs.
.. code:: console
.. code-block:: console

$ neutron security-group-rule-create --protocol icmp \
--direction ingress default
.. code:: console
.. code-block:: console

$ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default
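Expressed as Networking API request bodies, the two CLI rules above look roughly like the dictionaries below. This is an illustrative sketch: the field names follow the Networking security-group-rule resource, but the helper functions themselves are hypothetical, and the CLI would normally resolve the group name ``default`` to an ID first.

```python
# Illustrative request bodies that roughly correspond to the two
# `neutron security-group-rule-create` calls above (ping and SSH ingress).
def icmp_rule(group_id):
    # Allow inbound ICMP (ping) to members of the group.
    return {"security_group_rule": {
        "security_group_id": group_id,
        "direction": "ingress",
        "protocol": "icmp",
    }}

def ssh_rule(group_id):
    # Allow inbound TCP port 22 (SSH) to members of the group.
    return {"security_group_rule": {
        "security_group_id": group_id,
        "direction": "ingress",
        "protocol": "tcp",
        "port_range_min": 22,
        "port_range_max": 22,
    }}
```

Either body would be POSTed to the security-group-rules endpoint; the CLI commands above do exactly this on your behalf.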
- Does not implement Networking security groups, you can configure
security group rules by using the :command:`nova secgroup-add-rule` or
:command:`euca-authorize` command. These :command:`nova` commands enable
``ping`` and ``ssh`` access to your VMs.
.. code:: console
.. code-block:: console

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
.. Note::
.. note::

If your plug-in implements Networking security groups, you can also
leverage Compute security groups by setting
``security_group_api = neutron`` in the :file:`nova.conf` file. After
``security_group_api = neutron`` in the ``nova.conf`` file. After
you set this option, all Compute security group commands are proxied
to Networking.
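For reference, a minimal sketch of how that option sits in ``nova.conf`` (assuming it belongs in the ``[DEFAULT]`` section, as in nova releases of this era):

```ini
[DEFAULT]
# Proxy all Compute security group commands to Networking.
security_group_api = neutron
```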