[cloud-admin-guide] Converting networking files to RST

Converted the following files:
1. networking.xml
2. section_networking-adv-config.xml
3. section_networking_adv_operational_features.xml
4. section_networking_arch.xml
5. section_networking_config-plugins.xml
6. section_networking-use.xml
7. section_networking_auth.xml

Change-Id: I2502951a9bf17511defc07efc03bc6676529342a
Implements: blueprint reorganise-user-guides
asettle
2015-07-08 11:47:11 +10:00
committed by Darren Chan
parent a2f2e7e49b
commit e287ad3c3b
9 changed files with 1191 additions and 0 deletions

View File

@@ -24,6 +24,7 @@ Contents
database.rst
orchestration.rst
blockstorage.rst
networking.rst
telemetry.rst
common/app_support.rst
common/glossary.rst

View File

@@ -0,0 +1,24 @@
==========
Networking
==========
Learn OpenStack Networking concepts, architecture, and basic and
advanced ``neutron`` and ``nova`` command-line interface (CLI) commands.
.. toctree::
:maxdepth: 2
networking_config-plugins.rst
networking_arch.rst
networking_adv-config.rst
networking_use.rst
networking_adv-operational-features.rst
networking_auth.rst
.. TODO (asettle)
networking_adv-features.rst
networking_multi-dhcp-agents.rst
networking_introduction.rst
networking_config-agents.rst
networking_config-identity.rst

View File

@@ -0,0 +1,77 @@
==============================
Advanced configuration options
==============================
This section describes advanced configuration options for various system
components: options for which the default values work, but that you
might want to customize for your deployment. After installing from
packages, ``$NEUTRON_CONF_DIR`` is :file:`/etc/neutron`.
L3 metering agent
~~~~~~~~~~~~~~~~~
You can run an L3 metering agent that enables layer-3 traffic metering.
In general, you should launch the metering agent on all nodes that run
the L3 agent:
::

   neutron-metering-agent --config-file NEUTRON_CONFIG_FILE \
       --config-file L3_METERING_CONFIG_FILE
You must configure a driver that matches the plug-in that runs on the
service. The driver adds metering to the routing interface.
+------------------------------------------+---------------------------------+
| Option | Value |
+==========================================+=================================+
| **Open vSwitch** | |
+------------------------------------------+---------------------------------+
| interface\_driver | |
| ($NEUTRON\_CONF\_DIR/metering\_agent.ini)| neutron.agent.linux.interface. |
| | OVSInterfaceDriver |
+------------------------------------------+---------------------------------+
| **Linux Bridge** | |
+------------------------------------------+---------------------------------+
| interface\_driver | |
| ($NEUTRON\_CONF\_DIR/metering\_agent.ini)| neutron.agent.linux.interface. |
| | BridgeInterfaceDriver |
+------------------------------------------+---------------------------------+
Namespace
---------
The metering agent and the L3 agent must have the same network
namespaces configuration.
.. note::
If the Linux installation does not support network namespaces, you
must disable network namespaces in the L3 metering configuration
file. The default value of the ``use_namespaces`` option is
``True``.
.. code:: ini
use_namespaces = False
L3 metering driver
------------------
You must configure any driver that implements the metering abstraction.
Currently the only available implementation uses iptables for metering.
.. code:: ini
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
L3 metering service driver
--------------------------
To enable L3 metering, you must set the following option in the
:file:`neutron.conf` file on the host that runs neutron-server:
.. code:: ini
service_plugins = metering
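Putting these options together, a minimal :file:`metering_agent.ini` for an
Open vSwitch deployment might look like the following sketch (substitute the
Linux Bridge interface driver if that matches your plug-in; the
``service_plugins = metering`` line above belongs in :file:`neutron.conf` on
the host that runs neutron-server, not in the agent file):

.. code:: ini

   [DEFAULT]
   # Interface driver for an Open vSwitch deployment; use
   # BridgeInterfaceDriver instead for Linux Bridge.
   interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
   # The iptables implementation is currently the only metering driver.
   driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
   # Must match the L3 agent; set to False only if the host does not
   # support network namespaces.
   use_namespaces = True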

View File

@@ -0,0 +1,160 @@
=============================
Advanced operational features
=============================
Logging settings
~~~~~~~~~~~~~~~~
Networking components use the Python logging module for logging. You can
provide logging configuration in :file:`neutron.conf` or as command-line
options. Command-line options override the values in :file:`neutron.conf`.
To configure logging for Networking components, use one of these
methods:
- Provide logging settings in a logging configuration file.
See `Python logging
how-to <http://docs.python.org/howto/logging.html>`__ to learn more
about logging.
- Provide logging settings in :file:`neutron.conf`.
.. code-block:: ini
:linenos:
[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# Show more verbose log output (sets INFO log level output) if debug
# is False
# verbose = False
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog = False
# syslog_log_facility = LOG_USER
# if use_syslog is False, we can set log_file and log_dir.
# if use_syslog is False and we do not set log_file,
# the log will be printed to stdout.
# log_file =
# log_dir =
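Because command-line options take precedence, you can also enable debugging
temporarily without editing :file:`neutron.conf`. The following is a sketch
that uses the standard logging options; the configuration and log paths are
typical for packaged installations and may differ in your deployment:

.. code-block:: console

   # neutron-server --config-file /etc/neutron/neutron.conf \
     --debug --log-file /var/log/neutron/server.log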
Notifications
~~~~~~~~~~~~~
Notifications can be sent when Networking resources such as network,
subnet and port are created, updated or deleted.
Notification options
--------------------
To support the DHCP agent, the ``rpc_notifier`` driver must be set. To set up
the notifications, edit the notification options in :file:`neutron.conf`:
.. code-block:: ini
:linenos:
# Driver or drivers to handle sending notifications. (multi
# valued)
#notification_driver=
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
notification_topics = notifications
Setting cases
-------------
Logging and RPC
^^^^^^^^^^^^^^^
These options configure the Networking server to send notifications
through logging and RPC. The logging options are described in the
OpenStack Configuration Reference. RPC notifications go to the
``notifications.info`` queue bound to a topic exchange defined by
``control_exchange`` in :file:`neutron.conf`.
.. code-block:: ini
:linenos:
# ============ Notification System Options ====================
# Notifications can be sent when network/subnet/port are created,
# updated or deleted. There are three methods of sending notifications:
# logging (via the log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to
# set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications
# Options defined in oslo.messaging
# The default exchange under which topics are scoped. May be
# overridden by an exchange name specified in the
# transport_url option. (string value)
#control_exchange=openstack
Multiple RPC topics
^^^^^^^^^^^^^^^^^^^
These options configure the Networking server to send notifications to
multiple RPC topics. RPC notifications go to ``notifications_one.info``
and ``notifications_two.info`` queues bound to a topic exchange defined
by ``control_exchange`` in :file:`neutron.conf`.
.. code-block:: ini
:linenos:
# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created,
# updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set
# logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two
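If RabbitMQ is your messaging back end, you can confirm that the expected
queues exist once a notification has been emitted and a consumer has
connected. This is a quick sanity check to run on the broker host; queue
names follow the ``%s.%(default_notification_level)s`` pattern described
above:

.. code-block:: console

   # rabbitmqctl list_queues name | grep notifications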

View File

@@ -0,0 +1,92 @@
=======================
Networking architecture
=======================
Before you deploy Networking, it is useful to understand the Networking
services and how they interact with the OpenStack components.
Overview
~~~~~~~~
Networking is a standalone component in the OpenStack modular
architecture. It's positioned alongside OpenStack components such as
Compute, Image service, Identity, or the Dashboard. Like those
components, a deployment of Networking often involves deploying several
services to a variety of hosts.
The Networking server uses the neutron-server daemon to expose the
Networking API and enable administration of the configured Networking
plug-in. Typically, the plug-in requires access to a database for
persistent storage (also similar to other OpenStack services).
If your deployment uses a controller host to run centralized Compute
components, you can deploy the Networking server to that same host.
However, Networking is entirely standalone and can be deployed to a
dedicated host. Depending on your configuration, Networking can also
include the following agents:
+----------------------------+---------------------------------------------+
| Agent | Description |
+============================+=============================================+
|**plug-in agent** | |
|(``neutron-*-agent``) | Runs on each hypervisor to perform |
| | local vSwitch configuration. The agent that |
| | runs, depends on the plug-in that you use. |
| | Certain plug-ins do not require an agent. |
+----------------------------+---------------------------------------------+
|**dhcp agent** | |
|(``neutron-dhcp-agent``) | Provides DHCP services to tenant networks. |
| | Required by certain plug-ins. |
+----------------------------+---------------------------------------------+
|**l3 agent** | |
|(``neutron-l3-agent``) | Provides L3/NAT forwarding to provide |
| | external network access for VMs on tenant |
| | networks. Required by certain plug-ins. |
+----------------------------+---------------------------------------------+
|**metering agent** | |
|(``neutron-metering-agent``)| Provides L3 traffic metering for tenant |
| | networks. |
+----------------------------+---------------------------------------------+
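On a running deployment, you can check which of these agents are registered
with the Networking server and whether they are alive by using the ``neutron``
CLI (the list you see depends on the plug-in and agents you deployed):

.. code:: console

   $ neutron agent-list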
These agents interact with the main neutron process through RPC (for
example, RabbitMQ or Qpid) or through the standard Networking API. In
addition, Networking integrates with OpenStack components in a number of
ways:
- Networking relies on the Identity service (keystone) for the
authentication and authorization of all API requests.
- Compute (nova) interacts with Networking through calls to its
standard API. As part of creating a VM, the nova-compute service
communicates with the Networking API to plug each virtual NIC on the
VM into a particular network.
- The dashboard (horizon) integrates with the Networking API, enabling
administrators and tenant users to create and manage network services
through a web-based GUI.
VMware NSX integration
~~~~~~~~~~~~~~~~~~~~~~
OpenStack Networking uses the NSX plug-in to integrate with an existing
VMware vCenter deployment. When installed on the network nodes, the NSX
plug-in enables an NSX controller to centrally manage configuration
settings and push them to managed network nodes. Network nodes are
considered managed when they're added as hypervisors to the NSX
controller.
The diagrams below depict some VMware NSX deployment examples. The first
diagram illustrates the traffic flow between VMs on separate Compute
nodes, and the second diagram between two VMs on a single Compute node.
Note the placement of the VMware NSX plug-in and the neutron-server
service on the network node. The green arrow indicates the management
relationship between the NSX controller and the network node.
|VMware NSX deployment example - two Compute nodes|
|VMware NSX deployment example - single Compute node|
.. |VMware NSX deployment example - two Compute nodes| image::
   ../../common/figures/vmware_nsx_ex1.png
.. |VMware NSX deployment example - single Compute node| image::
   ../../common/figures/vmware_nsx_ex2.png

View File

@@ -0,0 +1,256 @@
================================
Authentication and authorization
================================
Networking uses the Identity Service as the default authentication
service. When the Identity Service is enabled, users who submit requests
to the Networking service must provide an authentication token in the
``X-Auth-Token`` request header. Users obtain this token by
authenticating with the Identity Service endpoint. For more information
about authentication with the Identity Service, see `OpenStack Identity
Service API v2.0
Reference <http://developer.openstack.org/api-ref-identity-v2.html>`__.
When the Identity Service is enabled, it is not mandatory to specify the
tenant ID for resources in create requests because the tenant ID is
derived from the authentication token.
The default authorization settings only allow administrative users
to create resources on behalf of a different tenant. Networking uses
information received from Identity to authorize user requests.
Networking handles two kinds of authorization policies:
- **Operation-based** policies specify access criteria for specific
operations, possibly with fine-grained control over specific
attributes.
- **Resource-based** policies specify whether access to a specific
resource is granted or not according to the permissions configured
for the resource (currently available only for the network resource).
The actual authorization policies enforced in Networking might vary
from deployment to deployment.
The policy engine reads entries from the :file:`policy.json` file. The
actual location of this file might vary from distribution to
distribution. Entries can be updated while the system is running, and no
service restart is required. Every time the policy file is updated, the
policies are automatically reloaded. Currently the only way of updating
such policies is to edit the policy file. In this section, the terms
*policy* and *rule* refer to objects that are specified in the same way
in the policy file. There are no syntax differences between a rule and a
policy. A policy is something that is matched directly from the
Networking policy engine. A rule is an element in a policy, which is
evaluated. For instance in ``create_subnet:
[["admin_or_network_owner"]]``, *create_subnet* is a
policy, and *admin_or_network_owner* is a rule.
Policies are triggered by the Networking policy engine whenever one of
them matches a Networking API operation or a specific attribute being
used in a given operation. For instance the ``create_subnet`` policy is
triggered every time a ``POST /v2.0/subnets`` request is sent to the
Networking server; on the other hand ``create_network:shared`` is
triggered every time the *shared* attribute is explicitly specified (and
set to a value different from its default) in a ``POST /v2.0/networks``
request. It is also worth mentioning that policies can also be related
to specific API extensions; for instance
``extension:provider_network:set`` is triggered if the attributes
defined by the Provider Network extensions are specified in an API
request.
An authorization policy can be composed of one or more rules. If more
than one rule is specified, the policy evaluation succeeds if any of the
rules evaluates successfully; if an API operation matches multiple
policies, then all the policies must evaluate successfully. Also,
authorization rules are recursive: once a rule is matched, it can be
resolved to another rule, until a terminal rule is reached.
The Networking policy engine currently defines the following kinds of
terminal rules:
- **Role-based rules** evaluate successfully if the user who submits
the request has the specified role. For instance ``"role:admin"`` is
successful if the user who submits the request is an administrator.
- **Field-based rules** evaluate successfully if a field of the
resource specified in the current request matches a specific value.
For instance ``"field:networks:shared=True"`` is successful if the
``shared`` attribute of the ``network`` resource is set to true.
- **Generic rules** compare an attribute in the resource with an
attribute extracted from the user's security credentials and
evaluate successfully if the comparison is successful. For instance
``"tenant_id:%(tenant_id)s"`` is successful if the tenant identifier
in the resource is equal to the tenant identifier of the user
submitting the request.
This extract is from the default :file:`policy.json` file:
- A rule that evaluates successfully if the current user is an
administrator or the owner of the resource specified in the request
(tenant identifier is equal).
.. code-block:: json
:linenos:
{
"admin_or_owner": [
[
"role:admin"
],
[
"tenant_id:%(tenant_id)s"
]
],
"admin_or_network_owner": [
[
"role:admin"
],
[
"tenant_id:%(network_tenant_id)s"
]
],
"admin_only": [
[
"role:admin"
]
],
"regular_user": [],
"shared": [
[
"field:networks:shared=True"
]
],
"default": [
[
- The default policy that is always evaluated if an API operation does
not match any of the policies in ``policy.json``.
.. code-block:: json
:linenos:
"rule:admin_or_owner"
]
],
"create_subnet": [
[
"rule:admin_or_network_owner"
]
],
"get_subnet": [
[
"rule:admin_or_owner"
],
[
"rule:shared"
]
],
"update_subnet": [
[
"rule:admin_or_network_owner"
]
],
"delete_subnet": [
[
"rule:admin_or_network_owner"
]
],
"create_network": [],
"get_network": [
[
"rule:admin_or_owner"
],
- This policy evaluates successfully if either *admin\_or\_owner*, or
*shared* evaluates successfully.
.. code-block:: json
:linenos:
[
"rule:shared"
]
],
"create_network:shared": [
[
"rule:admin_only"
]
- This policy restricts the ability to manipulate the *shared*
attribute for a network to administrators only.
.. code-block:: json
:linenos:
],
"update_network": [
[
"rule:admin_or_owner"
]
],
"delete_network": [
[
"rule:admin_or_owner"
]
],
"create_port": [],
"create_port:mac_address": [
[
"rule:admin_or_network_owner"
]
],
"create_port:fixed_ips": [
- This policy restricts the ability to manipulate the *mac\_address*
attribute for a port only to administrators and the owner of the
network where the port is attached.
.. code-block:: json
:linenos:
[
"rule:admin_or_network_owner"
]
],
"get_port": [
[
"rule:admin_or_owner"
]
],
"update_port": [
[
"rule:admin_or_owner"
]
],
"delete_port": [
[
"rule:admin_or_owner"
]
]
}
Some operations are restricted to administrators only.
This example shows you how to modify a policy file to permit tenants to
define networks, see their resources, and permit administrative users to
perform all other operations:
.. code-block:: json
:linenos:
{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}
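Because the policy file is reloaded automatically while the service runs, it
is worth validating the JSON after editing it. One quick way is to parse the
file with Python; the path shown is typical for packaged installations and
may differ on your distribution:

.. code-block:: console

   # python -m json.tool /etc/neutron/policy.json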

View File

@@ -0,0 +1,252 @@
======================
Plug-in configurations
======================
For configuration options, see `Networking configuration
options <http://docs.openstack.org/kilo/config-reference
/content/section_networking-options-reference.html>`__
in the Configuration Reference. These sections explain how to configure
specific plug-ins.
Configure Big Switch (Floodlight REST Proxy) plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Edit the :file:`/etc/neutron/neutron.conf` file and add this line:
.. code:: ini
core_plugin = bigswitch
#. In the :file:`/etc/neutron/neutron.conf` file, set the ``service_plugins``
option:
::
service_plugins = neutron.plugins.bigswitch.l3_router_plugin.L3RestProxy
#. Edit the :file:`/etc/neutron/plugins/bigswitch/restproxy.ini` file for the
plug-in and specify a comma-separated list of controller\_ip:port pairs:
.. code:: ini
server = CONTROLLER_IP:PORT
For database configuration, see `Install Networking
Services <http://docs.openstack.org/kilo/install-guide/install
/apt/content/neutron-controller-node.html>`__
in the Installation Guide in the `OpenStack Documentation
index <http://docs.openstack.org>`__. (The link defaults to the Ubuntu
version.)
#. Restart neutron-server to apply the settings:
.. code:: console
# service neutron-server restart
Configure Brocade plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~
#. Install the Brocade-modified Python netconf client (ncclient) library,
which is available at https://github.com/brocade/ncclient:
.. code:: console
$ git clone https://github.com/brocade/ncclient
#. As root, run this command:
.. code:: console
# cd ncclient; python setup.py install
#. Edit the :file:`/etc/neutron/neutron.conf` file and set the following
option:
.. code:: ini
core_plugin = brocade
#. Edit the :file:`/etc/neutron/plugins/brocade/brocade.ini` file for the
Brocade plug-in and specify the admin user name, password, and IP
address of the Brocade switch:
.. code:: ini
[SWITCH]
username = ADMIN
password = PASSWORD
address = SWITCH_MGMT_IP_ADDRESS
ostype = NOS
For database configuration, see `Install Networking
Services <http://docs.openstack.org/kilo/install-guide/install/apt
/content/neutron-controller-node.html>`__
in any of the Installation Guides in the `OpenStack Documentation
index <http://docs.openstack.org>`__. (The link defaults to the Ubuntu
version.)
#. Restart the neutron-server service to apply the settings:
.. code:: console
# service neutron-server restart
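After completing these steps, you can verify that the Brocade-modified
ncclient library is importable by the Python interpreter that runs neutron.
This is a simple, optional sanity check:

.. code:: console

   $ python -c "import ncclient"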
Configure NSX-mh plug-in
~~~~~~~~~~~~~~~~~~~~~~~~
The instructions in this section refer to the VMware NSX-mh platform,
formerly known as Nicira NVP.
#. Install the NSX plug-in:
.. code:: console
# apt-get install neutron-plugin-vmware
#. Edit the :file:`/etc/neutron/neutron.conf` file and set this line:
.. code:: ini
core_plugin = vmware
Example :file:`neutron.conf` file for NSX-mh integration:
.. code:: ini
core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True
#. To configure the NSX-mh controller cluster for OpenStack Networking,
locate the ``[default]`` section in the
:file:`/etc/neutron/plugins/vmware/nsx.ini` file and add the following
entries:
- To establish and configure the connection with the controller cluster
you must set some parameters, including NSX-mh API endpoints, access
credentials, and optionally specify settings for HTTP timeouts,
redirects and retries in case of connection failures:
.. code:: ini
nsx_user = ADMIN_USER_NAME
nsx_password = NSX_USER_PASSWORD
http_timeout = HTTP_REQUEST_TIMEOUT # (seconds) default 75 seconds
retries = HTTP_REQUEST_RETRIES # default 2
redirects = HTTP_REQUEST_MAX_REDIRECTS # default 2
nsx_controllers = API_ENDPOINT_LIST # comma-separated list
To ensure correct operations, the ``nsx_user`` user must have
administrator credentials on the NSX-mh platform.
A controller API endpoint consists of the IP address and port for the
controller; if you omit the port, port 443 is used. If multiple API
endpoints are specified, it is up to the user to ensure that all
these endpoints belong to the same controller cluster. The OpenStack
Networking VMware NSX-mh plug-in does not perform this check, and
results might be unpredictable.
When you specify multiple API endpoints, the plug-in takes care of
load balancing requests on the various API endpoints.
- The UUID of the NSX-mh transport zone that should be used by default
when a tenant creates a network. You can get this value from the
Transport Zones page for the NSX-mh manager. Alternatively, the
transport zone identifier can be retrieved by querying the NSX-mh
API: ``/ws.v1/transport-zone``
.. code:: ini
default_tz_uuid = TRANSPORT_ZONE_UUID
- The UUID of the NSX-mh L3 gateway service to use by default when a
tenant creates a router:
.. code:: ini
default_l3_gw_service_uuid = GATEWAY_SERVICE_UUID
.. Warning::
Ubuntu packaging currently does not update the neutron init
script to point to the NSX-mh configuration file. Instead, you
must manually update :file:`/etc/default/neutron-server` to add this
line:
.. code:: ini
NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini
For database configuration, see `Install Networking
Services <http://docs.openstack.org/kilo/install-guide/install/
apt/content/neutron-controller-node.html>`__
in the Installation Guide.
#. Restart neutron-server to apply settings:
.. code:: console
# service neutron-server restart
.. Warning::
The neutron NSX-mh plug-in does not implement initial
re-synchronization of Neutron resources. Therefore resources that
might already exist in the database when Neutron is switched to the
NSX-mh plug-in will not be created on the NSX-mh backend upon
restart.
Example :file:`nsx.ini` file:
.. code:: ini
[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888
.. Note::
To debug :file:`nsx.ini` configuration issues, run this command from the
host that runs neutron-server:
.. code:: console
# neutron-check-nsx-config PATH_TO_NSX.INI
This command tests whether neutron-server can log into all of the
NSX-mh controllers and the SQL server, and whether all UUID values
are correct.
Configure PLUMgrid plug-in
~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Edit the :file:`/etc/neutron/neutron.conf` file and set this line:
.. code:: ini
core_plugin = plumgrid
#. Edit the [PLUMgridDirector] section in the
:file:`/etc/neutron/plugins/plumgrid/plumgrid.ini` file and specify the IP
address, port, admin user name, and password of the PLUMgrid Director:
.. code:: ini
[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"
For database configuration, see `Install Networking
Services <http://docs.openstack.org/kilo/install-guide/install/
apt/content/neutron-controller-node.html>`__
in the Installation Guide.
#. Restart the neutron-server service to apply the settings:
.. code:: console
# service neutron-server restart
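For any of these plug-ins, a quick way to confirm that neutron-server came
back up with the expected configuration is to list the API extensions that it
now exposes; the set of extensions varies by plug-in:

.. code:: console

   $ neutron ext-list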

View File

@@ -0,0 +1,329 @@
==============
Use Networking
==============
You can manage OpenStack Networking services by using the service
command. For example:
.. code:: console
# service neutron-server stop
# service neutron-server status
# service neutron-server start
# service neutron-server restart
Log files are in the :file:`/var/log/neutron` directory.
Configuration files are in the :file:`/etc/neutron` directory.
Cloud administrators and tenants can use OpenStack Networking to build
rich network topologies. Cloud administrators can create network
connectivity on behalf of tenants.
Core Networking API features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After you install and configure Networking, tenants and administrators
can perform create-read-update-delete (CRUD) API networking operations
by using the Networking API directly or neutron command-line interface
(CLI). The neutron CLI is a wrapper around the Networking API. Every
Networking API call has a corresponding neutron command.
The CLI includes a number of options. For details, see the `OpenStack
End User Guide <http://docs.openstack.org/user-guide/index.html>`__.
Basic Networking operations
---------------------------
To learn about advanced capabilities available through the neutron
command-line interface (CLI), read the networking section in the
`OpenStack End User
Guide <http://docs.openstack.org/user-guide/index.html>`__.
This table shows example neutron commands that enable you to complete
basic network operations:
+-------------------------+-------------------------------------------------+
| Operation | Command |
+=========================+=================================================+
|Creates a network. | |
| | |
| | ``$ neutron net-create net1`` |
+-------------------------+-------------------------------------------------+
|Creates a subnet that is | |
|associated with net1. | |
| | |
| | ``$ neutron subnet-create`` |
| | ``net1 10.0.0.0/24`` |
+-------------------------+-------------------------------------------------+
|Lists ports for a | |
|specified tenant. | |
| | |
| | ``$ neutron port-list`` |
+-------------------------+-------------------------------------------------+
|Lists ports for a | |
|specified tenant | |
|and displays the ``id``, | |
|``fixed_ips``, | |
|and ``device_owner`` | |
|columns. | |
| | |
| | ``$ neutron port-list -c id`` |
| | ``-c fixed_ips -c device_owner`` |
+-------------------------+-------------------------------------------------+
|Shows information for a | |
|specified port. | |
| | ``$ neutron port-show PORT_ID`` |
+-------------------------+-------------------------------------------------+
**Basic Networking operations**
.. Note::
The ``device_owner`` field describes who owns the port. A port whose
``device_owner`` begins with:
- ``network`` is created by Networking.
- ``compute`` is created by Compute.
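For example, a minimal sequence that uses the commands above to create a
network, add a subnet to it, and list the resulting ports (the network name
and CIDR are illustrative):

.. code:: console

   $ neutron net-create net1
   $ neutron subnet-create net1 10.0.0.0/24
   $ neutron port-list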
Administrative operations
-------------------------
The cloud administrator can run any ``neutron`` command on behalf of
tenants by specifying an Identity ``tenant_id`` in the command, as
follows:
.. code:: console
$ neutron net-create --tenant-id TENANT_ID NETWORK_NAME
For example:
.. code:: console
$ neutron net-create --tenant-id 5e4bbe24b67a4410bc4d9fae29ec394e net1
.. Note::
To view all tenant IDs in Identity, run the following command as an
Identity service admin user:
.. code:: console
$ keystone tenant-list
Advanced Networking operations
------------------------------
This table shows example Networking commands that enable you to complete
advanced network operations:
+-------------------------------+--------------------------------------------+
| Operation | Command |
+===============================+============================================+
|Creates a network that | |
|all tenants can use. | |
| | |
| | ``$ neutron net-create`` |
| | ``--shared public-net`` |
+-------------------------------+--------------------------------------------+
|Creates a subnet with a | |
|specified gateway IP address. | |
| | |
| | ``$ neutron subnet-create`` |
| | ``--gateway 10.0.0.254 net1 10.0.0.0/24``|
+-------------------------------+--------------------------------------------+
|Creates a subnet that has | |
|no gateway IP address. | |
| | |
| | ``$ neutron subnet-create`` |
| | ``--no-gateway net1 10.0.0.0/24`` |
+-------------------------------+--------------------------------------------+
|Creates a subnet with DHCP | |
|disabled. | |
| | |
| | ``$ neutron subnet-create`` |
| | ``net1 10.0.0.0/24 --enable-dhcp False`` |
+-------------------------------+--------------------------------------------+
|Specified set of host routes. | |
| | |
| | ``$ neutron subnet-create`` |
| | ``test-net1 40.0.0.0/24 --host-routes``|
| | ``type=dict list=true`` |
| | ``destination=40.0.1.0/24,`` |
| | ``nexthop=40.0.0.2`` |
+-------------------------------+--------------------------------------------+
|Creates a subnet with a | |
|specified set of dns name | |
|servers. | |
| | |
| | ``$ neutron subnet-create test-net1`` |
| | ``40.0.0.0/24 --dns-nameservers`` |
| | ``list=true 8.8.4.4 8.8.8.8`` |
+-------------------------------+--------------------------------------------+
|Displays all ports and | |
|IPs allocated on a network. | |
| | |
| | ``$ neutron port-list --network_id NET_ID``|
+-------------------------------+--------------------------------------------+
**Advanced Networking operations**
Use Compute with Networking
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Basic Compute and Networking operations
---------------------------------------
This table shows example neutron and nova commands that enable you to
complete basic VM networking operations:
+----------------------------------+-----------------------------------------+
| Action | Command |
+==================================+=========================================+
|Checks available networks. | |
| | |
| | ``$ neutron net-list`` |
+----------------------------------+-----------------------------------------+
|Boots a VM with a single NIC on | |
|a selected Networking network. | |
| | |
| | ``$ nova boot --image IMAGE --flavor`` |
| | ``FLAVOR --nic net-id=NET_ID VM_NAME`` |
+----------------------------------+-----------------------------------------+
|Searches for ports with a | |
|``device_id`` that matches the | |
|Compute instance UUID. See :ref: | |
|`Create and delete VMs` | |
| | |
| |``$ neutron port-list --device_id VM_ID``|
+----------------------------------+-----------------------------------------+
|Searches for ports, but shows | |
|onlythe ``mac_address`` of | |
|the port. | |
| | |
| | ``$ neutron port-list --field`` |
| | ``mac_address --device_id VM_ID`` |
+----------------------------------+-----------------------------------------+
|Temporarily disables a port from | |
|sending traffic. | |
| | |
| | ``$ neutron port-update PORT_ID`` |
| | ``--admin_state_up False`` |
+----------------------------------+-----------------------------------------+
**Basic Compute and Networking operations**
.. Note::
The ``device_id`` can also be a logical router ID.
.. Note::
- When you boot a Compute VM, a port on the network that
corresponds to the VM NIC is automatically created and associated
with the default security group. You can configure security group
rules to enable users to access the VM. See `Enable ping and SSH on
VMs (security groups)`_.
.. _Create and delete VMs:
- When you delete a Compute VM, the underlying Networking port is
automatically deleted.
Advanced VM creation operations
-------------------------------
This table shows example nova and neutron commands that enable you to
complete advanced VM creation operations:
+-------------------------------------+--------------------------------------+
| Operation | Command |
+=====================================+======================================+
|Boots a VM with multiple | |
|NICs. | |
| | |
| |``$ nova boot --image IMAGE --flavor``|
| |``FLAVOR --nic net-id=NET1-ID --nic`` |
| |``net-id=NET2-ID VM_NAME`` |
+-------------------------------------+--------------------------------------+
|Boots a VM with a specific IP | |
|address. Note that you cannot | |
|use the ``--num-instances`` | |
|parameter in this case. | |
| | |
| |``$ nova boot --image IMAGE --flavor``|
| | ``FLAVOR --nic net-id=NET-ID,`` |
| | ``v4-fixed-ip=IP-ADDR VM_NAME`` |
+-------------------------------------+--------------------------------------+
|Boots a VM that connects to all | |
|networks that are accessible to the | |
|tenant who submits the request | |
|(without the ``--nic`` option). | |
| | |
| |``$ nova boot --image IMAGE --flavor``|
| |``FLAVOR VM_NAME`` |
+-------------------------------------+--------------------------------------+
**Advanced VM creation operations**
.. Note::
Cloud images that distribution vendors offer usually have only one
active NIC configured. When you boot with multiple NICs, you must
configure additional interfaces on the image or the NICs are not
reachable.
The following Debian/Ubuntu-based example shows how to set up the
interfaces within the instance in the ``/etc/network/interfaces``
file. You must apply this configuration to the image.
.. code:: bash
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet dhcp
Enable ping and SSH on VMs (security groups)
--------------------------------------------
You must configure security group rules depending on the type of plug-in
you are using. If you are using a plug-in that:
- Implements Networking security groups, you can configure security
group rules directly by using the ``neutron security-group-rule-create``
command. This example enables ``ping`` and ``ssh`` access to your VMs.
.. code:: console
$ neutron security-group-rule-create --protocol icmp \
--direction ingress default
.. code:: console
$ neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default
- Does not implement Networking security groups, you can configure
security group rules by using the ``nova secgroup-add-rule`` or
``euca-authorize`` command. These ``nova`` commands enable ``ping``
and ``ssh`` access to your VMs.
.. code:: console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
.. Note::
If your plug-in implements Networking security groups, you can also
leverage Compute security groups by setting
``security_group_api = neutron`` in the :file:`nova.conf` file. After
you set this option, all Compute security group commands are proxied
to Networking.
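If your plug-in implements Networking security groups, you can verify the
rules that you added before testing ``ping`` and ``ssh`` access from an
external host:

.. code:: console

   $ neutron security-group-rule-list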

Binary file not shown.
