[cloud-admin-guide] Converting networking files to RST

Converts the following files:
1. section_networking_config-agents.xml
2. section_networking-multi-dhcp-agents.xml
3. section_networking_introduction.xml

Change-Id: I594f57cb261a8cfea019e5d2a965853370164934
Implements: blueprint reorganise-user-guides
asettle 2015-07-13 14:45:40 +10:00
parent af6464156b
commit b142355f83
4 changed files with 1319 additions and 3 deletions


@@ -8,9 +8,12 @@ advanced ``neutron`` and ``nova`` command-line interface (CLI) commands.
.. toctree::
:maxdepth: 2
networking_introduction.rst
networking_config-plugins.rst
networking_config-agents.rst
networking_arch.rst
networking_adv-config.rst
networking_multi-dhcp-agents.rst
networking_use.rst
networking_adv-operational-features.rst
networking_auth.rst
@@ -18,7 +21,4 @@ advanced ``neutron`` and ``nova`` command-line interface (CLI) commands.
.. TODO (asettle)
networking_adv-features.rst
networking_multi-dhcp-agents.rst
networking_introduction.rst
networking_config-agents.rst
networking_config-identity.rst

networking_config-agents.rst

@@ -0,0 +1,519 @@
========================
Configure neutron agents
========================
Plug-ins typically have requirements for particular software that must
be run on each node that handles data packets. This includes any node
that runs nova-compute and nodes that run dedicated OpenStack Networking
service agents such as ``neutron-dhcp-agent``, ``neutron-l3-agent``,
``neutron-metering-agent``, or ``neutron-lbaas-agent``.
A data-forwarding node typically has a network interface with an IP
address on the management network and another interface on the data
network.
This section shows you how to install and configure a subset of the
available plug-ins, which might include the installation of switching
software (for example, Open vSwitch) as well as agents used to
communicate with the ``neutron-server`` process running elsewhere in the
data center.
Configure data-forwarding nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node set up: NSX plug-in
------------------------
If you use the NSX plug-in, you must also install Open vSwitch on each
data-forwarding node. However, you do not need to install an additional
agent on each node.
.. Warning::
It is critical that you run an Open vSwitch version that is
compatible with the current version of the NSX Controller software.
Do not use the Open vSwitch version that is installed by default on
Ubuntu. Instead, use the Open vSwitch version that is provided on
the VMware support portal for your NSX Controller version.
**To set up each node for the NSX plug-in**
#. Ensure that each data-forwarding node has an IP address on the
management network, and an IP address on the data network that is used
for tunneling data traffic. For full details on configuring your
forwarding node, see the ``NSX Administrator Guide``.
#. Use the ``NSX Administrator Guide`` to add the node as a Hypervisor by
using the NSX Manager GUI. Even if your forwarding node has no VMs and
is only used for services agents like ``neutron-dhcp-agent`` or
``neutron-lbaas-agent``, it should still be added to NSX as a Hypervisor.
#. After following the NSX Administrator Guide, use the page for this
Hypervisor in the NSX Manager GUI to confirm that the node is properly
connected to the NSX Controller Cluster and that the NSX Controller
Cluster can see the ``br-int`` integration bridge.
Configure DHCP agent
~~~~~~~~~~~~~~~~~~~~
The DHCP service agent is compatible with all existing plug-ins and is
required for all deployments where VMs should automatically receive IP
addresses through DHCP.
**To install and configure the DHCP agent**
#. Configure the host running the neutron-dhcp-agent as a
data-forwarding node according to the requirements for your plug-in.
#. Install the DHCP agent:
.. code:: console
# apt-get install neutron-dhcp-agent
#. Update any options in the :file:`/etc/neutron/dhcp_agent.ini` file
that depend on the plug-in in use. See the sub-sections.
.. Important::
If you reboot a node that runs the DHCP agent, you must run the
:command:`neutron-ovs-cleanup` command before the neutron-dhcp-agent
service starts.
On Red Hat, SUSE, and Ubuntu based systems, the
``neutron-ovs-cleanup`` service runs the :command:`neutron-ovs-cleanup`
command automatically. However, on Debian-based systems, you
must manually run this command or write your own system script
that runs on boot before the ``neutron-dhcp-agent`` service starts.
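For example, on a Debian-based system, a minimal boot-time sequence
might look like the following sketch; the exact service name is an
assumption and varies by packaging:
.. code:: console
# neutron-ovs-cleanup
# service neutron-dhcp-agent start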
The Networking DHCP agent can use the
`dnsmasq <http://www.thekelleys.org.uk/dnsmasq/doc.html>`__ driver, which
supports stateful and stateless DHCPv6 for subnets created with
``--ipv6_address_mode`` set to ``dhcpv6-stateful`` or
``dhcpv6-stateless``.
For example:
.. code:: console
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateful
--ipv6_address_mode dhcpv6-stateful NETWORK CIDR
.. code:: console
$ neutron subnet-create --ip-version 6 --ipv6_ra_mode dhcpv6-stateless
--ipv6_address_mode dhcpv6-stateless NETWORK CIDR
If no dnsmasq process is running for the subnet's network, Networking
launches one on the subnet's DHCP port in the ``qdhcp-XXX`` namespace.
If a dnsmasq process is already running, Networking restarts it with an
updated configuration.
Networking also updates the dnsmasq configuration and restarts the
process whenever the subnet is updated.
.. Note::
To operate in IPv6 mode, the DHCP agent requires dnsmasq v2.63 or later.
After a configured timeframe, networks uncouple from DHCP agents that
are no longer in use. You can configure the DHCP agent to automatically
detach from a network when the agent is out of service or no longer
needed.
This feature applies to all plug-ins that support DHCP scaling. For more
information, see the `DHCP agent configuration
options <http://docs.openstack.org/kilo/config-reference/content/networking-options-dhcp.html>`__
listed in the OpenStack Configuration Reference.
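As an illustration, this scheduling behavior is governed by options in
the :file:`neutron.conf` file on the host that runs neutron-server. The
following sketch shows two such options; the values are assumptions, not
recommendations:
.. code:: ini
[DEFAULT]
# Automatically schedule networks to active DHCP agents.
network_auto_schedule = True
# Number of DHCP agents scheduled to host each tenant network.
dhcp_agents_per_network = 2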
DHCP agent setup: OVS plug-in
-----------------------------
These DHCP agent options are required in the
:file:`/etc/neutron/dhcp_agent.ini` file for the OVS plug-in:
.. code:: bash
[DEFAULT]
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
DHCP agent setup: NSX plug-in
-----------------------------
These DHCP agent options are required in the
:file:`/etc/neutron/dhcp_agent.ini` file for the NSX plug-in:
.. code:: bash
[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Configure L3 agent
~~~~~~~~~~~~~~~~~~
The OpenStack Networking service has a widely used API extension to
allow administrators and tenants to create routers to interconnect L2
networks, and floating IPs to make ports on private networks publicly
accessible.
Many plug-ins rely on the L3 service agent to implement the L3
functionality. However, the following plug-ins already have built-in L3
capabilities:
- Big Switch/Floodlight plug-in, which supports both the open source
`Floodlight <http://www.projectfloodlight.org/floodlight/>`__
controller and the proprietary Big Switch controller.
.. Note::
Only the proprietary BigSwitch controller implements L3
functionality. When using Floodlight as your OpenFlow controller,
L3 functionality is not available.
- IBM SDN-VE plug-in
- MidoNet plug-in
- NSX plug-in
- PLUMgrid plug-in
.. Warning::
Do not configure or use neutron-l3-agent if you use one of these
plug-ins.
**To install the L3 agent for all other plug-ins**
#. Install the neutron-l3-agent binary on the network node:
.. code:: console
# apt-get install neutron-l3-agent
#. To uplink the node that runs neutron-l3-agent to the external network,
create a bridge named "br-ex" and attach the NIC for the external
network to this bridge.
For example, with Open vSwitch and NIC eth1 connected to the external
network, run:
.. code:: console
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1
Do not manually configure an IP address on the NIC connected to the
external network for the node running neutron-l3-agent. Rather, you must
have a range of IP addresses from the external network that can be used
by OpenStack Networking for routers that uplink to the external network.
This range must be large enough to have an IP address for each router in
the deployment, as well as each floating IP.
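For example, such a range might be defined as an allocation pool when
you create the external subnet. This is a sketch; the network name,
CIDR, and addresses are placeholders:
.. code:: console
$ neutron net-create ext-net --router:external True
$ neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
--disable-dhcp --gateway 203.0.113.1 \
--allocation-pool start=203.0.113.101,end=203.0.113.200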
#. The neutron-l3-agent uses the Linux IP stack and iptables to perform L3
forwarding and NAT. In order to support multiple routers with
potentially overlapping IP addresses, neutron-l3-agent defaults to using
Linux network namespaces to provide isolated forwarding contexts. As a
result, the IP addresses of routers are not visible simply by running
the :command:`ip addr list` or :command:`ifconfig` command on the node.
Similarly, you cannot directly :command:`ping` fixed IPs.
To do either of these things, you must run the command within a
particular network namespace for the router. The namespace has the name
``qrouter-ROUTER_UUID``. These example commands run in the router
namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
.. code:: console
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
.. code:: console
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping FIXED_IP
.. Note::
By default, network namespaces are configured so that they are not
deleted. This behavior can be changed for both the DHCP and L3 agents.
The configuration files are :file:`/etc/neutron/dhcp_agent.ini` and
:file:`/etc/neutron/l3_agent.ini` respectively.
For DHCP namespaces, the configuration key is
``dhcp_delete_namespaces``. Set this parameter to ``True`` to change
the default behavior.
For L3 namespaces, the configuration key is
``router_delete_namespaces``. Set this parameter to ``True`` to change
the default behavior.
.. Important::
If you reboot a node that runs the L3 agent, you must run the
:command:`neutron-ovs-cleanup` command before the neutron-l3-agent
service starts.
On Red Hat, SUSE, and Ubuntu based systems, the neutron-ovs-cleanup
service runs the :command:`neutron-ovs-cleanup` command
automatically. However, on Debian-based systems, you must manually
run this command or write your own system script that runs on boot
before the neutron-l3-agent service starts.
Configure metering agent
~~~~~~~~~~~~~~~~~~~~~~~~
The Neutron Metering agent resides beside neutron-l3-agent.
**To install the metering agent and configure the node**
#. Install the agent by running:
.. code:: console
# apt-get install neutron-metering-agent
#. If you use one of the following plug-ins, you need to configure the
metering agent with these lines as well:
- An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:
.. code:: ini
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
- A plug-in that uses LinuxBridge:
.. code:: ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
#. To use the reference implementation, you must set:
.. code:: ini
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
#. Set the ``service_plugins`` option in the :file:`/etc/neutron/neutron.conf`
file on the host that runs neutron-server:
.. code:: ini
service_plugins = metering
If this option is already defined, add ``metering`` to the list, using a
comma as separator. For example:
.. code:: ini
service_plugins = router,metering
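After you restart neutron-server, traffic can be metered by creating
labels and rules. The following commands are a sketch; the label name
and CIDR are placeholders:
.. code:: console
$ neutron meter-label-create mylabel --description "Meter egress traffic"
$ neutron meter-label-rule-create mylabel 0.0.0.0/0 --direction egress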
Configure Load-Balancer-as-a-Service (LBaaS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configure Load-Balancer-as-a-Service (LBaaS) with the Open vSwitch or
Linux Bridge plug-in. The Open vSwitch LBaaS driver is required when
enabling LBaaS for OVS-based plug-ins, including BigSwitch, Floodlight,
NEC, and NSX.
**To configure LBaaS with Open vSwitch or Linux Bridge plug-in**
#. Install the agent:
.. code:: console
# apt-get install neutron-lbaas-agent
#. Enable the HAProxy plug-in by using the ``service_provider`` option in
the :file:`/etc/neutron/neutron.conf` file:
.. code:: ini
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
.. Warning::
The ``service_provider`` option is already defined in the
:file:`/usr/share/neutron/neutron-dist.conf` file on Red Hat based
systems. Do not define it in :file:`neutron.conf` otherwise the
Networking services will fail to restart.
#. Enable the load-balancing plug-in by using the ``service_plugins``
option in the :file:`/etc/neutron/neutron.conf` file:
.. code:: ini
service_plugins = lbaas
If this option is already defined, add ``lbaas`` to the list, using a
comma as separator. For example:
.. code:: ini
service_plugins = router,lbaas
#. Enable the HAProxy load balancer in the :file:`/etc/neutron/lbaas_agent.ini`
file:
.. code:: ini
device_driver = neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
#. Select the required driver in the :file:`/etc/neutron/lbaas_agent.ini`
file:
Enable the Open vSwitch LBaaS driver:
.. code:: ini
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Or, enable the Linux Bridge LBaaS driver:
.. code:: ini
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
#. Create the required tables in the database:
.. code:: console
# neutron-db-manage --service lbaas upgrade head
#. Apply the settings by restarting the neutron-server and
neutron-lbaas-agent services.
#. Enable load balancing in the Project section of the dashboard.
Change the ``enable_lb`` option to ``True`` in the :file:`local_settings`
file (on Fedora, RHEL, and CentOS:
:file:`/etc/openstack-dashboard/local_settings`, on Ubuntu and Debian:
:file:`/etc/openstack-dashboard/local_settings.py`, and on openSUSE and
SLES:
:file:`/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py`):
.. code:: python
OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True,
...
}
Apply the settings by restarting the web server. You can now view the
Load Balancer management options in the Project view in the dashboard.
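With LBaaS enabled, you can exercise the setup from the CLI. The
following is a minimal sketch that assumes an existing subnet; the
names, UUID, and address are placeholders:
.. code:: console
$ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \
--protocol HTTP --subnet-id SUBNET_UUID
$ neutron lb-member-create --address 10.0.0.3 --protocol-port 80 mypool
$ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP \
--subnet-id SUBNET_UUID mypool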
Configure Hyper-V L2 agent
~~~~~~~~~~~~~~~~~~~~~~~~~~
Before you install the OpenStack Networking Hyper-V L2 agent on a
Hyper-V compute node, ensure the compute node has been configured
correctly using these
`instructions <http://docs.openstack.org/kilo/config-reference/content/hyper-v-virtualization-platform.html>`__.
**To install the OpenStack Networking Hyper-V agent and configure the node**
#. Download the OpenStack Networking code from the repository:
.. code:: console
> cd C:\OpenStack\
> git clone https://git.openstack.org/cgit/openstack/neutron
#. Install the OpenStack Networking Hyper-V Agent:
.. code:: console
> cd C:\OpenStack\neutron\
> python setup.py install
#. Copy the :file:`policy.json` file:
.. code:: console
> xcopy C:\OpenStack\neutron\etc\policy.json C:\etc\
#. Create the :file:`C:\etc\neutron-hyperv-agent.conf` file and add the proper
configuration options and the `Hyper-V related
options <http://docs.openstack.org/kilo/config-reference/content/networking-plugin-hyperv_agent.html>`__. Here is a sample config file:
.. code-block:: ini
:linenos:
[DEFAULT]
verbose = true
control_exchange = neutron
policy_file = C:\etc\policy.json
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = <password>
logdir = C:\OpenStack\Log
logfile = neutron-hyperv-agent.log
[AGENT]
polling_interval = 2
physical_network_vswitch_mappings = *:YOUR_BRIDGE_NAME
enable_metrics_collection = true
[SECURITYGROUP]
firewall_driver = neutron.plugins.hyperv.agent.security_groups_driver.HyperVSecurityGroupsDriver
enable_security_group = true
#. Start the OpenStack Networking Hyper-V agent:
.. code:: console
> C:\Python27\Scripts\neutron-hyperv-agent.exe --config-file
C:\etc\neutron-hyperv-agent.conf
Basic operations on agents
~~~~~~~~~~~~~~~~~~~~~~~~~~
This table shows examples of Networking commands that enable you to
complete basic operations on agents:
+----------------------------------------+-------------------------------------+
| Operation                              | Command                             |
+========================================+=====================================+
| List all available agents.             | ``$ neutron agent-list``            |
+----------------------------------------+-------------------------------------+
| Show information for a specified       | ``$ neutron agent-show AGENT_ID``   |
| agent.                                 |                                     |
+----------------------------------------+-------------------------------------+
| Update the admin status and            | ``$ neutron agent-update``          |
| description for a specified agent. The | ``--admin-state-up False AGENT_ID`` |
| command can be used to enable and      |                                     |
| disable agents by setting              |                                     |
| ``--admin-state-up`` to ``False`` or   |                                     |
| ``True``.                              |                                     |
+----------------------------------------+-------------------------------------+
| Delete a specified agent. Consider     | ``$ neutron agent-delete AGENT_ID`` |
| disabling the agent before deletion.   |                                     |
+----------------------------------------+-------------------------------------+
**Basic operations on Networking agents**
See the `OpenStack Command-Line Interface
Reference <http://docs.openstack.org/cli-reference/content/index.html>`__
for more information on Networking commands.

networking_introduction.rst

@@ -0,0 +1,302 @@
==========================
Introduction to Networking
==========================
The Networking service, code-named neutron, provides an API that lets
you define network connectivity and addressing in the cloud. The
Networking service enables operators to leverage different networking
technologies to power their cloud networking. The Networking service
also provides an API to configure and manage a variety of network
services ranging from L3 forwarding and NAT to load balancing, edge
firewalls, and IPsec VPN.
For a detailed description of the Networking API abstractions and their
attributes, see the `OpenStack Networking API v2.0
Reference <http://developer.openstack.org/api-ref-networking-v2.html>`__.
Networking API
~~~~~~~~~~~~~~
Networking is a virtual network service that provides a powerful API to
define the network connectivity and IP addressing that devices from
other services, such as Compute, use.
The Compute API has a virtual server abstraction to describe computing
resources. Similarly, the Networking API has virtual network, subnet,
and port abstractions to describe networking resources.
+---------------+-------------------------------------------------------------+
| Resource | Description |
+===============+=============================================================+
| **Network** | An isolated L2 segment, analogous to VLAN in the physical |
| | networking world. |
+---------------+-------------------------------------------------------------+
| **Subnet** | A block of v4 or v6 IP addresses and associated |
| | configuration state. |
+---------------+-------------------------------------------------------------+
| **Port** | A connection point for attaching a single device, such as |
| | the NIC of a virtual server, to a virtual network. Also |
| | describes the associated network configuration, such as |
| | the MAC and IP addresses to be used on that port. |
+---------------+-------------------------------------------------------------+
**Networking resources**
To configure rich network topologies, you can create and configure
networks and subnets and instruct other OpenStack services like Compute
to attach virtual devices to ports on these networks.
In particular, Networking supports each tenant having multiple private
networks and enables tenants to choose their own IP addressing scheme,
even if those IP addresses overlap with those that other tenants use.
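For example, a tenant can create a private network and attach an
instance to it. This is a sketch; the names, CIDR, image, and UUID are
placeholders:
.. code:: console
$ neutron net-create mynet
$ neutron subnet-create mynet 192.168.2.0/24 --name mysubnet
$ nova boot --image IMAGE --flavor 1 --nic net-id=NET_UUID myserver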
The Networking service:
- Enables advanced cloud networking use cases, such as building
multi-tiered web applications and enabling migration of applications
to the cloud without changing IP addresses.
- Offers flexibility for the cloud administrator to customize network
offerings.
- Enables developers to extend the Networking API. Over time, the
extended functionality becomes part of the core Networking API.
Configure SSL support for networking API
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack Networking supports SSL for the Networking API server. By
default, SSL is disabled but you can enable it in the :file:`neutron.conf`
file.
Set these options to configure SSL:
``use_ssl = True``
Enables SSL on the networking API server.
``ssl_cert_file = PATH_TO_CERTFILE``
Certificate file that is used when you securely start the Networking
API server.
``ssl_key_file = PATH_TO_KEYFILE``
Private key file that is used when you securely start the Networking
API server.
``ssl_ca_file = PATH_TO_CAFILE``
Optional. CA certificate file that is used when you securely start
the Networking API server. This file verifies connecting clients.
Set this option when API clients must authenticate to the API server
by using SSL certificates that are signed by a trusted CA.
``tcp_keepidle = 600``
The value of TCP\_KEEPIDLE, in seconds, for each server socket when
starting the API server. Not supported on OS X.
``retry_until_window = 30``
Number of seconds to keep retrying to listen.
``backlog = 4096``
Number of backlog requests with which to configure the socket.
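Taken together, a minimal SSL configuration in the :file:`neutron.conf`
file might look like the following sketch; the certificate and key paths
are placeholders:
.. code:: ini
[DEFAULT]
use_ssl = True
ssl_cert_file = /etc/neutron/ssl/server.crt
ssl_key_file = /etc/neutron/ssl/server.key
ssl_ca_file = /etc/neutron/ssl/ca.crt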
Load-Balancer-as-a-Service (LBaaS) overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Load-Balancer-as-a-Service (LBaaS) enables Networking to distribute
incoming requests evenly among designated instances. This distribution
ensures that the workload is shared predictably among instances and
enables more effective use of system resources. Use one of these load
balancing methods to distribute incoming requests:
Round robin
Rotates requests evenly between multiple instances.
Source IP
Requests from a unique source IP address are consistently directed
to the same instance.
Least connections
Allocates requests to the instance with the least number of active
connections.
+-------------------------+---------------------------------------------------+
| Feature | Description |
+=========================+===================================================+
| **Monitors** | LBaaS provides availability monitoring with the |
| | ``ping``, TCP, HTTP and HTTPS GET methods. |
| | Monitors are implemented to determine whether |
| | pool members are available to handle requests. |
+-------------------------+---------------------------------------------------+
| **Management** | LBaaS is managed using a variety of tool sets. |
| | The REST API is available for programmatic |
| | administration and scripting. Users perform |
| | administrative management of load balancers |
| | through either the CLI (``neutron``) or the |
| | OpenStack dashboard. |
+-------------------------+---------------------------------------------------+
| **Connection limits** | Ingress traffic can be shaped with *connection |
| | limits*. This feature allows workload control, |
| | and can also assist with mitigating DoS (Denial |
| | of Service) attacks. |
+-------------------------+---------------------------------------------------+
| **Session persistence** | LBaaS supports session persistence by ensuring |
| | incoming requests are routed to the same instance |
| | within a pool of multiple instances. LBaaS |
| | supports routing decisions based on cookies and |
| | source IP address. |
+-------------------------+---------------------------------------------------+
Firewall-as-a-Service (FWaaS) overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Firewall-as-a-Service (FWaaS) plug-in adds perimeter firewall
management to Networking. FWaaS uses iptables to apply firewall policy
to all Networking routers within a project. FWaaS supports one firewall
policy and logical firewall instance per project.
Whereas security groups operate at the instance level, FWaaS operates at
the perimeter to filter traffic at the neutron router.
.. Note::
FWaaS is currently in technical preview; untested operation is not
recommended.
The example diagram illustrates the flow of ingress and egress traffic
for the VM2 instance:
|FWaaS architecture|
**To enable FWaaS**
FWaaS management options are also available in the OpenStack dashboard.
#. Enable the FWaaS plug-in in the :file:`/etc/neutron/neutron.conf` file:
.. code-block:: ini
:linenos:
service_plugins = firewall
[service_providers]
...
service_provider = FIREWALL:Iptables:neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver:default
[fwaas]
driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
enabled = True
.. Note::
On Ubuntu, modify the ``[fwaas]`` section in the
:file:`/etc/neutron/fwaas_driver.ini` file instead of
:file:`/etc/neutron/neutron.conf`.
#. Create the required tables in the database:
.. code:: console
# neutron-db-manage --service fwaas upgrade head
#. Enable the option in the
:file:`/usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py`
file, which is typically located on the controller node:
.. code:: python
OPENSTACK_NEUTRON_NETWORK = {
'enable_firewall': True,
...
}
#. Restart the neutron-l3-agent and neutron-server services to apply the
settings.
**To configure Firewall-as-a-Service**
Create the firewall rules and create a policy that contains them. Then,
create a firewall that applies the policy.
#. Create a firewall rule:
.. code:: console
$ neutron firewall-rule-create --protocol {tcp|udp|icmp|any}
--destination-port PORT_RANGE --action {allow|deny}
The Networking client requires a protocol value; if the rule is protocol
agnostic, you can use the ``any`` value.
#. Create a firewall policy:
.. code:: console
$ neutron firewall-policy-create --firewall-rules
"FIREWALL_RULE_IDS_OR_NAMES" myfirewallpolicy
Separate firewall rule IDs or names with spaces. The order in which you
specify the rules is important.
You can create a firewall policy without any rules and add rules later,
as follows:
- To add multiple rules, use the update operation.
- To add a single rule, use the insert-rule operation.
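For example, the following sketch adds rules to an existing policy; the
rule identifiers are placeholders:
.. code:: console
$ neutron firewall-policy-update myfirewallpolicy \
--firewall-rules "FIREWALL_RULE_IDS_OR_NAMES"
$ neutron firewall-policy-insert-rule myfirewallpolicy FIREWALL_RULE_ID \
--insert-before ANOTHER_RULE_ID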
For more details, see `Networking command-line
client <http://docs.openstack.org/cli-reference/content/neutronclient_commands.html#neutronclient_subcommand_firewall-policy-create>`__
in the OpenStack Command-Line Interface Reference.
.. Note::
FWaaS always adds a default ``deny all`` rule at the lowest precedence of
each policy. Consequently, a firewall policy with no rules blocks
all traffic by default.
#. Create a firewall:
.. code:: console
$ neutron firewall-create FIREWALL_POLICY_UUID
.. Note::
The firewall remains in PENDING\_CREATE state until you create a
Networking router and attach an interface to it.
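As a sketch of that prerequisite, you might create a router and attach
a subnet interface to it as follows; the names are placeholders:
.. code:: console
$ neutron router-create myrouter
$ neutron router-interface-add myrouter SUBNET_ID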
**Allowed-address-pairs.**
``Allowed-address-pairs`` enable you to specify
mac_address/ip_address(cidr) pairs that pass through a port regardless
of subnet. This enables the use of protocols such as VRRP, which floats
an IP address between two instances to enable fast data plane failover.
.. Note::
Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins
support the allowed-address-pairs extension.
**Basic allowed-address-pairs operations.**
- Create a port with a specified allowed address pairs:
.. code:: console
$ neutron port-create net1 --allowed-address-pairs type=dict
list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR
- Update a port by adding allowed address pairs:
.. code:: console
$ neutron port-update PORT_UUID --allowed-address-pairs type=dict
list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR
.. Note::
In releases earlier than Juno, OpenStack Networking prevents setting
an allowed address pair on a port that matches the MAC address and
one of the fixed IP addresses of the port.
.. |FWaaS architecture| image:: ../../common/figures/fwaas.png

networking_multi-dhcp-agents.rst

@@ -0,0 +1,495 @@
=========================================
Scalable and highly available DHCP agents
=========================================
This section describes how to use the agent management (alias agent) and
scheduler (alias agent_scheduler) extensions for DHCP agent
scalability and HA.
.. Note::
Use the :command:`neutron ext-list` client command to check if these
extensions are enabled:
.. code:: console
$ neutron ext-list -c name -c alias
+-----------------+--------------------------+
| alias | name |
+-----------------+--------------------------+
| agent_scheduler | Agent Schedulers |
| binding | Port Binding |
| quotas | Quota management support |
| agent | agent |
| provider | Provider Network |
| router | Neutron L3 Router |
| lbaas | LoadBalancing service |
| extraroute | Neutron Extra Route |
+-----------------+--------------------------+
|image0|
There will be three hosts in the setup.
.. list-table::
:widths: 25 50
:header-rows: 1
* - Host
- Description
* - OpenStack controller host - controlnode
- Runs the Networking, Identity, and Compute services that are required
to deploy VMs. The node must have at least one network interface that
is connected to the Management Network. Note that ``nova-network`` should
not be running because it is replaced by Neutron.
* - HostA
- Runs ``nova-compute``, the Neutron L2 agent and DHCP agent
* - HostB
- Same as HostA
**Hosts for demo**
Configuration
~~~~~~~~~~~~~
**controlnode: neutron server**
#. Neutron configuration file :file:`/etc/neutron/neutron.conf`:
.. code-block:: ini
:linenos:
[DEFAULT]
core_plugin = linuxbridge
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5
#. Update the plug-in configuration file
:file:`/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini`:
.. code-block:: ini
:linenos:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
**HostA and HostB: L2 agent**
#. Neutron configuration file :file:`/etc/neutron/neutron.conf`:
.. code-block:: ini
:linenos:
[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA
#. Update the plug-in configuration file
:file:`/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini`:
.. code-block:: ini
:linenos:
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
#. Update the nova configuration file :file:`/etc/nova/nova.conf`:
.. code-block:: ini
:linenos:
[DEFAULT]
network_api_class=nova.network.neutronv2.api.API
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[neutron]
admin_username=neutron
admin_password=servicepassword
admin_auth_url=http://controlnode:35357/v2.0/
auth_strategy=keystone
admin_tenant_name=servicetenant
url=http://100.1.1.10:9696/
**HostA and HostB: DHCP agent**
- Update the DHCP configuration file :file:`/etc/neutron/dhcp_agent.ini`:
.. code:: ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
Commands in agent management and scheduler extensions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following commands require the tenant running the command to have an
admin role.
.. Note::
Ensure that the following environment variables are set. These are
used by the various clients to access the Identity service.
.. code:: bash
export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controlnode:5000/v2.0/
**Settings**
To experiment, you need VMs and a neutron network:
.. code:: console
$ nova list
+-------------------------------------+----------+--------+--------------+
| ID | Name | Status | Networks |
+-------------------------------------+----------+--------+--------------+
| c394fcd0-0baa-43ae-a793-201815c3e8ce| myserver1| ACTIVE | net1=10.0.1.3|
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d| myserver2| ACTIVE | net1=10.0.1.4|
| c7c0481c-3db8-4d7a-a948-60ce8211d585| myserver3| ACTIVE | net1=10.0.1.5|
+-------------------------------------+----------+--------+--------------+
$ neutron net-list
+-------------------------+------+--------------------------------------+
| id | name | subnets |
+-------------------------+------+--------------------------------------+
| 89dca1c6-c7d4-4f7a- | | |
| b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45-d5cf646db9d1 |
+-------------------------+------+--------------------------------------+
**Manage agents in neutron deployment**
Every agent that supports these extensions will register itself with the
neutron server when it starts up.
#. List all agents:
.. code:: console
$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True |
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | True |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
The output shows information for four agents. The ``alive`` field shows
``:-)`` if the agent reported its state within the period defined by the
``agent_down_time`` option in the :file:`neutron.conf` file. Otherwise
the ``alive`` field shows ``xxx``.
#. List the DHCP agents that host a specified network:
In some deployments, one DHCP agent is not enough to hold all network
data. In addition, you must have a backup for it even when the
deployment is small. The same network can be assigned to more than one
DHCP agent and one DHCP agent can host more than one network.
.. code:: console
$ neutron dhcp-agent-list-hosting-net net1
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
#. List the networks hosted by a given DHCP agent:
This command shows which networks a given DHCP agent manages.
.. code:: console
$ neutron net-list-on-dhcp-agent a0c1c21c-d4f4-4577-9ec7-908f2d48622d
+------------------------+------+---------------------------------+
| id | name | subnets |
+------------------------+------+---------------------------------+
| 89dca1c6-c7d4-4f7a | | |
| -b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd |
| | | -8e45-d5cf646db9d1 10.0.1.0/24 |
+------------------------+------+---------------------------------+
#. Show agent details.
The :command:`agent-show` command shows details for a specified agent:
.. code:: console
$ neutron agent-show a0c1c21c-d4f4-4577-9ec7-908f2d48622d
+--------------------+---------------------------------------------------+
| Field | Value |
+--------------------+---------------------------------------------------+
| admin_state_up | True |
| agent_type | DHCP agent |
| alive | False |
| binary | neutron-dhcp-agent |
| configurations |{ |
| | "subnets": 1, |
| | "use_namespaces": true, |
| | "dhcp_driver": "neutron.agent.linux.dhcp.Dnsmasq",|
| | "networks": 1, |
| | "dhcp_lease_time": 120, |
| | "ports": 3 |
| |} |
| created_at | 2013-03-16T01:16:18.000000 |
| description | |
| heartbeat_timestamp| 2013-03-17T01:37:22.000000 |
| host | HostA |
| id | 58f4ce07-6789-4bb3-aa42-ed3779db2b03 |
| started_at | 2013-03-16T06:48:39.000000 |
| topic | dhcp_agent |
+--------------------+---------------------------------------------------+
In this output, ``heartbeat_timestamp`` is the time on the neutron
server. You do not need to synchronize all agents to this time for this
extension to run correctly. ``configurations`` describes the agent's
static configuration and run-time data. This agent is a DHCP agent that
hosts one network, one subnet, and three ports.
Different types of agents show different details. The following output
shows information for a Linux bridge agent:
.. code:: console
$ neutron agent-show ed96b856-ae0f-4d75-bb28-40a47ffd7695
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| binary | neutron-linuxbridge-agent |
| configurations | { |
| | "physnet1": "eth0", |
| | "devices": "4" |
| | } |
| created_at | 2013-03-16T01:49:52.000000 |
| description | |
| disabled | False |
| group | agent |
| heartbeat_timestamp | 2013-03-16T01:59:45.000000 |
| host | HostB |
| id | ed96b856-ae0f-4d75-bb28-40a47ffd7695 |
| topic | N/A |
| started_at | 2013-03-16T06:48:39.000000 |
| type | Linux bridge agent |
+---------------------+--------------------------------------+
The output shows the bridge mapping and the number of virtual network
devices on this L2 agent.
**Manage assignment of networks to DHCP agent**
Now that you have run the :command:`net-list-on-dhcp-agent` and
``dhcp-agent-list-hosting-net`` commands, you can add a network to a
DHCP agent and remove one from it.
#. Default scheduling.
When you create a network with one port, the network is scheduled to an
active DHCP agent. If many active DHCP agents are running, one is
selected at random. You can design more sophisticated scheduling
algorithms, as with nova-scheduler, later on.
.. code:: console
$ neutron net-create net2
$ neutron subnet-create net2 9.0.1.0/24 --name subnet2
$ neutron port-create net2
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
+--------------------------------------+-------+----------------+-------+
It is allocated to the DHCP agent on HostA. If you want to validate the
behavior through the :command:`dnsmasq` command, you must create a subnet for
the network because the DHCP agent starts the dnsmasq service only if
there is a DHCP-enabled subnet.
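To check, you can look for a running dnsmasq process and the DHCP
namespace on HostA. This is a sketch; the namespace suffix is the
network UUID:
.. code:: console
# ps -ef | grep dnsmasq
# ip netns list | grep qdhcp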
#. Assign a network to a given DHCP agent.
To add another DHCP agent to host the network, run this command:
.. code:: console
$ neutron dhcp-agent-network-add f28aa126-6edb-4ea5-a81e-8850876bc0a8 net2
Added network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
Both DHCP agents host the ``net2`` network.
#. Remove a network from a specified DHCP agent.
This command is the sibling command for the previous one. Remove
``net2`` from the DHCP agent for HostA:
.. code:: console
$ neutron dhcp-agent-network-remove a0c1c21c-d4f4-4577-9ec7-908f2d48622d
net2
Removed network net2 to dhcp agent
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
You can see that only the DHCP agent for HostB is hosting the ``net2``
network.
**HA of DHCP agents**
Boot a VM on net2. Let both DHCP agents host ``net2``. Fail the agents
in turn to see if the VM can still get the desired IP.
#. Boot a VM on net2:
.. code:: console
$ neutron net-list
+-------------------------+------+-----------------------------+
| id | name | subnets |
+-------------------------+------+-----------------------------+
| 89dca1c6-c7d4-4f7a- | | |
| b730-549af0fb6e34 | net1 | f6c832e3-9968-46fd-8e45 |
| | | -d5cf646db9d1 10.0.1.0/24 |
| 9b96b14f-71b8-4918-90aa-| | |
| c5d705606b1a | net2 | 6979b71a-0ae8-448c-aa87- |
| | | 65f68eedcaaa 9.0.1.0/24 |
+-------------------------+------+-----------------------------+
.. code:: console
$ nova boot --image tty --flavor 1 myserver4 \
--nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a
.. code:: console
$ nova list
+-------------------------------------+----------+-------+---------------+
| ID | Name | Status| Networks |
+-------------------------------------+----------+-------+---------------+
|c394fcd0-0baa-43ae-a793-201815c3e8ce |myserver1 |ACTIVE | net1=10.0.1.3 |
|2d604e05-9a6c-4ddb-9082-8a1fbdcc797d |myserver2 |ACTIVE | net1=10.0.1.4 |
|c7c0481c-3db8-4d7a-a948-60ce8211d585 |myserver3 |ACTIVE | net1=10.0.1.5 |
|f62f4731-5591-46b1-9d74-f0c901de567f |myserver4 |ACTIVE | net2=9.0.1.2 |
+-------------------------------------+----------+-------+---------------+
#. Make sure both DHCP agents are hosting the ``net2`` network:
Use the previous commands to assign the network to agents.
.. code:: console
$ neutron dhcp-agent-list-hosting-net net2
+--------------------------------------+-------+----------------+-------+
| id | host | admin_state_up | alive |
+--------------------------------------+-------+----------------+-------+
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | HostA | True | :-) |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | HostB | True | :-) |
+--------------------------------------+-------+----------------+-------+
**Test the HA**
#. Log in to the ``myserver4`` VM, and run ``udhcpc``, ``dhclient``, or
another DHCP client.
#. Stop the DHCP agent on HostA. Besides stopping the
``neutron-dhcp-agent`` binary, you must also stop the ``dnsmasq`` processes.
#. Run a DHCP client in the VM to verify that it can still get the
desired IP address.
#. Stop the DHCP agent on HostB too.
#. Run ``udhcpc`` in the VM; it cannot get the desired IP address.
#. Start the DHCP agent on HostB. The VM gets the desired IP address again.
**Disable and remove an agent**
An administrator might want to disable an agent if a system hardware or
software upgrade is planned. Some agents that support scheduling, such
as the L3 and DHCP agents, also support disabling and enabling. After an
agent is disabled, the scheduler does not schedule new resources to it,
and you can then safely remove the agent. Remove the resources on the
agent before you delete the agent.
To run the following commands, you must stop the DHCP agent on HostA.
.. code:: console
$ neutron agent-update --admin-state-up False a0c1c21c-d4f4-4577
-9ec7-908f2d48622d
$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True |
| a0c1c21c-d4f4-4577-9ec7-908f2d48622d | DHCP agent | HostA | :-) | False |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
.. code:: console
$ neutron agent-delete a0c1c21c-d4f4-4577-9ec7-908f2d48622d
Deleted agent: a0c1c21c-d4f4-4577-9ec7-908f2d48622d
$ neutron agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 1b69828d-6a9b-4826-87cd-1757f0e27f31 | Linux bridge agent | HostA | :-) | True |
| ed96b856-ae0f-4d75-bb28-40a47ffd7695 | Linux bridge agent | HostB | :-) | True |
| f28aa126-6edb-4ea5-a81e-8850876bc0a8 | DHCP agent | HostB | :-) | True |
+--------------------------------------+--------------------+-------+-------+----------------+
After deletion, if you restart the DHCP agent, it appears on the agent
list again.
.. |image0| image:: ../../common/figures/demo_multiple_dhcp_agents.png