docs: Blast most references to nova-network

The only ones remaining are some real crufty SVGs and references to
things that still exist because nova-network was once a thing.

Change-Id: I1aebf86c05c7b8c1562d0071d45de2fe53f4588b
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
Author:    Stephen Finucane <sfinucan@redhat.com>
Date:      2019-10-02 17:21:40 +01:00
Committer: Eric Fried
Parent:    c56a635de1
Commit:    29b9f788de

12 changed files with 26 additions and 235 deletions


@@ -56,8 +56,7 @@ visible in the OpenStack dashboard and you can manage it as you would any other
 OpenStack VM. You can perform advanced vSphere operations in vCenter while you
 configure OpenStack resources such as VMs through the OpenStack dashboard.

-The figure does not show how networking fits into the architecture. Both
-``nova-network`` and the OpenStack Networking Service are supported. For
+The figure does not show how networking fits into the architecture. For
 details, see :ref:`vmware-networking`.

 Configuration overview
@@ -73,8 +72,7 @@ high-level steps:

 #. Load desired VMDK images into the Image service. See :ref:`vmware-images`.

-#. Configure networking with either ``nova-network`` or
-   the Networking service. See :ref:`vmware-networking`.
+#. Configure the Networking service (neutron). See :ref:`vmware-networking`.

 .. _vmware-prereqs:
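
For orientation, the hypervisor setup these steps build on can be sketched in
``nova.conf``. This is a hedged sketch, not part of the change: the host
values are placeholders, and ``crudini`` is assumed to be available as one
convenient way to edit the file.

.. code-block:: console

   $ # select the vCenter driver; the host values below are placeholders
   $ sudo crudini --set /etc/nova/nova.conf DEFAULT compute_driver vmwareapi.VMwareVCDriver
   $ sudo crudini --set /etc/nova/nova.conf vmware host_ip 192.0.2.10
   $ sudo crudini --set /etc/nova/nova.conf vmware host_username administrator@vsphere.local
   $ sudo crudini --set /etc/nova/nova.conf vmware host_password changeme
   $ sudo crudini --set /etc/nova/nova.conf vmware cluster_name cluster1
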
@@ -110,8 +108,7 @@ Networking

 Security groups
   If you use the VMware driver with OpenStack Networking and the NSX plug-in,
-  security groups are supported. If you use ``nova-network``, security groups
-  are not supported.
+  security groups are supported.

 .. note::
@@ -937,37 +934,11 @@ section in the ``nova.conf`` file:

 Networking with VMware vSphere
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The VMware driver supports networking with the ``nova-network`` service or the
-Networking Service. Depending on your installation, complete these
-configuration steps before you provision VMs:
+The VMware driver supports networking with the Networking Service (neutron).
+Depending on your installation, complete these configuration steps before you
+provision VMs:

-#. **The nova-network service with the FlatManager or FlatDHCPManager**.
-   Create a port group with the same name as the ``flat_network_bridge`` value
-   in the ``nova.conf`` file. The default value is ``br100``. If you specify
-   another value, the new value must be a valid Linux bridge identifier that
-   adheres to Linux bridge naming conventions.
-
-   All VM NICs are attached to this port group.
-
-   Ensure that the flat interface of the node that runs the ``nova-network``
-   service has a path to this network.
-
-   .. note::
-
-      When configuring the port binding for this port group in vCenter, specify
-      ``ephemeral`` for the port binding type. For more information, see
-      `Choosing a port binding type in ESX/ESXi <http://kb.vmware.com/
-      selfservice/microsites/search.do?language=en_US&cmd=displayKC
-      &externalId=1022312>`_ in the VMware Knowledge Base.
-
-#. **The nova-network service with the VlanManager**.
-   Set the ``vlan_interface`` configuration option to match the ESX host
-   interface that handles VLAN-tagged VM traffic.
-
-   OpenStack Compute automatically creates the corresponding port groups.
-
-#. If you are using the OpenStack Networking Service:
-
-   Before provisioning VMs, create a port group with the same name as the
+#. Before provisioning VMs, create a port group with the same name as the
    ``vmware.integration_bridge`` value in ``nova.conf`` (default is
    ``br-int``). All VM NICs are attached to this port group for management by
    the OpenStack Networking plug-in.
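
The remaining neutron step maps to a single option. A hedged one-liner,
assuming ``crudini`` again (``[vmware] integration_bridge`` already defaults
to ``br-int``, so this is only needed when the port group uses another name):

.. code-block:: console

   $ sudo crudini --set /etc/nova/nova.conf vmware integration_bridge br-int
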


@@ -203,13 +203,6 @@ Reporting_Bugs_against_Xen>`_ against Xen.
 Known issues
 ~~~~~~~~~~~~

-* **Networking**: Xen via libvirt is currently only supported with
-  nova-network. Fixes for a number of bugs are currently being worked on to
-  make sure that Xen via libvirt will also work with OpenStack Networking
-  (neutron).
-
-  .. todo:: Is this still true?
-
 * **Live migration**: Live migration is supported in the libvirt libxl driver
   since version 1.2.5. However, there were a number of issues when used with
   OpenStack, in particular with libvirt migration protocol compatibility. It is


@@ -22,18 +22,9 @@ The corresponding log file of each Compute service is stored in the
    * - ``nova-conductor.log``
      - ``openstack-nova-conductor``
      - ``nova-conductor``
-   * - ``nova-network.log`` [#a]_
-     - ``openstack-nova-network``
-     - ``nova-network``
    * - ``nova-manage.log``
      - ``nova-manage``
      - ``nova-manage``
    * - ``nova-scheduler.log``
      - ``openstack-nova-scheduler``
      - ``nova-scheduler``
-
-.. rubric:: Footnotes
-
-.. [#a] The ``nova`` network service (``openstack-nova-network``/
-        ``nova-network``) only runs in deployments that are not configured
-        to use the Networking service (``neutron``).
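
Since the table pairs log files with service names, a hedged example of
following one of the remaining services (the ``/var/log/nova`` path and the
unit name vary by distribution):

.. code-block:: console

   $ sudo tail -f /var/log/nova/nova-scheduler.log
   $ # or, on systemd-based distributions:
   $ sudo journalctl -f -u openstack-nova-scheduler
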


@@ -49,18 +49,6 @@ responsibilities of services and drivers are:
   Provides database-access support for compute nodes (thereby reducing security
   risks).

-``nova-network``
-  Manages floating and fixed IPs, DHCP, bridging and VLANs. Loads a Service
-  object which exposes the public methods on one of the subclasses of
-  NetworkManager. Different networking strategies are available by changing the
-  ``network_manager`` configuration option to ``FlatManager``,
-  ``FlatDHCPManager``, or ``VLANManager`` (defaults to ``VLANManager`` if
-  nothing is specified).
-
-  .. deprecated:: 14.0.0
-
-     ``nova-network`` was deprecated in the OpenStack Newton release.
-
 ``nova-scheduler``
   Dispatches requests for new virtual machines to the correct node.
@@ -72,8 +60,7 @@ responsibilities of services and drivers are:

 Some services have drivers that change how the service implements its core
 functionality. For example, the ``nova-compute`` service supports drivers
-that let you choose which hypervisor type it can use. ``nova-network`` and
-``nova-scheduler`` also have drivers.
+that let you choose which hypervisor type it can use.

 .. toctree::
    :maxdepth: 2
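
With ``nova-network`` gone from this list, the services that remain can be
inspected directly. A short sketch using python-openstackclient (output
omitted):

.. code-block:: console

   $ openstack compute service list
   $ # narrow the listing to one service type, e.g. the compute workers
   $ openstack compute service list --service nova-compute
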


@@ -145,7 +145,7 @@ A disk crash, network loss, or power failure can affect several components in
 your cloud architecture. The worst disaster for a cloud is a power loss. A
 power loss affects these components:

-- A cloud controller (``nova-api``, ``nova-objectstore``, ``nova-network``)
+- A cloud controller (``nova-api``, ``nova-conductor``, ``nova-scheduler``)

 - A compute node (``nova-compute``)
@@ -178,9 +178,6 @@ After power resumes and all hardware components restart:

 - The iSCSI session from the cloud controller to the compute node no longer
   exists.

-- nova-network reapplies configurations on boot and, as a result, recreates
-  the iptables and ebtables from the cloud controller to the compute node.
-
 - Instances stop running.

   Instances are not lost because neither ``destroy`` nor ``terminate`` ran.
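
As a companion to the recovery notes above, a hedged sketch of bringing the
stopped-but-not-lost instances back once power returns (the server ID is a
placeholder):

.. code-block:: console

   $ # list instances that the outage left shut off
   $ openstack server list --status SHUTOFF -f value -c ID
   $ openstack server start <server-id>
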


@@ -12,8 +12,8 @@ that has no other defined security group. Unless you change the default, this
 security group denies all incoming traffic and allows only outgoing traffic to
 your instance.

-By default, security groups (and their quota) are managed by the
-:neutron-doc:`Neutron networking service </admin/archives/adv-features.html#security-groups>`.
+Security groups (and their quota) are managed by :neutron-doc:`Neutron, the
+networking service </admin/archives/adv-features.html#security-groups>`.

 Working with security groups
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -235,27 +235,3 @@ member of the cluster.

 The ``cluster`` rule allows SSH access from any other instance that uses the
 ``global_http`` group.
-
-nova-network configuration (deprecated)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can use the :oslo.config:option:`allow_same_net_traffic` option in the
-``/etc/nova/nova.conf`` file to globally control whether the rules apply to
-hosts which share a network. There are two possible values:
-
-``True`` (default)
-  Hosts on the same subnet are not filtered and are allowed to pass all types
-  of traffic between them. On a flat network, this allows all instances from
-  all projects unfiltered communication. With VLAN networking, this allows
-  access between instances within the same project. You can also simulate this
-  setting by configuring the default security group to allow all traffic from
-  the subnet.
-
-``False``
-  Security groups are enforced for all connections.
-
-Additionally, the number of maximum rules per security group is controlled by
-the ``security_group_rules`` and the number of allowed security groups per
-project is controlled by the ``security_groups`` quota (see
-:doc:`/admin/quotas`).
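
With all security group management now in Neutron, the ``global_http`` style
of rule from the surrounding examples is created like this. A hedged sketch
with python-openstackclient; the group name and port are illustrative:

.. code-block:: console

   $ openstack security group create global_http --description "web servers"
   $ openstack security group rule create --ingress --protocol tcp \
       --dst-port 80 global_http
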


@@ -258,10 +258,9 @@ Deprecating APIs

 Compute REST API routes may be deprecated by capping a method or functionality
 using microversions. For example, the
-:ref:`2.36 microversion <2.36 microversion>` deprecated
-several compute REST API routes which only work when using the ``nova-network``
-service, which itself was deprecated, or are proxies to other external
-services like Cinder, Neutron, etc.
+:ref:`2.36 microversion <2.36 microversion>` deprecated several compute REST
+API routes which only worked when using the since-removed ``nova-network``
+service or are proxies to other external services like cinder, neutron, etc.

 The point of deprecating with microversions is users can still get the same
 functionality at a lower microversion but there is at least some way to signal
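
To make the microversion mechanics concrete, a hedged request sketch: the
``OpenStack-API-Version`` header is the standard way to pin a compute
microversion, while the endpoint and token are placeholders. At 2.35 the
proxy routes still respond; from 2.36 on they return ``404``.

.. code-block:: console

   $ curl -s -H "X-Auth-Token: $TOKEN" \
       -H "OpenStack-API-Version: compute 2.36" \
       "$COMPUTE_ENDPOINT/os-networks"
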


@@ -194,11 +194,10 @@ As Glance moves to deprecate its v1 API, we need to translate calls
 from the old v1 API we expose, to Glance's v2 API.

 The next API to mention is the networking APIs, in particular the
-security groups API. If you are using nova-network, Nova is still the only
-way to perform these network operations.
-But if you use Neutron, security groups has a much richer Neutron API,
-and if you use both Nova API and Neutron API, the miss match can lead to
-some very unexpected results, in certain cases.
+security groups API. Most of these APIs date from when ``nova-network``
+existed and the proxies were added during the transition. However, security
+groups have a much richer Neutron API, and if you use both the Nova API and
+the Neutron API, the mismatch can lead to very unexpected results in some cases.

 Our intention is to avoid adding to the problems we already have in this area.


@@ -59,16 +59,12 @@ Swap
   Amount of swap space (in megabytes) to use. This property is optional. If
   unspecified, the value is ``0`` by default.

-RXTX Factor
-  The receive/transmit factor of any network ports on the instance. This
-  property is optional. If unspecified, the value is ``1.0`` by default.
-
-  .. note::
-
-     This property only applies if using the ``xen`` compute driver with the
-     ``nova-network`` network driver. It will likely be deprecated in a future
-     release. ``neutron`` users should refer to the :neutron-doc:`neutron QoS
-     documentation <admin/config-qos.html>`
-
+RXTX Factor (DEPRECATED)
+  This value was only applicable when using the ``xen`` compute driver with the
+  ``nova-network`` network driver. Since ``nova-network`` has been removed,
+  this no longer applies and should not be specified. It will likely be
+  removed in a future release. ``neutron`` users should refer to the
+  :neutron-doc:`neutron QoS documentation <admin/config-qos.html>`

 Is Public
   Boolean value that defines whether the flavor is available to all users or
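
The surviving flavor properties can all be set at creation time. A hedged
example with python-openstackclient (the name and sizes are arbitrary; swap is
given in megabytes, matching the ``Swap`` description above):

.. code-block:: console

   $ openstack flavor create --vcpus 2 --ram 2048 --disk 20 --swap 1024 m1.custom
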


@@ -48,7 +48,7 @@ The following quotas were previously available but were removed in microversion
    * - floating_ips
      - Number of floating IP addresses allowed per project.
    * - networks
-     - Number of networks allowed per project (nova-network only).
+     - Number of networks allowed per project (no longer used).
    * - security_groups
      - Number of security groups per project.
    * - security_group_rules
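
The network-related quotas in this table now live in Neutron, so current
values are visible through the regular quota command. A hedged example (the
project name is a placeholder):

.. code-block:: console

   $ openstack quota show my-project
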


@@ -1325,123 +1325,6 @@ driver.libvirt-vz-ct=complete
 driver.powervm=complete
 driver.zvm=complete

-[networking.firewallrules]
-title=Network firewall rules
-status=optional
-notes=Unclear how this is different from security groups
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=complete
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=missing
-driver.hyperv=missing
-driver.ironic=missing
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=missing
-
-[networking.routing]
-title=Network routing
-status=optional
-notes=Unclear what this refers to
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=unknown
-driver.libvirt-kvm-ppc64=missing
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=complete
-driver.hyperv=missing
-driver.ironic=complete
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=missing
-
-[networking.securitygroups]
-title=Network security groups
-status=optional
-notes=The security groups feature provides a way to define rules
-  to isolate the network traffic of different instances running
-  on a compute host. This would prevent actions such as MAC and
-  IP address spoofing, or the ability to setup rogue DHCP servers.
-  In a private cloud environment this may be considered to be a
-  superfluous requirement. Therefore this is considered to be an
-  optional configuration to support.
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=complete
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=partial
-driver-notes.vmware=This is supported by the Neutron NSX plugins
-driver.hyperv=missing
-driver.ironic=missing
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=missing
-
-[networking.topology.flat]
-title=Flat networking
-status=choice(networking.topology)
-notes=Provide network connectivity to guests using a
-  flat topology across all compute nodes. At least one
-  of the networking configurations is mandatory to
-  support in the drivers.
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=unknown
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=complete
-driver.hyperv=complete
-driver.ironic=complete
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=complete
-
-[networking.topology.vlan]
-title=VLAN networking
-status=choice(networking.topology)
-notes=Provide network connectivity to guests using VLANs to define the
-  topology when using nova-network. At least one of the networking
-  configurations is mandatory to support in the drivers.
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=unknown
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=complete
-driver.hyperv=missing
-driver.ironic=missing
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=complete
-
 [operation.uefi-boot]
 title=uefi boot
 status=optional


@@ -20,8 +20,7 @@ Upgrades

 Nova aims to provide upgrades with minimal downtime.

 Firstly, the data plane. There should be no VM downtime when you upgrade
-Nova. Nova has had this since the early days, with the exception of
-some nova-network related services.
+Nova. Nova has had this since the early days.

 Secondly, we want no downtime during upgrades of the Nova control plane.
 This document is trying to describe how we can achieve that.