From 29b9f788de3e2d889e522ac419cb06ef54ce9269 Mon Sep 17 00:00:00 2001
From: Stephen Finucane
Date: Wed, 2 Oct 2019 17:21:40 +0100
Subject: [PATCH] docs: Blast most references to nova-network

The only ones remaining are some real crufty SVGs and references to
things that still exist because nova-network was once a thing.

Change-Id: I1aebf86c05c7b8c1562d0071d45de2fe53f4588b
Signed-off-by: Stephen Finucane
---
 .../admin/configuration/hypervisor-vmware.rst |  43 ++-----
 .../configuration/hypervisor-xen-libvirt.rst  |   7 --
 doc/source/admin/configuration/logs.rst       |   9 --
 doc/source/admin/index.rst                    |  15 +--
 doc/source/admin/node-down.rst                |   5 +-
 doc/source/admin/security-groups.rst          |  28 +----
 doc/source/contributor/api.rst                |   7 +-
 doc/source/contributor/project-scope.rst      |   9 +-
 doc/source/user/flavors.rst                   |  16 +--
 doc/source/user/quotas.rst                    |   2 +-
 doc/source/user/support-matrix.ini            | 117 ------------------
 doc/source/user/upgrade.rst                   |   3 +-
 12 files changed, 26 insertions(+), 235 deletions(-)

diff --git a/doc/source/admin/configuration/hypervisor-vmware.rst b/doc/source/admin/configuration/hypervisor-vmware.rst
index 204f290b28de..3d308dddd64e 100644
--- a/doc/source/admin/configuration/hypervisor-vmware.rst
+++ b/doc/source/admin/configuration/hypervisor-vmware.rst
@@ -56,8 +56,7 @@ visible in the OpenStack dashboard and you can manage it as you would any other
OpenStack VM. You can perform advanced vSphere operations in vCenter while you
configure OpenStack resources such as VMs through the OpenStack dashboard.

-The figure does not show how networking fits into the architecture. Both
-``nova-network`` and the OpenStack Networking Service are supported. For
+The figure does not show how networking fits into the architecture. For
details, see :ref:`vmware-networking`.

Configuration overview
@@ -73,8 +72,7 @@ high-level steps:

#. Load desired VMDK images into the Image service. See :ref:`vmware-images`.

-#. Configure networking with either ``nova-network`` or
-   the Networking service. See :ref:`vmware-networking`.
+#. Configure the Networking service (neutron). See :ref:`vmware-networking`.

.. _vmware-prereqs:
@@ -110,8 +108,7 @@ Networking
Security groups
  If you use the VMware driver with OpenStack Networking and the NSX plug-in,
-  security groups are supported. If you use ``nova-network``, security groups
-  are not supported.
+  security groups are supported.

.. note::
@@ -937,37 +934,11 @@ section in the ``nova.conf`` file:
Networking with VMware vSphere
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-The VMware driver supports networking with the ``nova-network`` service or the
-Networking Service. Depending on your installation, complete these
-configuration steps before you provision VMs:
+The VMware driver supports networking with the Networking service (neutron).
+Depending on your installation, complete these configuration steps before you
+provision VMs:

-#. **The nova-network service with the FlatManager or FlatDHCPManager**.
-   Create a port group with the same name as the ``flat_network_bridge`` value
-   in the ``nova.conf`` file. The default value is ``br100``. If you specify
-   another value, the new value must be a valid Linux bridge identifier that
-   adheres to Linux bridge naming conventions.
-
-   All VM NICs are attached to this port group.
-
-   Ensure that the flat interface of the node that runs the ``nova-network``
-   service has a path to this network.
-
-   .. note::
-
-      When configuring the port binding for this port group in vCenter, specify
-      ``ephemeral`` for the port binding type. For more information, see
-      `Choosing a port binding type in ESX/ESXi `_ in the VMware Knowledge Base.
-
-#. **The nova-network service with the VlanManager**.
-   Set the ``vlan_interface`` configuration option to match the ESX host
-   interface that handles VLAN-tagged VM traffic.
-
-   OpenStack Compute automatically creates the corresponding port groups.
-
-#. If you are using the OpenStack Networking Service:
-   Before provisioning VMs, create a port group with the same name as the
+#. Before provisioning VMs, create a port group with the same name as the
   ``vmware.integration_bridge`` value in ``nova.conf`` (default is
   ``br-int``). All VM NICs are attached to this port group for management by
   the OpenStack Networking plug-in.
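+
+   For example, with the default bridge name, the matching ``nova.conf``
+   section might look like the following (an illustrative sketch only; use
+   the bridge name configured in your environment):
+
+   .. code-block:: ini
+
+      [vmware]
+      integration_bridge = br-int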
diff --git a/doc/source/admin/configuration/hypervisor-xen-libvirt.rst b/doc/source/admin/configuration/hypervisor-xen-libvirt.rst
index 31b34d6fff78..795e83e8e84b 100644
--- a/doc/source/admin/configuration/hypervisor-xen-libvirt.rst
+++ b/doc/source/admin/configuration/hypervisor-xen-libvirt.rst
@@ -203,13 +203,6 @@ Reporting_Bugs_against_Xen>`_ against Xen.
Known issues
~~~~~~~~~~~~

-* **Networking**: Xen via libvirt is currently only supported with
-  nova-network. Fixes for a number of bugs are currently being worked on to
-  make sure that Xen via libvirt will also work with OpenStack Networking
-  (neutron).
-
-  .. todo:: Is this still true?
-
* **Live migration**: Live migration is supported in the libvirt libxl driver
  since version 1.2.5. However, there were a number of issues when used with
  OpenStack, in particular with libvirt migration protocol compatibility. It is
diff --git a/doc/source/admin/configuration/logs.rst b/doc/source/admin/configuration/logs.rst
index 53ad09f7ddc9..7ecdf1b358f8 100644
--- a/doc/source/admin/configuration/logs.rst
+++ b/doc/source/admin/configuration/logs.rst
@@ -22,18 +22,9 @@ The corresponding log file of each Compute service is stored in the
   * - ``nova-conductor.log``
     - ``openstack-nova-conductor``
     - ``nova-conductor``
-   * - ``nova-network.log`` [#a]_
-     - ``openstack-nova-network``
-     - ``nova-network``
   * - ``nova-manage.log``
     - ``nova-manage``
     - ``nova-manage``
   * - ``nova-scheduler.log``
     - ``openstack-nova-scheduler``
     - ``nova-scheduler``
-
-.. rubric:: Footnotes
-
-.. [#a] The ``nova`` network service (``openstack-nova-network``/
-   ``nova-network``) only runs in deployments that are not configured
-   to use the Networking service (``neutron``).
diff --git a/doc/source/admin/index.rst b/doc/source/admin/index.rst
index 745a69179b5d..7fad2740da9b 100644
--- a/doc/source/admin/index.rst
+++ b/doc/source/admin/index.rst
@@ -49,18 +49,6 @@ responsibilities of services and drivers are:
``nova-conductor``
  Provides database-access support for compute nodes (thereby reducing
  security risks).

-``nova-network``
-  Manages floating and fixed IPs, DHCP, bridging and VLANs. Loads a Service
-  object which exposes the public methods on one of the subclasses of
-  NetworkManager. Different networking strategies are available by changing the
-  ``network_manager`` configuration option to ``FlatManager``,
-  ``FlatDHCPManager``, or ``VLANManager`` (defaults to ``VLANManager`` if
-  nothing is specified).
-
-  .. deprecated:: 14.0.0
-
-     ``nova-network`` was deprecated in the OpenStack Newton release.
-
``nova-scheduler``
  Dispatches requests for new virtual machines to the correct node.

@@ -72,8 +60,7 @@ responsibilities of services and drivers are:
Some services have drivers that change how the service implements its core
functionality. For example, the ``nova-compute`` service supports drivers
-  that let you choose which hypervisor type it can use. ``nova-network`` and
-  ``nova-scheduler`` also have drivers.
+  that let you choose which hypervisor type it can use.
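+
+  For example, the hypervisor driver is selected with the ``compute_driver``
+  option in ``nova.conf``. A minimal sketch, shown here with the libvirt
+  driver:
+
+  .. code-block:: ini
+
+     [DEFAULT]
+     compute_driver = libvirt.LibvirtDriver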

.. toctree::
   :maxdepth: 2
diff --git a/doc/source/admin/node-down.rst b/doc/source/admin/node-down.rst
index f2d5c5096884..58311e808887 100644
--- a/doc/source/admin/node-down.rst
+++ b/doc/source/admin/node-down.rst
@@ -145,7 +145,7 @@ A disk crash, network loss, or power failure can affect several components in
your cloud architecture. The worst disaster for a cloud is a power loss. A
power loss affects these components:

-- A cloud controller (``nova-api``, ``nova-objectstore``, ``nova-network``)
+- A cloud controller (``nova-api``, ``nova-conductor``, ``nova-scheduler``)

- A compute node (``nova-compute``)

@@ -178,9 +178,6 @@ After power resumes and all hardware components restart:
- The iSCSI session from the cloud controller to the compute node no longer
  exists.

-- nova-network reapplies configurations on boot and, as a result, recreates
-  the iptables and ebtables from the cloud controller to the compute node.
-
- Instances stop running.

  Instances are not lost because neither ``destroy`` nor ``terminate`` ran.
diff --git a/doc/source/admin/security-groups.rst b/doc/source/admin/security-groups.rst
index 7c1d6f750ea3..4419111fe757 100644
--- a/doc/source/admin/security-groups.rst
+++ b/doc/source/admin/security-groups.rst
@@ -12,8 +12,8 @@ that has no other defined security group. Unless you change the default, this
security group denies all incoming traffic and allows only outgoing traffic to
your instance.

-By default, security groups (and their quota) are managed by the
-:neutron-doc:`Neutron networking service `.
+Security groups (and their quota) are managed by :neutron-doc:`Neutron, the
+networking service `.
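+
+For example, to allow incoming SSH to servers in the otherwise deny-all
+``default`` group (a minimal illustration; rule management is covered in
+detail below):
+
+.. code-block:: console
+
+   $ openstack security group rule create --protocol tcp --dst-port 22 default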

Working with security groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -235,27 +235,3 @@ member of the cluster.

The ``cluster`` rule allows SSH access from any other instance that uses the
``global_http`` group.
-
-
-nova-network configuration (deprecated)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can use the :oslo.config:option:`allow_same_net_traffic` option in the
-``/etc/nova/nova.conf`` file to globally control whether the rules apply to
-hosts which share a network. There are two possible values:
-
-``True`` (default)
-  Hosts on the same subnet are not filtered and are allowed to pass all types
-  of traffic between them. On a flat network, this allows all instances from
-  all projects unfiltered communication. With VLAN networking, this allows
-  access between instances within the same project. You can also simulate this
-  setting by configuring the default security group to allow all traffic from
-  the subnet.
-
-``False``
-  Security groups are enforced for all connections.
-
-Additionally, the number of maximum rules per security group is controlled by
-the ``security_group_rules`` and the number of allowed security groups per
-project is controlled by the ``security_groups`` quota (see
-:doc:`/admin/quotas`).
diff --git a/doc/source/contributor/api.rst b/doc/source/contributor/api.rst
index 553deffa76c4..98456c8d7d75 100644
--- a/doc/source/contributor/api.rst
+++ b/doc/source/contributor/api.rst
@@ -258,10 +258,9 @@ Deprecating APIs

Compute REST API routes may be deprecated by capping a method or functionality
using microversions. For example, the
-:ref:`2.36 microversion <2.36 microversion>` deprecated
-several compute REST API routes which only work when using the ``nova-network``
-service, which itself was deprecated, or are proxies to other external
-services like Cinder, Neutron, etc.
+:ref:`2.36 microversion <2.36 microversion>` deprecated several compute REST
+API routes which either worked only with the since-removed ``nova-network``
+service or are proxies to other external services like cinder, neutron, etc.
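+
+Schematically, capping looks something like this (an illustrative sketch
+only, not code from the nova tree; it follows the decorator pattern used by
+``nova.api.openstack.wsgi``):
+
+.. code-block:: python
+
+   from nova.api.openstack import wsgi
+
+
+   class ExampleController(wsgi.Controller):
+
+       # Requests at microversion 2.36 or later no longer match this
+       # method, so the route effectively disappears for newer clients.
+       @wsgi.Controller.api_version('2.1', '2.35')
+       def index(self, req):
+           return {'examples': []}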
The point of deprecating with microversions is users can still get the same
functionality at a lower microversion but there is at least some way to signal
diff --git a/doc/source/contributor/project-scope.rst b/doc/source/contributor/project-scope.rst
index 85b032dfdab2..2ff344d78986 100644
--- a/doc/source/contributor/project-scope.rst
+++ b/doc/source/contributor/project-scope.rst
@@ -194,11 +194,10 @@ As Glance moves to deprecate its v1 API, we need to translate calls from the
old v1 API we expose, to Glance's v2 API.

The next API to mention is the networking APIs, in particular the
-security groups API. If you are using nova-network, Nova is still the only
-way to perform these network operations.
-But if you use Neutron, security groups has a much richer Neutron API,
-and if you use both Nova API and Neutron API, the miss match can lead to
-some very unexpected results, in certain cases.
+security groups API. Most of these APIs date from when ``nova-network``
+existed, and the proxies were added during the transition. However, Neutron
+has a much richer security groups API, and mixing the Nova API and the
+Neutron API can lead to some very unexpected results in certain cases.

Our intention is to avoid adding to the problems we already have in this
area.
diff --git a/doc/source/user/flavors.rst b/doc/source/user/flavors.rst
index 3e24fc0072d9..f8d2e652d078 100644
--- a/doc/source/user/flavors.rst
+++ b/doc/source/user/flavors.rst
@@ -59,16 +59,12 @@ Swap
  Amount of swap space (in megabytes) to use. This property is optional. If
  unspecified, the value is ``0`` by default.

-RXTX Factor
-  The receive/transmit factor of any network ports on the instance. This
-  property is optional. If unspecified, the value is ``1.0`` by default.
-
-  .. note::
-
-     This property only applies if using the ``xen`` compute driver with the
-     ``nova-network`` network driver. It will likely be deprecated in a future
-     release. ``neutron`` users should refer to the :neutron-doc:`neutron QoS
-     documentation `
+RXTX Factor (DEPRECATED)
+  This value was only applicable when using the ``xen`` compute driver with the
+  ``nova-network`` network driver. Since ``nova-network`` has been removed,
+  this no longer applies and should not be specified. It will likely be
+  removed in a future release. ``neutron`` users should refer to the
+  :neutron-doc:`neutron QoS documentation `

Is Public
  Boolean value that defines whether the flavor is available to all users or
diff --git a/doc/source/user/quotas.rst b/doc/source/user/quotas.rst
index d4a5d36ac9f2..7377e0f1a242 100644
--- a/doc/source/user/quotas.rst
+++ b/doc/source/user/quotas.rst
@@ -48,7 +48,7 @@ The following quotas were previously available but were removed in microversion
   * - floating_ips
     - Number of floating IP addresses allowed per project.
   * - networks
-     - Number of networks allowed per project (nova-network only).
+     - Number of networks allowed per project (no longer used).
   * - security_groups
     - Number of security groups per project.
   * - security_group_rules
diff --git a/doc/source/user/support-matrix.ini b/doc/source/user/support-matrix.ini
index 1b8ec7b15653..5c69f921ebca 100644
--- a/doc/source/user/support-matrix.ini
+++ b/doc/source/user/support-matrix.ini
@@ -1325,123 +1325,6 @@ driver.libvirt-vz-ct=complete
driver.powervm=complete
driver.zvm=complete

-[networking.firewallrules]
-title=Network firewall rules
-status=optional
-notes=Unclear how this is different from security groups
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=complete
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=missing
-driver.hyperv=missing
-driver.ironic=missing
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=missing
-
-[networking.routing]
-title=Network routing
-status=optional
-notes=Unclear what this refers to
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=unknown
-driver.libvirt-kvm-ppc64=missing
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=complete
-driver.hyperv=missing
-driver.ironic=complete
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=missing
-
-[networking.securitygroups]
-title=Network security groups
-status=optional
-notes=The security groups feature provides a way to define rules
-  to isolate the network traffic of different instances running
-  on a compute host. This would prevent actions such as MAC and
-  IP address spoofing, or the ability to setup rogue DHCP servers.
-  In a private cloud environment this may be considered to be a
-  superfluous requirement. Therefore this is considered to be an
-  optional configuration to support.
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=complete
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=partial
-driver-notes.vmware=This is supported by the Neutron NSX plugins
-driver.hyperv=missing
-driver.ironic=missing
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=missing
-
-[networking.topology.flat]
-title=Flat networking
-status=choice(networking.topology)
-notes=Provide network connectivity to guests using a
-  flat topology across all compute nodes. At least one
-  of the networking configurations is mandatory to
-  support in the drivers.
-cli=
-driver.xenserver=complete
-driver.libvirt-kvm-x86=complete
-driver.libvirt-kvm-aarch64=unknown
-driver.libvirt-kvm-ppc64=complete
-driver.libvirt-kvm-s390x=complete
-driver.libvirt-qemu-x86=complete
-driver.libvirt-lxc=complete
-driver.libvirt-xen=complete
-driver.vmware=complete
-driver.hyperv=complete
-driver.ironic=complete
-driver.libvirt-vz-vm=complete
-driver.libvirt-vz-ct=complete
-driver.powervm=complete
-driver.zvm=complete
-
-[networking.topology.vlan]
-title=VLAN networking
-status=choice(networking.topology)
-notes=Provide network connectivity to guests using VLANs to define the
-  topology when using nova-network. At least one of the networking
-  configurations is mandatory to support in the drivers.
-cli= -driver.xenserver=complete -driver.libvirt-kvm-x86=complete -driver.libvirt-kvm-aarch64=unknown -driver.libvirt-kvm-ppc64=complete -driver.libvirt-kvm-s390x=complete -driver.libvirt-qemu-x86=complete -driver.libvirt-lxc=complete -driver.libvirt-xen=complete -driver.vmware=complete -driver.hyperv=missing -driver.ironic=missing -driver.libvirt-vz-vm=complete -driver.libvirt-vz-ct=complete -driver.powervm=complete -driver.zvm=complete - [operation.uefi-boot] title=uefi boot status=optional diff --git a/doc/source/user/upgrade.rst b/doc/source/user/upgrade.rst index d2d7cec94255..91093d49a2ef 100644 --- a/doc/source/user/upgrade.rst +++ b/doc/source/user/upgrade.rst @@ -20,8 +20,7 @@ Upgrades Nova aims to provide upgrades with minimal downtime. Firstly, the data plane. There should be no VM downtime when you upgrade -Nova. Nova has had this since the early days, with the exception of -some nova-network related services. +Nova. Nova has had this since the early days. Secondly, we want no downtime during upgrades of the Nova control plane. This document is trying to describe how we can achieve that.