Improve terminology in the Neutron tree

There is no real reason we should be using some of the
terms we do; they're outdated, and we're behind other
open-source projects in this respect. Let's switch to
using more inclusive terms in all possible places.

Change-Id: I99913107e803384b34cbd5ca588451b1cf64d594
changes/51/735251/4
Brian Haley 2020-06-11 17:02:41 -04:00
parent 114ac0ae89
commit 055036ba2b
48 changed files with 416 additions and 392 deletions

View File

@ -16,15 +16,15 @@ SNAT high availability is implemented in a manner similar to the
:ref:`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where
``keepalived`` uses VRRP to provide quick failover of SNAT services.
During normal operation, the master router periodically transmits *heartbeat*
During normal operation, the primary router periodically transmits *heartbeat*
packets over a hidden project network that connects all HA routers for a
particular project.
If the DVR/SNAT backup router stops receiving these packets, it assumes failure
of the master DVR/SNAT router and promotes itself to master router by
of the primary DVR/SNAT router and promotes itself to primary router by
configuring IP addresses on the interfaces in the ``snat`` namespace. In
environments with more than one backup router, the rules of VRRP are followed
to select a new master router.
to select a new primary router.
.. warning::

View File

@ -263,15 +263,15 @@ For more details, see the
Supported VNIC types
^^^^^^^^^^^^^^^^^^^^
The ``vnic_type_blacklist`` option is used to remove values from the mechanism driver's
``supported_vnic_types`` list.
The ``vnic_type_prohibit_list`` option is used to remove values from the
mechanism driver's ``supported_vnic_types`` list.
.. list-table:: Mechanism drivers and supported VNIC types
:header-rows: 1
* - mech driver / supported_vnic_types
- supported VNIC types
- blacklisting available
- prohibiting available
* - Linux bridge
- normal
- no
@ -280,10 +280,10 @@ The ``vnic_type_blacklist`` option is used to remove values from the mechanism d
- no
* - Open vSwitch
- normal, direct
- yes (ovs_driver vnic_type_blacklist, see: `Configuration Reference <../configuration/ml2-conf.html#ovs_driver>`__)
- yes (ovs_driver vnic_type_prohibit_list, see: `Configuration Reference <../configuration/ml2-conf.html#ovs_driver>`__)
* - SRIOV
- direct, macvtap, direct_physical
- yes (sriov_driver vnic_type_blacklist, see: `Configuration Reference <../configuration/ml2-conf.html#sriov_driver>`__)
- yes (sriov_driver vnic_type_prohibit_list, see: `Configuration Reference <../configuration/ml2-conf.html#sriov_driver>`__)
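As a rough illustration of what the option does, the sketch below filters a
mechanism driver's ``supported_vnic_types`` with a prohibit list; the values
are made up for the example and this is not the driver code itself.

.. code-block:: python

   # Illustrative values only, not taken from any driver.
   supported_vnic_types = ['normal', 'direct', 'macvtap']
   vnic_type_prohibit_list = ['direct']

   # Types named in the prohibit list are removed from what the
   # mechanism driver will bind.
   effective_vnic_types = [vnic for vnic in supported_vnic_types
                           if vnic not in vnic_type_prohibit_list]
   # effective_vnic_types == ['normal', 'macvtap']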
Extension Drivers

View File

@ -166,8 +166,8 @@ If a vnic_type is supported by default by multiple ML2 mechanism
drivers (e.g. ``vnic_type=direct`` by both ``openvswitch`` and
``sriovnicswitch``) and multiple agents' resources are also meant to be
tracked by Placement, then the admin must decide which driver to take
ports of that vnic_type by blacklisting the vnic_type for the unwanted
drivers. Use :oslo.config:option:`ovs_driver.vnic_type_blacklist` in this
ports of that vnic_type by prohibiting the vnic_type for the unwanted
drivers. Use :oslo.config:option:`ovs_driver.vnic_type_prohibit_list` in this
case. Valid values are all the ``supported_vnic_types`` of the
`respective mechanism drivers
<https://docs.openstack.org/neutron/latest/admin/config-ml2.html#supported-vnic-types>`_.
@ -177,10 +177,10 @@ case. Valid values are all the ``supported_vnic_types`` of the
.. code-block:: ini
[ovs_driver]
vnic_type_blacklist = direct
vnic_type_prohibit_list = direct
[sriov_driver]
#vnic_type_blacklist = direct
#vnic_type_prohibit_list = direct
neutron-openvswitch-agent config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

View File

@ -87,7 +87,7 @@ Using SR-IOV interfaces
In order to enable SR-IOV, the following steps are required:
#. Create Virtual Functions (Compute)
#. Whitelist PCI devices in nova-compute (Compute)
#. Configure allow list for PCI devices in nova-compute (Compute)
#. Configure neutron-server (Controller)
#. Configure nova-scheduler (Controller)
#. Enable neutron sriov-agent (Compute)
@ -223,8 +223,8 @@ network and has access to the private networks of all machines.
the ``sysfsutils`` tool. However, this is not available by default on
many major distributions.
Whitelist PCI devices nova-compute (Compute)
--------------------------------------------
Configuring allow list for PCI devices nova-compute (Compute)
-------------------------------------------------------------
#. Configure which PCI devices the ``nova-compute`` service may use. Edit
the ``nova.conf`` file:
@ -239,7 +239,7 @@ Whitelist PCI devices nova-compute (Compute)
``physnet2``.
Alternatively the ``[pci] passthrough_whitelist`` parameter also supports
whitelisting by:
allowing devices by:
- PCI address: The address uses the same syntax as in ``lspci`` and an
asterisk (``*``) can be used to match anything.
@ -604,8 +604,8 @@ you must:
machines with no switch and the cards are plugged in back-to-back. A
subnet manager is required for the link on the cards to come up.
It is possible to have more than one subnet manager. In this case, one
of them will act as the master, and any other will act as a slave that
will take over when the master subnet manager fails.
of them will act as the primary, and any other will act as a backup that
will take over when the primary subnet manager fails.
#. Install the ``ebrctl`` utility on the compute nodes.

View File

@ -27,7 +27,7 @@ Architecture
:alt: High-availability using VRRP with Linux bridge - overview
The following figure shows components and connectivity for one self-service
network and one untagged (flat) network. The master router resides on network
network and one untagged (flat) network. The primary router resides on network
node 1. In this particular case, the instance resides on the same compute
node as the DHCP agent for the network. If the DHCP agent resides on another
compute node, the latter only contains a DHCP namespace and Linux bridge
@ -178,6 +178,6 @@ Network traffic flow
~~~~~~~~~~~~~~~~~~~~
This high-availability mechanism simply augments :ref:`deploy-ovs-selfservice`
with failover of layer-3 services to another router if the master router
with failover of layer-3 services to another router if the primary router
fails. Thus, you can reference :ref:`Self-service network traffic flow
<deploy-ovs-selfservice-networktrafficflow>` for normal operation.

View File

@ -43,7 +43,7 @@ addition or deletion of the chassis, following approach can be considered:
* Find a list of chassis where router is scheduled and reschedule it
up to *MAX_GW_CHASSIS* gateways using list of available candidates.
Do not modify the master chassis association to not interrupt network flows.
Do not modify the primary chassis association to not interrupt network flows.
Rescheduling is an event-triggered operation which will occur whenever a
chassis is added or removed. When it happens, ``schedule_unhosted_gateways()``
@ -58,7 +58,7 @@ southbound database table, would be the ones eligible for hosting the routers.
Rescheduling of a router depends on the current priorities set. Each chassis is given
a specific priority for the router's gateway and priority increases with
increasing value (i.e. 1 < 2 < 3 ...). The highest prioritized chassis hosts the
gateway port. Other chassis are selected as slaves.
gateway port. Other chassis are selected as backups.
There are two approaches for rescheduling supported by ovn driver right
now:
@ -72,7 +72,7 @@ Few points to consider for the design:
C1 to C3 and C2 to C3. Rescheduling from C1 to C2 and vice-versa should not
be allowed.
* In order to reschedule the router's chassis, the ``master`` chassis for a
* In order to reschedule the router's chassis, the ``primary`` chassis for a
gateway router will be left untouched. However, for the scenario where all
routers are scheduled in only one chassis which is available as gateway,
the addition of the second gateway chassis would schedule the router
@ -89,11 +89,11 @@ Following scenarios are possible which have been considered in the design:
- System has 2 chassis C1 and C2 during installation. C1 goes down.
- Behavior: In this case, all routers would be rescheduled to C2.
Once C1 is back up, routers would be rescheduled on it. However,
since C2 is now the new master, routers on C1 would have lower priority.
since C2 is now the new primary, routers on C1 would have lower priority.
* Case #3:
- System has 2 chassis C1 and C2 during installation. C3 is added to it.
- Behavior: In this case, routers would not move their master chassis
associations. So routers which have their master on C1, would remain
- Behavior: In this case, routers would not move their primary chassis
associations. So routers which have their primary on C1, would remain
there, and the same for routers on C2. However, lower prioritized candidates
of existing gateways would be scheduled on the chassis C3, depending
on the type of scheduler used (Random or LeastLoaded).
@ -102,23 +102,23 @@ Following scenarios are possible which have been considered in the design:
Rebalancing of Gateway Chassis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rebalancing is the second part of the design and it assigns a new master to
Rebalancing is the second part of the design and it assigns a new primary to
already scheduled router gateway ports. Downtime is expected in this
operation. Rebalancing of routers can be achieved using an external CLI script.
A similar approach has been implemented for DHCP rescheduling `[4]`_.
The master chassis gateway could be moved only to other, previously scheduled
gateway. Rebalancing of chassis occurs only if number of scheduled master
The primary chassis gateway could be moved only to other, previously scheduled
gateway. Rebalancing of chassis occurs only if number of scheduled primary
chassis ports per provider network hosted by a given chassis is higher than the
average number of hosted master gateway ports per chassis per provider network.
average number of hosted primary gateway ports per chassis per provider network.
This dependency is determined by the formula:
avg_gw_per_chassis = num_gw_by_provider_net / num_chassis_with_provider_net
Where:
- avg_gw_per_chassis - average number of scheduler master gateway chassis
- avg_gw_per_chassis - average number of scheduler primary gateway chassis
within the same provider network.
- num_gw_by_provider_net - number of master chassis gateways scheduled in
- num_gw_by_provider_net - number of primary chassis gateways scheduled in
given provider networks.
- num_chassis_with_provider_net - number of chassis that have connectivity
to the given provider network.
@ -128,9 +128,9 @@ The rebalancing occurs only if:
num_gw_by_provider_net_by_chassis > avg_gw_per_chassis
Where:
- num_gw_by_provider_net_by_chassis - number of hosted master gateways
- num_gw_by_provider_net_by_chassis - number of hosted primary gateways
for the given provider network on the given chassis
- avg_gw_per_chassis - average number of scheduler master gateway chassis
- avg_gw_per_chassis - average number of scheduler primary gateway chassis
within the same provider network.
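For illustration only, the rebalancing criterion above can be expressed as a
small Python sketch; the names follow the variables defined above and this is
not code from the tree.

.. code-block:: python

   def should_rebalance(num_gw_by_provider_net_by_chassis,
                        num_gw_by_provider_net,
                        num_chassis_with_provider_net):
       # Average number of primary gateway ports per chassis within
       # the same provider network, per the formula above.
       avg_gw_per_chassis = (num_gw_by_provider_net /
                             num_chassis_with_provider_net)
       # Rebalance only when this chassis hosts more primary gateway
       # ports for that provider network than the average.
       return num_gw_by_provider_net_by_chassis > avg_gw_per_chassis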

View File

@ -88,8 +88,8 @@ class ExclusiveResourceProcessor(object):
Other instances may be created for the same ID while the first
instance has exclusive access. If that happens then it doesn't block and
wait for access. Instead, it signals to the master instance that an update
came in with the timestamp.
wait for access. Instead, it signals to the primary instance that an
update came in with the timestamp.
This way, a thread will not block to wait for access to a resource.
Instead it effectively signals to the thread that is working on the
@ -102,27 +102,27 @@ class ExclusiveResourceProcessor(object):
as possible. The timestamp should not be recorded, however, until the
resource has been processed using the fetch data.
"""
_masters = {}
_primaries = {}
_resource_timestamps = {}
def __init__(self, id):
self._id = id
if id not in self._masters:
self._masters[id] = self
if id not in self._primaries:
self._primaries[id] = self
self._queue = queue.PriorityQueue(-1)
self._master = self._masters[id]
self._primary = self._primaries[id]
def _i_am_master(self):
return self == self._master
def _i_am_primary(self):
return self == self._primary
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
if self._i_am_master():
del self._masters[self._id]
if self._i_am_primary():
del self._primaries[self._id]
def _get_resource_data_timestamp(self):
return self._resource_timestamps.get(self._id,
@ -140,16 +140,16 @@ class ExclusiveResourceProcessor(object):
resource is being processed. These updates have already bubbled to
the front of the ResourceProcessingQueue.
"""
self._master._queue.put(update)
self._primary._queue.put(update)
def updates(self):
"""Processes the resource until updates stop coming
Only the master instance will process the resource. However, updates
Only the primary instance will process the resource. However, updates
may come in from other workers while it is in progress. This method
loops until they stop coming.
"""
while self._i_am_master():
while self._i_am_primary():
if self._queue.empty():
return
# Get the update from the queue even if it is old.
@ -177,10 +177,10 @@ class ResourceProcessingQueue(object):
next_update = self._queue.get()
with ExclusiveResourceProcessor(next_update.id) as rp:
# Queue the update whether this worker is the master or not.
# Queue the update whether this worker is the primary or not.
rp.queue_update(next_update)
# Here, if the current worker is not the master, the call to
# Here, if the current worker is not the primary, the call to
# rp.updates() will not yield and so this will essentially be a
# noop.
for update in rp.updates():
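To make the pattern concrete, a minimal usage sketch follows; ``handle`` is an
assumed callback and the function is illustrative, not part of this change.

.. code-block:: python

   def process(update, handle):
       with ExclusiveResourceProcessor(update.id) as rp:
           # Queue the update whether this worker is the primary or not.
           rp.queue_update(update)
           # updates() yields only for the primary instance; for any
           # other worker this loop is effectively a noop.
           for pending in rp.updates():
               handle(pending)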

View File

@ -63,7 +63,7 @@ class DvrEdgeHaRouter(dvr_edge_router.DvrEdgeRouter,
self._add_vip(fip_cidr, interface_name)
self.set_ha_port()
if (self.is_router_master() and self.ha_port and
if (self.is_router_primary() and self.ha_port and
self.ha_port['status'] == constants.PORT_STATUS_ACTIVE):
return super(DvrEdgeHaRouter, self).add_centralized_floatingip(
fip, fip_cidr)
@ -72,7 +72,7 @@ class DvrEdgeHaRouter(dvr_edge_router.DvrEdgeRouter,
def remove_centralized_floatingip(self, fip_cidr):
self._remove_vip(fip_cidr)
if self.is_router_master():
if self.is_router_primary():
super(DvrEdgeHaRouter, self).remove_centralized_floatingip(
fip_cidr)

View File

@ -29,7 +29,7 @@ LOG = logging.getLogger(__name__)
KEEPALIVED_STATE_CHANGE_SERVER_BACKLOG = 4096
TRANSLATION_MAP = {'master': constants.HA_ROUTER_STATE_ACTIVE,
TRANSLATION_MAP = {'primary': constants.HA_ROUTER_STATE_ACTIVE,
'backup': constants.HA_ROUTER_STATE_STANDBY,
'fault': constants.HA_ROUTER_STATE_STANDBY,
'unknown': constants.HA_ROUTER_STATE_UNKNOWN}
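A hedged sketch of how such a map is typically consumed; the helper below is
illustrative and not part of this change.

.. code-block:: python

   def translate_keepalived_state(state):
       # 'primary', 'backup' and 'fault' map to Neutron HA router
       # state constants; anything else falls back to UNKNOWN.
       return TRANSLATION_MAP.get(state,
                                  constants.HA_ROUTER_STATE_UNKNOWN)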
@ -129,28 +129,28 @@ class AgentMixin(object):
This function will also update the metadata proxy, the radvd daemon,
process the prefix delegation and inform the L3 extensions. If the
HA router changes to "master", this transition will be delayed for at
least "ha_vrrp_advert_int" seconds. When the "master" router
HA router changes to "primary", this transition will be delayed for at
least "ha_vrrp_advert_int" seconds. When the "primary" router
transitions to "backup", "keepalived" will set the rest of HA routers
to "master" until it decides which one should be the only "master".
The transition from "backup" to "master" and then to "backup" again,
to "primary" until it decides which one should be the only "primary".
The transition from "backup" to "primary" and then to "backup" again,
should not be registered in the Neutron server.
:param router_id: router ID
:param state: ['master', 'backup']
:param state: ['primary', 'backup']
"""
if not self._update_transition_state(router_id, state):
eventlet.spawn_n(self._enqueue_state_change, router_id, state)
eventlet.sleep(0)
def _enqueue_state_change(self, router_id, state):
# NOTE(ralonsoh): move 'master' and 'backup' constants to n-lib
if state == 'master':
# NOTE(ralonsoh): move 'primary' and 'backup' constants to n-lib
if state == 'primary':
eventlet.sleep(self.conf.ha_vrrp_advert_int)
if self._update_transition_state(router_id) != state:
# If the current "transition state" is not the initial "state" sent
# to update the router, that means the actual router state is the
# same as the "transition state" (e.g.: backup-->master-->backup).
# same as the "transition state" (e.g.: backup-->primary-->backup).
return
ri = self._get_router_info(router_id)
@ -164,7 +164,7 @@ class AgentMixin(object):
state_change_data)
# Set external gateway port link up or down according to state
if state == 'master':
if state == 'primary':
ri.set_external_gw_port_link_status(link_up=True, set_gw=True)
elif state == 'backup':
ri.set_external_gw_port_link_status(link_up=False)
@ -181,7 +181,7 @@ class AgentMixin(object):
if self.conf.enable_metadata_proxy:
self._update_metadata_proxy(ri, router_id, state)
self._update_radvd_daemon(ri, state)
self.pd.process_ha_state(router_id, state == 'master')
self.pd.process_ha_state(router_id, state == 'primary')
self.state_change_notifier.queue_event((router_id, state))
self.l3_ext_manager.ha_state_change(self.context, state_change_data)
@ -189,7 +189,7 @@ class AgentMixin(object):
if not self.use_ipv6:
return
ipv6_forwarding_enable = state == 'master'
ipv6_forwarding_enable = state == 'primary'
if ri.router.get('distributed', False):
namespace = ri.ha_namespace
else:
@ -202,7 +202,7 @@ class AgentMixin(object):
# If ipv6 is enabled on the platform, ipv6_gateway config flag is
# not set and external_network associated to the router does not
# include any IPv6 subnet, enable the gateway interface to accept
# Router Advts from upstream router for default route on master
# Router Advts from upstream router for default route on primary
# instances as well as ipv6 forwarding. Otherwise, disable them.
ex_gw_port_id = ri.ex_gw_port and ri.ex_gw_port['id']
if ex_gw_port_id:
@ -215,7 +215,7 @@ class AgentMixin(object):
# NOTE(slaweq): Since the metadata proxy is spawned in the qrouter
# namespace and not in the snat namespace, even standby DVR-HA
# routers need to serve metadata requests to local ports.
if state == 'master' or ri.router.get('distributed', False):
if state == 'primary' or ri.router.get('distributed', False):
LOG.debug('Spawning metadata proxy for router %s', router_id)
self.metadata_driver.spawn_monitored_metadata_proxy(
self.process_monitor, ri.ns_name, self.conf.metadata_port,
@ -226,9 +226,9 @@ class AgentMixin(object):
self.process_monitor, ri.router_id, self.conf, ri.ns_name)
def _update_radvd_daemon(self, ri, state):
# Radvd has to be spawned only on the Master HA Router. If there are
# Radvd has to be spawned only on the primary HA Router. If there are
# any state transitions, we enable/disable radvd accordingly.
if state == 'master':
if state == 'primary':
ri.enable_radvd()
else:
ri.disable_radvd()

View File

@ -55,7 +55,7 @@ class HaRouterNamespace(namespaces.RouterNamespace):
It does so to prevent sending gratuitous ARPs for interfaces that had their
VIP removed in the middle of processing.
It also disables ipv6 forwarding by default. Forwarding will be
enabled during router configuration processing only for the master node.
enabled during router configuration processing only for the primary node.
It has to be disabled on all other nodes to avoid sending MLD packets
which cause lost connectivity to Floating IPs.
"""
@ -96,12 +96,12 @@ class HaRouter(router.RouterInfo):
return self.router.get('ha_vr_id')
def _check_and_set_real_state(self):
# When the physical host was down/up, the 'master' router may still
# When the physical host was down/up, the 'primary' router may still
# have its original state in the _ha_state_path file. We directly
# reset it to 'backup'.
if (not self.keepalived_manager.check_processes() and
os.path.exists(self.ha_state_path) and
self.ha_state == 'master'):
self.ha_state == 'primary'):
self.ha_state = 'backup'
@property
@ -110,7 +110,12 @@ class HaRouter(router.RouterInfo):
return self._ha_state
try:
with open(self.ha_state_path, 'r') as f:
self._ha_state = f.read()
# TODO(haleyb): put old code back after a couple releases,
# Y perhaps, just for backwards-compat
# self._ha_state = f.read()
ha_state = f.read()
ha_state = 'primary' if ha_state == 'master' else ha_state
self._ha_state = ha_state
except (OSError, IOError):
LOG.debug('Error while reading HA state for %s', self.router_id)
return self._ha_state or 'unknown'
@ -129,7 +134,7 @@ class HaRouter(router.RouterInfo):
def ha_namespace(self):
return self.ns_name
def is_router_master(self):
def is_router_primary(self):
"""this method is normally called before the ha_router object is fully
initialized
"""
@ -298,14 +303,14 @@ class HaRouter(router.RouterInfo):
onlink_route_cidr in onlink_route_cidrs]
def _should_delete_ipv6_lladdr(self, ipv6_lladdr):
"""Only the master should have any IP addresses configured.
"""Only the primary should have any IP addresses configured.
Let keepalived manage IPv6 link local addresses, the same way we let
it manage IPv4 addresses. If the router is not in the master state,
it manage IPv4 addresses. If the router is not in the primary state,
we must delete the address first as it is autoconfigured by the kernel.
"""
manager = self.keepalived_manager
if manager.get_process().active:
if self.ha_state != 'master':
if self.ha_state != 'primary':
conf = manager.get_conf_on_disk()
managed_by_keepalived = conf and ipv6_lladdr in conf
if managed_by_keepalived:
@ -317,7 +322,7 @@ class HaRouter(router.RouterInfo):
def _disable_ipv6_addressing_on_interface(self, interface_name):
"""Disable IPv6 link local addressing on the device and add it as
a VIP to keepalived. This means that the IPv6 link local address
will only be present on the master.
will only be present on the primary.
"""
device = ip_lib.IPDevice(interface_name, namespace=self.ha_namespace)
ipv6_lladdr = ip_lib.get_ipv6_lladdr(device.link.address)
@ -446,7 +451,7 @@ class HaRouter(router.RouterInfo):
name=self.get_ha_device_name())
cidrs = (address['cidr'] for address in addresses)
ha_cidr = self._get_primary_vip()
state = 'master' if ha_cidr in cidrs else 'backup'
state = 'primary' if ha_cidr in cidrs else 'backup'
self.ha_state = state
callback(self.router_id, state)
@ -468,10 +473,10 @@ class HaRouter(router.RouterInfo):
self._add_gateway_vip(ex_gw_port, interface_name)
self._disable_ipv6_addressing_on_interface(interface_name)
# Enable RA and IPv6 forwarding only for master instances. This will
# Enable RA and IPv6 forwarding only for primary instances. This will
# prevent backup routers from sending packets to the upstream switch
# and disrupt connections.
enable = self.ha_state == 'master'
enable = self.ha_state == 'primary'
self._configure_ipv6_params_on_gw(ex_gw_port, self.ns_name,
interface_name, enable)
@ -486,11 +491,11 @@ class HaRouter(router.RouterInfo):
def external_gateway_removed(self, ex_gw_port, interface_name):
self._clear_vips(interface_name)
if self.ha_state == 'master':
if self.ha_state == 'primary':
super(HaRouter, self).external_gateway_removed(ex_gw_port,
interface_name)
else:
# We are not the master node, so no need to delete ip addresses.
# We are not the primary node, so no need to delete ip addresses.
self.driver.unplug(interface_name,
namespace=self.ns_name,
prefix=router.EXTERNAL_DEV_PREFIX)
@ -526,13 +531,13 @@ class HaRouter(router.RouterInfo):
@runtime.synchronized('enable_radvd')
def enable_radvd(self, internal_ports=None):
if (self.keepalived_manager.get_process().active and
self.ha_state == 'master'):
self.ha_state == 'primary'):
super(HaRouter, self).enable_radvd(internal_ports)
def external_gateway_link_up(self):
# Check HA router ha_state for its gateway port link state.
# 'backup' instance will not link up the gateway port.
return self.ha_state == 'master'
return self.ha_state == 'primary'
def set_external_gw_port_link_status(self, link_up, set_gw=False):
link_state = "up" if link_up else "down"

View File

@ -84,7 +84,10 @@ class MonitorDaemon(daemon.Daemon):
continue
if event['name'] == self.interface and event['cidr'] == self.cidr:
new_state = 'master' if event['event'] == 'added' else 'backup'
if event['event'] == 'added':
new_state = 'primary'
else:
new_state = 'backup'
self.write_state_change(new_state)
self.notify_agent(new_state)
elif event['name'] != self.interface and event['event'] == 'added':
@ -103,7 +106,7 @@ class MonitorDaemon(daemon.Daemon):
ip = ip_lib.IPDevice(self.interface, self.namespace)
for address in ip.addr.list():
if address.get('cidr') == self.cidr:
state = 'master'
state = 'primary'
self.write_state_change(state)
self.notify_agent(state)
break

View File

@ -170,7 +170,7 @@ class RouterInfo(BaseRouterInfo):
return namespaces.RouterNamespace(
router_id, agent_conf, iface_driver, use_ipv6)
def is_router_master(self):
def is_router_primary(self):
return True
def _update_routing_table(self, operation, route, namespace):

View File

@ -35,7 +35,7 @@ PRIMARY_VIP_RANGE_SIZE = 24
KEEPALIVED_SERVICE_NAME = 'keepalived'
KEEPALIVED_EMAIL_FROM = 'neutron@openstack.local'
KEEPALIVED_ROUTER_ID = 'neutron'
GARP_MASTER_DELAY = 60
GARP_PRIMARY_DELAY = 60
HEALTH_CHECK_NAME = 'ha_health_check'
LOG = logging.getLogger(__name__)
@ -167,7 +167,7 @@ class KeepalivedInstance(object):
def __init__(self, state, interface, vrouter_id, ha_cidrs,
priority=HA_DEFAULT_PRIORITY, advert_int=None,
mcast_src_ip=None, nopreempt=False,
garp_master_delay=GARP_MASTER_DELAY,
garp_primary_delay=GARP_PRIMARY_DELAY,
vrrp_health_check_interval=0,
ha_conf_dir=None):
self.name = 'VR_%s' % vrouter_id
@ -182,7 +182,7 @@ class KeepalivedInstance(object):
self.nopreempt = nopreempt
self.advert_int = advert_int
self.mcast_src_ip = mcast_src_ip
self.garp_master_delay = garp_master_delay
self.garp_primary_delay = garp_primary_delay
self.track_interfaces = []
self.vips = []
self.virtual_routes = KeepalivedInstanceRoutes()
@ -294,7 +294,7 @@ class KeepalivedInstance(object):
' interface %s' % self.interface,
' virtual_router_id %s' % self.vrouter_id,
' priority %s' % self.priority,
' garp_master_delay %s' % self.garp_master_delay])
' garp_master_delay %s' % self.garp_primary_delay])
if self.nopreempt:
config.append(' nopreempt')
@ -380,7 +380,7 @@ class KeepalivedManager(object):
self.process_monitor = process_monitor
self.conf_path = conf_path
# configure throttler for spawn to introduce delay between SIGHUPs,
# otherwise keepalived master may unnecessarily flip to slave
# otherwise keepalived primary may unnecessarily flip to backup
if throttle_restart_value is not None:
self._throttle_spawn(throttle_restart_value)

View File

@ -56,8 +56,8 @@ class PrefixDelegation(object):
events.AFTER_DELETE)
self._get_sync_data()
def _is_pd_master_router(self, router):
return router['master']
def _is_pd_primary_router(self, router):
return router['primary']
@runtime.synchronized("l3-agent-pd")
def enable_subnet(self, router_id, subnet_id, prefix, ri_ifname, mac):
@ -74,11 +74,11 @@ class PrefixDelegation(object):
if pd_info.sync:
pd_info.mac = mac
pd_info.old_prefix = prefix
elif self._is_pd_master_router(router):
elif self._is_pd_primary_router(router):
self._add_lla(router, pd_info.get_bind_lla_with_mask())
def _delete_pd(self, router, pd_info):
if not self._is_pd_master_router(router):
if not self._is_pd_primary_router(router):
return
self._delete_lla(router, pd_info.get_bind_lla_with_mask())
if pd_info.client_started:
@ -94,7 +94,7 @@ class PrefixDelegation(object):
if not pd_info:
return
self._delete_pd(router, pd_info)
if self._is_pd_master_router(router):
if self._is_pd_primary_router(router):
prefix_update[subnet_id] = n_const.PROVISIONAL_IPV6_PD_PREFIX
LOG.debug("Update server with prefixes: %s", prefix_update)
self.notifier(self.context, prefix_update)
@ -117,7 +117,7 @@ class PrefixDelegation(object):
if not router:
return
router['gw_interface'] = gw_ifname
if not self._is_pd_master_router(router):
if not self._is_pd_primary_router(router):
return
prefix_update = {}
for pd_info in router['subnets'].values():
@ -141,7 +141,7 @@ class PrefixDelegation(object):
self.notifier(self.context, prefix_update)
def delete_router_pd(self, router):
if not self._is_pd_master_router(router):
if not self._is_pd_primary_router(router):
return
prefix_update = {}
for subnet_id, pd_info in router['subnets'].items():
@ -260,13 +260,13 @@ class PrefixDelegation(object):
return False
@runtime.synchronized("l3-agent-pd")
def process_ha_state(self, router_id, master):
def process_ha_state(self, router_id, primary):
router = self.routers.get(router_id)
if router is None or router['master'] == master:
if router is None or router['primary'] == primary:
return
router['master'] = master
if master:
router['primary'] = primary
if primary:
for pd_info in router['subnets'].values():
bind_lla_with_mask = pd_info.get_bind_lla_with_mask()
self._add_lla(router, bind_lla_with_mask)
@ -285,7 +285,7 @@ class PrefixDelegation(object):
prefix_update = {}
for router_id, router in self.routers.items():
if not (self._is_pd_master_router(router) and
if not (self._is_pd_primary_router(router) and
router['gw_interface']):
continue
@ -338,7 +338,7 @@ class PrefixDelegation(object):
for pd_info in sync_data:
router_id = pd_info.router_id
if not self.routers.get(router_id):
self.routers[router_id] = {'master': True,
self.routers[router_id] = {'primary': True,
'gw_interface': None,
'ns_name': None,
'subnets': {}}
@ -356,8 +356,8 @@ def remove_router(resource, event, l3_agent, **kwargs):
del l3_agent.pd.routers[router_id]
def get_router_entry(ns_name, master):
return {'master': master,
def get_router_entry(ns_name, primary):
return {'primary': primary,
'gw_interface': None,
'ns_name': ns_name,
'subnets': {}}
@ -368,14 +368,14 @@ def add_router(resource, event, l3_agent, **kwargs):
added_router = kwargs['router']
router = l3_agent.pd.routers.get(added_router.router_id)
gw_ns_name = added_router.get_gw_ns_name()
master = added_router.is_router_master()
primary = added_router.is_router_primary()
if not router:
l3_agent.pd.routers[added_router.router_id] = (
get_router_entry(gw_ns_name, master))
get_router_entry(gw_ns_name, primary))
else:
# This will happen during l3 agent restart
router['ns_name'] = gw_ns_name
router['master'] = master
router['primary'] = primary
@runtime.synchronized("l3-agent-pd")

View File

@ -174,7 +174,7 @@ DHCPV6_STATELESS_OPT = 'dhcpv6_stateless'
# When setting global DHCP options, these options will be ignored
# as they are required for basic network functions and will be
# set by Neutron.
GLOBAL_DHCP_OPTS_BLACKLIST = {
GLOBAL_DHCP_OPTS_PROHIBIT_LIST = {
4: ['server_id', 'lease_time', 'mtu', 'router', 'server_mac',
'dns_server', 'classless_static_route'],
6: ['dhcpv6_stateless', 'dns_server', 'server_id']}
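A minimal sketch of how a prohibit list like this is applied when building
global DHCP options; illustrative only, the real consumer appears in
ovn_client.py further down.

.. code-block:: python

   def filter_global_dhcp_opts(global_options, ip_version):
       # Drop options that Neutron must set itself; keep the rest of
       # the operator-provided global options.
       return {opt: value for opt, value in global_options.items()
               if opt not in GLOBAL_DHCP_OPTS_PROHIBIT_LIST[ip_version]}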

View File

@ -49,11 +49,11 @@ OPTS = [
'VRRP health checks. Recommended value is 5. '
'This will cause pings to be sent to the gateway '
'IP address(es) - requires ICMP_ECHO_REQUEST '
'to be enabled on the gateway. '
'If gateway fails, all routers will be reported '
'as master, and master election will be repeated '
'in round-robin fashion, until one of the router '
'restore the gateway connection.')),
'to be enabled on the gateway(s). '
'If a gateway fails, all routers will be reported '
'as primary, and a primary election will be repeated '
'in a round-robin fashion, until one of the routers '
'restores the gateway connection.')),
]

View File

@ -18,8 +18,9 @@ from neutron._i18n import _
sriov_driver_opts = [
cfg.ListOpt('vnic_type_blacklist',
cfg.ListOpt('vnic_type_prohibit_list',
default=[],
deprecated_name='vnic_type_blacklist',
help=_("Comma-separated list of VNIC types for which support "
"is administratively prohibited by the mechanism "
"driver. Please note that the supported vnic_types "

View File

@ -18,8 +18,9 @@ from neutron._i18n import _
ovs_driver_opts = [
cfg.ListOpt('vnic_type_blacklist',
cfg.ListOpt('vnic_type_prohibit_list',
default=[],
deprecated_name='vnic_type_blacklist',
help=_("Comma-separated list of VNIC types for which support "
"is administratively prohibited by the mechanism "
"driver. Please note that the supported vnic_types "

View File

@ -750,10 +750,10 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin,
if ha_binding_state != constants.HA_ROUTER_STATE_ACTIVE:
continue
# For create router gateway, the gateway port may not be ACTIVE
# yet, so we return 'master' host directly.
# yet, so we return 'primary' host directly.
if gateway_port_status != constants.PORT_STATUS_ACTIVE:
return ha_binding_agent.host
# Do not let the original 'master' (current is backup) host,
# Do not let the original 'primary' (current is backup) host,
# override the gateway port binding host.
if (gateway_port_status == constants.PORT_STATUS_ACTIVE and
ha_binding_agent.host == gateway_port_binding_host):

View File

@ -458,7 +458,7 @@ class LinuxBridgeManager(amb.CommonAgentManagerBase):
# Check if the interface is part of the bridge
if not bridge_device.owns_interface(interface):
try:
# Check if the interface is not enslaved in another bridge
# Check if the interface is attached to another bridge
bridge = bridge_lib.BridgeDevice.get_interface_bridge(
interface)
if bridge:

View File

@ -138,19 +138,20 @@ class AgentMechanismDriverBase(api.MechanismDriver, metaclass=abc.ABCMeta):
return True. Otherwise, it must return False.
"""
def blacklist_supported_vnic_types(self, vnic_types, blacklist):
"""Validate the blacklist and blacklist the supported_vnic_types
def prohibit_list_supported_vnic_types(self, vnic_types, prohibit_list):
"""Validate the prohibit_list and prohibit the supported_vnic_types
:param vnic_types: The supported_vnic_types list
:param blacklist: The blacklist as in vnic_type_blacklist
:return The blacklisted vnic_types
:param prohibit_list: The prohibit_list as in vnic_type_prohibit_list
:return The prohibited vnic_types
"""
if not blacklist:
if not prohibit_list:
return vnic_types
# Not valid values in the blacklist:
if not all(bl in vnic_types for bl in blacklist):
raise ValueError(_("Not all of the items from vnic_type_blacklist "
# Not valid values in the prohibit_list:
if not all(bl in vnic_types for bl in prohibit_list):
raise ValueError(_("Not all of the items from "
"vnic_type_prohibit_list "
"are valid vnic_types for %(agent)s mechanism "
"driver. The valid values are: "
"%(valid_vnics)s.") %
@ -158,11 +159,11 @@ class AgentMechanismDriverBase(api.MechanismDriver, metaclass=abc.ABCMeta):
'valid_vnics': vnic_types})
supported_vnic_types = [vnic_t for vnic_t in vnic_types if
vnic_t not in blacklist]
vnic_t not in prohibit_list]
# Nothing left in the supported vnict types list:
if len(supported_vnic_types) < 1:
raise ValueError(_("All possible vnic_types were blacklisted for "
raise ValueError(_("All possible vnic_types were prohibited for "
"%s mechanism driver!") % self.agent_type)
return supported_vnic_types

View File

@ -69,12 +69,12 @@ class SriovNicSwitchMechanismDriver(mech_agent.SimpleAgentMechanismDriverBase):
"""
self.agent_type = agent_type
# TODO(lajoskatona): move this blacklisting to
# SimpleAgentMechanismDriverBase. By that e blacklisting and validation
# TODO(lajoskatona): move this prohibition to
# SimpleAgentMechanismDriverBase. By that, prohibition and validation
# of the vnic_types would be available for all mechanism drivers.
self.supported_vnic_types = self.blacklist_supported_vnic_types(
self.supported_vnic_types = self.prohibit_list_supported_vnic_types(
vnic_types=supported_vnic_types,
blacklist=cfg.CONF.SRIOV_DRIVER.vnic_type_blacklist
prohibit_list=cfg.CONF.SRIOV_DRIVER.vnic_type_prohibit_list
)
# NOTE(ndipanov): PF passthrough requires a different vif type

View File

@ -69,14 +69,14 @@ class OpenvswitchMechanismDriver(mech_agent.SimpleAgentMechanismDriverBase):
portbindings.VIF_TYPE_OVS,
vif_details)
# TODO(lajoskatona): move this blacklisting to
# SimpleAgentMechanismDriverBase. By that e blacklisting and validation
# TODO(lajoskatona): move this prohibition to
# SimpleAgentMechanismDriverBase. By that, prohibition and validation
# of the vnic_types would be available for all mechanism drivers.
self.supported_vnic_types = self.blacklist_supported_vnic_types(
self.supported_vnic_types = self.prohibit_list_supported_vnic_types(
vnic_types=[portbindings.VNIC_NORMAL,
portbindings.VNIC_DIRECT,
portbindings.VNIC_SMARTNIC],
blacklist=cfg.CONF.OVS_DRIVER.vnic_type_blacklist
prohibit_list=cfg.CONF.OVS_DRIVER.vnic_type_prohibit_list
)
LOG.info("%s's supported_vnic_types: %s",
self.agent_type, self.supported_vnic_types)

View File

@ -1754,7 +1754,7 @@ class OVNClient(object):
global_options = ovn_conf.get_global_dhcpv6_opts()
for option, value in global_options.items():
if option in ovn_const.GLOBAL_DHCP_OPTS_BLACKLIST[ip_version]:
if option in ovn_const.GLOBAL_DHCP_OPTS_PROHIBIT_LIST[ip_version]:
# This option is not allowed to be set with a global setting
LOG.debug('DHCP option %s is not permitted to be set in '
'global options. This option will be ignored.',

View File

@ -220,7 +220,7 @@ class PortBindingChassisEvent(row_event.RowEvent):
When a chassisredirect port is updated with a chassis, this event gets
generated. We will update the corresponding router's gateway port with
the chassis's host_id. Later, users can check the router's gateway port
host_id to find the location of master HA router.
host_id to find the location of primary HA router.
"""
def __init__(self, driver):

View File

@ -379,7 +379,7 @@ class OVNL3RouterPlugin(service_base.ServicePluginBase,
# Remove any invalid gateway chassis from the list, otherwise
# we can have a situation where all existing_chassis are invalid
existing_chassis = self._ovn.get_gateway_chassis_binding(g_name)
master = existing_chassis[0] if existing_chassis else None
primary = existing_chassis[0] if existing_chassis else None
existing_chassis = self.scheduler.filter_existing_chassis(
nb_idl=self._ovn, gw_chassis=all_gw_chassis,
physnet=physnet, chassis_physnets=chassis_with_physnets,
@ -392,21 +392,21 @@ class OVNL3RouterPlugin(service_base.ServicePluginBase,
chassis = self.scheduler.select(
self._ovn, self._sb_ovn, g_name, candidates=candidates,
existing_chassis=existing_chassis)
if master and master != chassis[0]:
if master not in chassis:
LOG.debug("Master gateway chassis %(old)s "
if primary and primary != chassis[0]:
if primary not in chassis:
LOG.debug("Primary gateway chassis %(old)s "
"has been removed from the system. Moving "
"gateway %(gw)s to other chassis %(new)s.",
{'gw': g_name,
'old': master,
'old': primary,
'new': chassis[0]})
else:
LOG.debug("Gateway %s is hosted at %s.", g_name, master)
# NOTE(mjozefcz): It means scheduler moved master chassis
LOG.debug("Gateway %s is hosted at %s.", g_name, primary)
# NOTE(mjozefcz): It means scheduler moved primary chassis
# to other gw based on scheduling method. But we don't
# want network flap - so moving actual master to be on
# want network flap - so moving actual primary to be on
# the top.
index = chassis.index(master)
index = chassis.index(primary)
chassis[0], chassis[index] = chassis[index], chassis[0]
# NOTE(dalvarez): Let's commit the changes in separate transactions
# as we will rely on those for scheduling subsequent gateways.

View File

@ -453,15 +453,15 @@ class TestHAL3Agent(TestL3Agent):
with open(keepalived_state_file, "r") as fd:
return fd.read()
def _get_state_file_for_master_agent(self, router_id):
def _get_state_file_for_primary_agent(self, router_id):
for host in self.environment.hosts:
keepalived_state_file = os.path.join(
host.neutron_config.state_path, "ha_confs", router_id, "state")
if self._get_keepalived_state(keepalived_state_file) == "master":
if self._get_keepalived_state(keepalived_state_file) == "primary":
return keepalived_state_file
def test_keepalived_multiple_sighups_does_not_forfeit_mastership(self):
def test_keepalived_multiple_sighups_does_not_forfeit_primary(self):
"""Setup a complete "Neutron stack" - both an internal and an external
network+subnet, and a router connected to both.
"""
@ -479,7 +479,7 @@ class TestHAL3Agent(TestL3Agent):
self._is_ha_router_active_on_one_agent,
router['id']),
timeout=90)
keepalived_state_file = self._get_state_file_for_master_agent(
keepalived_state_file = self._get_state_file_for_primary_agent(
router['id'])
self.assertIsNotNone(keepalived_state_file)
network = self.safe_client.create_network(tenant_id)
@ -498,8 +498,9 @@ class TestHAL3Agent(TestL3Agent):
tenant_id, ext_net['id'], vm.ip, vm.neutron_port['id'])
# Check that the keepalived's state file has not changed and is still
# master. This will indicate that the Throttler works. We want to check
# for ha_vrrp_advert_int (the default is 2 seconds), plus a bit more.
# primary. This will indicate that the Throttler works. We want to
# check for ha_vrrp_advert_int (the default is 2 seconds), plus a bit
# more.
time_to_stop = (time.time() +
(common_utils.DEFAULT_THROTTLER_VALUE *
ha_router.THROTTLER_MULTIPLIER * 1.3))
@ -507,7 +508,7 @@ class TestHAL3Agent(TestL3Agent):
if time.time() > time_to_stop:
break
self.assertEqual(
"master",
"primary",
self._get_keepalived_state(keepalived_state_file))
@tests_base.unstable_test("bug 1798475")

View File

@ -202,7 +202,7 @@ class L3AgentTestFramework(base.BaseSudoTestCase):
n, len([line for line in out.strip().split('\n') if line]))
if ha:
common_utils.wait_until_true(lambda: router.ha_state == 'master')
common_utils.wait_until_true(lambda: router.ha_state == 'primary')
with self.assert_max_execution_time(100):
assert_num_of_conntrack_rules(0)
@ -322,7 +322,7 @@ class L3AgentTestFramework(base.BaseSudoTestCase):
router.process()
if enable_ha:
common_utils.wait_until_true(lambda: router.ha_state == 'master')
common_utils.wait_until_true(lambda: router.ha_state == 'primary')
# Keepalived notifies of a state transition when it starts,
# not when it ends. Thus, we have to wait until keepalived finishes
@ -629,34 +629,34 @@ class L3AgentTestFramework(base.BaseSudoTestCase):
return (router1, router2)
def _get_master_and_slave_routers(self, router1, router2,
check_external_device=True):
def _get_primary_and_backup_routers(self, router1, router2,
check_external_device=True):
try:
common_utils.wait_until_true(
lambda: router1.ha_state == 'master')
lambda: router1.ha_state == 'primary')
if check_external_device:
common_utils.wait_until_true(
lambda: self._check_external_device(router1))
master_router = router1
slave_router = router2
primary_router = router1
backup_router = router2
except common_utils.WaitTimeout:
common_utils.wait_until_true(
lambda: router2.ha_state == 'master')
lambda: router2.ha_state == 'primary')
if check_external_device:
common_utils.wait_until_true(
lambda: self._check_external_device(router2))
master_router = router2
slave_router = router1
primary_router = router2
backup_router = router1
common_utils.wait_until_true(
lambda: master_router.ha_state == 'master')
lambda: primary_router.ha_state == 'primary')
if check_external_device:
common_utils.wait_until_true(
lambda: self._check_external_device(master_router))
lambda: self._check_external_device(primary_router))
common_utils.wait_until_true(
lambda: slave_router.ha_state == 'backup')
return master_router, slave_router
lambda: backup_router.ha_state == 'backup')
return primary_router, backup_router
def fail_ha_router(self, router):
device_name = router.get_ha_device_name()

View File

@ -585,7 +585,7 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
interface_name = router.get_external_device_name(port['id'])
self._assert_no_ip_addresses_on_interface(router.ha_namespace,
interface_name)
utils.wait_until_true(lambda: router.ha_state == 'master')
utils.wait_until_true(lambda: router.ha_state == 'primary')
# Keepalived notifies of a state transition when it starts,
# not when it ends. Thus, we have to wait until keepalived finishes
@ -1348,7 +1348,7 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
snat_ports = router.get_snat_interfaces()
if not snat_ports:
return
if router.is_router_master():
if router.is_router_primary():
centralized_floatingips = (
router.router[lib_constants.FLOATINGIP_KEY])
for fip in centralized_floatingips:
@ -1484,31 +1484,31 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
ha_port_ip="169.254.192.107",
ha_port_mac="12:34:56:78:3a:bb")
master, backup = self._get_master_and_slave_routers(
primary, backup = self._get_primary_and_backup_routers(
router1, router2, check_external_device=False)
self._assert_ip_addresses_in_dvr_ha_snat_namespace(master)
self._assert_ip_addresses_in_dvr_ha_snat_namespace(primary)
self._assert_no_ip_addresses_in_dvr_ha_snat_namespace(backup)
master_ha_device = master.get_ha_device_name()
primary_ha_device = primary.get_ha_device_name()
backup_ha_device = backup.get_ha_device_name()
self.assertTrue(
ip_lib.device_exists(master_ha_device, master.ha_namespace))
ip_lib.device_exists(primary_ha_device, primary.ha_namespace))
self.assertTrue(
ip_lib.device_exists(backup_ha_device, backup.ha_namespace))
new_master_router = copy.deepcopy(master.router)
new_master_router['_ha_interface'] = None
self.agent._process_updated_router(new_master_router)
router_updated = self.agent.router_info[master.router_id]
new_primary_router = copy.deepcopy(primary.router)
new_primary_router['_ha_interface'] = None
self.agent._process_updated_router(new_primary_router)
router_updated = self.agent.router_info[primary.router_id]
self.assertTrue(self._namespace_exists(router_updated.ns_name))
self._assert_snat_namespace_exists(router_updated)
snat_namespace_name = dvr_snat_ns.SnatNamespace.get_snat_ns_name(
router_updated.router_id)
self.assertFalse(
ip_lib.device_exists(master_ha_device, snat_namespace_name))
ip_lib.device_exists(primary_ha_device, snat_namespace_name))
utils.wait_until_true(lambda: backup.ha_state == 'master')
utils.wait_until_true(lambda: backup.ha_state == 'primary')
self._assert_ip_addresses_in_dvr_ha_snat_namespace(backup)
self.assertTrue(
ip_lib.device_exists(backup_ha_device, backup.ha_namespace))