Improve terminology in the Neutron tree

There is no real reason we should be using some of the terms we do, they're
outdated, and we're behind other open-source projects in this respect. Let's
switch to using more inclusive terms in all possible places.

Change-Id: I99913107e803384b34cbd5ca588451b1cf64d594

parent 114ac0ae89
commit 055036ba2b
@@ -16,15 +16,15 @@ SNAT high availability is implemented in a manner similar to the
 :ref:`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where
 ``keepalived`` uses VRRP to provide quick failover of SNAT services.
 
-During normal operation, the master router periodically transmits *heartbeat*
+During normal operation, the primary router periodically transmits *heartbeat*
 packets over a hidden project network that connects all HA routers for a
 particular project.
 
 If the DVR/SNAT backup router stops receiving these packets, it assumes failure
-of the master DVR/SNAT router and promotes itself to master router by
+of the primary DVR/SNAT router and promotes itself to primary router by
 configuring IP addresses on the interfaces in the ``snat`` namespace. In
 environments with more than one backup router, the rules of VRRP are followed
-to select a new master router.
+to select a new primary router.
 
 .. warning::
@@ -263,15 +263,15 @@ For more details, see the
 Supported VNIC types
 ^^^^^^^^^^^^^^^^^^^^
 
-The ``vnic_type_blacklist`` option is used to remove values from the mechanism driver's
-``supported_vnic_types`` list.
+The ``vnic_type_prohibit_list`` option is used to remove values from the
+mechanism driver's ``supported_vnic_types`` list.
 
 .. list-table:: Mechanism drivers and supported VNIC types
    :header-rows: 1
 
    * - mech driver / supported_vnic_types
      - supported VNIC types
-     - blacklisting available
+     - prohibiting available
    * - Linux bridge
      - normal
      - no
@@ -280,10 +280,10 @@ The ``vnic_type_blacklist`` option is used to remove values from the mechanism d
      - no
    * - Open vSwitch
      - normal, direct
-     - yes (ovs_driver vnic_type_blacklist, see: `Configuration Reference <../configuration/ml2-conf.html#ovs_driver>`__)
+     - yes (ovs_driver vnic_type_prohibit_list, see: `Configuration Reference <../configuration/ml2-conf.html#ovs_driver>`__)
    * - SRIOV
      - direct, macvtap, direct_physical
-     - yes (sriov_driver vnic_type_blacklist, see: `Configuration Reference <../configuration/ml2-conf.html#sriov_driver>`__)
+     - yes (sriov_driver vnic_type_prohibit_list, see: `Configuration Reference <../configuration/ml2-conf.html#sriov_driver>`__)
 
 
 Extension Drivers
@@ -166,8 +166,8 @@ If a vnic_type is supported by default by multiple ML2 mechanism
 drivers (e.g. ``vnic_type=direct`` by both ``openvswitch`` and
 ``sriovnicswitch``) and multiple agents' resources are also meant to be
 tracked by Placement, then the admin must decide which driver to take
-ports of that vnic_type by blacklisting the vnic_type for the unwanted
-drivers. Use :oslo.config:option:`ovs_driver.vnic_type_blacklist` in this
+ports of that vnic_type by prohibiting the vnic_type for the unwanted
+drivers. Use :oslo.config:option:`ovs_driver.vnic_type_prohibit_list` in this
 case. Valid values are all the ``supported_vnic_types`` of the
 `respective mechanism drivers
 <https://docs.openstack.org/neutron/latest/admin/config-ml2.html#supported-vnic-types>`_.
@@ -177,10 +177,10 @@ case. Valid values are all the ``supported_vnic_types`` of the
 .. code-block:: ini
 
    [ovs_driver]
-   vnic_type_blacklist = direct
+   vnic_type_prohibit_list = direct
 
   [sriov_driver]
-   #vnic_type_blacklist = direct
+   #vnic_type_prohibit_list = direct
 
 neutron-openvswitch-agent config
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
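The effect of the ``vnic_type_prohibit_list`` options shown above can be sketched in a few lines of Python. This is an illustrative helper only (the function name is hypothetical, not Neutron's actual ML2 code): prohibited values are simply removed from a mechanism driver's ``supported_vnic_types`` list.

```python
def effective_vnic_types(supported_vnic_types, vnic_type_prohibit_list):
    """Return the VNIC types a mechanism driver will still handle.

    Mirrors the documented behavior of vnic_type_prohibit_list:
    prohibited values are removed from supported_vnic_types.
    """
    return [v for v in supported_vnic_types
            if v not in vnic_type_prohibit_list]


# With vnic_type_prohibit_list = direct, the openvswitch driver keeps
# only 'normal' from its default ['normal', 'direct'] list.
print(effective_vnic_types(['normal', 'direct'], ['direct']))  # ['normal']
```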
@@ -87,7 +87,7 @@ Using SR-IOV interfaces
 In order to enable SR-IOV, the following steps are required:
 
 #. Create Virtual Functions (Compute)
-#. Whitelist PCI devices in nova-compute (Compute)
+#. Configure allow list for PCI devices in nova-compute (Compute)
 #. Configure neutron-server (Controller)
 #. Configure nova-scheduler (Controller)
 #. Enable neutron sriov-agent (Compute)
@@ -223,8 +223,8 @@ network and has access to the private networks of all machines.
    the ``sysfsutils`` tool. However, this is not available by default on
    many major distributions.
 
-Whitelist PCI devices nova-compute (Compute)
---------------------------------------------
+Configuring allow list for PCI devices nova-compute (Compute)
+-------------------------------------------------------------
 
 #. Configure which PCI devices the ``nova-compute`` service may use. Edit
    the ``nova.conf`` file:
@@ -239,7 +239,7 @@ Whitelist PCI devices nova-compute (Compute)
    ``physnet2``.
 
    Alternatively the ``[pci] passthrough_whitelist`` parameter also supports
-   whitelisting by:
+   allowing devices by:
 
    - PCI address: The address uses the same syntax as in ``lspci`` and an
      asterisk (``*``) can be used to match anything.
@@ -604,8 +604,8 @@ you must:
    machines with no switch and the cards are plugged in back-to-back. A
    subnet manager is required for the link on the cards to come up.
    It is possible to have more than one subnet manager. In this case, one
-   of them will act as the master, and any other will act as a slave that
-   will take over when the master subnet manager fails.
+   of them will act as the primary, and any other will act as a backup that
+   will take over when the primary subnet manager fails.
 
 #. Install the ``ebrctl`` utility on the compute nodes.
@@ -27,7 +27,7 @@ Architecture
    :alt: High-availability using VRRP with Linux bridge - overview
 
 The following figure shows components and connectivity for one self-service
-network and one untagged (flat) network. The master router resides on network
+network and one untagged (flat) network. The primary router resides on network
 node 1. In this particular case, the instance resides on the same compute
 node as the DHCP agent for the network. If the DHCP agent resides on another
 compute node, the latter only contains a DHCP namespace and Linux bridge
@@ -178,6 +178,6 @@ Network traffic flow
 ~~~~~~~~~~~~~~~~~~~~
 
 This high-availability mechanism simply augments :ref:`deploy-ovs-selfservice`
-with failover of layer-3 services to another router if the master router
+with failover of layer-3 services to another router if the primary router
 fails. Thus, you can reference :ref:`Self-service network traffic flow
 <deploy-ovs-selfservice-networktrafficflow>` for normal operation.
@@ -43,7 +43,7 @@ addition or deletion of the chassis, following approach can be considered:
 
 * Find a list of chassis where router is scheduled and reschedule it
   up to *MAX_GW_CHASSIS* gateways using list of available candidates.
-  Do not modify the master chassis association to not interrupt network flows.
+  Do not modify the primary chassis association to not interrupt network flows.
 
 Rescheduling is an event triggered operation which will occur whenever a
 chassis is added or removed. When it happened, ``schedule_unhosted_gateways()``
@@ -58,7 +58,7 @@ southbound database table, would be the ones eligible for hosting the routers.
 Rescheduling of router depends on current priorities set. Each chassis is given
 a specific priority for the router's gateway and priority increases with
 increasing value (i.e. 1 < 2 < 3 ...). The highest prioritized chassis hosts
-gateway port. Other chassis are selected as slaves.
+gateway port. Other chassis are selected as backups.
 
 There are two approaches for rescheduling supported by ovn driver right
 now:
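The priority rule described in this hunk (highest numeric priority hosts the gateway port, the rest act as backups) can be sketched as a tiny helper. The function name and the chassis-to-priority mapping are illustrative, not part of the ovn driver:

```python
def primary_chassis(priorities):
    """Pick the chassis that hosts the router's gateway port.

    Priority increases with value (1 < 2 < 3 ...), so the chassis
    with the highest priority wins; all others act as backups.
    `priorities` maps chassis name -> priority.
    """
    return max(priorities, key=priorities.get)


print(primary_chassis({'C1': 1, 'C2': 3, 'C3': 2}))  # C2
```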
@@ -72,7 +72,7 @@ Few points to consider for the design:
   C1 to C3 and C2 to C3. Rescheduling from C1 to C2 and vice-versa should not
   be allowed.
 
-* In order to reschedule the router's chassis, the ``master`` chassis for a
+* In order to reschedule the router's chassis, the ``primary`` chassis for a
   gateway router will be left untouched. However, for the scenario where all
   routers are scheduled in only one chassis which is available as gateway,
   the addition of the second gateway chassis would schedule the router
@@ -89,11 +89,11 @@ Following scenarios are possible which have been considered in the design:
   - System has 2 chassis C1 and C2 during installation. C1 goes down.
   - Behavior: In this case, all routers would be rescheduled to C2.
     Once C1 is back up, routers would be rescheduled on it. However,
-    since C2 is now the new master, routers on C1 would have lower priority.
+    since C2 is now the new primary, routers on C1 would have lower priority.
 * Case #3:
   - System has 2 chassis C1 and C2 during installation. C3 is added to it.
-  - Behavior: In this case, routers would not move their master chassis
-    associations. So routers which have their master on C1, would remain
+  - Behavior: In this case, routers would not move their primary chassis
+    associations. So routers which have their primary on C1, would remain
     there, and same for routers on C2. However, lower prioritized candidates
     of existing gateways would be scheduled on the chassis C3, depending
     on the type of used scheduler (Random or LeastLoaded).
@@ -102,23 +102,23 @@ Following scenarios are possible which have been considered in the design:
 Rebalancing of Gateway Chassis
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Rebalancing is the second part of the design and it assigns a new master to
+Rebalancing is the second part of the design and it assigns a new primary to
 already scheduled router gateway ports. Downtime is expected in this
 operation. Rebalancing of routers can be achieved using external cli script.
 Similar approach has been implemented for DHCP rescheduling `[4]`_.
-The master chassis gateway could be moved only to other, previously scheduled
-gateway. Rebalancing of chassis occurs only if number of scheduled master
+The primary chassis gateway could be moved only to other, previously scheduled
+gateway. Rebalancing of chassis occurs only if number of scheduled primary
 chassis ports per each provider network hosted by given chassis is higher than
-average number of hosted master gateway ports per chassis per provider network.
+average number of hosted primary gateway ports per chassis per provider network.
 
 This dependency is determined by formula:
 
 avg_gw_per_chassis = num_gw_by_provider_net / num_chassis_with_provider_net
 
 Where:
-- avg_gw_per_chassis - average number of scheduler master gateway chassis
+- avg_gw_per_chassis - average number of scheduler primary gateway chassis
   within same provider network.
-- num_gw_by_provider_net - number of master chassis gateways scheduled in
+- num_gw_by_provider_net - number of primary chassis gateways scheduled in
   given provider networks.
 - num_chassis_with_provider_net - number of chassis that has connectivity
   to given provider network.
@@ -128,9 +128,9 @@ The rebalancing occurs only if:
 num_gw_by_provider_net_by_chassis > avg_gw_per_chassis
 
 Where:
-- num_gw_by_provider_net_by_chassis - number of hosted master gateways
+- num_gw_by_provider_net_by_chassis - number of hosted primary gateways
   by given provider network by given chassis
-- avg_gw_per_chassis - average number of scheduler master gateway chassis
+- avg_gw_per_chassis - average number of scheduler primary gateway chassis
   within same provider network.
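The rebalancing condition documented in these two hunks can be sketched in Python. This is an illustrative helper built only from the formulas above (the function name is hypothetical, not code from the commit):

```python
def should_rebalance(num_gw_by_provider_net_by_chassis,
                     num_gw_by_provider_net,
                     num_chassis_with_provider_net):
    """Apply the documented rebalancing condition.

    A chassis is a rebalancing candidate when it hosts more primary
    gateways for a provider network than the per-chassis average:
    num_gw_by_provider_net_by_chassis > avg_gw_per_chassis.
    """
    avg_gw_per_chassis = (num_gw_by_provider_net /
                          num_chassis_with_provider_net)
    return num_gw_by_provider_net_by_chassis > avg_gw_per_chassis


# Six primary gateways over three chassis gives an average of two;
# a chassis hosting three of them exceeds the average.
print(should_rebalance(3, 6, 3))  # True
print(should_rebalance(2, 6, 3))  # False
```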
@@ -88,8 +88,8 @@ class ExclusiveResourceProcessor(object):
 
     Other instances may be created for the same ID while the first
     instance has exclusive access. If that happens then it doesn't block and
-    wait for access. Instead, it signals to the master instance that an update
-    came in with the timestamp.
+    wait for access. Instead, it signals to the primary instance that an
+    update came in with the timestamp.
 
     This way, a thread will not block to wait for access to a resource.
     Instead it effectively signals to the thread that is working on the
|
||||
as possible. The timestamp should not be recorded, however, until the
|
||||
resource has been processed using the fetch data.
|
||||
"""
|
||||
_masters = {}
|
||||
_primaries = {}
|
||||
_resource_timestamps = {}
|
||||
|
||||
def __init__(self, id):
|
||||
self._id = id
|
||||
|
||||
if id not in self._masters:
|
||||
self._masters[id] = self
|
||||
if id not in self._primaries:
|
||||
self._primaries[id] = self
|
||||
self._queue = queue.PriorityQueue(-1)
|
||||
|
||||
self._master = self._masters[id]
|
||||
self._primary = self._primaries[id]
|
||||
|
||||
def _i_am_master(self):
|
||||
return self == self._master
|
||||
def _i_am_primary(self):
|
||||
return self == self._primary
|
||||
|
||||
def __enter__(self):
|
||||
return self
|
||||
|
||||
def __exit__(self, type, value, traceback):
|
||||
if self._i_am_master():
|
||||
del self._masters[self._id]
|
||||
if self._i_am_primary():
|
||||
del self._primaries[self._id]
|
||||
|
||||
def _get_resource_data_timestamp(self):
|
||||
return self._resource_timestamps.get(self._id,
|
||||
@@ -140,16 +140,16 @@ class ExclusiveResourceProcessor(object):
         resource is being processed. These updates have already bubbled to
         the front of the ResourceProcessingQueue.
         """
-        self._master._queue.put(update)
+        self._primary._queue.put(update)
 
     def updates(self):
         """Processes the resource until updates stop coming
 
-        Only the master instance will process the resource. However, updates
+        Only the primary instance will process the resource. However, updates
         may come in from other workers while it is in progress. This method
         loops until they stop coming.
         """
-        while self._i_am_master():
+        while self._i_am_primary():
             if self._queue.empty():
                 return
             # Get the update from the queue even if it is old.
@@ -177,10 +177,10 @@ class ResourceProcessingQueue(object):
         next_update = self._queue.get()
 
         with ExclusiveResourceProcessor(next_update.id) as rp:
-            # Queue the update whether this worker is the master or not.
+            # Queue the update whether this worker is the primary or not.
             rp.queue_update(next_update)
 
-            # Here, if the current worker is not the master, the call to
+            # Here, if the current worker is not the primary, the call to
             # rp.updates() will not yield and so this will essentially be a
             # noop.
             for update in rp.updates():
@@ -63,7 +63,7 @@ class DvrEdgeHaRouter(dvr_edge_router.DvrEdgeRouter,
             self._add_vip(fip_cidr, interface_name)
 
         self.set_ha_port()
-        if (self.is_router_master() and self.ha_port and
+        if (self.is_router_primary() and self.ha_port and
                 self.ha_port['status'] == constants.PORT_STATUS_ACTIVE):
             return super(DvrEdgeHaRouter, self).add_centralized_floatingip(
                 fip, fip_cidr)
@@ -72,7 +72,7 @@ class DvrEdgeHaRouter(dvr_edge_router.DvrEdgeRouter,
 
     def remove_centralized_floatingip(self, fip_cidr):
         self._remove_vip(fip_cidr)
-        if self.is_router_master():
+        if self.is_router_primary():
             super(DvrEdgeHaRouter, self).remove_centralized_floatingip(
                 fip_cidr)
@@ -29,7 +29,7 @@ LOG = logging.getLogger(__name__)
 
 KEEPALIVED_STATE_CHANGE_SERVER_BACKLOG = 4096
 
-TRANSLATION_MAP = {'master': constants.HA_ROUTER_STATE_ACTIVE,
+TRANSLATION_MAP = {'primary': constants.HA_ROUTER_STATE_ACTIVE,
                    'backup': constants.HA_ROUTER_STATE_STANDBY,
                    'fault': constants.HA_ROUTER_STATE_STANDBY,
                    'unknown': constants.HA_ROUTER_STATE_UNKNOWN}
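The ``TRANSLATION_MAP`` change above folds keepalived's reported VRRP states onto Neutron's HA router states. A minimal, self-contained sketch of that lookup follows; the string values are stand-ins for the ``neutron_lib`` constants, assumed here for illustration:

```python
# Stand-in values for the neutron_lib HA_ROUTER_STATE_* constants,
# inlined here so the example is self-contained.
HA_ROUTER_STATE_ACTIVE = 'active'
HA_ROUTER_STATE_STANDBY = 'standby'
HA_ROUTER_STATE_UNKNOWN = 'unknown'

# Both 'backup' and 'fault' collapse onto the standby state.
TRANSLATION_MAP = {'primary': HA_ROUTER_STATE_ACTIVE,
                   'backup': HA_ROUTER_STATE_STANDBY,
                   'fault': HA_ROUTER_STATE_STANDBY,
                   'unknown': HA_ROUTER_STATE_UNKNOWN}

print(TRANSLATION_MAP['primary'])  # active
print(TRANSLATION_MAP['fault'])   # standby
```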
@@ -129,28 +129,28 @@ class AgentMixin(object):
 
         This function will also update the metadata proxy, the radvd daemon,
         process the prefix delegation and inform to the L3 extensions. If the
-        HA router changes to "master", this transition will be delayed for at
-        least "ha_vrrp_advert_int" seconds. When the "master" router
+        HA router changes to "primary", this transition will be delayed for at
+        least "ha_vrrp_advert_int" seconds. When the "primary" router
         transitions to "backup", "keepalived" will set the rest of HA routers
-        to "master" until it decides which one should be the only "master".
-        The transition from "backup" to "master" and then to "backup" again,
+        to "primary" until it decides which one should be the only "primary".
+        The transition from "backup" to "primary" and then to "backup" again,
         should not be registered in the Neutron server.
 
         :param router_id: router ID
-        :param state: ['master', 'backup']
+        :param state: ['primary', 'backup']
         """
         if not self._update_transition_state(router_id, state):
             eventlet.spawn_n(self._enqueue_state_change, router_id, state)
             eventlet.sleep(0)
 
     def _enqueue_state_change(self, router_id, state):
-        # NOTE(ralonsoh): move 'master' and 'backup' constants to n-lib
-        if state == 'master':
+        # NOTE(ralonsoh): move 'primary' and 'backup' constants to n-lib
+        if state == 'primary':
             eventlet.sleep(self.conf.ha_vrrp_advert_int)
         if self._update_transition_state(router_id) != state:
             # If the current "transition state" is not the initial "state" sent
             # to update the router, that means the actual router state is the
-            # same as the "transition state" (e.g.: backup-->master-->backup).
+            # same as the "transition state" (e.g.: backup-->primary-->backup).
             return
 
         ri = self._get_router_info(router_id)
@@ -164,7 +164,7 @@ class AgentMixin(object):
                             state_change_data)
 
         # Set external gateway port link up or down according to state
-        if state == 'master':
+        if state == 'primary':
             ri.set_external_gw_port_link_status(link_up=True, set_gw=True)
         elif state == 'backup':
             ri.set_external_gw_port_link_status(link_up=False)
@@ -181,7 +181,7 @@ class AgentMixin(object):
         if self.conf.enable_metadata_proxy:
             self._update_metadata_proxy(ri, router_id, state)
         self._update_radvd_daemon(ri, state)
-        self.pd.process_ha_state(router_id, state == 'master')
+        self.pd.process_ha_state(router_id, state == 'primary')
         self.state_change_notifier.queue_event((router_id, state))
         self.l3_ext_manager.ha_state_change(self.context, state_change_data)
@@ -189,7 +189,7 @@ class AgentMixin(object):
         if not self.use_ipv6:
             return
 
-        ipv6_forwarding_enable = state == 'master'
+        ipv6_forwarding_enable = state == 'primary'
         if ri.router.get('distributed', False):
             namespace = ri.ha_namespace
         else:
@@ -202,7 +202,7 @@ class AgentMixin(object):
         # If ipv6 is enabled on the platform, ipv6_gateway config flag is
         # not set and external_network associated to the router does not
         # include any IPv6 subnet, enable the gateway interface to accept
-        # Router Advts from upstream router for default route on master
+        # Router Advts from upstream router for default route on primary
         # instances as well as ipv6 forwarding. Otherwise, disable them.
         ex_gw_port_id = ri.ex_gw_port and ri.ex_gw_port['id']
         if ex_gw_port_id:
@@ -215,7 +215,7 @@ class AgentMixin(object):
         # NOTE(slaweq): Since the metadata proxy is spawned in the qrouter
         # namespace and not in the snat namespace, even standby DVR-HA
         # routers need to serve metadata requests to local ports.
-        if state == 'master' or ri.router.get('distributed', False):
+        if state == 'primary' or ri.router.get('distributed', False):
             LOG.debug('Spawning metadata proxy for router %s', router_id)
             self.metadata_driver.spawn_monitored_metadata_proxy(
                 self.process_monitor, ri.ns_name, self.conf.metadata_port,
@@ -226,9 +226,9 @@ class AgentMixin(object):
             self.process_monitor, ri.router_id, self.conf, ri.ns_name)
 
     def _update_radvd_daemon(self, ri, state):
-        # Radvd has to be spawned only on the Master HA Router. If there are
+        # Radvd has to be spawned only on the primary HA Router. If there are
         # any state transitions, we enable/disable radvd accordingly.
-        if state == 'master':
+        if state == 'primary':
             ri.enable_radvd()
         else:
             ri.disable_radvd()
@@ -55,7 +55,7 @@ class HaRouterNamespace(namespaces.RouterNamespace):
     It does so to prevent sending gratuitous ARPs for interfaces that got VIP
     removed in the middle of processing.
     It also disables ipv6 forwarding by default. Forwarding will be
-    enabled during router configuration processing only for the master node.
+    enabled during router configuration processing only for the primary node.
     It has to be disabled on all other nodes to avoid sending MLD packets
     which cause lost connectivity to Floating IPs.
     """
@@ -96,12 +96,12 @@ class HaRouter(router.RouterInfo):
         return self.router.get('ha_vr_id')
 
     def _check_and_set_real_state(self):
-        # When the physical host was down/up, the 'master' router may still
+        # When the physical host was down/up, the 'primary' router may still
         # have its original state in the _ha_state_path file. We directly
         # reset it to 'backup'.
         if (not self.keepalived_manager.check_processes() and
                 os.path.exists(self.ha_state_path) and
-                self.ha_state == 'master'):
+                self.ha_state == 'primary'):
             self.ha_state = 'backup'
 
     @property
@@ -110,7 +110,12 @@ class HaRouter(router.RouterInfo):
             return self._ha_state
         try:
             with open(self.ha_state_path, 'r') as f:
-                self._ha_state = f.read()
+                # TODO(haleyb): put old code back after a couple releases,
+                # Y perhaps, just for backwards-compat
+                # self._ha_state = f.read()
+                ha_state = f.read()
+                ha_state = 'primary' if ha_state == 'master' else ha_state
+                self._ha_state = ha_state
         except (OSError, IOError):
             LOG.debug('Error while reading HA state for %s', self.router_id)
         return self._ha_state or 'unknown'
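The backwards-compat read in the hunk above deserves a note: an HA state file written by an older agent still contains the literal string ``master``, so the property maps it to ``primary`` on the way in. A standalone sketch of that mapping (the helper name is illustrative, not the actual property):

```python
def read_ha_state(raw):
    """Map an on-disk HA state to the new terminology.

    A 'master' value written by an older agent is reported as
    'primary'; every other value passes through unchanged.
    """
    return 'primary' if raw == 'master' else raw


print(read_ha_state('master'))  # primary
print(read_ha_state('backup'))  # backup
```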
@@ -129,7 +134,7 @@ class HaRouter(router.RouterInfo):
     def ha_namespace(self):
         return self.ns_name
 
-    def is_router_master(self):
+    def is_router_primary(self):
         """this method is normally called before the ha_router object is fully
         initialized
         """
@@ -298,14 +303,14 @@ class HaRouter(router.RouterInfo):
                               onlink_route_cidr in onlink_route_cidrs]
 
     def _should_delete_ipv6_lladdr(self, ipv6_lladdr):
-        """Only the master should have any IP addresses configured.
+        """Only the primary should have any IP addresses configured.
         Let keepalived manage IPv6 link local addresses, the same way we let
-        it manage IPv4 addresses. If the router is not in the master state,
+        it manage IPv4 addresses. If the router is not in the primary state,
         we must delete the address first as it is autoconfigured by the kernel.
         """
         manager = self.keepalived_manager
         if manager.get_process().active:
-            if self.ha_state != 'master':
+            if self.ha_state != 'primary':
                 conf = manager.get_conf_on_disk()
                 managed_by_keepalived = conf and ipv6_lladdr in conf
                 if managed_by_keepalived:
@@ -317,7 +322,7 @@ class HaRouter(router.RouterInfo):
     def _disable_ipv6_addressing_on_interface(self, interface_name):
         """Disable IPv6 link local addressing on the device and add it as
         a VIP to keepalived. This means that the IPv6 link local address
-        will only be present on the master.
+        will only be present on the primary.
         """
         device = ip_lib.IPDevice(interface_name, namespace=self.ha_namespace)
         ipv6_lladdr = ip_lib.get_ipv6_lladdr(device.link.address)
@@ -446,7 +451,7 @@ class HaRouter(router.RouterInfo):
                                               name=self.get_ha_device_name())
         cidrs = (address['cidr'] for address in addresses)
         ha_cidr = self._get_primary_vip()
-        state = 'master' if ha_cidr in cidrs else 'backup'
+        state = 'primary' if ha_cidr in cidrs else 'backup'
         self.ha_state = state
         callback(self.router_id, state)
@@ -468,10 +473,10 @@ class HaRouter(router.RouterInfo):
         self._add_gateway_vip(ex_gw_port, interface_name)
         self._disable_ipv6_addressing_on_interface(interface_name)
 
-        # Enable RA and IPv6 forwarding only for master instances. This will
+        # Enable RA and IPv6 forwarding only for primary instances. This will
         # prevent backup routers from sending packets to the upstream switch
         # and disrupt connections.
-        enable = self.ha_state == 'master'
+        enable = self.ha_state == 'primary'
         self._configure_ipv6_params_on_gw(ex_gw_port, self.ns_name,
                                           interface_name, enable)
||||
@ -486,11 +491,11 @@ class HaRouter(router.RouterInfo):
|
||||
def external_gateway_removed(self, ex_gw_port, interface_name):
|
||||
self._clear_vips(interface_name)
|
||||
|
||||
if self.ha_state == 'master':
|
||||
if self.ha_state == 'primary':
|
||||
super(HaRouter, self).external_gateway_removed(ex_gw_port,
|
||||
interface_name)
|
||||
else:
|
||||
# We are not the master node, so no need to delete ip addresses.
|
||||
# We are not the primary node, so no need to delete ip addresses.
|
||||
self.driver.unplug(interface_name,
|
||||
namespace=self.ns_name,
|
||||
prefix=router.EXTERNAL_DEV_PREFIX)
|
||||
@@ -526,13 +531,13 @@ class HaRouter(router.RouterInfo):
     @runtime.synchronized('enable_radvd')
     def enable_radvd(self, internal_ports=None):
         if (self.keepalived_manager.get_process().active and
-                self.ha_state == 'master'):
+                self.ha_state == 'primary'):
             super(HaRouter, self).enable_radvd(internal_ports)
 
     def external_gateway_link_up(self):
         # Check HA router ha_state for its gateway port link state.
         # 'backup' instance will not link up the gateway port.
-        return self.ha_state == 'master'
+        return self.ha_state == 'primary'
 
     def set_external_gw_port_link_status(self, link_up, set_gw=False):
         link_state = "up" if link_up else "down"
@@ -84,7 +84,10 @@ class MonitorDaemon(daemon.Daemon):
                 continue
 
             if event['name'] == self.interface and event['cidr'] == self.cidr:
-                new_state = 'master' if event['event'] == 'added' else 'backup'
+                if event['event'] == 'added':
+                    new_state = 'primary'
+                else:
+                    new_state = 'backup'
                 self.write_state_change(new_state)
                 self.notify_agent(new_state)
             elif event['name'] != self.interface and event['event'] == 'added':
@@ -103,7 +106,7 @@ class MonitorDaemon(daemon.Daemon):
         ip = ip_lib.IPDevice(self.interface, self.namespace)
         for address in ip.addr.list():
             if address.get('cidr') == self.cidr:
-                state = 'master'
+                state = 'primary'
                 self.write_state_change(state)
                 self.notify_agent(state)
                 break
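The monitor's decision rule in the first hunk can be distilled into a standalone helper: the HA VIP appearing on the monitored interface means this router just became primary, and the VIP disappearing means it fell back to backup. The function name and the example interface/CIDR values are hypothetical, for illustration only:

```python
def new_state_for(event, interface, cidr):
    """Decide the router's new HA state from an address event.

    `event` is a dict with 'name' (interface), 'cidr' and 'event'
    ('added' or 'removed'). Returns None for unrelated events.
    """
    if event['name'] == interface and event['cidr'] == cidr:
        return 'primary' if event['event'] == 'added' else 'backup'
    return None  # event for another interface: no state change


evt = {'name': 'ha-1', 'cidr': '169.254.0.1/24', 'event': 'added'}
print(new_state_for(evt, 'ha-1', '169.254.0.1/24'))  # primary
```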
@@ -170,7 +170,7 @@ class RouterInfo(BaseRouterInfo):
         return namespaces.RouterNamespace(
             router_id, agent_conf, iface_driver, use_ipv6)
 
-    def is_router_master(self):
+    def is_router_primary(self):
         return True
 
     def _update_routing_table(self, operation, route, namespace):
@@ -35,7 +35,7 @@ PRIMARY_VIP_RANGE_SIZE = 24
 KEEPALIVED_SERVICE_NAME = 'keepalived'
 KEEPALIVED_EMAIL_FROM = 'neutron@openstack.local'
 KEEPALIVED_ROUTER_ID = 'neutron'
-GARP_MASTER_DELAY = 60
+GARP_PRIMARY_DELAY = 60
 HEALTH_CHECK_NAME = 'ha_health_check'
 
 LOG = logging.getLogger(__name__)
@@ -167,7 +167,7 @@ class KeepalivedInstance(object):
     def __init__(self, state, interface, vrouter_id, ha_cidrs,
                  priority=HA_DEFAULT_PRIORITY, advert_int=None,
                  mcast_src_ip=None, nopreempt=False,
-                 garp_master_delay=GARP_MASTER_DELAY,
+                 garp_primary_delay=GARP_PRIMARY_DELAY,
                  vrrp_health_check_interval=0,
                  ha_conf_dir=None):
         self.name = 'VR_%s' % vrouter_id
@@ -182,7 +182,7 @@ class KeepalivedInstance(object):
         self.nopreempt = nopreempt
         self.advert_int = advert_int
         self.mcast_src_ip = mcast_src_ip
-        self.garp_master_delay = garp_master_delay
+        self.garp_primary_delay = garp_primary_delay
         self.track_interfaces = []
         self.vips = []
         self.virtual_routes = KeepalivedInstanceRoutes()
@@ -294,7 +294,7 @@ class KeepalivedInstance(object):
                        ' interface %s' % self.interface,
                        ' virtual_router_id %s' % self.vrouter_id,
                        ' priority %s' % self.priority,
-                       ' garp_master_delay %s' % self.garp_master_delay])
+                       ' garp_master_delay %s' % self.garp_primary_delay])
 
         if self.nopreempt:
             config.append(' nopreempt')
@@ -380,7 +380,7 @@ class KeepalivedManager(object):
         self.process_monitor = process_monitor
         self.conf_path = conf_path
         # configure throttler for spawn to introduce delay between SIGHUPs,
-        # otherwise keepalived master may unnecessarily flip to slave
+        # otherwise keepalived primary may unnecessarily flip to backup
         if throttle_restart_value is not None:
            self._throttle_spawn(throttle_restart_value)
@@ -56,8 +56,8 @@ class PrefixDelegation(object):
                                  events.AFTER_DELETE)
         self._get_sync_data()
 
-    def _is_pd_master_router(self, router):
-        return router['master']
+    def _is_pd_primary_router(self, router):
+        return router['primary']
 
     @runtime.synchronized("l3-agent-pd")
     def enable_subnet(self, router_id, subnet_id, prefix, ri_ifname, mac):
@@ -74,11 +74,11 @@ class PrefixDelegation(object):
         if pd_info.sync:
             pd_info.mac = mac
             pd_info.old_prefix = prefix
-        elif self._is_pd_master_router(router):
+        elif self._is_pd_primary_router(router):
             self._add_lla(router, pd_info.get_bind_lla_with_mask())
 
     def _delete_pd(self, router, pd_info):
-        if not self._is_pd_master_router(router):
+        if not self._is_pd_primary_router(router):
             return
         self._delete_lla(router, pd_info.get_bind_lla_with_mask())
         if pd_info.client_started:
@@ -94,7 +94,7 @@ class PrefixDelegation(object):
         if not pd_info:
             return
         self._delete_pd(router, pd_info)
-        if self._is_pd_master_router(router):
+        if self._is_pd_primary_router(router):
             prefix_update[subnet_id] = n_const.PROVISIONAL_IPV6_PD_PREFIX
             LOG.debug("Update server with prefixes: %s", prefix_update)
             self.notifier(self.context, prefix_update)
@ -117,7 +117,7 @@ class PrefixDelegation(object):
|
||||
if not router:
|
||||
return
|
||||
router['gw_interface'] = gw_ifname
|
||||
if not self._is_pd_master_router(router):
|
||||
if not self._is_pd_primary_router(router):
|
||||
return
|
||||
prefix_update = {}
|
||||
for pd_info in router['subnets'].values():
|
||||
@ -141,7 +141,7 @@ class PrefixDelegation(object):
|
||||
self.notifier(self.context, prefix_update)
|
||||
|
||||
def delete_router_pd(self, router):
|
||||
if not self._is_pd_master_router(router):
|
||||
if not self._is_pd_primary_router(router):
|
||||
return
|
||||
prefix_update = {}
|
||||
for subnet_id, pd_info in router['subnets'].items():
|
||||
@ -260,13 +260,13 @@ class PrefixDelegation(object):
|
||||
return False
|
||||
|
||||
@runtime.synchronized("l3-agent-pd")
|
||||
def process_ha_state(self, router_id, master):
|
||||
def process_ha_state(self, router_id, primary):
|
||||
router = self.routers.get(router_id)
|
||||
if router is None or router['master'] == master:
|
||||
if router is None or router['primary'] == primary:
|
||||
return
|
||||
|
||||
router['master'] = master
|
||||
if master:
|
||||
router['primary'] = primary
|
||||
if primary:
|
||||
for pd_info in router['subnets'].values():
|
||||
bind_lla_with_mask = pd_info.get_bind_lla_with_mask()
|
||||
self._add_lla(router, bind_lla_with_mask)
|
||||
@ -285,7 +285,7 @@ class PrefixDelegation(object):
|
||||
|
||||
prefix_update = {}
|
||||
for router_id, router in self.routers.items():
|
||||
if not (self._is_pd_master_router(router) and
|
||||
if not (self._is_pd_primary_router(router) and
|
||||
router['gw_interface']):
|
||||
continue
|
||||
|
||||
@ -338,7 +338,7 @@ class PrefixDelegation(object):
|
||||
for pd_info in sync_data:
|
||||
router_id = pd_info.router_id
|
||||
if not self.routers.get(router_id):
|
||||
self.routers[router_id] = {'master': True,
|
||||
self.routers[router_id] = {'primary': True,
|
||||
'gw_interface': None,
|
||||
'ns_name': None,
|
||||
'subnets': {}}
|
||||
@ -356,8 +356,8 @@ def remove_router(resource, event, l3_agent, **kwargs):
|
||||
del l3_agent.pd.routers[router_id]
|
||||
|
||||
|
||||
def get_router_entry(ns_name, master):
|
||||
return {'master': master,
|
||||
def get_router_entry(ns_name, primary):
|
||||
return {'primary': primary,
|
||||
'gw_interface': None,
|
||||
'ns_name': ns_name,
|
||||
'subnets': {}}
|
||||
@ -368,14 +368,14 @@ def add_router(resource, event, l3_agent, **kwargs):
|
||||
added_router = kwargs['router']
|
||||
router = l3_agent.pd.routers.get(added_router.router_id)
|
||||
gw_ns_name = added_router.get_gw_ns_name()
|
||||
master = added_router.is_router_master()
|
||||
primary = added_router.is_router_primary()
|
||||
if not router:
|
||||
l3_agent.pd.routers[added_router.router_id] = (
|
||||
get_router_entry(gw_ns_name, master))
|
||||
get_router_entry(gw_ns_name, primary))
|
||||
else:
|
||||
# This will happen during l3 agent restart
|
||||
router['ns_name'] = gw_ns_name
|
||||
router['master'] = master
|
||||
router['primary'] = primary
|
||||
|
||||
|
||||
@runtime.synchronized("l3-agent-pd")
|
||||
|
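The per-router bookkeeping in ``process_ha_state`` above can be illustrated with a minimal stand-in. This is a hypothetical, simplified sketch (names and the returned action strings are illustrative, not the real ``PrefixDelegation`` API, which configures or removes bind link-local addresses directly):

```python
def process_ha_state(routers, router_id, primary):
    """Record a VRRP transition; return the LLA action it implies, if any.

    Hypothetical sketch: the real agent adds the bind LLAs on promotion
    to primary and removes them on demotion; here we only report which
    action a transition would trigger.
    """
    router = routers.get(router_id)
    if router is None or router['primary'] == primary:
        return None  # unknown router, or no state change
    router['primary'] = primary
    return 'add_lla' if primary else 'delete_lla'


routers = {'r1': {'primary': False, 'subnets': {}}}
first = process_ha_state(routers, 'r1', True)   # promotion -> 'add_lla'
second = process_ha_state(routers, 'r1', True)  # repeated event -> None
```

The early return on an unchanged state is what makes repeated keepalived notifications for the same transition idempotent.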
@@ -174,7 +174,7 @@ DHCPV6_STATELESS_OPT = 'dhcpv6_stateless'
 # When setting global DHCP options, these options will be ignored
 # as they are required for basic network functions and will be
 # set by Neutron.
-GLOBAL_DHCP_OPTS_BLACKLIST = {
+GLOBAL_DHCP_OPTS_PROHIBIT_LIST = {
     4: ['server_id', 'lease_time', 'mtu', 'router', 'server_mac',
         'dns_server', 'classless_static_route'],
     6: ['dhcpv6_stateless', 'dns_server', 'server_id']}

@@ -49,11 +49,11 @@ OPTS = [
                       'VRRP health checks. Recommended value is 5. '
                       'This will cause pings to be sent to the gateway '
                       'IP address(es) - requires ICMP_ECHO_REQUEST '
-                      'to be enabled on the gateway. '
-                      'If gateway fails, all routers will be reported '
-                      'as master, and master election will be repeated '
-                      'in round-robin fashion, until one of the router '
-                      'restore the gateway connection.')),
+                      'to be enabled on the gateway(s). '
+                      'If a gateway fails, all routers will be reported '
+                      'as primary, and a primary election will be repeated '
+                      'in a round-robin fashion, until one of the routers '
+                      'restores the gateway connection.')),
 ]

@@ -18,8 +18,9 @@ from neutron._i18n import _


 sriov_driver_opts = [
-    cfg.ListOpt('vnic_type_blacklist',
+    cfg.ListOpt('vnic_type_prohibit_list',
                 default=[],
+                deprecated_name='vnic_type_blacklist',
                 help=_("Comma-separated list of VNIC types for which support "
                        "is administratively prohibited by the mechanism "
                        "driver. Please note that the supported vnic_types "
@@ -18,8 +18,9 @@ from neutron._i18n import _


 ovs_driver_opts = [
-    cfg.ListOpt('vnic_type_blacklist',
+    cfg.ListOpt('vnic_type_prohibit_list',
                 default=[],
+                deprecated_name='vnic_type_blacklist',
                 help=_("Comma-separated list of VNIC types for which support "
                        "is administratively prohibited by the mechanism "
                        "driver. Please note that the supported vnic_types "
@@ -750,10 +750,10 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin,
             if ha_binding_state != constants.HA_ROUTER_STATE_ACTIVE:
                 continue
             # For create router gateway, the gateway port may not be ACTIVE
-            # yet, so we return 'master' host directly.
+            # yet, so we return 'primary' host directly.
             if gateway_port_status != constants.PORT_STATUS_ACTIVE:
                 return ha_binding_agent.host
-            # Do not let the original 'master' (current is backup) host,
+            # Do not let the original 'primary' (current is backup) host,
             # override the gateway port binding host.
             if (gateway_port_status == constants.PORT_STATUS_ACTIVE and
                     ha_binding_agent.host == gateway_port_binding_host):
@@ -458,7 +458,7 @@ class LinuxBridgeManager(amb.CommonAgentManagerBase):
         # Check if the interface is part of the bridge
         if not bridge_device.owns_interface(interface):
             try:
-                # Check if the interface is not enslaved in another bridge
+                # Check if the interface is attached to another bridge
                 bridge = bridge_lib.BridgeDevice.get_interface_bridge(
                     interface)
                 if bridge:
@@ -138,19 +138,20 @@ class AgentMechanismDriverBase(api.MechanismDriver, metaclass=abc.ABCMeta):
        return True. Otherwise, it must return False.
        """

-    def blacklist_supported_vnic_types(self, vnic_types, blacklist):
-        """Validate the blacklist and blacklist the supported_vnic_types
+    def prohibit_list_supported_vnic_types(self, vnic_types, prohibit_list):
+        """Validate the prohibit_list and prohibit the supported_vnic_types

         :param vnic_types: The supported_vnic_types list
-        :param blacklist: The blacklist as in vnic_type_blacklist
-        :return The blacklisted vnic_types
+        :param prohibit_list: The prohibit_list as in vnic_type_prohibit_list
+        :return The prohibited vnic_types
         """
-        if not blacklist:
+        if not prohibit_list:
             return vnic_types

-        # Not valid values in the blacklist:
-        if not all(bl in vnic_types for bl in blacklist):
-            raise ValueError(_("Not all of the items from vnic_type_blacklist "
+        # Not valid values in the prohibit_list:
+        if not all(bl in vnic_types for bl in prohibit_list):
+            raise ValueError(_("Not all of the items from "
+                               "vnic_type_prohibit_list "
                                "are valid vnic_types for %(agent)s mechanism "
                                "driver. The valid values are: "
                                "%(valid_vnics)s.") %
@@ -158,11 +159,11 @@ class AgentMechanismDriverBase(api.MechanismDriver, metaclass=abc.ABCMeta):
                               'valid_vnics': vnic_types})

         supported_vnic_types = [vnic_t for vnic_t in vnic_types if
-                                vnic_t not in blacklist]
+                                vnic_t not in prohibit_list]

         # Nothing left in the supported vnic types list:
         if len(supported_vnic_types) < 1:
-            raise ValueError(_("All possible vnic_types were blacklisted for "
+            raise ValueError(_("All possible vnic_types were prohibited for "
                                "%s mechanism driver!") % self.agent_type)
         return supported_vnic_types

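The validation rules of ``prohibit_list_supported_vnic_types`` above can be condensed into a self-contained sketch (hypothetical function name; the real method lives on the mechanism driver base class and uses translated error messages):

```python
def filter_supported_vnic_types(vnic_types, prohibit_list):
    """Validate a prohibit list and filter it out of the supported types.

    Sketch of the two validation rules described above: every prohibited
    entry must be a known vnic_type, and at least one supported type must
    remain after filtering.
    """
    if not prohibit_list:
        return list(vnic_types)
    invalid = [vt for vt in prohibit_list if vt not in vnic_types]
    if invalid:
        raise ValueError('Unknown vnic_types in prohibit list: %s' % invalid)
    remaining = [vt for vt in vnic_types if vt not in prohibit_list]
    if not remaining:
        raise ValueError('All possible vnic_types were prohibited')
    return remaining


supported = filter_supported_vnic_types(
    ['normal', 'direct', 'macvtap'], ['direct'])
```

An operator prohibiting every supported type is a configuration error and raises rather than silently disabling the driver.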
@@ -69,12 +69,12 @@ class SriovNicSwitchMechanismDriver(mech_agent.SimpleAgentMechanismDriverBase):
         """
         self.agent_type = agent_type

-        # TODO(lajoskatona): move this blacklisting to
-        # SimpleAgentMechanismDriverBase. By that e blacklisting and validation
+        # TODO(lajoskatona): move this prohibition to
+        # SimpleAgentMechanismDriverBase. By that, prohibition and validation
         # of the vnic_types would be available for all mechanism drivers.
-        self.supported_vnic_types = self.blacklist_supported_vnic_types(
+        self.supported_vnic_types = self.prohibit_list_supported_vnic_types(
             vnic_types=supported_vnic_types,
-            blacklist=cfg.CONF.SRIOV_DRIVER.vnic_type_blacklist
+            prohibit_list=cfg.CONF.SRIOV_DRIVER.vnic_type_prohibit_list
         )

         # NOTE(ndipanov): PF passthrough requires a different vif type
@@ -69,14 +69,14 @@ class OpenvswitchMechanismDriver(mech_agent.SimpleAgentMechanismDriverBase):
             portbindings.VIF_TYPE_OVS,
             vif_details)

-        # TODO(lajoskatona): move this blacklisting to
-        # SimpleAgentMechanismDriverBase. By that e blacklisting and validation
+        # TODO(lajoskatona): move this prohibition to
+        # SimpleAgentMechanismDriverBase. By that, prohibition and validation
         # of the vnic_types would be available for all mechanism drivers.
-        self.supported_vnic_types = self.blacklist_supported_vnic_types(
+        self.supported_vnic_types = self.prohibit_list_supported_vnic_types(
             vnic_types=[portbindings.VNIC_NORMAL,
                         portbindings.VNIC_DIRECT,
                         portbindings.VNIC_SMARTNIC],
-            blacklist=cfg.CONF.OVS_DRIVER.vnic_type_blacklist
+            prohibit_list=cfg.CONF.OVS_DRIVER.vnic_type_prohibit_list
         )
         LOG.info("%s's supported_vnic_types: %s",
                  self.agent_type, self.supported_vnic_types)
@@ -1754,7 +1754,7 @@ class OVNClient(object):
         global_options = ovn_conf.get_global_dhcpv6_opts()

         for option, value in global_options.items():
-            if option in ovn_const.GLOBAL_DHCP_OPTS_BLACKLIST[ip_version]:
+            if option in ovn_const.GLOBAL_DHCP_OPTS_PROHIBIT_LIST[ip_version]:
                 # This option is not allowed to be set with a global setting
                 LOG.debug('DHCP option %s is not permitted to be set in '
                           'global options. This option will be ignored.',
@@ -220,7 +220,7 @@ class PortBindingChassisEvent(row_event.RowEvent):
     When a chassisredirect port is updated with chassis, this event get
     generated. We will update corresponding router's gateway port with
     the chassis's host_id. Later, users can check router's gateway port
-    host_id to find the location of master HA router.
+    host_id to find the location of primary HA router.
     """

     def __init__(self, driver):
@@ -379,7 +379,7 @@ class OVNL3RouterPlugin(service_base.ServicePluginBase,
             # Remove any invalid gateway chassis from the list, otherwise
             # we can have a situation where all existing_chassis are invalid
             existing_chassis = self._ovn.get_gateway_chassis_binding(g_name)
-            master = existing_chassis[0] if existing_chassis else None
+            primary = existing_chassis[0] if existing_chassis else None
             existing_chassis = self.scheduler.filter_existing_chassis(
                 nb_idl=self._ovn, gw_chassis=all_gw_chassis,
                 physnet=physnet, chassis_physnets=chassis_with_physnets,
@@ -392,21 +392,21 @@ class OVNL3RouterPlugin(service_base.ServicePluginBase,
             chassis = self.scheduler.select(
                 self._ovn, self._sb_ovn, g_name, candidates=candidates,
                 existing_chassis=existing_chassis)
-            if master and master != chassis[0]:
-                if master not in chassis:
-                    LOG.debug("Master gateway chassis %(old)s "
+            if primary and primary != chassis[0]:
+                if primary not in chassis:
+                    LOG.debug("Primary gateway chassis %(old)s "
                               "has been removed from the system. Moving "
                               "gateway %(gw)s to other chassis %(new)s.",
                               {'gw': g_name,
-                               'old': master,
+                               'old': primary,
                                'new': chassis[0]})
                 else:
-                    LOG.debug("Gateway %s is hosted at %s.", g_name, master)
-                    # NOTE(mjozefcz): It means scheduler moved master chassis
+                    LOG.debug("Gateway %s is hosted at %s.", g_name, primary)
+                    # NOTE(mjozefcz): It means scheduler moved primary chassis
                     # to other gw based on scheduling method. But we don't
-                    # want network flap - so moving actual master to be on
+                    # want network flap - so moving actual primary to be on
                     # the top.
-                    index = chassis.index(master)
+                    index = chassis.index(primary)
                     chassis[0], chassis[index] = chassis[index], chassis[0]
             # NOTE(dalvarez): Let's commit the changes in separate transactions
             # as we will rely on those for scheduling subsequent gateways.
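The reordering logic in the hunk above boils down to a small swap. Here is a hedged, standalone sketch (hypothetical function name and chassis names; the real code operates on the scheduler's candidate list in place):

```python
def keep_primary_first(chassis, primary):
    """Keep the previously primary chassis at the front of the candidates.

    Sketch of the anti-flap rule above: if the scheduler picked a
    different chassis for slot 0 but the old primary is still a valid
    candidate, swap it back to the top so the gateway does not move.
    """
    if primary and primary in chassis and chassis[0] != primary:
        index = chassis.index(primary)
        chassis[0], chassis[index] = chassis[index], chassis[0]
    return chassis


reordered = keep_primary_first(['hv2', 'hv1', 'hv3'], 'hv1')  # hv1 to front
unchanged = keep_primary_first(['hv2', 'hv3'], 'hv1')         # old primary gone
```

When the old primary has disappeared from the candidate list entirely, the gateway genuinely has to move, so the list is left as the scheduler produced it.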
@@ -453,15 +453,15 @@ class TestHAL3Agent(TestL3Agent):
         with open(keepalived_state_file, "r") as fd:
             return fd.read()

-    def _get_state_file_for_master_agent(self, router_id):
+    def _get_state_file_for_primary_agent(self, router_id):
         for host in self.environment.hosts:
             keepalived_state_file = os.path.join(
                 host.neutron_config.state_path, "ha_confs", router_id, "state")

-            if self._get_keepalived_state(keepalived_state_file) == "master":
+            if self._get_keepalived_state(keepalived_state_file) == "primary":
                 return keepalived_state_file

-    def test_keepalived_multiple_sighups_does_not_forfeit_mastership(self):
+    def test_keepalived_multiple_sighups_does_not_forfeit_primary(self):
         """Setup a complete "Neutron stack" - both an internal and an external
         network+subnet, and a router connected to both.
         """
@@ -479,7 +479,7 @@ class TestHAL3Agent(TestL3Agent):
                 self._is_ha_router_active_on_one_agent,
                 router['id']),
             timeout=90)
-        keepalived_state_file = self._get_state_file_for_master_agent(
+        keepalived_state_file = self._get_state_file_for_primary_agent(
             router['id'])
         self.assertIsNotNone(keepalived_state_file)
         network = self.safe_client.create_network(tenant_id)
@@ -498,8 +498,9 @@ class TestHAL3Agent(TestL3Agent):
             tenant_id, ext_net['id'], vm.ip, vm.neutron_port['id'])

         # Check that the keepalived's state file has not changed and is still
-        # master. This will indicate that the Throttler works. We want to check
-        # for ha_vrrp_advert_int (the default is 2 seconds), plus a bit more.
+        # primary. This will indicate that the Throttler works. We want to
+        # check for ha_vrrp_advert_int (the default is 2 seconds), plus a bit
+        # more.
         time_to_stop = (time.time() +
                         (common_utils.DEFAULT_THROTTLER_VALUE *
                          ha_router.THROTTLER_MULTIPLIER * 1.3))
@@ -507,7 +508,7 @@ class TestHAL3Agent(TestL3Agent):
             if time.time() > time_to_stop:
                 break
             self.assertEqual(
-                "master",
+                "primary",
                 self._get_keepalived_state(keepalived_state_file))

     @tests_base.unstable_test("bug 1798475")
@@ -202,7 +202,7 @@ class L3AgentTestFramework(base.BaseSudoTestCase):
                 n, len([line for line in out.strip().split('\n') if line]))

         if ha:
-            common_utils.wait_until_true(lambda: router.ha_state == 'master')
+            common_utils.wait_until_true(lambda: router.ha_state == 'primary')

         with self.assert_max_execution_time(100):
             assert_num_of_conntrack_rules(0)
@@ -322,7 +322,7 @@ class L3AgentTestFramework(base.BaseSudoTestCase):
         router.process()

         if enable_ha:
-            common_utils.wait_until_true(lambda: router.ha_state == 'master')
+            common_utils.wait_until_true(lambda: router.ha_state == 'primary')

             # Keepalived notifies of a state transition when it starts,
             # not when it ends. Thus, we have to wait until keepalived finishes
@@ -629,34 +629,34 @@ class L3AgentTestFramework(base.BaseSudoTestCase):

         return (router1, router2)

-    def _get_master_and_slave_routers(self, router1, router2,
-                                      check_external_device=True):
+    def _get_primary_and_backup_routers(self, router1, router2,
+                                        check_external_device=True):

         try:
             common_utils.wait_until_true(
-                lambda: router1.ha_state == 'master')
+                lambda: router1.ha_state == 'primary')
             if check_external_device:
                 common_utils.wait_until_true(
                     lambda: self._check_external_device(router1))
-            master_router = router1
-            slave_router = router2
+            primary_router = router1
+            backup_router = router2
         except common_utils.WaitTimeout:
             common_utils.wait_until_true(
-                lambda: router2.ha_state == 'master')
+                lambda: router2.ha_state == 'primary')
             if check_external_device:
                 common_utils.wait_until_true(
                     lambda: self._check_external_device(router2))
-            master_router = router2
-            slave_router = router1
+            primary_router = router2
+            backup_router = router1

         common_utils.wait_until_true(
-            lambda: master_router.ha_state == 'master')
+            lambda: primary_router.ha_state == 'primary')
         if check_external_device:
             common_utils.wait_until_true(
-                lambda: self._check_external_device(master_router))
+                lambda: self._check_external_device(primary_router))
         common_utils.wait_until_true(
-            lambda: slave_router.ha_state == 'backup')
-        return master_router, slave_router
+            lambda: backup_router.ha_state == 'backup')
+        return primary_router, backup_router

     def fail_ha_router(self, router):
         device_name = router.get_ha_device_name()

@@ -585,7 +585,7 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
         interface_name = router.get_external_device_name(port['id'])
         self._assert_no_ip_addresses_on_interface(router.ha_namespace,
                                                   interface_name)
-        utils.wait_until_true(lambda: router.ha_state == 'master')
+        utils.wait_until_true(lambda: router.ha_state == 'primary')

         # Keepalived notifies of a state transition when it starts,
         # not when it ends. Thus, we have to wait until keepalived finishes
@@ -1348,7 +1348,7 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
         snat_ports = router.get_snat_interfaces()
         if not snat_ports:
             return
-        if router.is_router_master():
+        if router.is_router_primary():
             centralized_floatingips = (
                 router.router[lib_constants.FLOATINGIP_KEY])
             for fip in centralized_floatingips:
@@ -1484,31 +1484,31 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
             ha_port_ip="169.254.192.107",
             ha_port_mac="12:34:56:78:3a:bb")

-        master, backup = self._get_master_and_slave_routers(
+        primary, backup = self._get_primary_and_backup_routers(
             router1, router2, check_external_device=False)

-        self._assert_ip_addresses_in_dvr_ha_snat_namespace(master)
+        self._assert_ip_addresses_in_dvr_ha_snat_namespace(primary)
         self._assert_no_ip_addresses_in_dvr_ha_snat_namespace(backup)
-        master_ha_device = master.get_ha_device_name()
+        primary_ha_device = primary.get_ha_device_name()
         backup_ha_device = backup.get_ha_device_name()
         self.assertTrue(
-            ip_lib.device_exists(master_ha_device, master.ha_namespace))
+            ip_lib.device_exists(primary_ha_device, primary.ha_namespace))
         self.assertTrue(
             ip_lib.device_exists(backup_ha_device, backup.ha_namespace))

-        new_master_router = copy.deepcopy(master.router)
-        new_master_router['_ha_interface'] = None
-        self.agent._process_updated_router(new_master_router)
-        router_updated = self.agent.router_info[master.router_id]
+        new_primary_router = copy.deepcopy(primary.router)
+        new_primary_router['_ha_interface'] = None
+        self.agent._process_updated_router(new_primary_router)
+        router_updated = self.agent.router_info[primary.router_id]

         self.assertTrue(self._namespace_exists(router_updated.ns_name))
         self._assert_snat_namespace_exists(router_updated)
         snat_namespace_name = dvr_snat_ns.SnatNamespace.get_snat_ns_name(
             router_updated.router_id)
         self.assertFalse(
-            ip_lib.device_exists(master_ha_device, snat_namespace_name))
+            ip_lib.device_exists(primary_ha_device, snat_namespace_name))

-        utils.wait_until_true(lambda: backup.ha_state == 'master')
+        utils.wait_until_true(lambda: backup.ha_state == 'primary')
         self._assert_ip_addresses_in_dvr_ha_snat_namespace(backup)
         self.assertTrue(
             ip_lib.device_exists(backup_ha_device, backup.ha_namespace))
@@ -1535,18 +1535,18 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
             ha_port_ip="169.254.192.101",
             ha_port_mac="12:34:56:78:2b:bb")

-        master, backup = self._get_master_and_slave_routers(
+        primary, backup = self._get_primary_and_backup_routers(
             router1, router2, check_external_device=False)

-        self._assert_ip_addresses_in_dvr_ha_snat_namespace_with_fip(master)
+        self._assert_ip_addresses_in_dvr_ha_snat_namespace_with_fip(primary)
         self._assert_no_ip_addresses_in_dvr_ha_snat_namespace_with_fip(backup)
-        self.fail_ha_router(master)
+        self.fail_ha_router(primary)

-        utils.wait_until_true(lambda: backup.ha_state == 'master')
-        utils.wait_until_true(lambda: master.ha_state == 'backup')
+        utils.wait_until_true(lambda: backup.ha_state == 'primary')
+        utils.wait_until_true(lambda: primary.ha_state == 'backup')

         self._assert_ip_addresses_in_dvr_ha_snat_namespace_with_fip(backup)
-        self._assert_no_ip_addresses_in_dvr_ha_snat_namespace_with_fip(master)
+        self._assert_no_ip_addresses_in_dvr_ha_snat_namespace_with_fip(primary)

     def _test_dvr_ha_router_failover(self, enable_gw, vrrp_id=None):
         self._setup_dvr_ha_agents()
@@ -1561,19 +1561,19 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
             ha_port_ip="169.254.192.103",
             ha_port_mac="12:34:56:78:2b:dd")

-        master, backup = self._get_master_and_slave_routers(
+        primary, backup = self._get_primary_and_backup_routers(
             router1, router2, check_external_device=False)

-        self._assert_ip_addresses_in_dvr_ha_snat_namespace(master)
+        self._assert_ip_addresses_in_dvr_ha_snat_namespace(primary)
         self._assert_no_ip_addresses_in_dvr_ha_snat_namespace(backup)

-        self.fail_ha_router(master)
+        self.fail_ha_router(primary)

-        utils.wait_until_true(lambda: backup.ha_state == 'master')
-        utils.wait_until_true(lambda: master.ha_state == 'backup')
+        utils.wait_until_true(lambda: backup.ha_state == 'primary')
+        utils.wait_until_true(lambda: primary.ha_state == 'backup')

         self._assert_ip_addresses_in_dvr_ha_snat_namespace(backup)
-        self._assert_no_ip_addresses_in_dvr_ha_snat_namespace(master)
+        self._assert_no_ip_addresses_in_dvr_ha_snat_namespace(primary)

     def test_dvr_ha_router_failover_with_gw(self):
         self._test_dvr_ha_router_failover(enable_gw=True, vrrp_id=10)
@@ -1607,7 +1607,7 @@ class TestDvrRouter(DvrRouterTestFramework, framework.L3AgentTestFramework):
         r2_chsfr = mock.patch.object(self.failover_agent,
                                      'check_ha_state_for_router').start()

-        utils.wait_until_true(lambda: router1.ha_state == 'master')
+        utils.wait_until_true(lambda: router1.ha_state == 'primary')

         self.agent._process_updated_router(router1.router)
         self.assertTrue(r1_chsfr.called)

@@ -41,7 +41,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
             side_effect=self.change_router_state).start()
         router_info = self.generate_router_info(enable_ha=True)
         router = self.manage_router(self.agent, router_info)
-        common_utils.wait_until_true(lambda: router.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router.ha_state == 'primary')

         self.fail_ha_router(router)
         common_utils.wait_until_true(lambda: router.ha_state == 'backup')
@@ -50,7 +50,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
             (enqueue_mock.call_count == 3 or enqueue_mock.call_count == 4))
         calls = [args[0] for args in enqueue_mock.call_args_list]
         self.assertEqual((router.router_id, 'backup'), calls[0])
-        self.assertEqual((router.router_id, 'master'), calls[1])
+        self.assertEqual((router.router_id, 'primary'), calls[1])
         self.assertEqual((router.router_id, 'backup'), calls[-1])

     def _expected_rpc_report(self, expected):
@@ -73,7 +73,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router2 = self.manage_router(self.agent, router_info)

         common_utils.wait_until_true(lambda: router1.ha_state == 'backup')
-        common_utils.wait_until_true(lambda: router2.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router2.ha_state == 'primary')
         common_utils.wait_until_true(
             lambda: self._expected_rpc_report(
                 {router1.router_id: 'standby', router2.router_id: 'active'}))
@@ -112,7 +112,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router_info = l3_test_common.prepare_router_data(
             enable_snat=True, enable_ha=True, dual_stack=True, enable_gw=False)
         router = self.manage_router(self.agent, router_info)
-        common_utils.wait_until_true(lambda: router.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router.ha_state == 'primary')
         if state == 'backup':
             self.fail_ha_router(router)
             common_utils.wait_until_true(lambda: router.ha_state == 'backup')
@@ -121,23 +121,23 @@ class L3HATestCase(framework.L3AgentTestFramework):
             router_info['gw_port'] = ex_port
             router.process()
         self._assert_ipv6_accept_ra(router, expected_ra)
-        # As router is going first to master and than to backup mode,
+        # As router is going first to primary and than to backup mode,
         # ipv6_forwarding should be enabled on "all" interface always after
         # that transition
         self._assert_ipv6_forwarding(router, expected_forwarding,
                                      True)

     @testtools.skipUnless(netutils.is_ipv6_enabled(), "IPv6 is not enabled")
-    def test_ipv6_router_advts_and_fwd_after_router_state_change_master(self):
+    def test_ipv6_router_advts_and_fwd_after_router_state_change_primary(self):
         # Check that RA and forwarding are enabled when there's no IPv6
         # gateway.
-        self._test_ipv6_router_advts_and_fwd_helper('master',
+        self._test_ipv6_router_advts_and_fwd_helper('primary',
                                                     enable_v6_gw=False,
                                                     expected_ra=True,
                                                     expected_forwarding=True)
         # Check that RA is disabled and forwarding is enabled when an IPv6
         # gateway is configured.
-        self._test_ipv6_router_advts_and_fwd_helper('master',
+        self._test_ipv6_router_advts_and_fwd_helper('primary',
                                                     enable_v6_gw=True,
                                                     expected_ra=False,
                                                     expected_forwarding=True)
@@ -219,7 +219,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router_info = self.generate_router_info(
             ip_version=constants.IP_VERSION_6, enable_ha=True)
         router1 = self.manage_router(self.agent, router_info)
-        common_utils.wait_until_true(lambda: router1.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router1.ha_state == 'primary')
         common_utils.wait_until_true(lambda: router1.radvd.enabled)

         def _check_lla_status(router, expected):
@@ -263,7 +263,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
             ipv6_subnet_modes=[slaac_mode],
             interface_id=interface_id)
         router.process()
-        common_utils.wait_until_true(lambda: router.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router.ha_state == 'primary')

         # Verify that router internal interface is present and is configured
         # with IP address from both the subnets.
@@ -309,7 +309,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router = self.manage_router(self.agent, router_info)
         ex_gw_port = router.get_ex_gw_port()
         interface_name = router.get_external_device_interface_name(ex_gw_port)
-        common_utils.wait_until_true(lambda: router.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router.ha_state == 'primary')
         self._add_fip(router, '172.168.1.20', fixed_address='10.0.0.3')
         router.process()
         router.router[constants.FLOATINGIP_KEY] = []
@@ -328,7 +328,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router1.router[constants.HA_INTERFACE_KEY]['status'] = (
             constants.PORT_STATUS_ACTIVE)
         self.agent._process_updated_router(router1.router)
-        common_utils.wait_until_true(lambda: router1.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router1.ha_state == 'primary')

     def test_ha_router_namespace_has_ip_nonlocal_bind_disabled(self):
         router_info = self.generate_router_info(enable_ha=True)
@@ -362,7 +362,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router.router[constants.HA_INTERFACE_KEY]['status'] = (
             constants.PORT_STATUS_ACTIVE)
         self.agent._process_updated_router(router.router)
-        common_utils.wait_until_true(lambda: router.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router.ha_state == 'primary')
         self._wait_until_ipv6_forwarding_has_state(
             router.ns_name, external_device_name, 1)

@@ -380,7 +380,7 @@ class L3HATestCase(framework.L3AgentTestFramework):
         router.router[constants.HA_INTERFACE_KEY]['status'] = (
             constants.PORT_STATUS_ACTIVE)
         self.agent._process_updated_router(router.router)
-        common_utils.wait_until_true(lambda: router.ha_state == 'master')
+        common_utils.wait_until_true(lambda: router.ha_state == 'primary')
         self._wait_until_ipv6_forwarding_has_state(router.ns_name, 'all', 1)

@@ -426,29 +426,29 @@ class L3HATestFailover(framework.L3AgentTestFramework):
     def test_ha_router_failover(self):
         router1, router2 = self.create_ha_routers()

-        master_router, slave_router = self._get_master_and_slave_routers(
+        primary_router, backup_router = self._get_primary_and_backup_routers(
             router1, router2)

-        self._assert_ipv6_accept_ra(master_router, True)
-        self._assert_ipv6_forwarding(master_router, True, True)
-        self._assert_ipv6_accept_ra(slave_router, False)
-        self._assert_ipv6_forwarding(slave_router, False, False)
+        self._assert_ipv6_accept_ra(primary_router, True)
+        self._assert_ipv6_forwarding(primary_router, True, True)
+        self._assert_ipv6_accept_ra(backup_router, False)
+        self._assert_ipv6_forwarding(backup_router, False, False)

         self.fail_ha_router(router1)

-        # NOTE: passing slave_router as first argument, because we expect
-        # that this router should be the master
-        new_master, new_slave = self._get_master_and_slave_routers(
-            slave_router, master_router)
+        # NOTE: passing backup_router as first argument, because we expect
+        # that this router should be the primary
+        new_primary, new_backup = self._get_primary_and_backup_routers(
+            backup_router, primary_router)

-        self.assertEqual(master_router, new_slave)
-        self.assertEqual(slave_router, new_master)
-        self._assert_ipv6_accept_ra(new_master, True)
-        self._assert_ipv6_forwarding(new_master, True, True)
-        self._assert_ipv6_accept_ra(new_slave, False)
-        # after transition from master -> slave, 'all' IPv6 forwarding should
+        self.assertEqual(primary_router, new_backup)
+        self.assertEqual(backup_router, new_primary)
+        self._assert_ipv6_accept_ra(new_primary, True)
+        self._assert_ipv6_forwarding(new_primary, True, True)
+        self._assert_ipv6_accept_ra(new_backup, False)
+        # after transition from primary -> backup, 'all' IPv6 forwarding should
         # be enabled
-        self._assert_ipv6_forwarding(new_slave, False, True)
+        self._assert_ipv6_forwarding(new_backup, False, True)

     def test_ha_router_lost_gw_connection(self):
         self.agent.conf.set_override(
@@ -458,18 +458,18 @@ class L3HATestFailover(framework.L3AgentTestFramework):

         router1, router2 = self.create_ha_routers()

-        master_router, slave_router = self._get_master_and_slave_routers(
+        primary_router, backup_router = self._get_primary_and_backup_routers(
             router1, router2)

-        self.fail_gw_router_port(master_router)
+        self.fail_gw_router_port(primary_router)

-        # NOTE: passing slave_router as first argument, because we expect
-        # that this router should be the master
-        new_master, new_slave = self._get_master_and_slave_routers(
-            slave_router, master_router)
+        # NOTE: passing backup_router as first argument, because we expect
|
||||
# that this router should be the primary
|
||||
new_primary, new_backup = self._get_primary_and_backup_routers(
|
||||
backup_router, primary_router)
|
||||
|
||||
self.assertEqual(master_router, new_slave)
|
||||
self.assertEqual(slave_router, new_master)
|
||||
self.assertEqual(primary_router, new_backup)
|
||||
self.assertEqual(backup_router, new_primary)
|
||||
|
||||
def test_both_ha_router_lost_gw_connection(self):
|
||||
self.agent.conf.set_override(
|
||||
@ -479,24 +479,24 @@ class L3HATestFailover(framework.L3AgentTestFramework):
|
||||
|
||||
router1, router2 = self.create_ha_routers()
|
||||
|
||||
master_router, slave_router = self._get_master_and_slave_routers(
|
||||
primary_router, backup_router = self._get_primary_and_backup_routers(
|
||||
router1, router2)
|
||||
|
||||
self.fail_gw_router_port(master_router)
|
||||
self.fail_gw_router_port(slave_router)
|
||||
self.fail_gw_router_port(primary_router)
|
||||
self.fail_gw_router_port(backup_router)
|
||||
|
||||
common_utils.wait_until_true(
|
||||
lambda: master_router.ha_state == 'master')
|
||||
lambda: primary_router.ha_state == 'primary')
|
||||
common_utils.wait_until_true(
|
||||
lambda: slave_router.ha_state == 'master')
|
||||
lambda: backup_router.ha_state == 'primary')
|
||||
|
||||
self.restore_gw_router_port(master_router)
|
||||
self.restore_gw_router_port(primary_router)
|
||||
|
||||
new_master, new_slave = self._get_master_and_slave_routers(
|
||||
master_router, slave_router)
|
||||
new_primary, new_backup = self._get_primary_and_backup_routers(
|
||||
primary_router, backup_router)
|
||||
|
||||
self.assertEqual(master_router, new_master)
|
||||
self.assertEqual(slave_router, new_slave)
|
||||
self.assertEqual(primary_router, new_primary)
|
||||
self.assertEqual(backup_router, new_backup)
|
||||
|
||||
|
||||
class LinuxBridgeL3HATestCase(L3HATestCase):
|
||||
|
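The renaming above does not change the election contract the failover tests rely on. A minimal sketch of what a helper like the renamed `_get_primary_and_backup_routers` returns (`FakeRouter` and the function body are illustrative, not Neutron's implementation):

```python
class FakeRouter:
    """Stand-in for a test router exposing its keepalived HA state."""
    def __init__(self, name, ha_state):
        self.name = name
        self.ha_state = ha_state


def get_primary_and_backup_routers(router_a, router_b):
    # Order the pair by reported HA state: exactly one router of an HA
    # pair should have reached the 'primary' state; the other is backup.
    if router_a.ha_state == 'primary':
        return router_a, router_b
    if router_b.ha_state == 'primary':
        return router_b, router_a
    raise RuntimeError('no router reached the primary state')


r1 = FakeRouter('r1', 'backup')
r2 = FakeRouter('r2', 'primary')
primary, backup = get_primary_and_backup_routers(r1, r2)
print(primary.name)  # r2
```

After a simulated failover the tests simply call the helper again with the arguments swapped and assert the roles flipped.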
@ -157,7 +157,7 @@ class TestMonitorDaemon(base.BaseLoggingTestCase):
        self._run_monitor()
        msg = 'Wrote router %s state %s'
        self.router.port.addr.add(self.cidr)
        self._search_in_file(self.log_file, msg % (self.router_id, 'master'))
        self._search_in_file(self.log_file, msg % (self.router_id, 'primary'))
        self.router.port.addr.delete(self.cidr)
        self._search_in_file(self.log_file, msg % (self.router_id, 'backup'))

@ -184,8 +184,8 @@ class TestMonitorDaemon(base.BaseLoggingTestCase):
        msg = 'Initial status of router %s is %s' % (self.router_id, 'backup')
        self._search_in_file(self.log_file, msg)

    def test_handle_initial_state_master(self):
    def test_handle_initial_state_primary(self):
        self.router.port.addr.add(self.cidr)
        self._run_monitor()
        msg = 'Initial status of router %s is %s' % (self.router_id, 'master')
        msg = 'Initial status of router %s is %s' % (self.router_id, 'primary')
        self._search_in_file(self.log_file, msg)
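The monitor daemon tests above add or delete the VRRP address on the port and expect the logged state to follow. The underlying inference can be sketched as (a simplified illustration, not the daemon's actual code):

```python
def initial_state(configured_cidrs, vip_cidr):
    # The keepalived state monitor infers the initial role from whether
    # the VRRP VIP is already configured on the monitored interface:
    # holding the VIP means this node is the primary, otherwise backup.
    return 'primary' if vip_cidr in configured_cidrs else 'backup'


print(initial_state({'10.0.0.5/24'}, '10.0.0.5/24'))  # primary
print(initial_state(set(), '10.0.0.5/24'))            # backup
```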
@ -256,11 +256,11 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
        virt_port = self._create_port()
        virt_ip = virt_port['fixed_ips'][0]['ip_address']

        # Create the master port with the VIP address already set in
        # Create the primary port with the VIP address already set in
        # the allowed_address_pairs field
        master = self._create_port(allowed_address=virt_ip)
        primary = self._create_port(allowed_address=virt_ip)

        # Assert the virt port has the type virtual and master is set
        # Assert the virt port has the type virtual and primary is set
        # as parent
        self._check_port_type(virt_port['id'], ovn_const.LSP_TYPE_VIRTUAL)
        ovn_vport = self._find_port_row(virt_port['id'])
@ -268,7 +268,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertEqual(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])

        # Create the backup parent port
@ -281,7 +281,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertIn(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
        self.assertIn(
            backup['id'],
@ -289,7 +289,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):

    @tests_base.unstable_test("bug 1865453")
    def test_virtual_port_update_address_pairs(self):
        master = self._create_port()
        primary = self._create_port()
        backup = self._create_port()
        virt_port = self._create_port()
        virt_ip = virt_port['fixed_ips'][0]['ip_address']
@ -303,8 +303,8 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
        self.assertNotIn(ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY,
                         ovn_vport.options)

        # Set the virt IP to the allowed address pairs of the master port
        self._set_allowed_address_pair(master['id'], virt_ip)
        # Set the virt IP to the allowed address pairs of the primary port
        self._set_allowed_address_pair(primary['id'], virt_ip)

        # Assert the virt port is now updated
        self._check_port_type(virt_port['id'], ovn_const.LSP_TYPE_VIRTUAL),
@ -313,7 +313,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertEqual(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])

        # Set the virt IP to the allowed address pairs of the backup port
@ -326,14 +326,14 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertIn(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
        self.assertIn(
            backup['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])

        # Remove the address pairs from the master port
        self._unset_allowed_address_pair(master['id'])
        # Remove the address pairs from the primary port
        self._unset_allowed_address_pair(primary['id'])

        # Assert the virt port now only has the backup port as a parent
        self._check_port_type(virt_port['id'], ovn_const.LSP_TYPE_VIRTUAL),
@ -359,13 +359,13 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):

    @tests_base.unstable_test("bug 1865453")
    def test_virtual_port_created_after(self):
        master = self._create_port(fixed_ip='10.0.0.11')
        primary = self._create_port(fixed_ip='10.0.0.11')
        backup = self._create_port(fixed_ip='10.0.0.12')
        virt_ip = '10.0.0.55'

        # Set the virt IP to the master and backup ports *before* creating
        # Set the virt IP to the primary and backup ports *before* creating
        # the virtual port
        self._set_allowed_address_pair(master['id'], virt_ip)
        self._set_allowed_address_pair(primary['id'], virt_ip)
        self._set_allowed_address_pair(backup['id'], virt_ip)

        virt_port = self._create_port(fixed_ip=virt_ip)
@ -378,7 +378,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertIn(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
        self.assertIn(
            backup['id'],
@ -386,7 +386,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):

    @tests_base.unstable_test("bug 1865453")
    def test_virtual_port_delete_parents(self):
        master = self._create_port()
        primary = self._create_port()
        backup = self._create_port()
        virt_port = self._create_port()
        virt_ip = virt_port['fixed_ips'][0]['ip_address']
@ -400,8 +400,8 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
        self.assertNotIn(ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY,
                         ovn_vport.options)

        # Set allowed address pairs to the master and backup ports
        self._set_allowed_address_pair(master['id'], virt_ip)
        # Set allowed address pairs to the primary and backup ports
        self._set_allowed_address_pair(primary['id'], virt_ip)
        self._set_allowed_address_pair(backup['id'], virt_ip)

        # Assert the virtual port is correct
@ -411,7 +411,7 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertIn(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
        self.assertIn(
            backup['id'],
@ -420,18 +420,18 @@ class TestVirtualPorts(base.TestOVNFunctionalBase):
        # Delete the backup port
        self._delete('ports', backup['id'])

        # Assert the virt port now only has the master port as a parent
        # Assert the virt port now only has the primary port as a parent
        ovn_vport = self._find_port_row(virt_port['id'])
        self.assertEqual(ovn_const.LSP_TYPE_VIRTUAL, ovn_vport.type)
        self.assertEqual(
            virt_ip,
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_IP_KEY])
        self.assertEqual(
            master['id'],
            primary['id'],
            ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])

        # Delete the master port
        self._delete('ports', master['id'])
        # Delete the primary port
        self._delete('ports', primary['id'])

        # Assert the virt port is not type virtual anymore and the virtual
        # port options are cleared
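The virtual-port tests above repeatedly assert which ports appear as `virtual-parents`. The rule they exercise can be sketched as follows (the data shapes and function are illustrative, not the OVN driver's code: a port becomes a parent of the virtual port when the virtual IP appears in its allowed address pairs):

```python
def virtual_parents(ports, virt_ip):
    # Collect the IDs of every port whose allowed_address_pairs contain
    # the virtual port's IP; OVN stores this set in the virtual port's
    # virtual-parents option.
    return sorted(p['id'] for p in ports
                  if virt_ip in p.get('allowed_address_pairs', []))


ports = [
    {'id': 'primary', 'allowed_address_pairs': ['10.0.0.55']},
    {'id': 'backup', 'allowed_address_pairs': ['10.0.0.55']},
    {'id': 'other', 'allowed_address_pairs': []},
]
print(virtual_parents(ports, '10.0.0.55'))  # ['backup', 'primary']
```

Deleting a parent port or clearing its address pairs simply removes it from the computed set, which is what the delete/unset branches of the tests verify.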
@ -521,7 +521,7 @@ class TestRouter(base.TestOVNFunctionalBase):
        self.l3_plugin.schedule_unhosted_gateways()

        # As Chassis4 has been removed, all gateways that were
        # hosted there are now masters on chassis5 and have
        # hosted there are now primaries on chassis5 and have
        # priority 1.
        self.assertEqual({1: 20}, _get_result_dict()[chassis5])

@ -29,78 +29,78 @@ PRIORITY_RPC = 0

class TestExclusiveResourceProcessor(base.BaseTestCase):

    def test_i_am_master(self):
        master = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_master = queue.ExclusiveResourceProcessor(FAKE_ID)
        master_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)
        not_master_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)
    def test_i_am_primary(self):
        primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        primary_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)
        not_primary_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)

        self.assertTrue(master._i_am_master())
        self.assertFalse(not_master._i_am_master())
        self.assertTrue(master_2._i_am_master())
        self.assertFalse(not_master_2._i_am_master())
        self.assertTrue(primary._i_am_primary())
        self.assertFalse(not_primary._i_am_primary())
        self.assertTrue(primary_2._i_am_primary())
        self.assertFalse(not_primary_2._i_am_primary())

        master.__exit__(None, None, None)
        master_2.__exit__(None, None, None)
        primary.__exit__(None, None, None)
        primary_2.__exit__(None, None, None)

    def test_master(self):
        master = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_master = queue.ExclusiveResourceProcessor(FAKE_ID)
        master_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)
        not_master_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)
    def test_primary(self):
        primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        primary_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)
        not_primary_2 = queue.ExclusiveResourceProcessor(FAKE_ID_2)

        self.assertEqual(master, master._master)
        self.assertEqual(master, not_master._master)
        self.assertEqual(master_2, master_2._master)
        self.assertEqual(master_2, not_master_2._master)
        self.assertEqual(primary, primary._primary)
        self.assertEqual(primary, not_primary._primary)
        self.assertEqual(primary_2, primary_2._primary)
        self.assertEqual(primary_2, not_primary_2._primary)

        master.__exit__(None, None, None)
        master_2.__exit__(None, None, None)
        primary.__exit__(None, None, None)
        primary_2.__exit__(None, None, None)

    def test__enter__(self):
        self.assertNotIn(FAKE_ID, queue.ExclusiveResourceProcessor._masters)
        master = queue.ExclusiveResourceProcessor(FAKE_ID)
        master.__enter__()
        self.assertIn(FAKE_ID, queue.ExclusiveResourceProcessor._masters)
        master.__exit__(None, None, None)
        self.assertNotIn(FAKE_ID, queue.ExclusiveResourceProcessor._primaries)
        primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        primary.__enter__()
        self.assertIn(FAKE_ID, queue.ExclusiveResourceProcessor._primaries)
        primary.__exit__(None, None, None)

    def test__exit__(self):
        master = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_master = queue.ExclusiveResourceProcessor(FAKE_ID)
        master.__enter__()
        self.assertIn(FAKE_ID, queue.ExclusiveResourceProcessor._masters)
        not_master.__enter__()
        not_master.__exit__(None, None, None)
        self.assertIn(FAKE_ID, queue.ExclusiveResourceProcessor._masters)
        master.__exit__(None, None, None)
        self.assertNotIn(FAKE_ID, queue.ExclusiveResourceProcessor._masters)
        primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        primary.__enter__()
        self.assertIn(FAKE_ID, queue.ExclusiveResourceProcessor._primaries)
        not_primary.__enter__()
        not_primary.__exit__(None, None, None)
        self.assertIn(FAKE_ID, queue.ExclusiveResourceProcessor._primaries)
        primary.__exit__(None, None, None)
        self.assertNotIn(FAKE_ID, queue.ExclusiveResourceProcessor._primaries)

    def test_data_fetched_since(self):
        master = queue.ExclusiveResourceProcessor(FAKE_ID)
        primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        self.assertEqual(datetime.datetime.min,
                         master._get_resource_data_timestamp())
                         primary._get_resource_data_timestamp())

        ts1 = datetime.datetime.utcnow() - datetime.timedelta(seconds=10)
        ts2 = datetime.datetime.utcnow()

        master.fetched_and_processed(ts2)
        self.assertEqual(ts2, master._get_resource_data_timestamp())
        master.fetched_and_processed(ts1)
        self.assertEqual(ts2, master._get_resource_data_timestamp())
        primary.fetched_and_processed(ts2)
        self.assertEqual(ts2, primary._get_resource_data_timestamp())
        primary.fetched_and_processed(ts1)
        self.assertEqual(ts2, primary._get_resource_data_timestamp())

        master.__exit__(None, None, None)
        primary.__exit__(None, None, None)

    def test_updates(self):
        master = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_master = queue.ExclusiveResourceProcessor(FAKE_ID)
        primary = queue.ExclusiveResourceProcessor(FAKE_ID)
        not_primary = queue.ExclusiveResourceProcessor(FAKE_ID)

        master.queue_update(queue.ResourceUpdate(FAKE_ID, 0))
        not_master.queue_update(queue.ResourceUpdate(FAKE_ID, 0))
        primary.queue_update(queue.ResourceUpdate(FAKE_ID, 0))
        not_primary.queue_update(queue.ResourceUpdate(FAKE_ID, 0))

        for update in not_master.updates():
            raise Exception("Only the master should process a resource")
        for update in not_primary.updates():
            raise Exception("Only the primary should process a resource")

        self.assertEqual(2, len([i for i in master.updates()]))
        self.assertEqual(2, len([i for i in primary.updates()]))

    def test_hit_retry_limit(self):
        tries = 1

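The pattern these tests exercise is worth spelling out: the first processor created for a resource ID becomes the exclusive (primary) processor, later instances for the same ID merely feed its queue, and only the primary drains updates. A minimal self-contained sketch of that pattern (not Neutron's `queue.ExclusiveResourceProcessor`, which also tracks timestamps and priorities):

```python
class ExclusiveResourceProcessor:
    """First instance per resource ID is the primary; others feed it."""

    _primaries = {}

    def __init__(self, resource_id):
        self._resource_id = resource_id
        if resource_id not in self._primaries:
            self._primaries[resource_id] = self
            self._queue = []

    def _i_am_primary(self):
        return self is self._primaries[self._resource_id]

    @property
    def _primary(self):
        return self._primaries[self._resource_id]

    def queue_update(self, update):
        # All instances enqueue onto the primary's queue.
        self._primary._queue.append(update)

    def updates(self):
        # Only the primary yields anything; non-primaries stay silent.
        if self._i_am_primary():
            while self._queue:
                yield self._queue.pop(0)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if self._i_am_primary():
            del self._primaries[self._resource_id]


primary = ExclusiveResourceProcessor('router-1')
not_primary = ExclusiveResourceProcessor('router-1')
primary.queue_update('u1')
not_primary.queue_update('u2')
assert not list(not_primary.updates())          # only the primary drains
assert list(primary.updates()) == ['u1', 'u2']  # both updates land here
```

This is exactly why `test_updates` raises if a non-primary ever yields an update and why `__exit__` must release the primary slot.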
@ -230,7 +230,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        non_existent_router = 42

        # Make sure the exceptional code path has coverage
        agent.enqueue_state_change(non_existent_router, 'master')
        agent.enqueue_state_change(non_existent_router, 'primary')

    def _enqueue_state_change_transitions(self, transitions, num_called):
        self.conf.set_override('ha_vrrp_advert_int', 1)
@ -252,17 +252,17 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        else:
            mock_get_router_info.assert_not_called()

    def test_enqueue_state_change_from_none_to_master(self):
        self._enqueue_state_change_transitions(['master'], 1)
    def test_enqueue_state_change_from_none_to_primary(self):
        self._enqueue_state_change_transitions(['primary'], 1)

    def test_enqueue_state_change_from_none_to_backup(self):
        self._enqueue_state_change_transitions(['backup'], 1)

    def test_enqueue_state_change_from_none_to_master_to_backup(self):
        self._enqueue_state_change_transitions(['master', 'backup'], 0)
    def test_enqueue_state_change_from_none_to_primary_to_backup(self):
        self._enqueue_state_change_transitions(['primary', 'backup'], 0)

    def test_enqueue_state_change_from_none_to_backup_to_master(self):
        self._enqueue_state_change_transitions(['backup', 'master'], 2)
    def test_enqueue_state_change_from_none_to_backup_to_primary(self):
        self._enqueue_state_change_transitions(['backup', 'primary'], 2)

    def test_enqueue_state_change_metadata_disable(self):
        self.conf.set_override('enable_metadata_proxy', False)
@ -272,7 +272,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        router_info = mock.MagicMock()
        agent.router_info[router.id] = router_info
        agent._update_metadata_proxy = mock.Mock()
        agent.enqueue_state_change(router.id, 'master')
        agent.enqueue_state_change(router.id, 'primary')
        eventlet.sleep(self.conf.ha_vrrp_advert_int + 2)
        self.assertFalse(agent._update_metadata_proxy.call_count)

@ -284,11 +284,11 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        router_info.agent = agent
        agent.router_info[router.id] = router_info
        agent.l3_ext_manager.ha_state_change = mock.Mock()
        agent.enqueue_state_change(router.id, 'master')
        agent.enqueue_state_change(router.id, 'primary')
        eventlet.sleep(self.conf.ha_vrrp_advert_int + 2)
        agent.l3_ext_manager.ha_state_change.assert_called_once_with(
            agent.context,
            {'router_id': router.id, 'state': 'master',
            {'router_id': router.id, 'state': 'primary',
             'host': agent.host})

    def test_enqueue_state_change_router_active_ha(self):
@ -300,7 +300,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        ) as spawn_metadata_proxy, mock.patch.object(
            agent.metadata_driver, 'destroy_monitored_metadata_proxy'
        ) as destroy_metadata_proxy:
            agent._update_metadata_proxy(router_info, "router_id", "master")
            agent._update_metadata_proxy(router_info, "router_id", "primary")
            spawn_metadata_proxy.assert_called()
            destroy_metadata_proxy.assert_not_called()

@ -337,7 +337,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        router_info = l3router.RouterInfo(agent, _uuid(), {}, **self.ri_kwargs)
        if gw_port_id:
            router_info.ex_gw_port = {'id': gw_port_id}
        expected_forwarding_state = state == 'master'
        expected_forwarding_state = state == 'primary'
        with mock.patch.object(
            router_info.driver, "configure_ipv6_forwarding"
        ) as configure_ipv6_forwarding, mock.patch.object(
@ -345,7 +345,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
        ) as configure_ipv6_on_gw:
            agent._configure_ipv6_params(router_info, state)

            if state == 'master':
            if state == 'primary':
                configure_ipv6_forwarding.assert_called_once_with(
                    router_info.ns_name, 'all', expected_forwarding_state)
            else:
@ -360,30 +360,30 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
            else:
                configure_ipv6_on_gw.assert_not_called()

    def test__configure_ipv6_params_master(self):
        self._test__configure_ipv6_params_helper('master', gw_port_id=_uuid())
    def test__configure_ipv6_params_primary(self):
        self._test__configure_ipv6_params_helper('primary', gw_port_id=_uuid())

    def test__configure_ipv6_params_backup(self):
        self._test__configure_ipv6_params_helper('backup', gw_port_id=_uuid())

    def test__configure_ipv6_params_master_no_gw_port(self):
        self._test__configure_ipv6_params_helper('master', gw_port_id=None)
    def test__configure_ipv6_params_primary_no_gw_port(self):
        self._test__configure_ipv6_params_helper('primary', gw_port_id=None)

    def test__configure_ipv6_params_backup_no_gw_port(self):
        self._test__configure_ipv6_params_helper('backup', gw_port_id=None)

    def test_check_ha_state_for_router_master_standby(self):
    def test_check_ha_state_for_router_primary_standby(self):
        agent = l3_agent.L3NATAgent(HOSTNAME, self.conf)
        router = mock.Mock()
        router.id = '1234'
        router_info = mock.MagicMock()
        agent.router_info[router.id] = router_info
        router_info.ha_state = 'master'
        router_info.ha_state = 'primary'
        with mock.patch.object(agent.state_change_notifier,
                               'queue_event') as queue_event:
            agent.check_ha_state_for_router(
                router.id, lib_constants.HA_ROUTER_STATE_STANDBY)
            queue_event.assert_called_once_with((router.id, 'master'))
            queue_event.assert_called_once_with((router.id, 'primary'))

    def test_check_ha_state_for_router_standby_standby(self):
        agent = l3_agent.L3NATAgent(HOSTNAME, self.conf)
@ -3275,7 +3275,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
            ri.get_internal_device_name,
            self.conf)
        if enable_ha:
            agent.pd.routers[router['id']]['master'] = False
            agent.pd.routers[router['id']]['primary'] = False
        return agent, router, ri

    def _pd_remove_gw_interface(self, intfs, agent, ri):

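The `_update_metadata_proxy` tests above pin down a simple decision: on transition to the primary state the agent spawns the metadata proxy, on backup it tears it down, and nothing happens when the proxy is disabled. A hypothetical sketch of that decision (names and return values are illustrative, not the agent's API):

```python
def metadata_proxy_action(state, enable_metadata_proxy=True):
    # Decide what to do with the metadata proxy on an HA state change:
    # 'primary' -> spawn it, anything else -> destroy it, and do nothing
    # at all when the metadata proxy is disabled in the configuration.
    if not enable_metadata_proxy:
        return None
    return 'spawn' if state == 'primary' else 'destroy'


print(metadata_proxy_action('primary'))  # spawn
print(metadata_proxy_action('backup'))   # destroy
```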
@ -833,7 +833,7 @@ class TestDvrRouterOperations(base.BaseTestCase):
        fip_cidr = '11.22.33.44/24'

        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
        ri.is_router_master = mock.Mock(return_value=False)
        ri.is_router_primary = mock.Mock(return_value=False)
        ri._add_vip = mock.Mock()
        interface_name = ri.get_snat_external_device_interface_name(
            ri.get_ex_gw_port())
@ -844,7 +844,7 @@ class TestDvrRouterOperations(base.BaseTestCase):
        router[lib_constants.HA_INTERFACE_KEY]['status'] = 'DOWN'
        self._set_ri_kwargs(agent, router['id'], router)
        ri_1 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
        ri_1.is_router_master = mock.Mock(return_value=True)
        ri_1.is_router_primary = mock.Mock(return_value=True)
        ri_1._add_vip = mock.Mock()
        interface_name = ri_1.get_snat_external_device_interface_name(
            ri_1.get_ex_gw_port())
@ -855,7 +855,7 @@ class TestDvrRouterOperations(base.BaseTestCase):
        router[lib_constants.HA_INTERFACE_KEY]['status'] = 'ACTIVE'
        self._set_ri_kwargs(agent, router['id'], router)
        ri_2 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
        ri_2.is_router_master = mock.Mock(return_value=True)
        ri_2.is_router_primary = mock.Mock(return_value=True)
        ri_2._add_vip = mock.Mock()
        interface_name = ri_2.get_snat_external_device_interface_name(
            ri_2.get_ex_gw_port())
@ -877,14 +877,14 @@ class TestDvrRouterOperations(base.BaseTestCase):
        fip_cidr = '11.22.33.44/24'

        ri = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
        ri.is_router_master = mock.Mock(return_value=False)
        ri.is_router_primary = mock.Mock(return_value=False)
        ri._remove_vip = mock.Mock()
        ri.remove_centralized_floatingip(fip_cidr)
        ri._remove_vip.assert_called_once_with(fip_cidr)
        super_remove_centralized_floatingip.assert_not_called()

        ri1 = dvr_edge_ha_rtr.DvrEdgeHaRouter(HOSTNAME, [], **self.ri_kwargs)
        ri1.is_router_master = mock.Mock(return_value=True)
        ri1.is_router_primary = mock.Mock(return_value=True)
        ri1._remove_vip = mock.Mock()
        ri1.remove_centralized_floatingip(fip_cidr)
        ri1._remove_vip.assert_called_once_with(fip_cidr)
@ -922,7 +922,7 @@ class TestDvrRouterOperations(base.BaseTestCase):
        ri._ha_state_path = self.get_temp_file_path('router_ha_state')

        with open(ri._ha_state_path, "w") as f:
            f.write("master")
            f.write("primary")

        ri._create_snat_namespace = mock.Mock()
        ri.update_initial_state = mock.Mock()

@ -138,8 +138,8 @@ class TestBasicRouterOperations(base.BaseTestCase):
            lib_fixtures.OpenFixture('ha_state', read_return)).mock_open
        self.assertEqual(expected, ri.ha_state)

    def test_ha_state_master(self):
        self._test_ha_state('master', 'master')
    def test_ha_state_primary(self):
        self._test_ha_state('primary', 'primary')

    def test_ha_state_unknown(self):
        # an empty state file should yield 'unknown'

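The `ha_state` tests above read the role from a small state file that keepalived notifications keep up to date. A simplified sketch of that lookup (file layout and fallback behaviour as suggested by the tests; illustrative, not the agent's code):

```python
import os
import tempfile


def read_ha_state(path):
    # The state manager persists 'primary' or 'backup' to a file; an
    # empty or missing file yields 'unknown'.
    try:
        with open(path) as f:
            state = f.read().strip()
    except OSError:
        return 'unknown'
    return state or 'unknown'


with tempfile.TemporaryDirectory() as d:
    state_path = os.path.join(d, 'ha_state')
    print(read_ha_state(state_path))  # unknown (missing file)
    with open(state_path, 'w') as f:
        f.write('primary')
    print(read_ha_state(state_path))  # primary
```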
@ -183,26 +183,26 @@ class FakePortContext(api.PortContext):
|
||||
|
||||
class MechDriverConfFixture(config_fixture.Config):
|
||||
|
||||
def __init__(self, conf=cfg.CONF, blacklist_cfg=None,
|
||||
def __init__(self, conf=cfg.CONF, prohibit_list_cfg=None,
|
||||
registration_func=None):
|
||||
"""ConfigFixture for vnic_type_blacklist
|
||||
"""ConfigFixture for vnic_type_prohibit_list
|
||||
|
||||
:param conf: The driver configuration object
|
||||
:param blacklist_cfg: A dictionary in the form
|
||||
:param prohibit_list_cfg: A dictionary in the form
|
||||
{'group': {'opt': 'value'}}, i.e.:
|
||||
{'OVS_DRIVER': {'vnic_type_blacklist':
|
||||
{'OVS_DRIVER': {'vnic_type_prohibit_list':
|
||||
['foo']}}
|
||||
:param registration_func: The method which do the config group's
|
||||
registration.
|
||||
"""
|
||||
super(MechDriverConfFixture, self).__init__(conf)
|
||||
self.blacklist_cfg = blacklist_cfg
|
||||
self.prohibit_list_cfg = prohibit_list_cfg
|
||||
self.registration_func = registration_func
|
||||
|
||||
def setUp(self):
|
||||
super(MechDriverConfFixture, self).setUp()
|
||||
self.registration_func(self.conf)
|
||||
for group, option in self.blacklist_cfg.items():
|
||||
for group, option in self.prohibit_list_cfg.items():
|
||||
self.config(group=group, **option)
|
||||
|
||||
|
||||
|
@ -234,9 +234,9 @@ class SriovSwitchMechVnicTypesTestCase(SriovNicSwitchMechanismBaseTestCase):
|
||||
portbindings.VNIC_DIRECT,
|
||||
portbindings.VNIC_MACVTAP,
|
||||
portbindings.VNIC_DIRECT_PHYSICAL]
|
||||
self.blacklist_cfg = {
|
||||
self.prohibit_list_cfg = {
|
||||
'SRIOV_DRIVER': {
|
||||
'vnic_type_blacklist': []
|
||||
'vnic_type_prohibit_list': []
|
||||
}
|
||||
}
|
||||
super(SriovSwitchMechVnicTypesTestCase, self).setUp()
|
||||
@ -250,13 +250,13 @@ class SriovSwitchMechVnicTypesTestCase(SriovNicSwitchMechanismBaseTestCase):
|
||||
self.override_vnic_types,
|
||||
self.driver_with_vnic_types.supported_vnic_types)
|
||||
|
||||
def test_vnic_type_blacklist_valid_item(self):
|
||||
self.blacklist_cfg['SRIOV_DRIVER']['vnic_type_blacklist'] = \
|
||||
def test_vnic_type_prohibit_list_valid_item(self):
|
||||
self.prohibit_list_cfg['SRIOV_DRIVER']['vnic_type_prohibit_list'] = \
|
||||
[portbindings.VNIC_MACVTAP]
|
||||
|
||||
fake_conf = cfg.CONF
|
||||
fake_conf_fixture = base.MechDriverConfFixture(
|
||||
fake_conf, self.blacklist_cfg,
|
||||
fake_conf, self.prohibit_list_cfg,
|
||||
mech_sriov_conf.register_sriov_mech_driver_opts)
|
||||
self.useFixture(fake_conf_fixture)
|
||||
|
||||
@@ -268,26 +268,27 @@ class SriovSwitchMechVnicTypesTestCase(SriovNicSwitchMechanismBaseTestCase):
         self.assertEqual(len(self.default_supported_vnics) - 1,
                          len(supported_vnic_types))

-    def test_vnic_type_blacklist_not_valid_item(self):
-        self.blacklist_cfg['SRIOV_DRIVER']['vnic_type_blacklist'] = ['foo']
+    def test_vnic_type_prohibit_list_not_valid_item(self):
+        self.prohibit_list_cfg['SRIOV_DRIVER']['vnic_type_prohibit_list'] = \
+            ['foo']
         fake_conf = cfg.CONF
         fake_conf_fixture = base.MechDriverConfFixture(
-            fake_conf, self.blacklist_cfg,
+            fake_conf, self.prohibit_list_cfg,
             mech_sriov_conf.register_sriov_mech_driver_opts)
         self.useFixture(fake_conf_fixture)

         self.assertRaises(ValueError,
                           mech_driver.SriovNicSwitchMechanismDriver)

-    def test_vnic_type_blacklist_all_items(self):
-        self.blacklist_cfg['SRIOV_DRIVER']['vnic_type_blacklist'] = \
+    def test_vnic_type_prohibit_list_all_items(self):
+        self.prohibit_list_cfg['SRIOV_DRIVER']['vnic_type_prohibit_list'] = \
             [portbindings.VNIC_DIRECT,
              portbindings.VNIC_MACVTAP,
              portbindings.VNIC_DIRECT_PHYSICAL]

         fake_conf = cfg.CONF
         fake_conf_fixture = base.MechDriverConfFixture(
-            fake_conf, self.blacklist_cfg,
+            fake_conf, self.prohibit_list_cfg,
             mech_sriov_conf.register_sriov_mech_driver_opts)
         self.useFixture(fake_conf_fixture)
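The contract these SRIOV tests pin down can be sketched in isolation. The helper below is hypothetical (the real driver wires the option through oslo.config and the ML2 plugin, and `filter_vnic_types` does not exist in Neutron), but it mirrors the semantics the assertions check: valid entries shrink `supported_vnic_types`, unknown entries raise `ValueError`.

```python
# Hypothetical stand-in for the driver's prohibit-list handling; the
# function name and structure are illustrative, not Neutron code.
VNIC_DIRECT = 'direct'
VNIC_MACVTAP = 'macvtap'
VNIC_DIRECT_PHYSICAL = 'direct-physical'


def filter_vnic_types(supported, prohibited):
    """Return supported VNIC types minus the prohibited ones."""
    unknown = set(prohibited) - set(supported)
    if unknown:
        # mirrors the assertRaises(ValueError, ...) case for ['foo']
        raise ValueError('Unknown VNIC types: %s' % sorted(unknown))
    return [v for v in supported if v not in set(prohibited)]


supported = [VNIC_DIRECT, VNIC_MACVTAP, VNIC_DIRECT_PHYSICAL]
print(filter_vnic_types(supported, [VNIC_MACVTAP]))
# ['direct', 'direct-physical']
```

Prohibiting one valid type leaves the other types intact, which is exactly what the `len(...) - 1` assertion above verifies.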
@@ -335,9 +335,9 @@ class OpenvswitchMechVnicTypesTestCase(OpenvswitchMechanismBaseTestCase):
                        portbindings.VNIC_SMARTNIC]

     def setUp(self):
-        self.blacklist_cfg = {
+        self.prohibit_list_cfg = {
             'OVS_DRIVER': {
-                'vnic_type_blacklist': []
+                'vnic_type_prohibit_list': []
             }
         }
         self.default_supported_vnics = self.supported_vnics
@@ -347,13 +347,13 @@ class OpenvswitchMechVnicTypesTestCase(OpenvswitchMechanismBaseTestCase):
         self.assertEqual(self.default_supported_vnics,
                          self.driver.supported_vnic_types)

-    def test_vnic_type_blacklist_valid_item(self):
-        self.blacklist_cfg['OVS_DRIVER']['vnic_type_blacklist'] = \
+    def test_vnic_type_prohibit_list_valid_item(self):
+        self.prohibit_list_cfg['OVS_DRIVER']['vnic_type_prohibit_list'] = \
             [portbindings.VNIC_DIRECT]

         fake_conf = cfg.CONF
         fake_conf_fixture = base.MechDriverConfFixture(
-            fake_conf, self.blacklist_cfg,
+            fake_conf, self.prohibit_list_cfg,
             mech_ovs_conf.register_ovs_mech_driver_opts)
         self.useFixture(fake_conf_fixture)
@@ -364,24 +364,25 @@ class OpenvswitchMechVnicTypesTestCase(OpenvswitchMechanismBaseTestCase):
         self.assertEqual(len(self.default_supported_vnics) - 1,
                          len(supported_vnic_types))

-    def test_vnic_type_blacklist_not_valid_item(self):
-        self.blacklist_cfg['OVS_DRIVER']['vnic_type_blacklist'] = ['foo']
+    def test_vnic_type_prohibit_list_not_valid_item(self):
+        self.prohibit_list_cfg['OVS_DRIVER']['vnic_type_prohibit_list'] = \
+            ['foo']

         fake_conf = cfg.CONF
         fake_conf_fixture = base.MechDriverConfFixture(
-            fake_conf, self.blacklist_cfg,
+            fake_conf, self.prohibit_list_cfg,
             mech_ovs_conf.register_ovs_mech_driver_opts)
         self.useFixture(fake_conf_fixture)

         self.assertRaises(ValueError,
                           mech_openvswitch.OpenvswitchMechanismDriver)

-    def test_vnic_type_blacklist_all_items(self):
-        self.blacklist_cfg['OVS_DRIVER']['vnic_type_blacklist'] = \
+    def test_vnic_type_prohibit_list_all_items(self):
+        self.prohibit_list_cfg['OVS_DRIVER']['vnic_type_prohibit_list'] = \
             self.supported_vnics
         fake_conf = cfg.CONF
         fake_conf_fixture = base.MechDriverConfFixture(
-            fake_conf, self.blacklist_cfg,
+            fake_conf, self.prohibit_list_cfg,
             mech_ovs_conf.register_ovs_mech_driver_opts)
         self.useFixture(fake_conf_fixture)
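Read together, the OVS cases follow the same contract as the SRIOV ones: a valid prohibit list shrinks the driver's `supported_vnic_types`, and an unknown entry makes driver construction fail. A minimal pure-Python model of that contract (`FakeOVSDriver` is invented for illustration; the real `OpenvswitchMechanismDriver` resolves the option via oslo.config, not a plain dict):

```python
# Illustrative model only; not the actual mechanism driver.
SUPPORTED_VNICS = ['normal', 'direct', 'macvtap', 'direct-physical',
                   'smart-nic']


class FakeOVSDriver:
    def __init__(self, conf):
        prohibited = conf['OVS_DRIVER']['vnic_type_prohibit_list']
        unknown = set(prohibited) - set(SUPPORTED_VNICS)
        if unknown:
            # matches assertRaises(ValueError, ...) for ['foo'] above
            raise ValueError('Unsupported VNIC types: %s' % sorted(unknown))
        self.supported_vnic_types = [
            v for v in SUPPORTED_VNICS if v not in set(prohibited)]


driver = FakeOVSDriver({'OVS_DRIVER': {'vnic_type_prohibit_list': ['direct']}})
print(driver.supported_vnic_types)
# ['normal', 'macvtap', 'direct-physical', 'smart-nic']
```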
@@ -1415,7 +1415,7 @@ class TestOVNL3RouterPlugin(test_mech_driver.Ml2PluginV2TestCase):
         self.nb_idl().get_gateway_chassis_binding.side_effect = (
             existing_port_bindings)
         # for 1. port schedule untouched, add only 3'rd chassis
-        # for 2. port master scheduler somewhere else
+        # for 2. port primary scheduler somewhere else
         # for 3. port schedule all
         self.mock_schedule.side_effect = [
             ['chassis1', 'chassis2', 'chassis3'],
@@ -1435,7 +1435,7 @@ class TestOVNL3RouterPlugin(test_mech_driver.Ml2PluginV2TestCase):
                       'lrp-foo-2', [], ['chassis2']),
             mock.call(self.nb_idl(), self.sb_idl(),
                       'lrp-foo-3', [], [])])
-        # make sure that for second port master chassis stays untouched
+        # make sure that for second port primary chassis stays untouched
        self.nb_idl().update_lrouter_port.assert_has_calls([
            mock.call('lrp-foo-1',
                      gateway_chassis=['chassis1', 'chassis2', 'chassis3']),
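The scheduling rule those comments describe can be paraphrased in a few lines. This toy function is an assumption-laden illustration, not the OVN L3 scheduler: existing bindings (primary chassis first) are preserved, and only missing candidates are appended up to the chassis limit.

```python
# Toy illustration of "primary chassis stays untouched" scheduling;
# reschedule() is invented here and is not part of the OVN driver.
def reschedule(existing, candidates, max_chassis=3):
    result = list(existing)          # keep current bindings, primary first
    for chassis in candidates:
        if chassis not in result and len(result) < max_chassis:
            result.append(chassis)   # only fill the remaining slots
    return result


# like the "1. port" case: primary untouched, only a third chassis added
print(reschedule(['chassis1', 'chassis2'], ['chassis3', 'chassis4']))
# ['chassis1', 'chassis2', 'chassis3']
```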
@@ -1,9 +1,9 @@
 ---
 features:
   - Keepalived VRRP health check functionality to enable verification of
-    connectivity from the "master" router to all gateways. Activation of this
+    connectivity from the "primary" router to all gateways. Activation of this
     feature enables gateway connectivity validation and rescheduling of the
-    "master" router to another node when connectivity is lost. If all routers
+    "primary" router to another node when connectivity is lost. If all routers
     lose connectivity to the gateways, the election process will be repeated
     round-robin until one of the routers restores its gateway connection. In
-    the mean time, all of the routers will be reported as "master".
+    the mean time, all of the routers will be reported as "primary".
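The round-robin election in that note can be sketched as a toy loop. Everything here is illustrative (keepalived drives this with VRRP priorities and heartbeats, not Python): routers cycle through the primary role until one of them can reach its gateway again, and if none can, every router keeps reporting itself as primary.

```python
from itertools import cycle


# Toy model of the round-robin re-election described above; not keepalived.
def elect_primary(routers, can_reach_gateway, max_rounds=10):
    for _, router in zip(range(max_rounds), cycle(routers)):
        if can_reach_gateway(router):
            return router        # this router keeps the primary role
    return None                  # all isolated: each keeps reporting primary


print(elect_primary(['r1', 'r2', 'r3'], lambda r: r == 'r2'))
# r2
```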
releasenotes/notes/improve-terminology-d69d7549b79dff5d.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
+---
+deprecations:
+  - |
+    Terminology such as ``master`` and ``slave`` have been replaced with
+    more inclusive words, such as ``primary`` and ``backup`` wherever
+    possible.
+
+    The configuration option ``vnic_type_blacklist`` has been deprecated
+    for both the OpenvSwitch and SRIOV mechanism drivers, and replaced with
+    ``vnic_type_prohibit_list``. They will be removed in a future release.
@@ -2,8 +2,8 @@
 other:
   - |
     Add new configuration group ``ovs_driver`` and new configuration option
-    under it ``vnic_type_blacklist``, to make the previously hardcoded
+    under it ``vnic_type_prohibit_list``, to make the previously hardcoded
     ``supported_vnic_types`` parameter of the OpenvswitchMechanismDriver
     configurable.
-    The ``vnic_types`` listed in the blacklist will be removed from the
+    The ``vnic_types`` listed in the prohibit list will be removed from the
     supported_vnic_types list.
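For deployers, a hypothetical ML2 configuration fragment using the renamed option could look like this (the group and option names follow the release note; the file location and the listed values are examples, not defaults):

```ini
[ovs_driver]
# VNIC types to drop from the OVS mechanism driver's supported_vnic_types
vnic_type_prohibit_list = direct,macvtap
```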
@@ -2,8 +2,8 @@
 other:
   - |
     Add new configuration group ``sriov_driver`` and new configuration option
-    under it ``vnic_type_blacklist``, to make the previously hardcoded
+    under it ``vnic_type_prohibit_list``, to make the previously hardcoded
     ``supported_vnic_types`` parameter of the SriovNicSwitchMechanismDriver
     configurable.
-    The ``vnic_types`` listed in the blacklist will be removed from the
+    The ``vnic_types`` listed in the prohibit list will be removed from the
     supported_vnic_types list.
@@ -5,7 +5,7 @@ upgrade:
     security group rules based on the protocol given, instead of
     relying on the backend firewall driver to do this enforcement,
     typically silently ignoring the port option in the rule. The
-    valid set of whitelisted protocols that support ports are TCP,
+    valid set of allowed protocols that support ports are TCP,
     UDP, UDPLITE, SCTP and DCCP. Ports used with other protocols
     will now generate an HTTP 400 error. For more information, see
     bug `1818385 <https://bugs.launchpad.net/neutron/+bug/1818385>`_.
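The enforcement that note describes amounts to a small check on each rule. This sketch is illustrative only (Neutron's actual validator lives in the API layer and surfaces the failure as an HTTP 400 response, not a Python exception):

```python
# Illustrative only: protocols that accept L4 port ranges in SG rules,
# per the release note above.
PORT_PROTOCOLS = {'tcp', 'udp', 'udplite', 'sctp', 'dccp'}


def validate_rule(protocol, port_range_min=None, port_range_max=None):
    """Reject port ranges on protocols that do not support ports."""
    has_ports = port_range_min is not None or port_range_max is not None
    if has_ports and protocol.lower() not in PORT_PROTOCOLS:
        # the API maps this situation to an HTTP 400 error
        raise ValueError('protocol %s does not support ports' % protocol)
    return True


print(validate_rule('tcp', 80, 80))
# True
```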
tox.ini (2 lines changed)
@@ -200,7 +200,7 @@ import_exceptions = neutron._i18n
 [testenv:bandit]
 envdir = {toxworkdir}/shared
 # B104: Possible binding to all interfaces
-# B303: blacklist calls: md5, sha1
+# B303: prohibit list calls: md5, sha1
 # B311: Standard pseudo-random generators are not suitable for security/cryptographic purpose
 # B604: any_other_function_with_shell_equals_true
 deps = -r{toxinidir}/test-requirements.txt
@@ -217,13 +217,13 @@
         tempest_test_regex: ""
         # TODO(slaweq): remove tests from
         # tempest.scenario.test_network_v6.TestGettingAddress module from
-        # blacklist when bug https://bugs.launchpad.net/neutron/+bug/1863577 will
-        # be fixed
+        # prohibit list when bug https://bugs.launchpad.net/neutron/+bug/1863577
+        # will be fixed
         # TODO(mjozefcz): The test test_port_security_macspoofing_port
         # and related bug https://bugs.launchpad.net/tempest/+bug/1728886
         # are fixed in Core-OVN, but tempest-slow job uses stable release of
         # core OVN now and thats why it is still failing in this job.
-        # Remove this blacklist when OVN 20.06 will be releaseed and consumed.
+        # Remove this prohibit list when OVN 20.06 will be releaseed and consumed.
         # In addition: on next PTG we will discuss the rules of running specific
         # jobs with OVN master and OVN release branches. Please consider
         # specyfing explicitely the version of OVN in tempest-slow jobs.