Do not apply NoExecute taint to AIO hosts when locked

When a Kubernetes host is locked, the VIM applies the NoExecute
taint to the node, which causes pods to be evicted. However, in
the AIO simplex case there is nowhere else for the pods to go,
and when the host is unlocked, pods cannot start until the VIM
removes the NoExecute taint. The problem is that when the
OpenStack application is configured, the VIM requires the
rabbitmq pod to be running before the VIM can start, creating a
chicken-and-egg scenario. For now, we do not apply the
NoExecute taint on AIO hosts when they are locked, to sidestep
this scenario.

Story: 2003910
Task: 27852

Change-Id: Icd7c00b6f7fba9cdbf23b60e265e90e9271d5243
Signed-off-by: Bart Wensley <barton.wensley@windriver.com>
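
For background on the taint mechanics involved: the diff below calls the
plugin's own kubernetes_client.taint_node wrapper. A rough, illustrative
equivalent using the official kubernetes Python client might look like the
sketch below; the function name, kubeconfig loading, and node name are
assumptions for illustration, not the VIM's actual implementation.

# A minimal sketch of adding a NoExecute taint with the official
# kubernetes Python client (illustrative only; the VIM uses its own
# kubernetes_client wrapper).
from kubernetes import client, config

def taint_node(node_name, effect, key, value):
    # Mirrors taint_node(host_name, "NoExecute", "services", "disabled")
    # from the diff below.
    config.load_kube_config()  # assumption: a local kubeconfig is available
    core_v1 = client.CoreV1Api()
    node = core_v1.read_node(node_name)
    taints = node.spec.taints or []
    if any(t.key == key and t.effect == effect for t in taints):
        return  # taint already present; nothing to do
    taints.append(client.V1Taint(key=key, value=value, effect=effect))
    core_v1.patch_node(node_name, {"spec": {"taints": taints}})

A NoExecute taint both evicts running pods that lack a matching toleration
and prevents new pods from scheduling on the node, which is why an AIO
simplex host ends up with no running rabbitmq pod while locked.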

@@ -1419,21 +1419,37 @@ class NFVIInfrastructureAPI(nfvi.api.v1.NFVIInfrastructureAPI):
                     raise
 
             if self._host_supports_kubernetes(host_personality):
-                response['reason'] = 'failed to disable kubernetes services'
+                if 'controller' in host_personality and \
+                        'compute' in host_personality:
+                    # This is an AIO host (either simplex or duplex). For now,
+                    # we do not want to apply the NoExecute taint. When
+                    # the host reboots (e.g. on a lock/unlock), the VIM will
+                    # not initialize if it cannot register with rabbitmq
+                    # (which is running in a pod). But the VIM must first
+                    # remove the NoExecute taint, before that pod will run.
+                    # This is only necessary on AIO simplex hosts, but we have
+                    # no way to know whether the host is simplex or duplex
+                    # in this plugin. Long term, this decision will be moved to
+                    # the VIM, before invoking the plugin, once the plugins are
+                    # refactored into separate enable/disable functions for
+                    # nova, neutron, kubernetes, etc...
+                    pass
+                else:
+                    response['reason'] = 'failed to disable kubernetes services'
 
-                # To disable kubernetes we add the NoExecute taint to the
-                # node. This removes pods that can be scheduled elsewhere
-                # and prevents new pods from scheduling on the node.
-                future.work(kubernetes_client.taint_node,
-                            host_name, "NoExecute", "services", "disabled")
+                    # To disable kubernetes we add the NoExecute taint to the
+                    # node. This removes pods that can be scheduled elsewhere
+                    # and prevents new pods from scheduling on the node.
+                    future.work(kubernetes_client.taint_node,
+                                host_name, "NoExecute", "services", "disabled")
 
-                future.result = (yield)
+                    future.result = (yield)
 
-                if not future.result.is_complete():
-                    DLOG.error("Kubernetes taint_node failed, operation "
-                               "did not complete, host_uuid=%s, host_name=%s."
-                               % (host_uuid, host_name))
-                    return
+                    if not future.result.is_complete():
+                        DLOG.error("Kubernetes taint_node failed, operation "
+                                   "did not complete, host_uuid=%s, host_name=%s."
+                                   % (host_uuid, host_name))
+                        return
 
             response['completed'] = True
             response['reason'] = ''
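
For context, the new guard identifies AIO hosts purely by substring checks
on the personality value handed to the plugin. A standalone sketch of that
check follows; the personality strings are hypothetical examples, not values
taken from the source.

def is_aio_host(host_personality):
    # An AIO node carries both the controller and compute functions,
    # so both substrings appear in its personality value.
    return ('controller' in host_personality and
            'compute' in host_personality)

assert is_aio_host('controller,compute')  # AIO (simplex or duplex)
assert not is_aio_host('controller')      # standard controller
assert not is_aio_host('compute')         # dedicated compute node

As the added comment notes, this check cannot distinguish simplex from
duplex, so the taint is skipped on both until the decision moves up into
the VIM.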