This patch set increases the timeout to wait for resources to be
created/deleted. This is needed to better support spikes without
restarting the kuryr-controller. This patch also ensures that
future retry events do not affect the kuryr-controller if
they are retried once the related resources are already deleted,
i.e., when the on_delete event was executed before one of the retries.
Closes-Bug: 1847753
Change-Id: I725ba22f0babf496af219a37e42e1c33b247308a
LoadBalancerHandler._get_lbaas_spec is identical to
utils.get_lbaas_spec, which is used by LBaaSSpecHandler, so we can reuse
the function from utils instead of duplicating code.
Note that the k8s object passed is an Endpoints object in the case of
LoadBalancerHandler and a Service in the case of LBaaSSpecHandler (but
the same annotation is used in both cases, so a common function is enough).
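The shared helper can be sketched roughly like this (the function name and annotation key below are illustrative, not the actual kuryr identifiers):

```python
import json

def get_annotated_spec(k8s_obj, annotation_key):
    """Return the decoded spec stored under annotation_key, or None.

    Works for both Service and Endpoints objects, since both carry
    the spec under the same annotation key.
    """
    annotations = k8s_obj['metadata'].get('annotations', {})
    raw = annotations.get(annotation_key)
    if raw is None:
        return None
    return json.loads(raw)
```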
Change-Id: I124109f79bcdefcc4948eb35b4bbb4a9ca87c43b
Signed-off-by: Yash Gupta <y.gupta@samsung.com>
It can take a while for a pod to have annotations and a hostIP
defined. It is also possible that a pod is deleted before
the state is set, causing a NotFound k8s exception. Lastly,
a service might be missing the lbaas_spec annotation, causing
the event handled on the endpoints to crash. All these scenarios
can be avoided by raising a resource not ready exception, which
allows the operation to be retried.
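The pattern described above can be sketched as follows (ResourceNotReady mirrors the exception kuryr-kubernetes raises; the retry helper is a simplified illustration of the event framework's behavior):

```python
import time

class ResourceNotReady(Exception):
    """Raised when a k8s resource is missing the state we need; the
    event framework retries the handler instead of crashing."""

def get_pod_host_ip(pod):
    # A freshly created pod may not have status.hostIP populated yet.
    host_ip = pod.get('status', {}).get('hostIP')
    if not host_ip:
        raise ResourceNotReady(pod['metadata']['name'])
    return host_ip

def retry(handler, event, attempts=3, delay=0.0):
    # Minimal retry loop: re-run the handler until the resource is ready.
    for attempt in range(attempts):
        try:
            return handler(event)
        except ResourceNotReady:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```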
Change-Id: I5476cd4261a6118dbb388d7238e83169439ffe0d
When namespace subnet driver is used, a new subnet is created for
each new namespace. As pools are created per subnet, this patch
ensures that new ports are created for each pool for the new subnet
in the nested case.
Note this feature depends on using resource tagging to filter out
trunk ports in case of multiple clusters deployed on the same OpenStack
project or when other trunks are present. Otherwise it will consider
all the existing trunks regardless of whether they belong to the
Kubernetes cluster.
NOTE: this is only for nested case, where pooling shows the greatest
improvements as ports are already ACTIVE.
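The tag-based filtering could look roughly like this (the tag value and field names are illustrative):

```python
def cluster_trunks(trunks, cluster_tag):
    """Keep only the trunk ports tagged as belonging to this cluster.

    Without this filter, trunks from other clusters deployed on the
    same OpenStack project (or unrelated trunks) would be pooled too.
    """
    return [t for t in trunks if cluster_tag in t.get('tags', [])]
```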
Change-Id: Id014cf49da8d4cbe0c1795e47765fcf2f0684c09
The LBaaS SG update fails when the pods selected by the selector
in the rule block are removed after the pod on which the policy is
enforced is removed. This commit fixes the issue by changing from
LBaaSServiceSpec object to LBaaSLoadBalancer, which is the object
type expected by '_apply_members_security_groups' function.
Change-Id: I17f2f632e02bc0f46ccc7434173acce68aef957b
Closes-Bug: 1823022
This patch adds support for services that define the targetPort
as text (a port name) pointing to the same port number as the exposed
port defined on the svc.
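A sketch of the supported resolution, covering only the case this patch describes, where the named targetPort exposes the same number as the service port itself:

```python
def get_target_port(service_port):
    """Resolve targetPort when it is given as a name rather than a number.

    Supported case (per this patch): the named port points at the same
    number as the service's exposed port, so we fall back to 'port'.
    """
    target = service_port.get('targetPort', service_port['port'])
    if isinstance(target, str):
        return service_port['port']
    return target
```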
Closes-Bug: 1818969
Change-Id: I7f957d292f7c4a43b759292e5bd04c4db704c4c4
When a service is created with a Network Policy applied and
deployments are scaled up or down, the LBaaS SG rules should be
updated accordingly. Right now, the LBaaS/Service does not react to
deployment scaling.
This commit fixes the issue by ensuring that the LBaaS SG is updated
on pod events.
Also, when Pods, Network Policies and SVCs are created together it might
happen that the LBaaS SG keeps only the default SG rules, even though
the policy is being enforced. This commit ensures the right SG rules
are applied to a LBaaS regardless of the order of k8s resource creation.
This is done by setting the LBaaS Spec annotation whenever a request
to update the SG rules has been made and retrieving the Spec again
whenever a LBaaS member is created.
Change-Id: I1c54d17a5fcff5387ffae2b132f5036ee9bf07ca
Closes-Bug: 1816015
When a Network Policy is changed, services must also be updated,
deleting the unnecessary rules that no longer match the NP
and creating the needed ones.
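The reconciliation can be thought of as a set difference between the rules currently on the SG and the rules the updated policy requires (an illustrative sketch, not the actual kuryr code):

```python
def diff_sg_rules(current, desired):
    """Return (to_delete, to_create) given lists of SG rule dicts."""
    cur = {tuple(sorted(r.items())) for r in current}
    des = {tuple(sorted(r.items())) for r in desired}
    to_delete = [dict(r) for r in cur - des]  # stale rules to remove
    to_create = [dict(r) for r in des - cur]  # missing rules to add
    return to_delete, to_create
```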
Closes-Bug: #1811242
Partially Implements: blueprint k8s-network-policies
Change-Id: I800477d08fd1f46c2a94d3653496f8f1188a3844
This patch adds support for Network Policy on services. It
applies pods' security groups onto the services in front of them.
It makes the following assumptions:
- All the pods pointed to by one svc have the same labels, thus the same
SGs being enforced
- Only copies the SG rules that have the same protocol and direction
as the listener being created
- Adds a default rule to NP to enable traffic from services subnet CIDR
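The second assumption, copying only the matching rules, could be sketched as:

```python
def rules_for_listener(sg_rules, protocol, direction):
    """Copy only the SG rules whose protocol and direction match the
    listener being created (second assumption above; field names are
    illustrative)."""
    return [r for r in sg_rules
            if r.get('protocol') == protocol
            and r.get('direction') == direction]
```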
Partially Implements: blueprint k8s-network-policies
Change-Id: Ibd4b51ff40b69af26ab7e7b81d18e63abddf775b
This patch ensures the controller healthchecks do not set the
controller as not Ready due to missing CRDs when deploying without
namespace and/or policy handlers. In that case the CRDs are not needed.
Closes-Bug: 1808966
Change-Id: I685f9a47605da86504619983848b8ef73d71b332
Currently, if the number of neutron resources requested reaches
the quota, kuryr-controller is marked as unhealthy and restarted.
In order to avoid the constant restart of the pod, this patch adds
a new readiness check that verifies whether the resources used by
the enabled handlers are over quota.
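The core of such a readiness check reduces to a per-resource headroom comparison (a sketch; Neutron conventionally reports -1 for an unlimited quota):

```python
def has_headroom(quota, used, needed=1):
    """True if `needed` more resources fit under the quota.

    A quota of -1 means unlimited in Neutron.
    """
    if quota < 0:
        return True
    return used + needed <= quota
```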
Closes-Bug: 1804310
Change-Id: If4d42f866d2d64cae63736f4c206bedca039258b
This reverts commit f0cde86ee68027bd66597f2e4b8db4e10fa81e0b, as it
turns out that variable is actually used by kuryr-daemon. It also
fixes the way kuryr-controller detects its identity to match how the
leader-elector does it.
Change-Id: I95c2d3e1760a938d40d57a99fb87b6f02ca7f64a
Closes-Bug: 1798835
This commit adds an SR-IOV driver and a new type of VIF to handle
SR-IOV requests.
This driver could act as the primary and only driver, but only once
Kubernetes fully supports the CNI specification.
For now this driver can work together with a multi-VIF driver, e.g.
NPWGMultiVIFDriver
(see doc/source/installation/multi_vif_with_npwg_spec.rst).
This driver also relies on the Kubernetes SR-IOV device plugin.
This commit also adds a 'default_physnet_subnets' setting, which should
contain a mapping of physnets to Neutron subnet IDs; it is necessary to
determine a VIF's physnet (the subnet ID comes from the pod annotation).
To get details how to create pods with sriov interfaces see
doc/source/installation/sriov.rst
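Assuming the mapping is configured as physnet:subnet-id pairs (the exact config format is the project's; the parsing below is only illustrative), the lookup from an annotated subnet ID back to its physnet could look like:

```python
def parse_physnet_subnets(pairs):
    """Parse 'physnet:subnet_id' pairs into a dict keyed by subnet ID,
    so a VIF's physnet can be looked up from the annotated subnet ID."""
    mapping = {}
    for pair in pairs:
        physnet, _, subnet_id = pair.partition(':')
        mapping[subnet_id] = physnet
    return mapping
```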
Target bp: kuryr-kubernetes-sriov-support
Change-Id: I45c5f1a7fb423ee68731d0ae85f7171e33d0aeeb
Signed-off-by: Danil Golov <d.golov@partner.samsung.com>
Signed-off-by: Vladimir Kuramshin <v.kuramshin@samsung.com>
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
We've changed the Pod annotation format in Rocky. To support upgrading
Kuryr we need to keep compatibility with that format. This commit
implements that.
We should also think about creating a tool that will convert all the
annotations in a setup.
Change-Id: I88e1b318d58d0d90138e347503928da41518a888
Closes-Bug: 1782366
Since the function _get_subnet is widely used by different components,
this commit moves it to kuryr_kubernetes.utils as part of the common
utilities.
Change-Id: I9a80fb55f5c02274fb50c4c92eb3514ccb42830e
This commit implements initial version of high availability support in
kuryr-controller - Active/Passive mode. In this mode only one instance
of controller is processing the resources while other ones are in
standby mode. If the current leader dies, one of the standbys takes
the leader role and starts processing resources.
Please note that as leader election is based on Kubernetes mechanisms,
this is only supported when kuryr-controller is run as Pod on Kubernetes
cluster.
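With the Kubernetes leader-elector mechanism, the current leader's identity is published as a small JSON document; an instance processes resources only while that name matches its own. An illustrative check (the JSON shape shown is an assumption of this sketch):

```python
import json

def is_leader(election_state, my_name):
    """election_state is the JSON published by the leader-elector,
    e.g. '{"name": "kuryr-controller-0"}'."""
    return json.loads(election_state).get('name') == my_name
```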
Implements: bp high-availability
Change-Id: I2c6c9315612d64158fb9f8284e0abb065aca7208
This patch adds support for nodes with different vif drivers as
well as different pool drivers for each vif driver type.
Closes-Bug: 1747406
Change-Id: I842fd4b513a5f325d598d677e5008f9ea51adab9
Since with the K8sCNIRegistryPlugin the watching is done for the
complete node instead of per pod and namespace, we need to make the
registry information account for the namespace where the pod is
created, to differentiate between containers running on the same
node with the same name but in different namespaces.
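In other words, the registry key has to include the namespace, not just the pod name (sketch):

```python
def registry_key(pod):
    """Key the CNI registry by (namespace, name): two pods with the same
    name on the same node must not clash if their namespaces differ."""
    meta = pod['metadata']
    return (meta['namespace'], meta['name'])
```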
Related-Bug: 1731486
Change-Id: I26e1dec6ae613c5316a45f93563c4a015df59441
This commit implements kuryr-daemon support when
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True. It's done by:
* CNI docker image installs Kuryr-Kubernetes pip package and adds
execution of kuryr-daemon into entrypoint script.
* Hosts /proc and /var/run/openvswitch are mounted into the CNI
container.
* Code is changed to use /host_proc instead of /proc when in a container
(it's impossible to mount host's /proc into container's /proc).
Implements: blueprint cni-split-exec-daemon
Change-Id: I9155a2cba28f578cee129a4c40066209f7ab543d
This commit introduces the asyncio event loop as well as the base
abstract class to define watchers.
It is a very simple approach (it does not reschedule watchers if
they fail) but it lets you see the proposed watcher hierarchy
as well as the methods that a watcher has to implement (see the pod
module).
Partial-Implements: blueprint kuryr-k8s-integration
Co-Authored-By: Taku Fukushima <f.tac.mac@gmail.com>
Co-Authored-By: Antoni Segura Puimedon
Change-Id: I91975dd197213c1a6b0e171c1ae218a547722eeb