KuryrNetworkPolicy CRD

This commit is a huge refactoring of how we handle network policies. In
general:

* KuryrNetPolicy is replaced by KuryrNetworkPolicy. The upgrade path
  is handled in the constructor of KuryrNetworkPolicyHandler.
* The new CRD has spec and status properties. spec is always populated by
  NetworkPolicyHandler. status is handled by KuryrNetworkPolicyHandler.
  This means that in order to trigger SG rules recalculation on Pod and
  Service events, the NetworkPolicy is "bumped" with a dummy annotation.
* NetworkPolicyHandler injects finalizers onto NetworkPolicy and
  KuryrNetworkPolicy objects, so that the objects cannot be removed before
  KuryrNetworkPolicyHandler has processed their deletion correctly.
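
The "bump" and finalizer mechanics described above can be sketched on plain
dicts. This is a minimal illustration, not the commit's actual code: the
annotation and finalizer names match the constants this commit introduces,
while `add_finalizer` and `bump` are hypothetical helper names.

```python
from datetime import datetime, timezone

# Constants taken from kuryr_kubernetes/constants.py as added by this commit.
NETWORKPOLICY_FINALIZER = 'kuryr.openstack.org/networkpolicy-finalizer'
BUMP_ANNOTATION = 'openstack.org/kuryr-counter'


def add_finalizer(obj, finalizer=NETWORKPOLICY_FINALIZER):
    # Injecting a finalizer keeps Kubernetes from fully deleting the object
    # until the handler removes the finalizer after cleaning up the
    # corresponding OpenStack resources.
    finalizers = obj.setdefault('metadata', {}).setdefault('finalizers', [])
    if finalizer not in finalizers:
        finalizers.append(finalizer)
    return obj


def bump(obj):
    # Changing a dummy annotation produces a MODIFIED event on the
    # NetworkPolicy, which makes the handler recalculate the SG rules.
    annotations = obj.setdefault('metadata', {}).setdefault('annotations', {})
    annotations[BUMP_ANNOTATION] = datetime.now(timezone.utc).isoformat()
    return obj
```

Both helpers are idempotent in the sense that repeated finalizer injection
adds the finalizer only once, while every bump writes a fresh timestamp.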

Depends-On: https://review.opendev.org/742209
Change-Id: Iafc982e590ada0cd9d82e922c103583e4304e9ce
Author: Michał Dulko  2020-03-11 11:27:09 +01:00
parent 3eeb67d84e
commit a1708e1c76
26 changed files with 1286 additions and 1574 deletions


@@ -99,7 +99,7 @@
       vars:
         devstack_localrc:
           DOCKER_CGROUP_DRIVER: "systemd"
-          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
+          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
           KURYR_SG_DRIVER: policy
           KURYR_SUBNET_DRIVER: namespace
       devstack_services:
@@ -120,7 +120,7 @@
       vars:
         devstack_localrc:
           KURYR_SUBNET_DRIVER: namespace
-          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
+          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
           KURYR_SG_DRIVER: policy
           KURYR_USE_PORT_POOLS: true
           KURYR_POD_VIF_DRIVER: neutron-vif
@@ -134,7 +134,7 @@
     parent: kuryr-kubernetes-tempest-containerized
     vars:
       devstack_localrc:
-        KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
+        KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
         KURYR_SG_DRIVER: policy
         KURYR_SUBNET_DRIVER: namespace


@@ -98,7 +98,7 @@
           KURYR_LB_ALGORITHM: SOURCE_IP_PORT
           KURYR_SUBNET_DRIVER: namespace
           KURYR_SG_DRIVER: policy
-          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
+          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
       voting: false

 - job:
@@ -144,7 +144,7 @@
           KURYR_ENFORCE_SG_RULES: false
           KURYR_LB_ALGORITHM: SOURCE_IP_PORT
           KURYR_HYPERKUBE_VERSION: v1.16.0
-          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
+          KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
           KURYR_SG_DRIVER: policy
           KURYR_SUBNET_DRIVER: namespace
           KURYR_K8S_CONTAINERIZED_DEPLOYMENT: true


@@ -452,6 +452,7 @@ rules:
       - kuryrnets
       - kuryrnetworks
       - kuryrnetpolicies
+      - kuryrnetworkpolicies
      - kuryrloadbalancers
      - kuryrports
  - apiGroups: ["networking.k8s.io"]


@@ -975,6 +975,7 @@ function update_tempest_conf_file {
     iniset $TEMPEST_CONFIG kuryr_kubernetes kuryrnetworks True
     iniset $TEMPEST_CONFIG kuryr_kubernetes kuryrports True
     iniset $TEMPEST_CONFIG kuryr_kubernetes kuryrloadbalancers True
+    iniset $TEMPEST_CONFIG kuryr_kubernetes new_kuryrnetworkpolicy_crd True
 }

 source $DEST/kuryr-kubernetes/devstack/lib/kuryr_kubernetes


@@ -47,22 +47,22 @@ The network policy CRD has the following format:

 .. code-block:: yaml

     apiVersion: openstack.org/v1
-    kind: KuryrNetPolicy
+    kind: KuryrNetworkPolicy
     metadata:
       ...
     spec:
       egressSgRules:
-      - security_group_rule:
+      - sgRule:
         ...
       ingressSgRules:
-      - security_group_rule:
+      - sgRule:
         ...
-      networkpolicy_spec:
-        ...
       podSelector:
         ...
+    status:
       securityGroupId: ...
-      securityGroupName: ...
+      securityGroupRules: ...
+      podSelector: ...

 A new handler has been added to react to Network Policy events, and the existing
 ones, for instance service/pod handlers, have been modified to account for the
@@ -201,26 +201,25 @@ are assumed to affect Ingress.

 .. code-block:: yaml

     apiVersion: openstack.org/v1
-    kind: KuryrNetPolicy
+    kind: KuryrNetworkPolicy
     metadata:
-      name: np-default-deny
+      name: default-deny
       namespace: default
       ...
     spec:
       egressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: egress
           ethertype: IPv4
-          id: 60a0d59c-2102-43e0-b025-75c98b7d9315
           security_group_id: 20d9b623-f1e0-449d-95c1-01624cb3e315
       ingressSgRules: []
-      networkpolicy_spec:
-        ...
       podSelector:
         ...
+    status:
       securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
-      securityGroupName: sg-default-deny
+      securityGroupRules: ...
+      podSelector: ...

Allow traffic from pod
@@ -263,37 +262,33 @@ restriction was enforced.

 .. code-block:: yaml

     apiVersion: openstack.org/v1
-    kind: KuryrNetPolicy
+    kind: KuryrNetworkPolicy
     metadata:
-      name: np-allow-monitoring-via-pod-selector
+      name: allow-monitoring-via-pod-selector
       namespace: default
       ...
     spec:
       egressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: egress
           ethertype: IPv4
-          id: 203a14fe-1059-4eff-93ed-a42bd957145d
-          security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
       ingressSgRules:
       - namespace: default
-        security_group_rule:
+        sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: ingress
           ethertype: IPv4
-          id: 7987c382-f2a9-47f7-b6e8-1a3a1bcb7d95
           port_range_max: 8080
           port_range_min: 8080
           protocol: tcp
           remote_ip_prefix: 10.0.1.143
-          security_group_id: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
-      networkpolicy_spec:
-        ...
       podSelector:
         ...
+    status:
       securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
-      securityGroupName: sg-allow-monitoring-via-pod-selector
+      securityGroupRules: ...
+      podSelector: ...

Allow traffic from namespace
@@ -337,36 +332,32 @@ egress rule allowing traffic to everywhere.

 .. code-block:: yaml

     apiVersion: openstack.org/v1
-    kind: KuryrNetPolicy
+    kind: KuryrNetworkPolicy
-    name: np-allow-test-via-ns-selector
+    name: allow-test-via-ns-selector
     namespace: default
     ...
     spec:
       egressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: egress
           ethertype: IPv4
-          id: 8c21bf42-c8b9-4628-b0a1-bd0dbb192e6b
-          security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
       ingressSgRules:
       - namespace: dev
-        security_group_rule:
+        sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: ingress
           ethertype: IPv4
-          id: 2a33b802-56ad-430a-801d-690f653198ef
           port_range_max: 8080
           port_range_min: 8080
           protocol: tcp
           remote_ip_prefix: 10.0.1.192/26
-          security_group_id: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
-      networkpolicy_spec:
-        ...
       podSelector:
         ...
+    status:
       securityGroupId: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
-      securityGroupName: sg-allow-test-via-ns-selector
+      securityGroupRules: ...
+      podSelector: ...

 .. note::


@@ -95,6 +95,7 @@ Edit ``kuryr.conf``:
       - kuryrnets
       - kuryrnetworks
       - kuryrnetpolicies
+      - kuryrnetworkpolicies
      - kuryrloadbalancers
  - apiGroups: ["networking.k8s.io"]
    resources:


@@ -10,7 +10,7 @@ be found at :doc:`./devstack/containerized`):

 .. code-block:: ini

     [kubernetes]
-    enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetwork,kuryrnetpolicy
+    enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetwork,kuryrnetworkpolicy

 Note that if you also want to enable prepopulation of ports pools upon new
 namespace creation, you need to also add the kuryrnetwork_population handler

@@ -19,7 +19,7 @@ namespace creation, you need to also add the kuryrnetwork_population handler

 .. code-block:: ini

     [kubernetes]
-    enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy,kuryrnetwork,kuryrnetwork_population
+    enabled_handlers=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetworkpolicy,kuryrnetwork,kuryrnetwork_population

 After that, enable also the security group drivers for policies:

@@ -82,7 +82,7 @@ to add the policy, pod_label and namespace handler and drivers with:

 .. code-block:: bash

-    KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetpolicy
+    KURYR_ENABLED_HANDLERS=vif,lb,lbaasspec,policy,pod_label,namespace,kuryrnetworkpolicy
     KURYR_SG_DRIVER=policy
     KURYR_SUBNET_DRIVER=namespace
@@ -143,9 +143,9 @@ Testing the network policy support functionality

 .. code-block:: console

-    $ kubectl get kuryrnetpolicies
+    $ kubectl get kuryrnetworkpolicies
     NAME                  AGE
-    np-test-network-policy   2s
+    test-network-policy   2s

     $ kubectl get networkpolicies
     NAME                  POD-SELECTOR   AGE
@@ -158,69 +158,42 @@ Testing the network policy support functionality

 .. code-block:: console

-    $ kubectl get kuryrnetpolicy np-test-network-policy -o yaml
+    $ kubectl get kuryrnetworkpolicy test-network-policy -o yaml
     apiVersion: openstack.org/v1
-    kind: KuryrNetPolicy
+    kind: KuryrNetworkPolicy
     metadata:
       annotations:
-        networkpolicy_name: test-network-policy
-        networkpolicy_namespace: default
-        networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
+        networkPolicyLink: <link>
       clusterName: ""
       creationTimestamp: 2018-10-02T11:17:02Z
       generation: 0
-      name: np-test-network-policy
+      name: test-network-policy
       namespace: default
       resourceVersion: "2117"
-      selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
+      selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetworkpolicies/test-network-policy
       uid: afb99326-c634-11e8-b63d-002564fdd760
     spec:
       egressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: egress
           ethertype: IPv4
-          id: 6297c198-b385-44f3-8b43-29951f933a8f
           port_range_max: 5978
           port_range_min: 5978
           protocol: tcp
-          security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
       ingressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: ingress
           ethertype: IPv4
-          id: f4e11e73-81c6-4c1b-9760-714eedff417b
           port_range_max: 6379
           port_range_min: 6379
           protocol: tcp
-          security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
+    status:
       securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
-      securityGroupName: sg-test-network-policy
-      networkpolicy_spec:
-        egress:
-        - to:
-          - namespaceSelector:
-              matchLabels:
-                project: default
-          ports:
-          - port: 5978
-            protocol: TCP
-        ingress:
-        - from:
-          - namespaceSelector:
-              matchLabels:
-                project: default
-          ports:
-          - port: 6379
-            protocol: TCP
-        podSelector:
-          matchLabels:
-            project: default
-        policyTypes:
-        - Ingress
-        - Egress
+      securityGroupRules:

     $ openstack security group rule list sg-test-network-policy --protocol tcp -c "IP Protocol" -c "Port Range" -c "Direction" --long
     +-------------+------------+-----------+
@@ -273,67 +246,41 @@ Testing the network policy support functionality

     $ kubectl patch networkpolicy test-network-policy -p '{"spec":{"ingress":[{"ports":[{"port": 8080,"protocol": "TCP"}]}]}}'
     networkpolicy "test-network-policy" patched

-    $ kubectl get knp np-test-network-policy -o yaml
+    $ kubectl get knp test-network-policy -o yaml
     apiVersion: openstack.org/v1
-    kind: KuryrNetPolicy
+    kind: KuryrNetworkPolicy
     metadata:
       annotations:
-        networkpolicy_name: test-network-policy
-        networkpolicy_namespace: default
-        networkpolicy_uid: aee1c59f-c634-11e8-b63d-002564fdd760
+        networkPolicyLink: <link>
       clusterName: ""
       creationTimestamp: 2018-10-02T11:17:02Z
       generation: 0
-      name: np-test-network-policy
+      name: test-network-policy
       namespace: default
       resourceVersion: "1546"
-      selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetpolicies/np-test-network-policy
+      selfLink: /apis/openstack.org/v1/namespaces/default/kuryrnetworkpolicies/np-test-network-policy
       uid: afb99326-c634-11e8-b63d-002564fdd760
     spec:
       egressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: egress
           ethertype: IPv4
-          id: 1969a0b3-55e1-43d7-ba16-005b4ed4cbb7
           port_range_max: 5978
           port_range_min: 5978
           protocol: tcp
-          security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
       ingressSgRules:
-      - security_group_rule:
+      - sgRule:
           description: Kuryr-Kubernetes NetPolicy SG rule
           direction: ingress
           ethertype: IPv4
-          id: 6598aa1f-4f94-4fb2-81ce-d3649ba28f33
           port_range_max: 8080
           port_range_min: 8080
           protocol: tcp
-          security_group_id: cdee7815-3b49-4a3e-abc8-31e384ab75c5
+    status:
       securityGroupId: cdee7815-3b49-4a3e-abc8-31e384ab75c5
-      networkpolicy_spec:
-        egress:
-        - ports:
-          - port: 5978
-            protocol: TCP
-          to:
-          - namespaceSelector:
-              matchLabels:
-                project: default
-        ingress:
-        - ports:
-          - port: 8080
-            protocol: TCP
-          from:
-          - namespaceSelector:
-              matchLabels:
-                project: default
-        podSelector:
-          matchLabels:
-            project: default
-        policyTypes:
-        - Ingress
-        - Egress
+      securityGroupRules:

     $ openstack security group rule list sg-test-network-policy -c "IP Protocol" -c "Port Range" -c "Direction" --long
     +-------------+------------+-----------+
@@ -388,6 +335,6 @@ Testing the network policy support functionality

 .. code-block:: console

     $ kubectl delete -f network_policy.yml
-    $ kubectl get kuryrnetpolicies
+    $ kubectl get kuryrnetworkpolicies
     $ kubectl get networkpolicies
     $ openstack security group list | grep sg-test-network-policy


@@ -9,8 +9,6 @@ spec:
     plural: kuryrnetpolicies
     singular: kuryrnetpolicy
     kind: KuryrNetPolicy
-    shortNames:
-    - knp
   versions:
   - name: v1
     served: true


@@ -0,0 +1,158 @@
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: kuryrnetworkpolicies.openstack.org
spec:
group: openstack.org
scope: Namespaced
names:
plural: kuryrnetworkpolicies
singular: kuryrnetworkpolicy
kind: KuryrNetworkPolicy
shortNames:
- knp
versions:
- name: v1
served: true
storage: true
additionalPrinterColumns:
- name: SG-ID
type: string
description: The ID of the SG associated to the policy
jsonPath: .status.securityGroupId
- name: Age
type: date
jsonPath: .metadata.creationTimestamp
schema:
openAPIV3Schema:
type: object
required:
- status
- spec
properties:
spec:
type: object
required:
- egressSgRules
- ingressSgRules
- podSelector
- policyTypes
properties:
egressSgRules:
type: array
items:
type: object
required:
- sgRule
properties:
affectedPods:
type: array
items:
type: object
properties:
podIP:
type: string
podNamespace:
type: string
required:
- podIP
- podNamespace
namespace:
type: string
sgRule:
type: object
properties:
description:
type: string
direction:
type: string
ethertype:
type: string
port_range_max:
type: integer
port_range_min:
type: integer
protocol:
type: string
remote_ip_prefix:
type: string
ingressSgRules:
type: array
items:
type: object
required:
- sgRule
properties:
affectedPods:
type: array
items:
type: object
properties:
podIP:
type: string
podNamespace:
type: string
required:
- podIP
- podNamespace
namespace:
type: string
sgRule:
type: object
properties:
description:
type: string
direction:
type: string
ethertype:
type: string
port_range_max:
type: integer
port_range_min:
type: integer
protocol:
type: string
remote_ip_prefix:
type: string
podSelector:
x-kubernetes-preserve-unknown-fields: true
type: object
policyTypes:
type: array
items:
type: string
status:
type: object
required:
- securityGroupRules
properties:
securityGroupId:
type: string
securityGroupRules:
type: array
items:
type: object
required:
- id
properties:
id:
type: string
description:
type: string
direction:
type: string
ethertype:
type: string
port_range_max:
type: integer
port_range_min:
type: integer
protocol:
type: string
remote_ip_prefix:
type: string
security_group_id:
type: string
podSelector:
x-kubernetes-preserve-unknown-fields: true
type: object
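
As a rough illustration of what the schema above enforces, the mandatory
fields can be checked at the dict level like this. `missing_fields` is a
hypothetical helper, not part of the commit; in practice the API server
validates objects against the openAPIV3Schema itself.

```python
def missing_fields(knp):
    # Mirror the `required` lists of the CRD schema: spec and status at the
    # top level, the four required spec keys, and status.securityGroupRules.
    missing = [key for key in ('spec', 'status') if key not in knp]
    spec = knp.get('spec', {})
    missing += ['spec.' + key for key in
                ('egressSgRules', 'ingressSgRules', 'podSelector',
                 'policyTypes') if key not in spec]
    if 'securityGroupRules' not in knp.get('status', {}):
        missing.append('status.securityGroupRules')
    return missing
```

A freshly created object with empty rule lists and an empty
`status.securityGroupRules` passes; an object missing `status` fields does not.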


@@ -23,9 +23,11 @@ K8S_API_CRD_NAMESPACES = K8S_API_CRD + '/namespaces'
 K8S_API_CRD_KURYRNETS = K8S_API_CRD + '/kuryrnets'
 K8S_API_CRD_KURYRNETWORKS = K8S_API_CRD + '/kuryrnetworks'
 K8S_API_CRD_KURYRNETPOLICIES = K8S_API_CRD + '/kuryrnetpolicies'
+K8S_API_CRD_KURYRNETWORKPOLICIES = K8S_API_CRD + '/kuryrnetworkpolicies'
 K8S_API_CRD_KURYRLOADBALANCERS = K8S_API_CRD + '/kuryrloadbalancers'
 K8S_API_CRD_KURYRPORTS = K8S_API_CRD + '/kuryrports'
 K8S_API_POLICIES = '/apis/networking.k8s.io/v1/networkpolicies'
+K8S_API_NETWORKING = '/apis/networking.k8s.io/v1'
 K8S_API_NPWG_CRD = '/apis/k8s.cni.cncf.io/v1'

@@ -37,6 +39,7 @@ K8S_OBJ_POLICY = 'NetworkPolicy'
 K8S_OBJ_KURYRNET = 'KuryrNet'
 K8S_OBJ_KURYRNETWORK = 'KuryrNetwork'
 K8S_OBJ_KURYRNETPOLICY = 'KuryrNetPolicy'
+K8S_OBJ_KURYRNETWORKPOLICY = 'KuryrNetworkPolicy'
 K8S_OBJ_KURYRLOADBALANCER = 'KuryrLoadBalancer'
 K8S_OBJ_KURYRPORT = 'KuryrPort'

@@ -47,11 +50,13 @@ K8S_POD_STATUS_FAILED = 'Failed'
 K8S_ANNOTATION_PREFIX = 'openstack.org/kuryr'
 K8S_ANNOTATION_VIF = K8S_ANNOTATION_PREFIX + '-vif'
 K8S_ANNOTATION_LABEL = K8S_ANNOTATION_PREFIX + '-pod-label'
+K8S_ANNOTATION_IP = K8S_ANNOTATION_PREFIX + '-pod-ip'
 K8S_ANNOTATION_NAMESPACE_LABEL = K8S_ANNOTATION_PREFIX + '-namespace-label'
 K8S_ANNOTATION_LBAAS_SPEC = K8S_ANNOTATION_PREFIX + '-lbaas-spec'
 K8S_ANNOTATION_LBAAS_STATE = K8S_ANNOTATION_PREFIX + '-lbaas-state'
 K8S_ANNOTATION_NET_CRD = K8S_ANNOTATION_PREFIX + '-net-crd'
 K8S_ANNOTATION_NETPOLICY_CRD = K8S_ANNOTATION_PREFIX + '-netpolicy-crd'
+K8S_ANNOTATION_POLICY = K8S_ANNOTATION_PREFIX + '-counter'
 K8S_ANNOTATION_NPWG_PREFIX = 'k8s.v1.cni.cncf.io'
 K8S_ANNOTATION_NPWG_NETWORK = K8S_ANNOTATION_NPWG_PREFIX + '/networks'

@@ -68,6 +73,7 @@ POD_FINALIZER = KURYR_FQDN + '/pod-finalizer'
 KURYRNETWORK_FINALIZER = 'kuryrnetwork.finalizers.kuryr.openstack.org'
 KURYRLB_FINALIZER = 'kuryr.openstack.org/kuryrloadbalancer-finalizers'
 SERVICE_FINALIZER = 'kuryr.openstack.org/service-finalizer'
+NETWORKPOLICY_FINALIZER = 'kuryr.openstack.org/networkpolicy-finalizer'
 KURYRPORT_FINALIZER = KURYR_FQDN + '/kuryrport-finalizer'
 KURYRPORT_LABEL = KURYR_FQDN + '/nodeName'


@@ -697,13 +697,10 @@ class NetworkPolicyDriver(DriverBase, metaclass=abc.ABCMeta):
     ALIAS = 'network_policy'

     @abc.abstractmethod
-    def ensure_network_policy(self, policy, project_id):
+    def ensure_network_policy(self, policy):
         """Policy created or updated

         :param policy: dict containing Kubernetes NP object
-        :param project_id: openstack project_id
-        :returns: list of Pod objects affected by the network policy
-                  creation or its podSelector modification
         """
         raise NotImplementedError()

@@ -711,7 +708,7 @@
     def release_network_policy(self, kuryrnetpolicy):
         """Delete a network policy

-        :param kuryrnetpolicy: dict containing Kuryrnetpolicy CRD object
+        :param kuryrnetpolicy: dict containing NetworkPolicy object
         """
         raise NotImplementedError()

@@ -729,18 +726,6 @@
         """
         raise NotImplementedError()

-    @abc.abstractmethod
-    def knps_on_namespace(self, namespace):
-        """Check if there si kuryr network policy CRDs on the namespace
-
-        This method returns true if there are knps on the specified namespace
-        or false otherwise
-
-        :param namespace: namespace name where the knps CRDs should be
-        :returns: true if knps CRDs on the namespace, false otherwise
-        """
-        raise NotImplementedError()
-
     @abc.abstractmethod
     def namespaced_pods(self, policy):
         """Return pods on the policy namespace

@@ -752,15 +737,6 @@
         """
         raise NotImplementedError()

-    @abc.abstractmethod
-    def get_kuryrnetpolicy_crd(self, policy):
-        """Return kuryrnetpolicy CRD object associated to the policy
-
-        :param policy: dict containing Kubernetes NP object
-        :returns: kuryrnetpolicy CRD object associated to the policy
-        """
-        raise NotImplementedError()
-

 class NetworkPolicyProjectDriver(DriverBase, metaclass=abc.ABCMeta):
     """Get an OpenStack project id for K8s network policies"""


@@ -749,24 +749,41 @@ class LBaaSv2Driver(base.LBaaSDriver):
         endpoints_link = utils.get_endpoints_link(service)
         k8s = clients.get_kubernetes_client()
         try:
-            endpoint = k8s.get(endpoints_link)
+            k8s.get(endpoints_link)
         except k_exc.K8sResourceNotFound:
             LOG.debug("Endpoint not Found. Skipping LB SG update for "
                       "%s as the LB resources are not present", lbaas_name)
             return

-        lbaas = utils.get_lbaas_state(endpoint)
-        if not lbaas:
-            LOG.debug('Endpoint not yet annotated with lbaas state.')
+        try:
+            klb = k8s.get(f'{k_const.K8S_API_CRD_NAMESPACES}/{svc_namespace}/'
+                          f'kuryrloadbalancers/{svc_name}')
+        except k_exc.K8sResourceNotFound:
+            LOG.debug('No KuryrLoadBalancer for service %s created yet.',
+                      lbaas_name)
             raise k_exc.ResourceNotReady(svc_name)

-        lbaas_obj = lbaas.loadbalancer
-        lbaas_obj.security_groups = sgs
+        if (not klb.get('status', {}).get('loadbalancer') or
+                klb.get('status', {}).get('listeners') is None):
+            LOG.debug('KuryrLoadBalancer for service %s not populated yet.',
+                      lbaas_name)
+            raise k_exc.ResourceNotReady(svc_name)

-        utils.set_lbaas_state(endpoint, lbaas)
+        klb['status']['loadbalancer']['security_groups'] = sgs
+
+        lb = klb['status']['loadbalancer']
+        try:
+            k8s.patch_crd('status/loadbalancer', klb['metadata']['selfLink'],
+                          {'security_groups': sgs})
+        except k_exc.K8sResourceNotFound:
+            LOG.debug('KuryrLoadBalancer CRD not found %s', lbaas_name)
+            return
+        except k_exc.K8sClientException:
+            LOG.exception('Error updating KuryrLoadBalancer CRD %s',
+                          lbaas_name)
+            raise

         lsnr_ids = {(listener['protocol'], listener['port']): listener['id']
-                    for listener in lbaas.listeners}
+                    for listener in klb['status']['listeners']}

         for port in svc_ports:
             port_protocol = port['protocol']
@@ -779,6 +796,6 @@ class LBaaSv2Driver(base.LBaaSDriver):
                           "%s and port %s. Skipping", port_protocol,
                           lbaas_port)
                 continue
-            self._apply_members_security_groups(lbaas_obj, lbaas_port,
+            self._apply_members_security_groups(lb, lbaas_port,
                                                 target_port, port_protocol,
                                                 sg_rule_name, listener_id, sgs)
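
The readiness check the hunk above introduces (the KuryrLoadBalancer must
have a populated loadbalancer and a present listeners list before security
groups are applied) reduces to a small predicate on the CRD dict. This
standalone sketch is illustrative only; `klb_ready` is a hypothetical name:

```python
def klb_ready(klb):
    # The status must contain a populated 'loadbalancer' and a 'listeners'
    # key that is present: an empty listeners list is fine, a missing key
    # is not, matching the `is None` check in the driver code above.
    status = klb.get('status', {})
    return (bool(status.get('loadbalancer'))
            and status.get('listeners') is not None)
```

When this predicate is false the driver raises ResourceNotReady so the event
is retried once the KuryrLoadBalancer controller has filled in the status.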


@ -38,91 +38,94 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
self.os_net = clients.get_network_client() self.os_net = clients.get_network_client()
self.kubernetes = clients.get_kubernetes_client() self.kubernetes = clients.get_kubernetes_client()
def ensure_network_policy(self, policy, project_id): def ensure_network_policy(self, policy):
"""Create security group rules out of network policies """Create security group rules out of network policies
Triggered by events from network policies, this method ensures that Triggered by events from network policies, this method ensures that
security groups and security group rules are created or updated in KuryrNetworkPolicy object is created with the security group rules
reaction to kubernetes network policies events. definitions required to represent the NetworkPolicy.
In addition it returns the pods affected by the policy:
- Creation: pods on the namespace of the created policy
- Update: pods that needs to be updated in case of PodSelector
modification, i.e., the pods that were affected by the previous
PodSelector
""" """
LOG.debug("Creating network policy %s", policy['metadata']['name']) LOG.debug("Creating network policy %s", policy['metadata']['name'])
if self.get_kuryrnetpolicy_crd(policy): i_rules, e_rules = self._get_security_group_rules_from_network_policy(
previous_selector = ( policy)
self.update_security_group_rules_from_network_policy(policy))
if previous_selector or previous_selector == {}: knp = self._get_knp_crd(policy)
return self.affected_pods(policy, previous_selector) if not knp:
if previous_selector is None: self._create_knp_crd(policy, i_rules, e_rules)
return self.namespaced_pods(policy)
else: else:
self.create_security_group_rules_from_network_policy(policy, self._patch_knp_crd(policy, i_rules, e_rules, knp)
project_id)
def update_security_group_rules_from_network_policy(self, policy): def _convert_old_sg_rule(self, rule):
"""Update security group rules del rule['security_group_rule']['id']
del rule['security_group_rule']['security_group_id']
result = {
'sgRule': rule['security_group_rule'],
}
This method updates security group rules based on CRUD events gotten if 'namespace' in rule:
from a configuration or patch to an existing network policy result['namespace'] = rule['namespace']
if 'remote_ip_prefixes' in rule:
result['affectedPods'] = []
for ip, namespace in rule['remote_ip_prefixes']:
result['affectedPods'].append({
'podIP': ip,
'podNamespace': namespace,
})
return result
def get_from_old_crd(self, netpolicy):
name = netpolicy['metadata']['name'][3:] # Remove 'np-'
namespace = netpolicy['metadata']['namespace']
link = (f'{constants.K8S_API_NETWORKING}/namespaces/{namespace}/'
f'networkpolicies/{name}')
knp = {
'apiVersion': constants.K8S_API_CRD_VERSION,
'kind': constants.K8S_OBJ_KURYRNETWORKPOLICY,
'metadata': {
'namespace': namespace,
'name': name,
'annotations': {
'networkPolicyLink': link,
},
'finalizers': [constants.NETWORKPOLICY_FINALIZER],
},
'spec': {
'podSelector':
netpolicy['spec']['networkpolicy_spec']['podSelector'],
'egressSgRules': [self._convert_old_sg_rule(r) for r in
netpolicy['spec']['egressSgRules']],
'ingressSgRules': [self._convert_old_sg_rule(r) for r in
netpolicy['spec']['ingressSgRules']],
'policyTypes':
netpolicy['spec']['networkpolicy_spec']['policyTypes'],
},
'status': {
'podSelector': netpolicy['spec']['podSelector'],
'securityGroupId': netpolicy['spec']['securityGroupId'],
# We'll just let KuryrNetworkPolicyHandler figure out if rules
# are created on its own.
'securityGroupRules': [],
},
}
return knp
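The upgrade path in `get_from_old_crd` strips the legacy `np-` name prefix and rebuilds the link to the owning NetworkPolicy. A minimal sketch, where the value of `constants.K8S_API_NETWORKING` is an assumption:

```python
# Sketch of the name/link derivation used when migrating an old
# KuryrNetPolicy object to the new KuryrNetworkPolicy CRD.
K8S_API_NETWORKING = '/apis/networking.k8s.io/v1'  # assumed constant value

old_name = 'np-allow-db'   # legacy objects were prefixed with 'np-'
namespace = 'default'

name = old_name[3:]        # drop the 'np-' prefix, as in get_from_old_crd
link = f'{K8S_API_NETWORKING}/namespaces/{namespace}/networkpolicies/{name}'
```

The resulting `link` is stored as the `networkPolicyLink` annotation so the handler can find its way back to the NetworkPolicy object.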
def _get_security_group_rules_from_network_policy(self, policy):
"""Get security group rules required to represent an NP
This method creates the security group rules bodies coming out of a
network policies' parsing.
""" """
crd = self.get_kuryrnetpolicy_crd(policy) i_rules, e_rules = self.parse_network_policy_rules(policy)
crd_name = crd['metadata']['name'] # Add default rules to allow traffic from host and svc subnet
LOG.debug("Already existing CRD %s", crd_name) i_rules += self._get_default_np_rules()
sg_id = crd['spec']['securityGroupId']
# Fetch existing SG rules from kuryrnetpolicy CRD
existing_sg_rules = []
existing_i_rules = crd['spec'].get('ingressSgRules')
existing_e_rules = crd['spec'].get('egressSgRules')
if existing_i_rules or existing_e_rules:
existing_sg_rules = existing_i_rules + existing_e_rules
existing_pod_selector = crd['spec'].get('podSelector')
# Parse network policy update and get new ruleset
i_rules, e_rules = self.parse_network_policy_rules(policy, sg_id)
current_sg_rules = i_rules + e_rules
# Get existing security group rules ids
sgr_ids = [x['security_group_rule'].pop('id') for x in
existing_sg_rules]
# SG rules that are meant to be kept get their id back
sg_rules_to_keep = [existing_sg_rules.index(rule) for rule in
existing_sg_rules if rule in current_sg_rules]
for sg_rule in sg_rules_to_keep:
sgr_id = sgr_ids[sg_rule]
existing_sg_rules[sg_rule]['security_group_rule']['id'] = sgr_id
# Delete SG rules that are no longer in the updated policy
sg_rules_to_delete = [existing_sg_rules.index(rule) for rule in
existing_sg_rules if rule not in
current_sg_rules]
for sg_rule in sg_rules_to_delete:
driver_utils.delete_security_group_rule(sgr_ids[sg_rule])
# Create new rules that weren't already on the security group
sg_rules_to_add = [rule for rule in current_sg_rules if rule not in
existing_sg_rules]
for sg_rule in sg_rules_to_add:
sgr_id = driver_utils.create_security_group_rule(sg_rule)
if sg_rule['security_group_rule'].get('direction') == 'ingress':
for i_rule in i_rules:
if sg_rule == i_rule:
i_rule["security_group_rule"]["id"] = sgr_id
else:
for e_rule in e_rules:
if sg_rule == e_rule:
e_rule["security_group_rule"]["id"] = sgr_id
# Annotate kuryrnetpolicy CRD with current policy and ruleset
pod_selector = policy['spec'].get('podSelector')
driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules, e_rules,
pod_selector,
np_spec=policy['spec'])
if existing_pod_selector != pod_selector: return i_rules, e_rules
return existing_pod_selector
return False
def _add_default_np_rules(self, sg_id): def _get_default_np_rules(self):
"""Add extra SG rule to allow traffic from svcs and host. """Add extra SG rule to allow traffic from svcs and host.
This method adds the base security group rules for the NP security This method adds the base security group rules for the NP security
@@ -130,6 +133,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):

- Ensure traffic is allowed from the services subnet - Ensure traffic is allowed from the services subnet
- Ensure traffic is allowed from the host - Ensure traffic is allowed from the host
""" """
rules = []
default_cidrs = [] default_cidrs = []
if CONF.octavia_defaults.enforce_sg_rules: if CONF.octavia_defaults.enforce_sg_rules:
default_cidrs.append(utils.get_subnet_cidr( default_cidrs.append(utils.get_subnet_cidr(
@@ -141,27 +145,21 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
ethertype = constants.IPv4 ethertype = constants.IPv4
if ipaddress.ip_network(cidr).version == constants.IP_VERSION_6: if ipaddress.ip_network(cidr).version == constants.IP_VERSION_6:
ethertype = constants.IPv6 ethertype = constants.IPv6
default_rule = { rules.append({
'security_group_rule': { 'sgRule': {
'ethertype': ethertype, 'ethertype': ethertype,
'security_group_id': sg_id,
'direction': 'ingress', 'direction': 'ingress',
'description': 'Kuryr-Kubernetes NetPolicy SG rule', 'description': 'Kuryr-Kubernetes NetPolicy SG rule',
'remote_ip_prefix': cidr 'remote_ip_prefix': cidr,
}} }})
driver_utils.create_security_group_rule(default_rule)
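The ethertype chosen for each default rule follows the IP version of the CIDR being allowed. A sketch of that selection, treating `constants.IPv4`/`constants.IPv6` as the strings `'IPv4'`/`'IPv6'` and `constants.IP_VERSION_6` as `6` (assumptions, not verified against the tree):

```python
import ipaddress

# Mirror of the ethertype selection in _get_default_np_rules: IPv6
# networks get an IPv6 rule, everything else defaults to IPv4.
def ethertype_for(cidr):
    if ipaddress.ip_network(cidr).version == 6:
        return 'IPv6'
    return 'IPv4'

ethertype_for('10.0.0.0/8')        # typical service/pod subnet -> 'IPv4'
ethertype_for('fd00:abcd::/64')    # ULA subnet -> 'IPv6'
```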
def create_security_group_rules_from_network_policy(self, policy, return rules
project_id):
"""Create initial security group and rules
This method creates the initial security group for hosting security def create_security_group(self, knp, project_id):
group rules coming out of network policies' parsing. sg_name = ("sg-" + knp['metadata']['namespace'] + "-" +
""" knp['metadata']['name'])
sg_name = ("sg-" + policy['metadata']['namespace'] + "-" + desc = ("Kuryr-Kubernetes Network Policy %s SG" %
policy['metadata']['name']) utils.get_res_unique_name(knp))
desc = "Kuryr-Kubernetes NetPolicy SG"
sg = None
try: try:
# Create initial security group # Create initial security group
sg = self.os_net.create_security_group(name=sg_name, sg = self.os_net.create_security_group(name=sg_name,
@@ -176,46 +174,14 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
# rules just after creation. # rules just after creation.
for sgr in sg.security_group_rules: for sgr in sg.security_group_rules:
self.os_net.delete_security_group_rule(sgr['id']) self.os_net.delete_security_group_rule(sgr['id'])
i_rules, e_rules = self.parse_network_policy_rules(policy, sg.id)
for i_rule in i_rules:
sgr_id = driver_utils.create_security_group_rule(i_rule)
i_rule['security_group_rule']['id'] = sgr_id
for e_rule in e_rules:
sgr_id = driver_utils.create_security_group_rule(e_rule)
e_rule['security_group_rule']['id'] = sgr_id
# Add default rules to allow traffic from host and svc subnet
self._add_default_np_rules(sg.id)
except (os_exc.SDKException, exceptions.ResourceNotReady): except (os_exc.SDKException, exceptions.ResourceNotReady):
LOG.exception("Error creating security group for network policy " LOG.exception("Error creating security group for network policy "
" %s", policy['metadata']['name']) " %s", knp['metadata']['name'])
# If there's any issue creating sg rules, remove them
if sg:
self.os_net.delete_security_group(sg.id)
raise raise
try: return sg.id
self._add_kuryrnetpolicy_crd(policy, project_id, sg.id, i_rules,
e_rules)
except exceptions.K8sClientException:
LOG.exception("Rolling back security groups")
# Same with CRD creation
self.os_net.delete_security_group(sg.id)
raise
try: def _get_pods(self, pod_selector, namespace=None, namespace_selector=None):
crd = self.get_kuryrnetpolicy_crd(policy)
self.kubernetes.annotate(policy['metadata']['selfLink'],
{"kuryrnetpolicy_selfLink":
crd['metadata']['selfLink']})
except exceptions.K8sClientException:
LOG.exception('Error annotating network policy')
raise
def _get_pods(self, pod_selector, namespace=None,
namespace_selector=None):
matching_pods = {"items": []} matching_pods = {"items": []}
if namespace_selector: if namespace_selector:
matching_namespaces = driver_utils.get_namespaces( matching_namespaces = driver_utils.get_namespaces(
@@ -232,7 +198,6 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not namespace_selector and namespace: if not namespace_selector and namespace:
matching_namespaces.append(self.kubernetes.get( matching_namespaces.append(self.kubernetes.get(
'{}/namespaces/{}'.format(constants.K8S_API_BASE, namespace))) '{}/namespaces/{}'.format(constants.K8S_API_BASE, namespace)))
else: else:
matching_namespaces.extend(driver_utils.get_namespaces( matching_namespaces.extend(driver_utils.get_namespaces(
namespace_selector).get('items')) namespace_selector).get('items'))
@@ -285,7 +250,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
def _create_sg_rules_with_container_ports( def _create_sg_rules_with_container_ports(
self, container_ports, allow_all, resource, matched_pods, self, container_ports, allow_all, resource, matched_pods,
crd_rules, sg_id, direction, port, pod_selector=None, crd_rules, direction, port, pod_selector=None,
policy_namespace=None): policy_namespace=None):
cidr, ns = self._get_resource_details(resource) cidr, ns = self._get_resource_details(resource)
for pod, container_port in container_ports: for pod, container_port in container_ports:
@@ -308,18 +273,18 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not allow_all and matched_pods and cidr: if not allow_all and matched_pods and cidr:
for container_port, pods in matched_pods.items(): for container_port, pods in matched_pods.items():
sg_rule = driver_utils.create_security_group_rule_body( sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port, direction, container_port,
protocol=port.get('protocol'), protocol=port.get('protocol'),
cidr=cidr, pods=pods) cidr=cidr, pods=pods)
if sg_rule not in crd_rules: if sg_rule not in crd_rules:
crd_rules.append(sg_rule) crd_rules.append(sg_rule)
if direction == 'egress': if direction == 'egress':
self._create_svc_egress_sg_rule( self._create_svc_egress_sg_rule(
sg_id, policy_namespace, crd_rules, policy_namespace, crd_rules,
resource=resource, port=container_port, resource=resource, port=container_port,
protocol=port.get('protocol')) protocol=port.get('protocol'))
def _create_sg_rule_body_on_text_port(self, sg_id, direction, port, def _create_sg_rule_body_on_text_port(self, direction, port,
resources, crd_rules, pod_selector, resources, crd_rules, pod_selector,
policy_namespace, allow_all=False): policy_namespace, allow_all=False):
"""Create SG rules when named port is used in the NP rule """Create SG rules when named port is used in the NP rule
@@ -352,7 +317,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
for resource in resources: for resource in resources:
self._create_sg_rules_with_container_ports( self._create_sg_rules_with_container_ports(
container_ports, allow_all, resource, matched_pods, container_ports, allow_all, resource, matched_pods,
crd_rules, sg_id, direction, port) crd_rules, direction, port)
elif direction == "egress": elif direction == "egress":
for resource in resources: for resource in resources:
# NOTE(maysams) Skipping objects that refers to ipblocks # NOTE(maysams) Skipping objects that refers to ipblocks
@@ -364,24 +329,24 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
container_ports = driver_utils.get_ports(resource, port) container_ports = driver_utils.get_ports(resource, port)
self._create_sg_rules_with_container_ports( self._create_sg_rules_with_container_ports(
container_ports, allow_all, resource, matched_pods, container_ports, allow_all, resource, matched_pods,
crd_rules, sg_id, direction, port, pod_selector, crd_rules, direction, port, pod_selector,
policy_namespace) policy_namespace)
if allow_all: if allow_all:
container_port = None container_port = None
for container_port, pods in matched_pods.items(): for container_port, pods in matched_pods.items():
for ethertype in (constants.IPv4, constants.IPv6): for ethertype in (constants.IPv4, constants.IPv6):
sg_rule = driver_utils.create_security_group_rule_body( sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port, direction, container_port,
protocol=port.get('protocol'), protocol=port.get('protocol'),
ethertype=ethertype, ethertype=ethertype,
pods=pods) pods=pods)
crd_rules.append(sg_rule) crd_rules.append(sg_rule)
if direction == 'egress': if direction == 'egress':
self._create_svc_egress_sg_rule( self._create_svc_egress_sg_rule(
sg_id, policy_namespace, crd_rules, policy_namespace, crd_rules,
port=container_port, protocol=port.get('protocol')) port=container_port, protocol=port.get('protocol'))
def _create_sg_rule_on_number_port(self, allowed_resources, sg_id, def _create_sg_rule_on_number_port(self, allowed_resources,
direction, port, sg_rule_body_list, direction, port, sg_rule_body_list,
policy_namespace): policy_namespace):
for resource in allowed_resources: for resource in allowed_resources:
@@ -393,52 +358,51 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
continue continue
sg_rule = ( sg_rule = (
driver_utils.create_security_group_rule_body( driver_utils.create_security_group_rule_body(
sg_id, direction, port.get('port'), direction, port.get('port'),
protocol=port.get('protocol'), protocol=port.get('protocol'),
cidr=cidr, cidr=cidr,
namespace=ns)) namespace=ns))
sg_rule_body_list.append(sg_rule) sg_rule_body_list.append(sg_rule)
if direction == 'egress': if direction == 'egress':
self._create_svc_egress_sg_rule( self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list, policy_namespace, sg_rule_body_list,
resource=resource, port=port.get('port'), resource=resource, port=port.get('port'),
protocol=port.get('protocol')) protocol=port.get('protocol'))
def _create_all_pods_sg_rules(self, port, sg_id, direction, def _create_all_pods_sg_rules(self, port, direction,
sg_rule_body_list, pod_selector, sg_rule_body_list, pod_selector,
policy_namespace): policy_namespace):
if type(port.get('port')) is not int: if type(port.get('port')) is not int:
all_pods = driver_utils.get_namespaced_pods().get('items') all_pods = driver_utils.get_namespaced_pods().get('items')
self._create_sg_rule_body_on_text_port( self._create_sg_rule_body_on_text_port(
sg_id, direction, port, all_pods, direction, port, all_pods,
sg_rule_body_list, pod_selector, policy_namespace, sg_rule_body_list, pod_selector, policy_namespace,
allow_all=True) allow_all=True)
else: else:
for ethertype in (constants.IPv4, constants.IPv6): for ethertype in (constants.IPv4, constants.IPv6):
sg_rule = ( sg_rule = (
driver_utils.create_security_group_rule_body( driver_utils.create_security_group_rule_body(
sg_id, direction, port.get('port'), direction, port.get('port'),
ethertype=ethertype, ethertype=ethertype,
protocol=port.get('protocol'))) protocol=port.get('protocol')))
sg_rule_body_list.append(sg_rule) sg_rule_body_list.append(sg_rule)
if direction == 'egress': if direction == 'egress':
self._create_svc_egress_sg_rule( self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list, policy_namespace, sg_rule_body_list,
port=port.get('port'), port=port.get('port'),
protocol=port.get('protocol')) protocol=port.get('protocol'))
def _create_default_sg_rule(self, sg_id, direction, sg_rule_body_list): def _create_default_sg_rule(self, direction, sg_rule_body_list):
for ethertype in (constants.IPv4, constants.IPv6): for ethertype in (constants.IPv4, constants.IPv6):
default_rule = { default_rule = {
'security_group_rule': { 'sgRule': {
'ethertype': ethertype, 'ethertype': ethertype,
'security_group_id': sg_id,
'direction': direction, 'direction': direction,
'description': 'Kuryr-Kubernetes NetPolicy SG rule', 'description': 'Kuryr-Kubernetes NetPolicy SG rule',
}} }}
sg_rule_body_list.append(default_rule) sg_rule_body_list.append(default_rule)
def _parse_sg_rules(self, sg_rule_body_list, direction, policy, sg_id): def _parse_sg_rules(self, sg_rule_body_list, direction, policy):
"""Parse policy into security group rules. """Parse policy into security group rules.
This method inspects the policy object and creates the equivalent This method inspects the policy object and creates the equivalent
@@ -460,16 +424,14 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
# traffic as NP policy is not affecting ingress # traffic as NP policy is not affecting ingress
LOG.debug('Applying default all open for ingress for ' LOG.debug('Applying default all open for ingress for '
'policy %s', policy['metadata']['selfLink']) 'policy %s', policy['metadata']['selfLink'])
self._create_default_sg_rule( self._create_default_sg_rule(direction, sg_rule_body_list)
sg_id, direction, sg_rule_body_list)
elif direction == 'egress': elif direction == 'egress':
if policy_types and 'Egress' not in policy_types: if policy_types and 'Egress' not in policy_types:
# NOTE(ltomasbo): add default rule to enable all egress # NOTE(ltomasbo): add default rule to enable all egress
# traffic as NP policy is not affecting egress # traffic as NP policy is not affecting egress
LOG.debug('Applying default all open for egress for ' LOG.debug('Applying default all open for egress for '
'policy %s', policy['metadata']['selfLink']) 'policy %s', policy['metadata']['selfLink'])
self._create_default_sg_rule( self._create_default_sg_rule(direction, sg_rule_body_list)
sg_id, direction, sg_rule_body_list)
else: else:
LOG.warning('Not supported policyType at network policy %s', LOG.warning('Not supported policyType at network policy %s',
policy['metadata']['selfLink']) policy['metadata']['selfLink'])
@@ -487,7 +449,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
policy['metadata']['selfLink']) policy['metadata']['selfLink'])
for ethertype in (constants.IPv4, constants.IPv6): for ethertype in (constants.IPv4, constants.IPv6):
rule = driver_utils.create_security_group_rule_body( rule = driver_utils.create_security_group_rule_body(
sg_id, direction, ethertype=ethertype) direction, ethertype=ethertype)
sg_rule_body_list.append(rule) sg_rule_body_list.append(rule)
for rule_block in rule_list: for rule_block in rule_list:
@@ -519,20 +481,20 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if allowed_resources or allow_all or selectors: if allowed_resources or allow_all or selectors:
if type(port.get('port')) is not int: if type(port.get('port')) is not int:
self._create_sg_rule_body_on_text_port( self._create_sg_rule_body_on_text_port(
sg_id, direction, port, allowed_resources, direction, port, allowed_resources,
sg_rule_body_list, pod_selector, sg_rule_body_list, pod_selector,
policy_namespace) policy_namespace)
else: else:
self._create_sg_rule_on_number_port( self._create_sg_rule_on_number_port(
allowed_resources, sg_id, direction, port, allowed_resources, direction, port,
sg_rule_body_list, policy_namespace) sg_rule_body_list, policy_namespace)
if allow_all: if allow_all:
self._create_all_pods_sg_rules( self._create_all_pods_sg_rules(
port, sg_id, direction, sg_rule_body_list, port, direction, sg_rule_body_list,
pod_selector, policy_namespace) pod_selector, policy_namespace)
else: else:
self._create_all_pods_sg_rules( self._create_all_pods_sg_rules(
port, sg_id, direction, sg_rule_body_list, port, direction, sg_rule_body_list,
pod_selector, policy_namespace) pod_selector, policy_namespace)
elif allowed_resources or allow_all or selectors: elif allowed_resources or allow_all or selectors:
for resource in allowed_resources: for resource in allowed_resources:
@@ -543,27 +505,27 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not cidr: if not cidr:
continue continue
rule = driver_utils.create_security_group_rule_body( rule = driver_utils.create_security_group_rule_body(
sg_id, direction, direction,
port_range_min=1, port_range_min=1,
port_range_max=65535, port_range_max=65535,
cidr=cidr, cidr=cidr,
namespace=namespace) namespace=namespace)
sg_rule_body_list.append(rule) sg_rule_body_list.append(rule)
if direction == 'egress': if direction == 'egress':
rule = self._create_svc_egress_sg_rule( self._create_svc_egress_sg_rule(
sg_id, policy_namespace, sg_rule_body_list, policy_namespace, sg_rule_body_list,
resource=resource) resource=resource)
if allow_all: if allow_all:
for ethertype in (constants.IPv4, constants.IPv6): for ethertype in (constants.IPv4, constants.IPv6):
rule = driver_utils.create_security_group_rule_body( rule = driver_utils.create_security_group_rule_body(
sg_id, direction, direction,
port_range_min=1, port_range_min=1,
port_range_max=65535, port_range_max=65535,
ethertype=ethertype) ethertype=ethertype)
sg_rule_body_list.append(rule) sg_rule_body_list.append(rule)
if direction == 'egress': if direction == 'egress':
self._create_svc_egress_sg_rule( self._create_svc_egress_sg_rule(policy_namespace,
sg_id, policy_namespace, sg_rule_body_list) sg_rule_body_list)
else: else:
LOG.debug('This network policy specifies no %(direction)s ' LOG.debug('This network policy specifies no %(direction)s '
'%(rule_direction)s and no ports: %(policy)s', '%(rule_direction)s and no ports: %(policy)s',
@@ -571,15 +533,14 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
'rule_direction': rule_direction, 'rule_direction': rule_direction,
'policy': policy['metadata']['selfLink']}) 'policy': policy['metadata']['selfLink']})
def _create_svc_egress_sg_rule(self, sg_id, policy_namespace, def _create_svc_egress_sg_rule(self, policy_namespace, sg_rule_body_list,
sg_rule_body_list, resource=None, resource=None, port=None, protocol=None):
port=None, protocol=None):
services = driver_utils.get_services() services = driver_utils.get_services()
if not resource: if not resource:
svc_subnet = utils.get_subnet_cidr( svc_subnet = utils.get_subnet_cidr(
CONF.neutron_defaults.service_subnet) CONF.neutron_defaults.service_subnet)
rule = driver_utils.create_security_group_rule_body( rule = driver_utils.create_security_group_rule_body(
sg_id, 'egress', port, protocol=protocol, cidr=svc_subnet) 'egress', port, protocol=protocol, cidr=svc_subnet)
if rule not in sg_rule_body_list: if rule not in sg_rule_body_list:
sg_rule_body_list.append(rule) sg_rule_body_list.append(rule)
return return
@@ -613,7 +574,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
if not cluster_ip: if not cluster_ip:
continue continue
rule = driver_utils.create_security_group_rule_body( rule = driver_utils.create_security_group_rule_body(
sg_id, 'egress', port, protocol=protocol, 'egress', port, protocol=protocol,
cidr=cluster_ip) cidr=cluster_ip)
if rule not in sg_rule_body_list: if rule not in sg_rule_body_list:
sg_rule_body_list.append(rule) sg_rule_body_list.append(rule)
@@ -626,7 +587,7 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
return True return True
return False return False
def parse_network_policy_rules(self, policy, sg_id): def parse_network_policy_rules(self, policy):
"""Create security group rule bodies out of network policies. """Create security group rule bodies out of network policies.
Whenever a notification from the handler 'on-present' method is Whenever a notification from the handler 'on-present' method is
@@ -637,10 +598,8 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
ingress_sg_rule_body_list = [] ingress_sg_rule_body_list = []
egress_sg_rule_body_list = [] egress_sg_rule_body_list = []
self._parse_sg_rules(ingress_sg_rule_body_list, 'ingress', policy, self._parse_sg_rules(ingress_sg_rule_body_list, 'ingress', policy)
sg_id) self._parse_sg_rules(egress_sg_rule_body_list, 'egress', policy)
self._parse_sg_rules(egress_sg_rule_body_list, 'egress', policy,
sg_id)
return ingress_sg_rule_body_list, egress_sg_rule_body_list return ingress_sg_rule_body_list, egress_sg_rule_body_list
@@ -657,19 +616,15 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
LOG.exception("Error deleting security group %s.", sg_id) LOG.exception("Error deleting security group %s.", sg_id)
raise raise
def release_network_policy(self, netpolicy_crd): def release_network_policy(self, policy):
if netpolicy_crd is not None: return self._del_knp_crd(policy)
self.delete_np_sg(netpolicy_crd['spec']['securityGroupId'])
self._del_kuryrnetpolicy_crd(
netpolicy_crd['metadata']['name'],
netpolicy_crd['metadata']['namespace'])
def get_kuryrnetpolicy_crd(self, policy): def _get_knp_crd(self, policy):
netpolicy_crd_name = "np-" + policy['metadata']['name'] netpolicy_crd_name = policy['metadata']['name']
netpolicy_crd_namespace = policy['metadata']['namespace'] netpolicy_crd_namespace = policy['metadata']['namespace']
try: try:
netpolicy_crd = self.kubernetes.get( netpolicy_crd = self.kubernetes.get(
'{}/{}/kuryrnetpolicies/{}'.format( '{}/{}/kuryrnetworkpolicies/{}'.format(
constants.K8S_API_CRD_NAMESPACES, netpolicy_crd_namespace, constants.K8S_API_CRD_NAMESPACES, netpolicy_crd_namespace,
netpolicy_crd_name)) netpolicy_crd_name))
except exceptions.K8sResourceNotFound: except exceptions.K8sResourceNotFound:
@@ -679,77 +634,81 @@ class NetworkPolicyDriver(base.NetworkPolicyDriver):
raise raise
return netpolicy_crd return netpolicy_crd
def knps_on_namespace(self, namespace): def _create_knp_crd(self, policy, i_rules, e_rules):
try:
netpolicy_crds = self.kubernetes.get(
'{}/{}/kuryrnetpolicies'.format(
constants.K8S_API_CRD_NAMESPACES,
namespace))
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception.")
raise
if netpolicy_crds.get('items'):
return True
return False
def _add_kuryrnetpolicy_crd(self, policy, project_id, sg_id, i_rules,
e_rules):
networkpolicy_name = policy['metadata']['name'] networkpolicy_name = policy['metadata']['name']
netpolicy_crd_name = "np-" + networkpolicy_name
namespace = policy['metadata']['namespace'] namespace = policy['metadata']['namespace']
pod_selector = policy['spec'].get('podSelector') pod_selector = policy['spec'].get('podSelector')
policy_types = policy['spec'].get('policyTypes', [])
netpolicy_crd = { netpolicy_crd = {
'apiVersion': 'openstack.org/v1', 'apiVersion': 'openstack.org/v1',
'kind': constants.K8S_OBJ_KURYRNETPOLICY, 'kind': constants.K8S_OBJ_KURYRNETWORKPOLICY,
'metadata': { 'metadata': {
'name': netpolicy_crd_name, 'name': networkpolicy_name,
'namespace': namespace, 'namespace': namespace,
'annotations': { 'annotations': {
'networkpolicy_name': networkpolicy_name, 'networkPolicyLink': policy['metadata']['selfLink'],
'networkpolicy_namespace': namespace,
'networkpolicy_uid': policy['metadata']['uid'],
}, },
'finalizers': [constants.NETWORKPOLICY_FINALIZER],
}, },
'spec': { 'spec': {
'securityGroupName': "sg-" + networkpolicy_name,
'securityGroupId': sg_id,
'ingressSgRules': i_rules, 'ingressSgRules': i_rules,
'egressSgRules': e_rules, 'egressSgRules': e_rules,
'podSelector': pod_selector, 'podSelector': pod_selector,
'networkpolicy_spec': policy['spec'] 'policyTypes': policy_types,
},
'status': {
'securityGroupRules': [],
}, },
} }
try: try:
LOG.debug("Creating KuryrNetPolicy CRD %s" % netpolicy_crd) LOG.debug("Creating KuryrNetworkPolicy CRD %s" % netpolicy_crd)
kubernetes_post = '{}/{}/kuryrnetpolicies'.format( url = '{}/{}/kuryrnetworkpolicies'.format(
constants.K8S_API_CRD_NAMESPACES, constants.K8S_API_CRD_NAMESPACES,
namespace) namespace)
self.kubernetes.post(kubernetes_post, netpolicy_crd) netpolicy_crd = self.kubernetes.post(url, netpolicy_crd)
except exceptions.K8sClientException: except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception creating kuryrnetpolicy" LOG.exception("Kubernetes Client Exception creating "
" CRD. %s" % exceptions.K8sClientException) "KuryrNetworkPolicy CRD.")
raise raise
return netpolicy_crd return netpolicy_crd
def _del_kuryrnetpolicy_crd(self, netpolicy_crd_name, def _patch_knp_crd(self, policy, i_rules, e_rules, knp):
netpolicy_crd_namespace): networkpolicy_name = policy['metadata']['name']
namespace = policy['metadata']['namespace']
pod_selector = policy['spec'].get('podSelector')
url = (f'{constants.K8S_API_CRD_NAMESPACES}/{namespace}'
f'/kuryrnetworkpolicies/{networkpolicy_name}')
# FIXME(dulek): Rules should be hashable objects, not dict so that
# we could compare them easily here.
data = {
'ingressSgRules': i_rules,
'egressSgRules': e_rules,
}
if knp['spec'].get('podSelector') != pod_selector:
data['podSelector'] = pod_selector
self.kubernetes.patch_crd('spec', url, data)
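The patch issued by `_patch_knp_crd` targets the namespaced `kuryrnetworkpolicies` endpoint and merges the recalculated rules under the object's `spec`. A sketch of the URL and payload, where the API group prefix is an assumption standing in for `constants.K8S_API_CRD_NAMESPACES`:

```python
# Sketch of the patch _patch_knp_crd sends; only spec is touched, so the
# status section (owned by KuryrNetworkPolicyHandler) stays intact.
K8S_API_CRD_NAMESPACES = '/apis/openstack.org/v1/namespaces'  # assumed

namespace, name = 'default', 'allow-db'   # illustrative object identity
url = (f'{K8S_API_CRD_NAMESPACES}/{namespace}'
       f'/kuryrnetworkpolicies/{name}')
data = {
    'ingressSgRules': [{'sgRule': {'direction': 'ingress',
                                   'ethertype': 'IPv4'}}],
    'egressSgRules': [],
}
# kubernetes.patch_crd('spec', url, data) would then merge `data` into
# the CRD's spec; podSelector is added only when it actually changed.
```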
def _del_knp_crd(self, policy):
try: try:
LOG.debug("Deleting KuryrNetPolicy CRD %s" % netpolicy_crd_name) ns = policy['metadata']['namespace']
self.kubernetes.delete('{}/{}/kuryrnetpolicies/{}'.format( name = policy['metadata']['name']
constants.K8S_API_CRD_NAMESPACES, LOG.debug("Deleting KuryrNetworkPolicy CRD %s" % name)
netpolicy_crd_namespace, self.kubernetes.delete('{}/{}/kuryrnetworkpolicies/{}'.format(
netpolicy_crd_name)) constants.K8S_API_CRD_NAMESPACES, ns, name))
return True
except exceptions.K8sResourceNotFound: except exceptions.K8sResourceNotFound:
LOG.debug("KuryrNetPolicy CRD Object not found: %s", LOG.debug("KuryrNetworkPolicy CRD Object not found: %s", name)
netpolicy_crd_name) return False
except exceptions.K8sClientException: except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception deleting kuryrnetpolicy" LOG.exception("Kubernetes Client Exception deleting "
" CRD.") "KuryrNetworkPolicy CRD %s." % name)
raise raise
def affected_pods(self, policy, selector=None): def affected_pods(self, policy, selector=None):
if selector or selector == {}: if selector is not None:
pod_selector = selector pod_selector = selector
else: else:
pod_selector = policy['spec'].get('podSelector') pod_selector = policy['spec'].get('podSelector')


@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and # See the License for the specific language governing permissions and
# limitations under the License. # limitations under the License.
import uuid
from oslo_config import cfg from oslo_config import cfg
from oslo_log import log as logging from oslo_log import log as logging
@@ -21,6 +23,7 @@ from kuryr_kubernetes import constants from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base from kuryr_kubernetes.controller.drivers import base
from kuryr_kubernetes.controller.drivers import utils as driver_utils from kuryr_kubernetes.controller.drivers import utils as driver_utils
from kuryr_kubernetes import exceptions from kuryr_kubernetes import exceptions
from kuryr_kubernetes import utils
LOG = logging.getLogger(__name__) LOG = logging.getLogger(__name__)
@@ -29,9 +32,7 @@ def _get_namespace_labels(namespace):
kubernetes = clients.get_kubernetes_client() kubernetes = clients.get_kubernetes_client()
try: try:
path = '{}/{}'.format( path = '{}/{}'.format(constants.K8S_API_NAMESPACES, namespace)
constants.K8S_API_NAMESPACES, namespace)
LOG.debug("K8s API Query %s", path)
namespaces = kubernetes.get(path) namespaces = kubernetes.get(path)
LOG.debug("Return Namespace: %s", namespaces) LOG.debug("Return Namespace: %s", namespaces)
except exceptions.K8sResourceNotFound: except exceptions.K8sResourceNotFound:
@@ -43,107 +44,41 @@ def _get_namespace_labels(namespace):
return namespaces['metadata'].get('labels') return namespaces['metadata'].get('labels')
def _create_sg_rule(sg_id, direction, cidr, port=None, namespace=None): def _bump_networkpolicy(knp):
if port: kubernetes = clients.get_kubernetes_client()
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, port.get('port'),
protocol=port.get('protocol'), cidr=cidr, namespace=namespace)
else:
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, port_range_min=1,
port_range_max=65535, cidr=cidr, namespace=namespace)
sgr_id = driver_utils.create_security_group_rule(sg_rule) try:
kubernetes.annotate(
sg_rule['security_group_rule']['id'] = sgr_id knp['metadata']['annotations']['networkPolicyLink'],
return sg_rule {constants.K8S_ANNOTATION_POLICY: str(uuid.uuid4())})
except exceptions.K8sResourceNotFound:
LOG.exception("NetworkPolicy not found")
raise
except exceptions.K8sClientException:
LOG.exception("Kubernetes Client Exception")
raise
def _get_crd_rule(crd_rules, container_port): def _create_sg_rules_with_container_ports(container_ports, matched):
"""Returns a CRD rule that matches a container port """Checks if security group rules based on container ports will be updated
Retrieves the CRD rule that contains a given port in
the range of the rule ports.
"""
for crd_rule in crd_rules:
remote_ip_prefixes = crd_rule.get('remote_ip_prefixes')
min_port = crd_rule['security_group_rule'].get('port_range_min')
max_port = crd_rule['security_group_rule'].get('port_range_max')
if (remote_ip_prefixes and (
min_port >= container_port and
container_port <= max_port)):
return crd_rule
def _create_sg_rules_with_container_ports(matched_pods, container_ports,
allow_all, namespace, matched,
crd_rules, sg_id, direction,
port, rule_selected_pod):
"""Create security group rules based on container ports
If it's an allow from/to everywhere rule or a rule with a
NamespaceSelector, updates a sg rule that might already exist
and match the named port or creates a new one with the
remote_ip_prefixes field containing the matched pod info.
Otherwise, creates rules for each container port without
a remote_ip_prefixes field.
param matched_pods: List of dicts where the key is a container
port and value is the pods that have the port
param container_ports: List of tuples with pods and port values param container_ports: List of tuples with pods and port values
param allow_all: True is it's an allow from/to everywhere rule,
False otherwise.
param namespace: Namespace name
param matched: If a sg rule was created for the NP rule param matched: If a sg rule was created for the NP rule
param crd_rules: List of sg rules to update when patching the CRD
param sg_id: ID of the security group
param direction: String representing rule direction, ingress or egress
param port: Dict containing port and protocol
param rule_selected_pod: K8s Pod object selected by the rules selectors
return: True if a sg rule was created, False otherwise. return: True if a sg rule needs to be created, False otherwise.
""" """
for pod, container_port in container_ports: for pod, container_port in container_ports:
pod_namespace = pod['metadata']['namespace']
pod_ip = driver_utils.get_pod_ip(pod) pod_ip = driver_utils.get_pod_ip(pod)
if not pod_ip: if not pod_ip:
LOG.debug("Skipping SG rule creation for pod %s due to " LOG.debug("Skipping SG rule creation for pod %s due to "
"no IP assigned", pod['metadata']['name']) "no IP assigned", pod['metadata']['name'])
continue continue
return matched
pod_info = {pod_ip: pod_namespace} return False
matched = True
if allow_all or namespace:
crd_rule = _get_crd_rule(crd_rules, container_port)
if crd_rule:
crd_rule['remote_ip_prefixes'].update(pod_info)
else:
if container_port in matched_pods:
matched_pods[container_port].update(pod_info)
else:
matched_pods[container_port] = pod_info
else:
pod_ip = driver_utils.get_pod_ip(rule_selected_pod)
if not pod_ip:
LOG.debug("Skipping SG rule creation for pod %s due to no IP "
"assigned", rule_selected_pod['metadata']['name'])
continue
sg_rule = driver_utils.create_security_group_rule_body(
sg_id, direction, container_port,
protocol=port.get('protocol'),
cidr=pod_ip, pods=pod_info)
sgr_id = driver_utils.create_security_group_rule(sg_rule)
sg_rule['security_group_rule']['id'] = sgr_id
if sg_rule not in crd_rules:
crd_rules.append(sg_rule)
return matched
def _create_sg_rule_on_text_port(sg_id, direction, port, rule_selected_pods, def _create_sg_rule_on_text_port(direction, port, rule_selected_pods, matched,
crd_rules, matched, crd, crd):
allow_all=False, namespace=None):
matched_pods = {}
spec_pod_selector = crd['spec'].get('podSelector') spec_pod_selector = crd['spec'].get('podSelector')
policy_namespace = crd['metadata']['namespace'] policy_namespace = crd['metadata']['namespace']
spec_pods = driver_utils.get_pods( spec_pods = driver_utils.get_pods(
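The new `_bump_networkpolicy` helper above no longer recalculates rules inline; it just writes a random UUID into an annotation on the linked NetworkPolicy so the API server emits a MODIFIED event and the KuryrNetworkPolicy handler re-runs. A minimal standalone sketch of that idea (the annotation key here is illustrative; the real constant lives in `kuryr_kubernetes.constants`):

```python
import uuid

# Illustrative stand-in for constants.K8S_ANNOTATION_POLICY; the real
# constant name lives in kuryr_kubernetes.constants.
K8S_ANNOTATION_POLICY = 'openstack.org/kuryr-network-policy'


def bump(annotations):
    """Write a fresh UUID into a dummy annotation so the API server records
    a change and watchers receive a MODIFIED event for the object."""
    annotations[K8S_ANNOTATION_POLICY] = str(uuid.uuid4())
    return annotations
```

Because the value is a fresh UUID every time, two consecutive bumps always differ, which guarantees the resourceVersion changes even if nothing else in the object did.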
@@ -151,11 +86,8 @@ def _create_sg_rule_on_text_port(sg_id, direction, port, rule_selected_pods,
     if direction == 'ingress':
         for spec_pod in spec_pods:
             container_ports = driver_utils.get_ports(spec_pod, port)
-            for rule_selected_pod in rule_selected_pods:
-                matched = _create_sg_rules_with_container_ports(
-                    matched_pods, container_ports, allow_all, namespace,
-                    matched, crd_rules, sg_id, direction, port,
-                    rule_selected_pod)
+            matched = _create_sg_rules_with_container_ports(
+                container_ports, matched)
     elif direction == 'egress':
         for rule_selected_pod in rule_selected_pods:
             pod_label = rule_selected_pod['metadata'].get('labels')
@@ -168,51 +100,11 @@ def _create_sg_rule_on_text_port(sg_id, direction, port, rule_selected_pods,
             container_ports = driver_utils.get_ports(
                 rule_selected_pod, port)
             matched = _create_sg_rules_with_container_ports(
-                matched_pods, container_ports, allow_all,
-                namespace, matched, crd_rules, sg_id, direction,
-                port, rule_selected_pod)
-
-    _apply_sg_rules_on_matched_pods(matched_pods, sg_id, direction, namespace,
-                                    port, crd_rules, allow_all)
+                container_ports, matched)
     return matched


-def _apply_sg_rules_on_matched_pods(matched_pods, sg_id, direction, namespace,
-                                    port, crd_rules, allow_all=False):
-    for container_port, pods in matched_pods.items():
-        if allow_all:
-            for ethertype in (constants.IPv4, constants.IPv6):
-                sg_rule = driver_utils.create_security_group_rule_body(
-                    sg_id, direction, container_port,
-                    protocol=port.get('protocol'),
-                    ethertype=ethertype,
-                    pods=pods)
-                sgr_id = driver_utils.create_security_group_rule(sg_rule)
-                sg_rule['security_group_rule']['id'] = sgr_id
-                if sg_rule not in crd_rules:
-                    crd_rules.append(sg_rule)
-        else:
-            namespace_obj = driver_utils.get_namespace(namespace)
-            if not namespace_obj:
-                LOG.debug("Skipping SG rule creation. Inexistent"
-                          " namespace.")
-                continue
-            namespace_cidr = driver_utils.get_namespace_subnet_cidr(
-                namespace_obj)
-            sg_rule = driver_utils.create_security_group_rule_body(
-                sg_id, direction, container_port,
-                protocol=port.get('protocol'), cidr=namespace_cidr,
-                pods=pods)
-            sgr_id = driver_utils.create_security_group_rule(sg_rule)
-            sg_rule['security_group_rule']['id'] = sgr_id
-            if sg_rule not in crd_rules:
-                crd_rules.append(sg_rule)
-
-
-def _create_sg_rules(crd, pod, pod_selector, rule_block,
-                     crd_rules, direction, matched, namespace=None,
-                     allow_all=False):
+def _create_sg_rules(crd, pod, pod_selector, rule_block, direction, matched):
     pod_labels = pod['metadata'].get('labels')
     pod_ip = driver_utils.get_pod_ip(pod)
     if not pod_ip:
@@ -224,73 +116,52 @@ def _create_sg_rules(crd, pod, pod_selector, rule_block,
     # with empty value or with '{}', as they have same result in here.
     if pod_selector:
         if driver_utils.match_selector(pod_selector, pod_labels):
-            sg_id = crd['spec']['securityGroupId']
             if 'ports' in rule_block:
                 for port in rule_block['ports']:
                     if type(port.get('port')) is not int:
                         matched = _create_sg_rule_on_text_port(
-                            sg_id, direction, port, [pod],
-                            crd_rules, matched, crd)
+                            direction, port, [pod], matched, crd)
                     else:
                         matched = True
-                        sg_rule = _create_sg_rule(
-                            sg_id, direction, cidr=pod_ip, port=port,
-                            namespace=namespace)
-                        if sg_rule not in crd_rules:
-                            crd_rules.append(sg_rule)
             else:
                 matched = True
-                sg_rule = _create_sg_rule(
-                    sg_id, direction, cidr=pod_ip, namespace=namespace)
-                if sg_rule not in crd_rules:
-                    crd_rules.append(sg_rule)
     else:
         # NOTE (maysams) When a policy with namespaceSelector and text port
         # is applied the port on the pods needs to be retrieved.
-        sg_id = crd['spec']['securityGroupId']
         if 'ports' in rule_block:
             for port in rule_block['ports']:
                 if type(port.get('port')) is not int:
-                    matched = (
-                        _create_sg_rule_on_text_port(
-                            sg_id, direction, port, [pod],
-                            crd_rules, matched, crd,
-                            allow_all=allow_all, namespace=namespace))
+                    matched = _create_sg_rule_on_text_port(
+                        direction, port, [pod], matched, crd)
     return matched


 def _parse_selectors_on_pod(crd, pod, pod_selector, namespace_selector,
-                            rule_block, crd_rules, direction, matched):
+                            rule_block, direction, matched):
     pod_namespace = pod['metadata']['namespace']
     pod_namespace_labels = _get_namespace_labels(pod_namespace)
     policy_namespace = crd['metadata']['namespace']

     if namespace_selector == {}:
         matched = _create_sg_rules(crd, pod, pod_selector, rule_block,
-                                   crd_rules, direction, matched,
-                                   allow_all=True)
+                                   direction, matched)
     elif namespace_selector:
         if (pod_namespace_labels and
                 driver_utils.match_selector(namespace_selector,
                                             pod_namespace_labels)):
             matched = _create_sg_rules(crd, pod, pod_selector,
-                                       rule_block, crd_rules,
-                                       direction, matched,
-                                       namespace=pod_namespace)
+                                       rule_block, direction, matched)
     else:
         if pod_namespace == policy_namespace:
             matched = _create_sg_rules(crd, pod, pod_selector, rule_block,
-                                       crd_rules, direction, matched,
-                                       namespace=pod_namespace)
-    return matched, crd_rules
+                                       direction, matched)
+    return matched


 def _parse_selectors_on_namespace(crd, direction, pod_selector,
-                                  ns_selector, rule_block, crd_rules,
-                                  namespace, matched):
+                                  ns_selector, rule_block, namespace, matched):
     ns_name = namespace['metadata'].get('name')
     ns_labels = namespace['metadata'].get('labels')
-    sg_id = crd['spec']['securityGroupId']

     if (ns_selector and ns_labels and
             driver_utils.match_selector(ns_selector, ns_labels)):
@@ -301,10 +172,8 @@ def _parse_selectors_on_namespace(crd, direction, pod_selector,
                 if type(port.get('port')) is not int:
                     matched = (
                         _create_sg_rule_on_text_port(
-                            sg_id, direction, port, pods,
-                            crd_rules, matched, crd))
+                            direction, port, pods, matched, crd))
                 else:
-                    matched = True
                     for pod in pods:
                         pod_ip = driver_utils.get_pod_ip(pod)
                         if not pod_ip:
@@ -312,11 +181,7 @@ def _parse_selectors_on_namespace(crd, direction, pod_selector,
                             LOG.debug("Skipping SG rule creation for pod "
                                       "%s due to no IP assigned", pod_name)
                             continue
-                        sg_rule = _create_sg_rule(
-                            sg_id, direction, pod_ip, port=port,
-                            namespace=ns_name)
-                        if sg_rule not in crd_rules:
-                            crd_rules.append(sg_rule)
+                        matched = True
         else:
             for pod in pods:
                 pod_ip = driver_utils.get_pod_ip(pod)
@@ -326,45 +191,25 @@ def _parse_selectors_on_namespace(crd, direction, pod_selector,
                               " to no IP assigned", pod_name)
                     continue
                 matched = True
-                sg_rule = _create_sg_rule(
-                    sg_id, direction, pod_ip,
-                    namespace=ns_name)
-                if sg_rule not in crd_rules:
-                    crd_rules.append(sg_rule)
     else:
         ns_pods = driver_utils.get_pods(ns_selector)['items']
-        ns_cidr = driver_utils.get_namespace_subnet_cidr(namespace)
         if 'ports' in rule_block:
             for port in rule_block['ports']:
                 if type(port.get('port')) is not int:
                     matched = (
                         _create_sg_rule_on_text_port(
-                            sg_id, direction, port, ns_pods,
-                            crd_rules, matched, crd))
+                            direction, port, ns_pods, matched, crd))
                 else:
                     matched = True
-                    sg_rule = _create_sg_rule(
-                        sg_id, direction, ns_cidr,
-                        port=port, namespace=ns_name)
-                    if sg_rule not in crd_rules:
-                        crd_rules.append(sg_rule)
         else:
             matched = True
-            sg_rule = _create_sg_rule(
-                sg_id, direction, ns_cidr,
-                namespace=ns_name)
-            if sg_rule not in crd_rules:
-                crd_rules.append(sg_rule)
-    return matched, crd_rules
+    return matched


-def _parse_rules(direction, crd, pod=None, namespace=None):
-    policy = crd['spec']['networkpolicy_spec']
+def _parse_rules(direction, crd, policy, pod=None, namespace=None):
     rule_direction = 'from'
-    crd_rules = crd['spec'].get('ingressSgRules')
     if direction == 'egress':
         rule_direction = 'to'
-        crd_rules = crd['spec'].get('egressSgRules')

     matched = False
     rule_list = policy.get(direction, [])
@@ -373,13 +218,13 @@ def _parse_rules(direction, crd, pod=None, namespace=None):
         namespace_selector = rule.get('namespaceSelector')
         pod_selector = rule.get('podSelector')
         if pod:
-            matched, crd_rules = _parse_selectors_on_pod(
+            matched = _parse_selectors_on_pod(
                 crd, pod, pod_selector, namespace_selector,
-                rule_block, crd_rules, direction, matched)
+                rule_block, direction, matched)
         elif namespace:
-            matched, crd_rules = _parse_selectors_on_namespace(
+            matched = _parse_selectors_on_namespace(
                 crd, direction, pod_selector, namespace_selector,
-                rule_block, crd_rules, namespace, matched)
+                rule_block, namespace, matched)

         # NOTE(maysams): Cover the case of a network policy that allows
         # from everywhere on a named port, e.g., when there is no 'from'
@@ -387,84 +232,62 @@ def _parse_rules(direction, crd, pod=None, namespace=None):
         if pod and not matched:
             for port in rule_block.get('ports', []):
                 if type(port.get('port')) is not int:
-                    sg_id = crd['spec']['securityGroupId']
                     if (not rule_block.get(rule_direction, [])
                             or direction == "ingress"):
-                        matched = (_create_sg_rule_on_text_port(
-                            sg_id, direction, port, [pod],
-                            crd_rules, matched, crd,
-                            allow_all=True))
+                        matched = _create_sg_rule_on_text_port(
+                            direction, port, [pod], matched, crd)

-    return matched, crd_rules
+    return matched
 def _parse_rules_on_delete_namespace(rule_list, direction, ns_name):
-    matched = False
-    rules = []
     for rule in rule_list:
-        LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction,
-                                                 'r': rule})
+        LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction, 'r': rule})
         rule_namespace = rule.get('namespace', None)
-        remote_ip_prefixes = rule.get('remote_ip_prefixes', {})
+        affectedPods = rule.get('affectedPods', [])
         if rule_namespace and rule_namespace == ns_name:
-            matched = True
-            driver_utils.delete_security_group_rule(
-                rule['security_group_rule']['id'])
-        elif remote_ip_prefixes:
-            for remote_ip, namespace in list(remote_ip_prefixes.items()):
-                if namespace == ns_name:
-                    matched = True
-                    remote_ip_prefixes.pop(remote_ip)
-            if remote_ip_prefixes:
-                rule['remote_ip_prefixes'] = remote_ip_prefixes
-                rules.append(rule)
-        else:
-            rules.append(rule)
-    return matched, rules
+            return True
+        elif affectedPods:
+            for pod_info in affectedPods:
+                if pod_info['podNamespace'] == ns_name:
+                    return True
+    return False
 def _parse_rules_on_delete_pod(rule_list, direction, pod_ip):
-    matched = False
-    rules = []
     for rule in rule_list:
-        LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction,
-                                                 'r': rule})
-        remote_ip_prefix = rule['security_group_rule'].get(
-            'remote_ip_prefix')
-        remote_ip_prefixes = rule.get('remote_ip_prefixes', {})
+        LOG.debug('Parsing %(dir)s Rule %(r)s', {'dir': direction, 'r': rule})
+        remote_ip_prefix = rule['sgRule'].get('remote_ip_prefix')
+        affectedPods = rule.get('affectedPods', [])
         if remote_ip_prefix and remote_ip_prefix == pod_ip:
-            matched = True
-            driver_utils.delete_security_group_rule(
-                rule['security_group_rule']['id'])
-        elif remote_ip_prefixes:
-            if pod_ip in remote_ip_prefixes:
-                matched = True
-                remote_ip_prefixes.pop(pod_ip)
-                if remote_ip_prefixes:
-                    rule['remote_ip_prefixes'] = remote_ip_prefixes
-                rules.append(rule)
-        else:
-            rules.append(rule)
-    return matched, rules
+            return True
+        elif affectedPods:
+            for pod_info in affectedPods:
+                if pod_info['podIP'] == pod_ip:
+                    return True
+    return False
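The two `_parse_rules_on_delete_*` helpers now only report whether a pod or namespace is referenced by a CRD's rules (the deletion itself is left to the KuryrNetworkPolicy handler after the bump). A minimal standalone sketch of the per-rule matching against the new `sgRule`/`affectedPods` layout:

```python
def rule_affects_pod(rule, pod_ip):
    # A rule references a pod either directly via its remote_ip_prefix, or
    # through the affectedPods list that replaced the old
    # remote_ip_prefixes dict in the KuryrNetworkPolicy CRD.
    if rule['sgRule'].get('remote_ip_prefix') == pod_ip:
        return True
    return any(p['podIP'] == pod_ip for p in rule.get('affectedPods', []))
```

The namespace variant is analogous, comparing `podNamespace` (or the rule's own `namespace` field) instead of `podIP`.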
-def _get_pod_sgs(pod, project_id):
+def _get_pod_sgs(pod):
     sg_list = []

     pod_labels = pod['metadata'].get('labels')
     pod_namespace = pod['metadata']['namespace']

-    knp_crds = driver_utils.get_kuryrnetpolicy_crds(
+    knp_crds = driver_utils.get_kuryrnetworkpolicy_crds(
         namespace=pod_namespace)
-    for crd in knp_crds.get('items'):
+    for crd in knp_crds:
         pod_selector = crd['spec'].get('podSelector')
-        if pod_selector:
-            if driver_utils.match_selector(pod_selector, pod_labels):
-                LOG.debug("Appending %s",
-                          str(crd['spec']['securityGroupId']))
-                sg_list.append(str(crd['spec']['securityGroupId']))
-        else:
-            LOG.debug("Appending %s", str(crd['spec']['securityGroupId']))
-            sg_list.append(str(crd['spec']['securityGroupId']))
+        if driver_utils.match_selector(pod_selector, pod_labels):
+            sg_id = crd['status'].get('securityGroupId')
+            if not sg_id:
+                # NOTE(dulek): We could just assume KNP handler will apply it,
+                #              but it's possible that when it gets this pod it
+                #              will have no IP yet and will be skipped.
+                LOG.warning('SG for NP %s not created yet, will retry.',
+                            utils.get_res_unique_name(crd))
+                raise exceptions.ResourceNotReady(pod)
+
+            LOG.debug("Appending %s", crd['status']['securityGroupId'])
+            sg_list.append(crd['status']['securityGroupId'])

     # NOTE(maysams) Pods that are not selected by any Networkpolicy
     # are fully accessible. Thus, the default security group is associated.
@@ -481,55 +304,56 @@ class NetworkPolicySecurityGroupsDriver(base.PodSecurityGroupsDriver):
     """Provides security groups for pods based on network policies"""

     def get_security_groups(self, pod, project_id):
-        return _get_pod_sgs(pod, project_id)
+        return _get_pod_sgs(pod)

     def create_sg_rules(self, pod):
-        LOG.debug("Creating sg rule for pod: %s", pod['metadata']['name'])
+        LOG.debug("Creating SG rules for pod: %s", pod['metadata']['name'])
         crd_pod_selectors = []
-        knp_crds = driver_utils.get_kuryrnetpolicy_crds()
-        for crd in knp_crds.get('items'):
-            crd_selector = crd['spec'].get('podSelector')
-
-            i_matched, i_rules = _parse_rules('ingress', crd, pod=pod)
-            e_matched, e_rules = _parse_rules('egress', crd, pod=pod)
+        knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
+        nps = driver_utils.get_networkpolicies()
+        pairs = driver_utils.zip_knp_np(knp_crds, nps)
+        for crd, policy in pairs:
+            crd_selector = crd['spec'].get('podSelector')
+            spec = policy.get('spec')
+
+            i_matched = _parse_rules('ingress', crd, spec, pod=pod)
+            e_matched = _parse_rules('egress', crd, spec, pod=pod)

             if i_matched or e_matched:
-                driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules,
-                                                          e_rules,
-                                                          crd_selector)
+                _bump_networkpolicy(crd)
             if i_matched:
                 crd_pod_selectors.append(crd_selector)
         return crd_pod_selectors

     def delete_sg_rules(self, pod):
-        LOG.debug("Deleting sg rule for pod: %s", pod['metadata']['name'])
+        LOG.debug("Deleting SG rules for pod: %s", pod['metadata']['name'])
         pod_ip = driver_utils.get_pod_ip(pod)
+        crd_pod_selectors = []
         if not pod_ip:
             LOG.debug("Skipping SG rule deletion as pod %s has no IP assigned",
                       pod['metadata']['name'])
-            return None
-        crd_pod_selectors = []
-        knp_crds = driver_utils.get_kuryrnetpolicy_crds()
-        for crd in knp_crds.get('items'):
+            return crd_pod_selectors
+        knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
+        for crd in knp_crds:
             crd_selector = crd['spec'].get('podSelector')
             ingress_rule_list = crd['spec'].get('ingressSgRules')
             egress_rule_list = crd['spec'].get('egressSgRules')

-            i_matched, i_rules = _parse_rules_on_delete_pod(
+            i_matched = _parse_rules_on_delete_pod(
                 ingress_rule_list, "ingress", pod_ip)
-            e_matched, e_rules = _parse_rules_on_delete_pod(
+            e_matched = _parse_rules_on_delete_pod(
                 egress_rule_list, "egress", pod_ip)

             if i_matched or e_matched:
-                driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules,
-                                                          e_rules,
-                                                          crd_selector)
+                _bump_networkpolicy(crd)
             if i_matched:
                 crd_pod_selectors.append(crd_selector)
         return crd_pod_selectors

     def update_sg_rules(self, pod):
-        LOG.debug("Updating sg rule for pod: %s", pod['metadata']['name'])
+        LOG.debug("Updating SG rules for pod: %s", pod['metadata']['name'])
+        # FIXME(dulek): No need to bump twice.
         crd_pod_selectors = []
         crd_pod_selectors.extend(self.delete_sg_rules(pod))
         crd_pod_selectors.extend(self.create_sg_rules(pod))
@@ -537,51 +361,47 @@ class NetworkPolicySecurityGroupsDriver(base.PodSecurityGroupsDriver):
     def delete_namespace_sg_rules(self, namespace):
         ns_name = namespace['metadata']['name']
-        LOG.debug("Deleting sg rule for namespace: %s",
-                  ns_name)
+        LOG.debug("Deleting SG rules for namespace: %s", ns_name)
         crd_selectors = []
-        knp_crds = driver_utils.get_kuryrnetpolicy_crds()
-        for crd in knp_crds.get('items'):
+        knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
+        for crd in knp_crds:
             crd_selector = crd['spec'].get('podSelector')
             ingress_rule_list = crd['spec'].get('ingressSgRules')
             egress_rule_list = crd['spec'].get('egressSgRules')

-            i_matched, i_rules = _parse_rules_on_delete_namespace(
+            i_matched = _parse_rules_on_delete_namespace(
                 ingress_rule_list, "ingress", ns_name)
-            e_matched, e_rules = _parse_rules_on_delete_namespace(
+            e_matched = _parse_rules_on_delete_namespace(
                 egress_rule_list, "egress", ns_name)

             if i_matched or e_matched:
-                driver_utils.patch_kuryrnetworkpolicy_crd(
-                    crd, i_rules, e_rules, crd_selector)
+                _bump_networkpolicy(crd)
             if i_matched:
                 crd_selectors.append(crd_selector)
         return crd_selectors

     def create_namespace_sg_rules(self, namespace):
         ns_name = namespace['metadata']['name']
-        LOG.debug("Creating sg rule for namespace: %s", ns_name)
+        LOG.debug("Creating SG rules for namespace: %s", ns_name)
         crd_selectors = []
-        knp_crds = driver_utils.get_kuryrnetpolicy_crds()
-        for crd in knp_crds.get('items'):
+        knp_crds = driver_utils.get_kuryrnetworkpolicy_crds()
+        nps = driver_utils.get_networkpolicies()
+        pairs = driver_utils.zip_knp_np(knp_crds, nps)
+        for crd, policy in pairs:
             crd_selector = crd['spec'].get('podSelector')
+            spec = policy.get('spec')

-            i_matched, i_rules = _parse_rules(
-                'ingress', crd, namespace=namespace)
-            e_matched, e_rules = _parse_rules(
-                'egress', crd, namespace=namespace)
+            i_matched = _parse_rules('ingress', crd, spec, namespace=namespace)
+            e_matched = _parse_rules('egress', crd, spec, namespace=namespace)

             if i_matched or e_matched:
-                driver_utils.patch_kuryrnetworkpolicy_crd(crd, i_rules,
-                                                          e_rules,
-                                                          crd_selector)
+                _bump_networkpolicy(crd)
             if i_matched:
                 crd_selectors.append(crd_selector)
         return crd_selectors

     def update_namespace_sg_rules(self, namespace):
-        LOG.debug("Updating sg rule for namespace: %s",
+        LOG.debug("Updating SG rules for namespace: %s",
                   namespace['metadata']['name'])
         crd_selectors = []
         crd_selectors.extend(self.delete_namespace_sg_rules(namespace))
@@ -608,5 +428,5 @@ class NetworkPolicyServiceSecurityGroupsDriver(
         # all of them. Hence only considering the security groups applied
         # to the first one.
         if pods:
-            return _get_pod_sgs(pods[0], project_id)
+            return _get_pod_sgs(pods[0])
         return sg_list[:]


@@ -25,6 +25,7 @@ from oslo_serialization import jsonutils
 from kuryr_kubernetes import clients
 from kuryr_kubernetes import constants
 from kuryr_kubernetes import exceptions as k_exc
+from kuryr_kubernetes import utils

 OPERATORS_WITH_VALUES = [constants.K8S_OPERATOR_IN,
@@ -182,10 +183,8 @@ def replace_encoded_characters(labels):
 def create_security_group_rule(body):
     os_net = clients.get_network_client()
-    sgr = ''
     try:
-        params = dict(body['security_group_rule'])
+        params = dict(body)
         if 'ethertype' in params:
             # NOTE(gryf): in openstacksdk, there is ether_type attribute in
             # the security_group_rule object, in CRD we have 'ethertype'
@@ -220,29 +219,27 @@ def delete_security_group_rule(security_group_rule_id):
         raise


-def patch_kuryrnetworkpolicy_crd(crd, i_rules, e_rules, pod_selector,
-                                 np_spec=None):
+def patch_kuryrnetworkpolicy_crd(crd, i_rules, e_rules):
     kubernetes = clients.get_kubernetes_client()
     crd_name = crd['metadata']['name']
-    if not np_spec:
-        np_spec = crd['spec']['networkpolicy_spec']
-    LOG.debug('Patching KuryrNetPolicy CRD %s' % crd_name)
+    LOG.debug('Patching KuryrNetworkPolicy CRD %s' % crd_name)
     try:
-        kubernetes.patch_crd('spec', crd['metadata']['selfLink'],
-                             {'ingressSgRules': i_rules,
-                              'egressSgRules': e_rules,
-                              'podSelector': pod_selector,
-                              'networkpolicy_spec': np_spec})
+        spec = {
+            'ingressSgRules': i_rules,
+            'egressSgRules': e_rules,
+        }
+
+        kubernetes.patch_crd('spec', crd['metadata']['selfLink'], spec)
     except k_exc.K8sResourceNotFound:
-        LOG.debug('KuryrNetPolicy CRD not found %s', crd_name)
+        LOG.debug('KuryrNetworkPolicy CRD not found %s', crd_name)
     except k_exc.K8sClientException:
-        LOG.exception('Error updating kuryrnetpolicy CRD %s', crd_name)
+        LOG.exception('Error updating KuryrNetworkPolicy CRD %s', crd_name)
         raise


 def create_security_group_rule_body(
-        security_group_id, direction, port_range_min=None,
-        port_range_max=None, protocol=None, ethertype=None, cidr=None,
+        direction, port_range_min=None, port_range_max=None, protocol=None,
+        ethertype='IPv4', cidr=None,
         description="Kuryr-Kubernetes NetPolicy SG rule", namespace=None,
         pods=None):
     if not port_range_min:
@@ -253,15 +250,12 @@ def create_security_group_rule_body(
     if not protocol:
         protocol = 'TCP'

-    if not ethertype:
-        ethertype = 'IPv4'
-        if cidr and netaddr.IPNetwork(cidr).version == 6:
-            ethertype = 'IPv6'
+    if cidr and netaddr.IPNetwork(cidr).version == 6:
+        ethertype = 'IPv6'

     security_group_rule_body = {
-        'security_group_rule': {
+        'sgRule': {
             'ethertype': ethertype,
-            'security_group_id': security_group_id,
             'description': description,
             'direction': direction,
             'protocol': protocol.lower(),
@@ -270,12 +264,12 @@ def create_security_group_rule_body(
         }
     }
     if cidr:
-        security_group_rule_body['security_group_rule'][
-            'remote_ip_prefix'] = cidr
+        security_group_rule_body['sgRule']['remote_ip_prefix'] = cidr
     if namespace:
         security_group_rule_body['namespace'] = namespace
     if pods:
-        security_group_rule_body['remote_ip_prefixes'] = pods
+        security_group_rule_body['affectedPods'] = [
+            {'podIP': ip, 'podNamespace': ns} for ip, ns in pods.items()]
     LOG.debug("Creating sg rule body %s", security_group_rule_body)
     return security_group_rule_body
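After this change the rule body keeps the Neutron fields under an `sgRule` key and turns the `{ip: namespace}` pods mapping into an `affectedPods` list. A simplified sketch of the resulting shape (the real helper additionally uses `netaddr` for the IPv6 check and takes more parameters):

```python
def build_sg_rule_body(direction, port_min, port_max, cidr=None, pods=None):
    # Simplified model of create_security_group_rule_body after the
    # refactoring: Neutron fields live under 'sgRule', matched pods
    # become the 'affectedPods' list stored in the KuryrNetworkPolicy CRD.
    body = {'sgRule': {
        # The real code uses netaddr.IPNetwork(cidr).version == 6 here.
        'ethertype': 'IPv6' if cidr and ':' in cidr else 'IPv4',
        'direction': direction,
        'protocol': 'tcp',
        'port_range_min': port_min,
        'port_range_max': port_max,
    }}
    if cidr:
        body['sgRule']['remote_ip_prefix'] = cidr
    if pods:
        body['affectedPods'] = [
            {'podIP': ip, 'podNamespace': ns} for ip, ns in pods.items()]
    return body
```

Note that the `security_group_id` is gone from the body entirely; it now lives in the CRD's `status` and is filled in by the KuryrNetworkPolicy handler.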
@ -310,25 +304,60 @@ def get_annotated_labels(resource, annotation_labels):
return None return None
def get_kuryrnetpolicy_crds(namespace=None): def get_kuryrnetworkpolicy_crds(namespace=None):
kubernetes = clients.get_kubernetes_client() kubernetes = clients.get_kubernetes_client()
try: try:
if namespace: if namespace:
knp_path = '{}/{}/kuryrnetpolicies'.format( knp_path = '{}/{}/kuryrnetworkpolicies'.format(
constants.K8S_API_CRD_NAMESPACES, namespace) constants.K8S_API_CRD_NAMESPACES, namespace)
else: else:
knp_path = constants.K8S_API_CRD_KURYRNETPOLICIES knp_path = constants.K8S_API_CRD_KURYRNETWORKPOLICIES
LOG.debug("K8s API Query %s", knp_path)
knps = kubernetes.get(knp_path) knps = kubernetes.get(knp_path)
LOG.debug("Return Kuryr Network Policies with label %s", knps) LOG.debug("Returning KuryrNetworkPolicies %s", knps)
except k_exc.K8sResourceNotFound: except k_exc.K8sResourceNotFound:
LOG.exception("KuryrNetPolicy CRD not found") LOG.exception("KuryrNetworkPolicy CRD not found")
raise raise
except k_exc.K8sClientException: except k_exc.K8sClientException:
LOG.exception("Kubernetes Client Exception") LOG.exception("Kubernetes Client Exception")
raise raise
return knps return knps.get('items', [])
+
+
+def get_networkpolicies(namespace=None):
+    # FIXME(dulek): This is awful, shouldn't we have list method on
+    #               k8s_client?
+    kubernetes = clients.get_kubernetes_client()
+
+    try:
+        if namespace:
+            np_path = '{}/{}/networkpolicies'.format(
+                constants.K8S_API_CRD_NAMESPACES, namespace)
+        else:
+            np_path = constants.K8S_API_POLICIES
+        nps = kubernetes.get(np_path)
+    except k_exc.K8sResourceNotFound:
+        LOG.exception("NetworkPolicy or namespace %s not found", namespace)
+        raise
+    except k_exc.K8sClientException:
+        LOG.exception("Exception when listing NetworkPolicies.")
+        raise
+    return nps.get('items', [])
+
+
+def zip_knp_np(knps, nps):
+    """Returns tuples of matching KuryrNetworkPolicy and NetworkPolicy objs.
+
+    :param knps: List of KuryrNetworkPolicy objects
+    :param nps: List of NetworkPolicy objects
+    :return: List of tuples of matching (knp, np)
+    """
+    pairs = []
+    for knp in knps:
+        for np in nps:
+            if utils.get_res_unique_name(knp) == utils.get_res_unique_name(np):
+                pairs.append((knp, np))
+                break
+    return pairs
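A standalone sketch of the pairing logic in `zip_knp_np`, with a simplified stand-in for `utils.get_res_unique_name` (assumed here to identify a resource as "namespace/name"):

```python
def get_res_unique_name(obj):
    # Simplified stand-in: identify a resource by its namespace and name.
    meta = obj['metadata']
    return '{}/{}'.format(meta['namespace'], meta['name'])

def zip_knp_np(knps, nps):
    # Pair each KuryrNetworkPolicy with the NetworkPolicy of the same
    # namespace/name; unmatched objects are simply left out of the result.
    pairs = []
    for knp in knps:
        for np in nps:
            if get_res_unique_name(knp) == get_res_unique_name(np):
                pairs.append((knp, np))
                break
    return pairs

knps = [{'metadata': {'namespace': 'default', 'name': 'allow-db'}}]
nps = [{'metadata': {'namespace': 'default', 'name': 'allow-db'}},
       {'metadata': {'namespace': 'default', 'name': 'deny-all'}}]
print(len(zip_knp_np(knps, nps)))  # 1
```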
 def match_expressions(expressions, labels):
@@ -369,6 +398,8 @@ def match_labels(crd_labels, labels):


 def match_selector(selector, labels):
+    if selector is None:
+        return True
     crd_labels = selector.get('matchLabels', None)
     crd_expressions = selector.get('matchExpressions', None)
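The new `None` check makes an absent selector match everything. A simplified model of the `matchLabels` part of the semantics (the real code also evaluates `matchExpressions`, which is omitted here):

```python
def match_labels(crd_labels, labels):
    # Every key/value demanded by the selector must be present on the pod.
    return all(labels.get(k) == v for k, v in crd_labels.items())

def match_selector(selector, labels):
    if selector is None:
        return True  # no selector at all selects everything
    crd_labels = selector.get('matchLabels')
    if crd_labels and not match_labels(crd_labels, labels or {}):
        return False
    return True

print(match_selector(None, {'run': 'demo'}))                           # True
print(match_selector({'matchLabels': {'run': 'demo'}}, {'run': 'x'}))  # False
```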
@@ -1,37 +0,0 @@
# Copyright 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.handlers import k8s_base
class KuryrNetPolicyHandler(k8s_base.ResourceEventHandler):
    """Controller side of KuryrNetPolicy process for Kubernetes pods.

    `KuryrNetPolicyHandler` runs on the Kuryr-Kubernetes controller and is
    responsible for deleting associated security groups upon namespace
    deletion.
    """
    OBJECT_KIND = constants.K8S_OBJ_KURYRNETPOLICY
    OBJECT_WATCH_PATH = constants.K8S_API_CRD_KURYRNETPOLICIES

    def __init__(self):
        super(KuryrNetPolicyHandler, self).__init__()
        self._drv_policy = drivers.NetworkPolicyDriver.get_instance()

    def on_deleted(self, netpolicy_crd):
        crd_sg = netpolicy_crd['spec'].get('securityGroupId')
        if crd_sg:
            self._drv_policy.delete_np_sg(crd_sg)
@@ -0,0 +1,307 @@
# Copyright 2019 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from openstack import exceptions as os_exc
from oslo_config import cfg
from oslo_log import log as logging
from kuryr_kubernetes import clients
from kuryr_kubernetes import constants
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.controller.drivers import utils as driver_utils
from kuryr_kubernetes import exceptions
from kuryr_kubernetes.handlers import k8s_base
from kuryr_kubernetes import utils
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
class KuryrNetworkPolicyHandler(k8s_base.ResourceEventHandler):
    """Controller side of KuryrNetworkPolicy process for Kubernetes pods.

    `KuryrNetworkPolicyHandler` runs on the kuryr-controller and is
    responsible for creating and deleting SG and SG rules for `NetworkPolicy`.
    The `KuryrNetworkPolicy` objects are created by `NetworkPolicyHandler`.
    """
    OBJECT_KIND = constants.K8S_OBJ_KURYRNETWORKPOLICY
    OBJECT_WATCH_PATH = constants.K8S_API_CRD_KURYRNETWORKPOLICIES

    def __init__(self):
        super(KuryrNetworkPolicyHandler, self).__init__()
        self.os_net = clients.get_network_client()
        self.k8s = clients.get_kubernetes_client()
        self._drv_project = drivers.NetworkPolicyProjectDriver.get_instance()
        self._drv_policy = drivers.NetworkPolicyDriver.get_instance()
        self._drv_vif_pool = drivers.VIFPoolDriver.get_instance(
            specific_driver='multi_pool')
        self._drv_vif_pool.set_vif_driver()
        self._drv_pod_sg = drivers.PodSecurityGroupsDriver.get_instance()
        self._drv_svc_sg = drivers.ServiceSecurityGroupsDriver.get_instance()
        self._drv_lbaas = drivers.LBaaSDriver.get_instance()

        self._convert_old_crds()
    def _convert_old_crds(self):
        try:
            netpolicies = self.k8s.get(constants.K8S_API_CRD_KURYRNETPOLICIES)
        except exceptions.K8sClientException:
            LOG.exception("Error when fetching old KuryrNetPolicy CRDs for "
                          "conversion.")
            return

        for netpolicy in netpolicies.get('items', []):
            new_networkpolicy = self._drv_policy.get_from_old_crd(netpolicy)
            url = (f"{constants.K8S_API_CRD_NAMESPACES}/"
                   f"{netpolicy['metadata']['namespace']}/"
                   f"kuryrnetworkpolicies")
            try:
                self.k8s.post(url, new_networkpolicy)
            except exceptions.K8sConflict:
                LOG.warning('KuryrNetworkPolicy %s already existed when '
                            'converting KuryrNetPolicy %s. Ignoring.',
                            utils.get_res_unique_name(new_networkpolicy),
                            utils.get_res_unique_name(netpolicy))
            self.k8s.delete(netpolicy['metadata']['selfLink'])
    def _patch_kuryrnetworkpolicy_crd(self, knp, field, data,
                                      action='replace'):
        name = knp['metadata']['name']
        LOG.debug('Patching KuryrNetworkPolicy CRD %s', name)
        try:
            status = self.k8s.patch_crd(field, knp['metadata']['selfLink'],
                                        data, action=action)
        except exceptions.K8sResourceNotFound:
            LOG.debug('KuryrNetworkPolicy CRD not found %s', name)
            return None
        except exceptions.K8sClientException:
            LOG.exception('Error updating KuryrNetworkPolicy CRD %s', name)
            raise

        knp['status'] = status
        return knp
    def _get_networkpolicy(self, link):
        return self.k8s.get(link)

    def _compare_sgs(self, a, b):
        checked_props = ('direction', 'ethertype', 'port_range_max',
                         'port_range_min', 'protocol', 'remote_ip_prefix')

        for k in checked_props:
            if a.get(k) != b.get(k):
                return False
        return True

    def _find_sgs(self, a, rules):
        for r in rules:
            if self._compare_sgs(r, a):
                return True
        return False
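`_compare_sgs` deliberately compares only the Neutron-relevant rule properties and ignores fields such as `id` or `description`, so a rule recomputed from the CRD `spec` can be matched against one already recorded in `status`. A standalone sketch of that matching:

```python
CHECKED_PROPS = ('direction', 'ethertype', 'port_range_max',
                 'port_range_min', 'protocol', 'remote_ip_prefix')

def compare_sgs(a, b):
    # Two rules are "the same" when all Neutron-relevant properties match;
    # metadata such as 'id' is ignored on purpose.
    return all(a.get(k) == b.get(k) for k in CHECKED_PROPS)

def find_sg(rule, rules):
    return any(compare_sgs(r, rule) for r in rules)

existing = {'id': 'abc-123', 'direction': 'ingress', 'ethertype': 'IPv4',
            'port_range_min': 6379, 'port_range_max': 6379,
            'protocol': 'tcp', 'remote_ip_prefix': None}
wanted = {'direction': 'ingress', 'ethertype': 'IPv4',
          'port_range_min': 6379, 'port_range_max': 6379,
          'protocol': 'tcp', 'remote_ip_prefix': None}
print(find_sg(wanted, [existing]))  # True, despite the missing 'id'
```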
    def on_present(self, knp):
        uniq_name = utils.get_res_unique_name(knp)
        LOG.debug('on_present() for NP %s', uniq_name)
        project_id = self._drv_project.get_project(knp)
        if not knp['status'].get('securityGroupId'):
            LOG.debug('Creating SG for NP %s', uniq_name)
            # TODO(dulek): Do this right, why do we have a project driver per
            #              resource?! This one expects policy, not knp, but it
            #              ignores it anyway!
            sg_id = self._drv_policy.create_security_group(knp, project_id)
            knp = self._patch_kuryrnetworkpolicy_crd(
                knp, 'status', {'securityGroupId': sg_id})
            LOG.debug('Created SG %s for NP %s', sg_id, uniq_name)
        else:
            # TODO(dulek): Check if it really exists, recreate if not.
            sg_id = knp['status'].get('securityGroupId')

        # First update SG rules as we want to apply updated ones
        current = knp['status']['securityGroupRules']
        required = knp['spec']['ingressSgRules'] + knp['spec']['egressSgRules']
        required = [r['sgRule'] for r in required]

        # FIXME(dulek): This *might* be prone to race conditions if failure
        #               happens between SG rule is created/deleted and status
        #               is annotated. We don't however need to revert on failed
        #               K8s operations - creation, deletion of SG rules and
        #               attaching or detaching SG from ports are idempotent
        #               so we can repeat them. What worries me is losing track
        #               of an update due to restart. The only way to do it
        #               would be to periodically check if what's in `status`
        #               is the reality in OpenStack API. That should be just
        #               two Neutron API calls + possible resync.
        to_add = []
        to_remove = []
        for r in required:
            if not self._find_sgs(r, current):
                to_add.append(r)

        for i, c in enumerate(current):
            if not self._find_sgs(c, required):
                to_remove.append((i, c['id']))

        LOG.debug('SGs to add for NP %s: %s', uniq_name, to_add)

        for sg_rule in to_add:
            LOG.debug('Adding SG rule %s for NP %s', sg_rule, uniq_name)
            sg_rule['security_group_id'] = sg_id
            sgr_id = driver_utils.create_security_group_rule(sg_rule)
            sg_rule['id'] = sgr_id
            knp = self._patch_kuryrnetworkpolicy_crd(
                knp, 'status', {'securityGroupRules/-': sg_rule}, 'add')

        # We need to remove starting from the last one in order to maintain
        # indexes. Please note this will start to fail miserably if we start
        # to change status from multiple places.
        to_remove.reverse()

        LOG.debug('SGs to remove for NP %s: %s', uniq_name,
                  [x[1] for x in to_remove])

        for i, sg_rule_id in to_remove:
            LOG.debug('Removing SG rule %s as it is no longer part of NP %s',
                      sg_rule_id, uniq_name)
            driver_utils.delete_security_group_rule(sg_rule_id)
            knp = self._patch_kuryrnetworkpolicy_crd(
                knp, 'status/securityGroupRules', i, 'remove')
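Removing `status` entries back-to-front is what keeps the patch indexes valid: deleting index 0 first would shift every later index. A small illustration of the same idea on a plain list:

```python
rules = ['rule0', 'rule1', 'rule2', 'rule3']
to_remove = [(0, 'rule0'), (2, 'rule2')]  # (index, id) pairs, ascending

# Reversing first means earlier indexes are still valid when we get to them.
for i, _rule_id in reversed(to_remove):
    del rules[i]

print(rules)  # ['rule1', 'rule3']
```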
        pods_to_update = []

        previous_sel = knp['status'].get('podSelector', None)
        current_sel = knp['spec']['podSelector']
        if previous_sel is None:
            # Fresh NetworkPolicy that was never applied.
            pods_to_update.extend(self._drv_policy.namespaced_pods(knp))
        elif previous_sel != current_sel or previous_sel == {}:
            pods_to_update.extend(
                self._drv_policy.affected_pods(knp, previous_sel))

        matched_pods = self._drv_policy.affected_pods(knp)
        pods_to_update.extend(matched_pods)

        for pod in pods_to_update:
            if driver_utils.is_host_network(pod):
                continue
            pod_sgs = self._drv_pod_sg.get_security_groups(pod, project_id)
            self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)

        # FIXME(dulek): We should not need this one day.
        policy = self._get_networkpolicy(knp['metadata']['annotations']
                                         ['networkPolicyLink'])
        if (pods_to_update and CONF.octavia_defaults.enforce_sg_rules and
                not self._is_egress_only_policy(policy)):
            # NOTE(ltomasbo): only need to change services if the pods that
            # they point to are updated
            services = driver_utils.get_services(knp['metadata']['namespace'])
            for service in services.get('items', []):
                # TODO(ltomasbo): Skip other services that are not affected
                #                 by the policy
                # FIXME(dulek): Make sure to include svcs without selector when
                #               we start supporting them.
                if (not service['spec'].get('selector') or not
                        self._is_service_affected(service, pods_to_update)):
                    continue
                sgs = self._drv_svc_sg.get_security_groups(service, project_id)
                self._drv_lbaas.update_lbaas_sg(service, sgs)

        self._patch_kuryrnetworkpolicy_crd(knp, 'status',
                                           {'podSelector': current_sel})
    def _is_service_affected(self, service, affected_pods):
        svc_namespace = service['metadata']['namespace']
        svc_selector = service['spec'].get('selector')
        svc_pods = driver_utils.get_pods({'selector': svc_selector},
                                         svc_namespace).get('items')
        return any(pod in svc_pods for pod in affected_pods)

    def _is_egress_only_policy(self, policy):
        policy_types = policy['spec'].get('policyTypes', [])
        return (policy_types == ['Egress'] or
                (policy['spec'].get('egress') and
                 not policy['spec'].get('ingress')))
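`_is_egress_only_policy` treats a policy as egress-only either when `policyTypes` says so explicitly or when only egress rules are present. A standalone sketch of that predicate:

```python
def is_egress_only_policy(policy):
    spec = policy['spec']
    policy_types = spec.get('policyTypes', [])
    return (policy_types == ['Egress'] or
            (bool(spec.get('egress')) and not spec.get('ingress')))

explicit = {'spec': {'policyTypes': ['Egress'], 'egress': [{}]}}
implicit = {'spec': {'egress': [{}]}}
mixed = {'spec': {'policyTypes': ['Ingress', 'Egress'],
                  'ingress': [{}], 'egress': [{}]}}
print(is_egress_only_policy(explicit), is_egress_only_policy(implicit),
      is_egress_only_policy(mixed))  # True True False
```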
    def _get_policy_net_id(self, knp):
        policy_ns = knp['metadata']['namespace']

        kubernetes = clients.get_kubernetes_client()
        try:
            path = (f'{constants.K8S_API_CRD_NAMESPACES}/{policy_ns}/'
                    f'kuryrnetworks/{policy_ns}')
            net_crd = kubernetes.get(path)
        except exceptions.K8sClientException:
            LOG.exception("Kubernetes Client Exception.")
            raise
        return net_crd['status']['netId']
    def on_finalize(self, knp):
        LOG.debug("Finalizing KuryrNetworkPolicy %s", knp)
        project_id = self._drv_project.get_project(knp)
        pods_to_update = self._drv_policy.affected_pods(knp)
        crd_sg = knp['status'].get('securityGroupId')
        try:
            policy = self._get_networkpolicy(knp['metadata']['annotations']
                                             ['networkPolicyLink'])
        except exceptions.K8sResourceNotFound:
            # NP is already gone, let's just try to clean up.
            policy = None

        if crd_sg:
            for pod in pods_to_update:
                if driver_utils.is_host_network(pod):
                    continue
                pod_sgs = self._drv_pod_sg.get_security_groups(pod, project_id)
                if crd_sg in pod_sgs:
                    pod_sgs.remove(crd_sg)
                if not pod_sgs:
                    pod_sgs = CONF.neutron_defaults.pod_security_groups
                    if not pod_sgs:
                        raise cfg.RequiredOptError(
                            'pod_security_groups',
                            cfg.OptGroup('neutron_defaults'))
                try:
                    self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
                except os_exc.NotFoundException:
                    LOG.debug("Failed to update pod SGs, retrying policy "
                              "deletion.")
                    raise exceptions.ResourceNotReady(knp)
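The fallback above can be summarized as: strip the policy's SG from the pod, and if nothing remains, fall back to the configured defaults or refuse to continue. A simplified sketch with the oslo.config access replaced by a plain argument:

```python
def sgs_after_policy_removal(pod_sgs, crd_sg, default_sgs):
    # Drop the NP's security group; if the pod would end up with none,
    # fall back to the configured defaults, or refuse to continue.
    sgs = [sg for sg in pod_sgs if sg != crd_sg]
    if not sgs:
        if not default_sgs:
            raise ValueError('pod_security_groups is not configured')
        sgs = list(default_sgs)
    return sgs

print(sgs_after_policy_removal(['np-sg', 'other-sg'], 'np-sg', []))
# ['other-sg']
print(sgs_after_policy_removal(['np-sg'], 'np-sg', ['default-sg']))
# ['default-sg']
```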
            # ensure ports at the pool don't have the NP sg associated
            try:
                net_id = self._get_policy_net_id(knp)
                self._drv_vif_pool.remove_sg_from_pools(crd_sg, net_id)
            except exceptions.K8sResourceNotFound:
                # Probably the network got removed already, we can ignore it.
                pass

            if (CONF.octavia_defaults.enforce_sg_rules and policy and
                    not self._is_egress_only_policy(policy)):
                services = driver_utils.get_services(
                    knp['metadata']['namespace'])
                for svc in services.get('items'):
                    if (not svc['spec'].get('selector') or not
                            self._is_service_affected(svc, pods_to_update)):
                        continue
                    sgs = self._drv_svc_sg.get_security_groups(svc, project_id)
                    self._drv_lbaas.update_lbaas_sg(svc, sgs)

            self._drv_policy.delete_np_sg(crd_sg)

        LOG.debug("Removing finalizers from KuryrNetworkPolicy and "
                  "NetworkPolicy.")
        if policy:
            self.k8s.remove_finalizer(policy,
                                      constants.NETWORKPOLICY_FINALIZER)
        self.k8s.remove_finalizer(knp, constants.NETWORKPOLICY_FINALIZER)
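The finalizer handshake between the two handlers can be modeled on plain dicts: Kubernetes only deletes an object once `metadata.finalizers` is empty. The finalizer string below is illustrative, not the actual value of `constants.NETWORKPOLICY_FINALIZER`:

```python
FINALIZER = 'kuryr.openstack.org/np-finalizer'  # illustrative name

def add_finalizer(obj, finalizer):
    fins = obj['metadata'].setdefault('finalizers', [])
    if finalizer not in fins:  # idempotent, safe to call repeatedly
        fins.append(finalizer)

def remove_finalizer(obj, finalizer):
    fins = obj['metadata'].get('finalizers', [])
    if finalizer in fins:
        fins.remove(finalizer)

policy = {'metadata': {'name': 'np'}}
add_finalizer(policy, FINALIZER)
add_finalizer(policy, FINALIZER)         # no duplicate entry is added
print(policy['metadata']['finalizers'])  # one finalizer present
remove_finalizer(policy, FINALIZER)
print(policy['metadata']['finalizers'])  # [] -> object may now be deleted
```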
@@ -58,21 +58,24 @@ class PodLabelHandler(k8s_base.ResourceEventHandler):
             # annotation to be moved to KuryrPort CRD.
             return

-        current_pod_labels = pod['metadata'].get('labels')
-        previous_pod_labels = self._get_pod_labels(pod)
-        LOG.debug("Got previous pod labels from annotation: %r",
-                  previous_pod_labels)
+        current_pod_info = (pod['metadata'].get('labels'),
+                            pod['status'].get('podIP'))
+        previous_pod_info = self._get_pod_info(pod)
+        LOG.debug("Got previous pod info from annotation: %r",
+                  previous_pod_info)

-        if current_pod_labels == previous_pod_labels:
+        if current_pod_info == previous_pod_info:
             return

+        # FIXME(dulek): We should be able to just do create if only podIP
+        #               changed, right?
         crd_pod_selectors = self._drv_sg.update_sg_rules(pod)
         project_id = self._drv_project.get_project(pod)
         security_groups = self._drv_sg.get_security_groups(pod, project_id)
         self._drv_vif_pool.update_vif_sgs(pod, security_groups)
         try:
-            self._set_pod_labels(pod, current_pod_labels)
+            self._set_pod_info(pod, current_pod_info)
         except k_exc.K8sResourceNotFound:
             LOG.debug("Pod already deleted, no need to retry.")
             return
@@ -81,26 +84,30 @@ class PodLabelHandler(k8s_base.ResourceEventHandler):
         services = driver_utils.get_services()
         self._update_services(services, crd_pod_selectors, project_id)

-    def _get_pod_labels(self, pod):
+    def _get_pod_info(self, pod):
         try:
             annotations = pod['metadata']['annotations']
             pod_labels_annotation = annotations[constants.K8S_ANNOTATION_LABEL]
+            pod_ip_annotation = annotations[constants.K8S_ANNOTATION_IP]
         except KeyError:
-            return None
+            return None, None
         pod_labels = jsonutils.loads(pod_labels_annotation)
-        return pod_labels
+        return pod_labels, pod_ip_annotation

-    def _set_pod_labels(self, pod, labels):
-        if not labels:
-            LOG.debug("Removing Label annotation: %r", labels)
-            annotation = None
+    def _set_pod_info(self, pod, info):
+        if not info[0]:
+            LOG.debug("Removing info annotations: %r", info)
+            annotation = None, info[1]
         else:
-            annotation = jsonutils.dumps(labels, sort_keys=True)
-        LOG.debug("Setting Labels annotation: %r", annotation)
+            annotation = jsonutils.dumps(info[0], sort_keys=True), info[1]
+
+        LOG.debug("Setting info annotations: %r", annotation)
         k8s = clients.get_kubernetes_client()
         k8s.annotate(pod['metadata']['selfLink'],
-                     {constants.K8S_ANNOTATION_LABEL: annotation},
+                     {
+                         constants.K8S_ANNOTATION_LABEL: annotation[0],
+                         constants.K8S_ANNOTATION_IP: annotation[1]
+                     },
                      resource_version=pod['metadata']['resourceVersion'])

     def _has_vifs(self, pod):
@@ -117,6 +124,5 @@ class PodLabelHandler(k8s_base.ResourceEventHandler):
             if not driver_utils.service_matches_affected_pods(
                     service, crd_pod_selectors):
                 continue
-            sgs = self._drv_svc_sg.get_security_groups(service,
-                                                       project_id)
+            sgs = self._drv_svc_sg.get_security_groups(service, project_id)
             self._drv_lbaas.update_lbaas_sg(service, sgs)
@@ -12,15 +12,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from openstack import exceptions as os_exc
-from oslo_config import cfg as oslo_cfg
 from oslo_log import log as logging

 from kuryr_kubernetes import clients
 from kuryr_kubernetes import constants as k_const
 from kuryr_kubernetes.controller.drivers import base as drivers
-from kuryr_kubernetes.controller.drivers import utils as driver_utils
-from kuryr_kubernetes import exceptions
 from kuryr_kubernetes.handlers import k8s_base
 from kuryr_kubernetes import utils
@@ -36,99 +32,25 @@ class NetworkPolicyHandler(k8s_base.ResourceEventHandler):

     def __init__(self):
         super(NetworkPolicyHandler, self).__init__()
         self._drv_policy = drivers.NetworkPolicyDriver.get_instance()
-        self._drv_project = drivers.NetworkPolicyProjectDriver.get_instance()
-        self._drv_vif_pool = drivers.VIFPoolDriver.get_instance(
-            specific_driver='multi_pool')
-        self._drv_vif_pool.set_vif_driver()
-        self._drv_pod_sg = drivers.PodSecurityGroupsDriver.get_instance()
-        self._drv_svc_sg = drivers.ServiceSecurityGroupsDriver.get_instance()
-        self._drv_lbaas = drivers.LBaaSDriver.get_instance()
+        self.k8s = clients.get_kubernetes_client()

     def on_present(self, policy):
         LOG.debug("Created or updated: %s", policy)
-        project_id = self._drv_project.get_project(policy)
-        pods_to_update = []
-
-        modified_pods = self._drv_policy.ensure_network_policy(policy,
-                                                               project_id)
-        if modified_pods:
-            pods_to_update.extend(modified_pods)
-
-        matched_pods = self._drv_policy.affected_pods(policy)
-        pods_to_update.extend(matched_pods)
-
-        for pod in pods_to_update:
-            if driver_utils.is_host_network(pod):
-                continue
-            pod_sgs = self._drv_pod_sg.get_security_groups(pod, project_id)
-            self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
-
-        if (pods_to_update and
-                oslo_cfg.CONF.octavia_defaults.enforce_sg_rules and
-                not self._is_egress_only_policy(policy)):
-            # NOTE(ltomasbo): only need to change services if the pods that
-            # they point to are updated
-            services = driver_utils.get_services(
-                policy['metadata']['namespace'])
-            for service in services.get('items'):
-                # TODO(ltomasbo): Skip other services that are not affected
-                #                 by the policy
-                if (not service['spec'].get('selector') or not
-                        self._is_service_affected(service, pods_to_update)):
-                    continue
-                sgs = self._drv_svc_sg.get_security_groups(service,
-                                                           project_id)
-                self._drv_lbaas.update_lbaas_sg(service, sgs)
-
-    def on_deleted(self, policy):
-        LOG.debug("Deleted network policy: %s", policy)
-        project_id = self._drv_project.get_project(policy)
-        pods_to_update = self._drv_policy.affected_pods(policy)
-        netpolicy_crd = self._drv_policy.get_kuryrnetpolicy_crd(policy)
-        if netpolicy_crd:
-            crd_sg = netpolicy_crd['spec'].get('securityGroupId')
-            for pod in pods_to_update:
-                if driver_utils.is_host_network(pod):
-                    continue
-                pod_sgs = self._drv_pod_sg.get_security_groups(pod,
-                                                               project_id)
-                if crd_sg in pod_sgs:
-                    pod_sgs.remove(crd_sg)
-                if not pod_sgs:
-                    pod_sgs = (
-                        oslo_cfg.CONF.neutron_defaults.pod_security_groups)
-                    if not pod_sgs:
-                        raise oslo_cfg.RequiredOptError(
-                            'pod_security_groups',
-                            oslo_cfg.OptGroup('neutron_defaults'))
-                try:
-                    self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
-                except os_exc.NotFoundException:
-                    LOG.debug("Fail to update pod sgs."
-                              " Retrying policy deletion.")
-                    raise exceptions.ResourceNotReady(policy)
-
-            # ensure ports at the pool don't have the NP sg associated
-            net_id = self._get_policy_net_id(policy)
-            self._drv_vif_pool.remove_sg_from_pools(crd_sg, net_id)
-
-        self._drv_policy.release_network_policy(netpolicy_crd)
-
-        if (oslo_cfg.CONF.octavia_defaults.enforce_sg_rules and
-                not self._is_egress_only_policy(policy)):
-            services = driver_utils.get_services(
-                policy['metadata']['namespace'])
-            for svc in services.get('items'):
-                if (not svc['spec'].get('selector') or not
-                        self._is_service_affected(svc, pods_to_update)):
-                    continue
-                sgs = self._drv_svc_sg.get_security_groups(svc,
-                                                           project_id)
-                self._drv_lbaas.update_lbaas_sg(svc, sgs)
+        self._drv_policy.ensure_network_policy(policy)
+
+        # Put finalizer in if it's not there already.
+        self.k8s.add_finalizer(policy, k_const.NETWORKPOLICY_FINALIZER)
+
+    def on_finalize(self, policy):
+        LOG.debug("Finalizing policy %s", policy)
+        if not self._drv_policy.release_network_policy(policy):
+            # KNP was not found, so we need to finalize on our own.
+            self.k8s.remove_finalizer(policy, k_const.NETWORKPOLICY_FINALIZER)

     def is_ready(self, quota):
-        if not (utils.has_kuryr_crd(k_const.K8S_API_CRD_KURYRNETPOLICIES) and
-                self._check_quota(quota)):
+        if not (utils.has_kuryr_crd(k_const.K8S_API_CRD_KURYRNETWORKPOLICIES)
+                and self._check_quota(quota)):
             LOG.error("Marking NetworkPolicyHandler as not ready.")
             return False
         return True
@@ -137,29 +59,3 @@ class NetworkPolicyHandler(k8s_base.ResourceEventHandler):
         if utils.has_limit(quota.security_groups):
             return utils.is_available('security_groups', quota.security_groups)
         return True
-
-    def _is_service_affected(self, service, affected_pods):
-        svc_namespace = service['metadata']['namespace']
-        svc_selector = service['spec'].get('selector')
-        svc_pods = driver_utils.get_pods({'selector': svc_selector},
-                                         svc_namespace).get('items')
-        return any(pod in svc_pods for pod in affected_pods)
-
-    def _get_policy_net_id(self, policy):
-        policy_ns = policy['metadata']['namespace']
-
-        kubernetes = clients.get_kubernetes_client()
-        try:
-            path = (f'{k_const.K8S_API_CRD_NAMESPACES}/{policy_ns}/'
-                    f'kuryrnetworks/{policy_ns}')
-            net_crd = kubernetes.get(path)
-        except exceptions.K8sClientException:
-            LOG.exception("Kubernetes Client Exception.")
-            raise
-        return net_crd['status']['netId']
-
-    def _is_egress_only_policy(self, policy):
-        policy_types = policy['spec'].get('policyTypes', [])
-        return (policy_types == ['Egress'] or
-                (policy['spec'].get('egress') and
-                 not policy['spec'].get('ingress')))
@@ -12,8 +12,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-import munch
-from openstack import exceptions as os_exc
 from unittest import mock

 from kuryr_kubernetes.controller.drivers import network_policy
@@ -75,8 +73,8 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         self._policy_uid = mock.sentinel.policy_uid
         self._policy_link = mock.sentinel.policy_link
         self._sg_id = mock.sentinel.sg_id
-        self._i_rules = [{'security_group_rule': {'id': ''}}]
-        self._e_rules = [{'security_group_rule': {'id': ''}}]
+        self._i_rules = [{'sgRule': {'id': ''}}]
+        self._e_rules = [{'sgRule': {'id': ''}}]

         self._policy = {
             'apiVersion': 'networking.k8s.io/v1',
@@ -104,12 +102,46 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                         [{'namespaceSelector': {
                             'matchLabels': {
                                 'project': 'myproject'}}}]}],
-                'policyTypes': ['Ingress', 'Egress']
+                'policyTypes': ['Ingress', 'Egress'],
+                'podSelector': {},
             }
         }

-        self._crd = {
-            'metadata': {'name': mock.sentinel.name,
+        self.crd = {
+            'metadata': {'name': 'foobar',
+                         'namespace': 'default',
+                         'selfLink': mock.sentinel.selfLink},
+            'spec': {
+                'egressSgRules': [
+                    {'sgRule':
+                     {'description': 'Kuryr-Kubernetes NetPolicy SG rule',
+                      'direction': 'egress',
+                      'ethertype': 'IPv4',
+                      'port_range_max': 5978,
+                      'port_range_min': 5978,
+                      'protocol': 'tcp',
+                      }}],
+                'ingressSgRules': [
+                    {'sgRule':
+                     {'description': 'Kuryr-Kubernetes NetPolicy SG rule',
+                      'direction': 'ingress',
+                      'ethertype': 'IPv4',
+                      'port_range_max': 6379,
+                      'port_range_min': 6379,
+                      'protocol': 'tcp',
+                      }}],
+                'podSelector': {},
+                'policyTypes': self._policy['spec']['policyTypes']
+            },
+            'status': {
+                'securityGroupId': self._sg_id,
+                'securityGroupRules': [],
+                'podSelector': {},
+            }
+        }
+
+        self.old_crd = {
+            'metadata': {'name': 'np-foobar',
                          'namespace': 'default',
                          'selfLink': mock.sentinel.selfLink},
             'spec': {
@@ -135,6 +167,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                          'security_group_id': self._sg_id,
                          'id': mock.sentinel.id
                      }}],
+                'podSelector': {},
                 'networkpolicy_spec': self._policy['spec'],
                 'securityGroupId': self._sg_id,
                 'securityGroupName': mock.sentinel.sg_name}}
@ -144,207 +177,57 @@ class TestNetworkPolicyDriver(test_base.TestCase):
self._driver = network_policy.NetworkPolicyDriver() self._driver = network_policy.NetworkPolicyDriver()
@mock.patch.object(network_policy.NetworkPolicyDriver, @mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd', return_value=False) '_get_default_np_rules')
@mock.patch.object(network_policy.NetworkPolicyDriver, @mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy') '_get_knp_crd', return_value=False)
@mock.patch.object(network_policy.NetworkPolicyDriver, @mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy') '_create_knp_crd')
def test_ensure_network_policy(self, m_update, m_create, m_get_crd):
self._driver.ensure_network_policy(self._policy, self._project_id)
m_get_crd.assert_called_once_with(self._policy)
m_create.assert_called_once_with(self._policy, self._project_id)
m_update.assert_not_called()
@mock.patch.object(network_policy.NetworkPolicyDriver, 'affected_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd', return_value=True)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy_with_existing_crd(
self, m_update, m_create, m_get_crd, m_namespaced, m_affected):
previous_selector = mock.sentinel.previous_selector
m_update.return_value = previous_selector
self._driver.ensure_network_policy(self._policy, self._project_id)
m_get_crd.assert_called_once_with(self._policy)
m_create.assert_not_called()
m_update.assert_called_once_with(self._policy)
m_affected.assert_called_once_with(self._policy, previous_selector)
m_namespaced.assert_not_called()
@mock.patch.object(network_policy.NetworkPolicyDriver, 'affected_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd', return_value=True)
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy_with_existing_crd_no_selector(
self, m_update, m_create, m_get_crd, m_namespaced, m_affected):
m_update.return_value = None
self._driver.ensure_network_policy(self._policy, self._project_id)
m_get_crd.assert_called_once_with(self._policy)
m_create.assert_not_called()
m_update.assert_called_once_with(self._policy)
m_affected.assert_not_called()
m_namespaced.assert_called_once_with(self._policy)
@mock.patch.object(network_policy.NetworkPolicyDriver, 'affected_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'get_kuryrnetpolicy_crd')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'create_security_group_rules_from_network_policy')
@mock.patch.object(network_policy.NetworkPolicyDriver,
'update_security_group_rules_from_network_policy')
def test_ensure_network_policy_with_existing_crd_empty_selector(
self, m_update, m_create, m_get_crd, m_namespaced, m_affected):
previous_selector = {}
pod_selector = {'matchLabels': {'run': 'demo'}}
updated_policy = self._policy.copy()
updated_policy['spec']['podSelector'] = pod_selector
crd_with_empty_selector = self._crd.copy()
crd_with_empty_selector['spec']['podSelector'] = previous_selector
m_get_crd.return_value = crd_with_empty_selector
m_update.return_value = previous_selector
self._driver.ensure_network_policy(updated_policy, self._project_id)
+        m_get_crd.assert_called_once_with(updated_policy)
+        m_create.assert_not_called()
+        m_update.assert_called_once_with(updated_policy)
+        m_affected.assert_called_with(self._policy, previous_selector)
+        m_namespaced.assert_not_called()

     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_add_default_np_rules')
+                       '_get_default_np_rules')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'get_kuryrnetpolicy_crd')
+                       '_get_knp_crd')
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_add_kuryrnetpolicy_crd')
+    @mock.patch.object(network_policy.NetworkPolicyDriver, '_create_knp_crd')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
                        'parse_network_policy_rules')
     @mock.patch.object(utils, 'get_subnet_cidr')
-    def test_create_security_group_rules_from_network_policy(self, m_utils,
-                                                             m_parse,
-                                                             m_add_crd,
-                                                             m_get_crd,
-                                                             m_add_default):
+    def test_ensure_network_policy(self, m_utils, m_parse, m_add_crd,
+                                   m_get_crd, m_get_default):
-        self._driver.os_net.create_security_group.return_value = (
-            munch.Munch({'id': mock.sentinel.id,
-                         'security_group_rules': []}))
         m_utils.get_subnet_cidr.return_value = mock.sentinel.cidr
         m_parse.return_value = (self._i_rules, self._e_rules)
-        self._driver.os_net.create_security_group_rule.return_value = (
-            munch.Munch({'id': mock.sentinel.id}))
-        self._driver.create_security_group_rules_from_network_policy(
-            self._policy, self._project_id)
+        self._driver.ensure_network_policy(
+            self._policy)
         m_get_crd.assert_called_once()
         m_add_crd.assert_called_once()
-        m_add_default.assert_called_once()
+        m_get_default.assert_called_once()

     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_add_default_np_rules')
+                       '_get_default_np_rules')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'get_kuryrnetpolicy_crd')
+                       '_get_knp_crd')
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_add_kuryrnetpolicy_crd')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
                        'parse_network_policy_rules')
     @mock.patch.object(utils, 'get_subnet_cidr')
-    def test_create_security_group_rules_with_k8s_exc(self, m_utils, m_parse,
-                                                      m_add_crd, m_get_crd,
-                                                      m_add_default):
+    def test_ensure_network_policy_with_k8s_exc(self, m_utils, m_parse,
+                                                m_get_crd, m_get_default):
-        self._driver.os_net.create_security_group.return_value = (
-            munch.Munch({'id': mock.sentinel.id,
-                         'security_group_rules': []}))
         m_utils.get_subnet_cidr.return_value = mock.sentinel.cidr
         m_parse.return_value = (self._i_rules, self._e_rules)
         m_get_crd.side_effect = exceptions.K8sClientException
-        self._driver.os_net.create_security_group_rule.return_value = (
-            munch.Munch({'id': mock.sentinel.id}))
-        self.assertRaises(
-            exceptions.K8sClientException,
-            self._driver.create_security_group_rules_from_network_policy,
-            self._policy, self._project_id)
-        m_add_crd.assert_called_once()
-        m_add_default.assert_called_once()
+        self.assertRaises(exceptions.K8sClientException,
+                          self._driver.ensure_network_policy, self._policy)
+        m_get_default.assert_called_once()

     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_add_default_np_rules')
+                       '_get_default_np_rules')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'get_kuryrnetpolicy_crd')
+                       '_get_knp_crd', return_value=None)
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_add_kuryrnetpolicy_crd')
+    @mock.patch.object(network_policy.NetworkPolicyDriver, '_create_knp_crd')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
                        'parse_network_policy_rules')
     @mock.patch.object(utils, 'get_subnet_cidr')
-    def test_create_security_group_rules_error_add_crd(self, m_utils, m_parse,
-                                                       m_add_crd, m_get_crd,
-                                                       m_add_default):
+    def test_ensure_network_policy_error_add_crd(
+            self, m_utils, m_parse, m_add_crd, m_get_crd, m_get_default):
-        self._driver.os_net.create_security_group.return_value = (
-            munch.Munch({'id': mock.sentinel.id,
-                         'security_group_rules': []}))
         m_utils.get_subnet_cidr.return_value = mock.sentinel.cidr
         m_parse.return_value = (self._i_rules, self._e_rules)
         m_add_crd.side_effect = exceptions.K8sClientException
-        self._driver.os_net.create_security_group_rule.return_value = (
-            munch.Munch({'id': mock.sentinel.id}))
-        self.assertRaises(
-            exceptions.K8sClientException,
-            self._driver.create_security_group_rules_from_network_policy,
-            self._policy, self._project_id)
-        m_get_crd.assert_not_called()
-        m_add_default.assert_called_once()
+        self.assertRaises(exceptions.K8sClientException,
+                          self._driver.ensure_network_policy, self._policy)
+        m_get_crd.assert_called()
+        m_get_default.assert_called_once()

-    def test_create_security_group_rules_with_n_exc(self):
-        self._driver.os_net.create_security_group.side_effect = (
-            os_exc.SDKException())
-        self.assertRaises(
-            os_exc.SDKException,
-            self._driver.create_security_group_rules_from_network_policy,
-            self._policy, self._project_id)
-
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'create_security_group_rule')
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'get_kuryrnetpolicy_crd')
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'parse_network_policy_rules')
-    def test_update_security_group_rules(self, m_parse, m_get_crd,
-                                         m_create_sgr):
-        policy = self._policy.copy()
-        policy['spec']['podSelector'] = {'matchLabels': {'test': 'test'}}
-        m_get_crd.return_value = self._crd
-        m_parse.return_value = (self._i_rules, self._e_rules)
-        self._driver.update_security_group_rules_from_network_policy(
-            policy)
-        m_parse.assert_called_with(policy, self._sg_id)
-
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'create_security_group_rule')
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'get_kuryrnetpolicy_crd')
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       'parse_network_policy_rules')
-    def test_update_security_group_rules_with_k8s_exc(self, m_parse, m_get_crd,
-                                                      m_create_sgr):
-        self._driver.kubernetes.patch_crd.side_effect = (
-            exceptions.K8sClientException())
-        m_get_crd.return_value = self._crd
-        m_parse.return_value = (self._i_rules, self._e_rules)
-        self.assertRaises(
-            exceptions.K8sClientException,
-            self._driver.update_security_group_rules_from_network_policy,
-            self._policy)
-        m_parse.assert_called_with(self._policy, self._sg_id)
     def test_get_namespaces(self):
         namespace_selector = {'namespaceSelector': {
@@ -363,6 +246,13 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         self.assertEqual([], resp)
         self.kubernetes.get.assert_called_once()

+    def test_get_from_old_crd(self):
+        knp = self._driver.get_from_old_crd(self.old_crd)
+        self.assertEqual(self.crd['spec'], knp['spec'])
+        self.assertEqual(self.crd['status'], knp['status'])
+        for k in ['name', 'namespace']:
+            self.assertEqual(self.crd['metadata'][k], knp['metadata'][k])
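The commit message says the upgrade path from KuryrNetPolicy is handled by converting old CRDs into the new format, which is what `get_from_old_crd` is exercised for above. A rough sketch of such a conversion, assuming only field names visible in this diff's fixtures (the real driver helper may differ in details such as the name prefix handling):

```python
# Hypothetical sketch of converting a legacy KuryrNetPolicy object into the
# new KuryrNetworkPolicy spec/status split. Field names are taken from the
# test fixtures in this diff, not from the actual driver code.
def get_from_old_crd(old_crd):
    return {
        'apiVersion': 'openstack.org/v1',
        'kind': 'KuryrNetworkPolicy',
        'metadata': {
            # Old fixture names carry an "np-" prefix (e.g.
            # "np-test-network-policy"); assume the new name drops it.
            'name': old_crd['metadata']['name'].replace('np-', '', 1),
            'namespace': old_crd['metadata']['namespace'],
        },
        # spec keeps the parsed rules and the pod selector...
        'spec': {
            'podSelector': old_crd['spec']['networkpolicy_spec']['podSelector'],
            'ingressSgRules': old_crd['spec']['ingressSgRules'],
            'egressSgRules': old_crd['spec']['egressSgRules'],
        },
        # ...while Neutron-side state moves under status.
        'status': {
            'securityGroupId': old_crd['spec']['securityGroupId'],
        },
    }

old = {
    'metadata': {'name': 'np-test-network-policy', 'namespace': 'default'},
    'spec': {
        'networkpolicy_spec': {'podSelector': {}},
        'ingressSgRules': [],
        'egressSgRules': [],
        'securityGroupId': 'sg-uuid',
    },
}
knp = get_from_old_crd(old)
```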
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
     @mock.patch.object(network_policy.NetworkPolicyDriver,
                        '_get_resource_details')
@@ -377,7 +267,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         namespace = 'myproject'
         m_get_namespaces.return_value = [get_namespace_obj()]
         m_get_resource_details.return_value = subnet_cidr, namespace
-        self._driver.parse_network_policy_rules(self._policy, self._sg_id)
+        self._driver.parse_network_policy_rules(self._policy)
         m_get_namespaces.assert_called()
         m_get_resource_details.assert_called()
         m_create.assert_called()
@@ -391,12 +281,12 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         policy = self._policy.copy()
         policy['spec']['ingress'] = [{}]
         policy['spec']['egress'] = [{}]
-        self._driver.parse_network_policy_rules(policy, self._sg_id)
+        self._driver.parse_network_policy_rules(policy)
         m_get_ns.assert_not_called()
-        calls = [mock.call(self._sg_id, 'ingress', ethertype='IPv4'),
-                 mock.call(self._sg_id, 'ingress', ethertype='IPv6'),
-                 mock.call(self._sg_id, 'egress', ethertype='IPv4'),
-                 mock.call(self._sg_id, 'egress', ethertype='IPv6')]
+        calls = [mock.call('ingress', ethertype='IPv4'),
+                 mock.call('ingress', ethertype='IPv6'),
+                 mock.call('egress', ethertype='IPv4'),
+                 mock.call('egress', ethertype='IPv6')]
         m_create.assert_has_calls(calls)

     @mock.patch.object(network_policy.NetworkPolicyDriver,
@@ -408,7 +298,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                         [{'port': 6379, 'protocol': 'TCP'}]}]
         policy['spec']['egress'] = [{'ports':
                                     [{'port': 6379, 'protocol': 'TCP'}]}]
-        self._driver.parse_network_policy_rules(policy, self._sg_id)
+        self._driver.parse_network_policy_rules(policy)
         m_create_all_pods_sg_rules.assert_called()

     @mock.patch.object(network_policy.NetworkPolicyDriver,
@@ -429,7 +319,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                                      'TCP'}],
                            'to': [{'ipBlock':
                                    {'cidr': '10.0.0.0/24'}}]}]
-        self._driver.parse_network_policy_rules(policy, self._sg_id)
+        self._driver.parse_network_policy_rules(policy)
         m_create_sg_rule.assert_called()
@@ -450,38 +340,19 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         selectors = {'namespaceSelector': {
                         'matchLabels': {
                             'project': 'myproject'}}}
-        policy['spec']['egress'] = [
-            {'to':
-                [selectors]}]
-        policy['spec']['ingress'] = [
-            {'from':
-                [selectors]}]
-        selectors = {'namespace_selector': selectors['namespaceSelector']}
-        self._driver.parse_network_policy_rules(policy, self._sg_id)
+        policy['spec']['egress'] = [{'to': [selectors]}]
+        policy['spec']['ingress'] = [{'from': [selectors]}]
+        self._driver.parse_network_policy_rules(policy)
         m_get_namespaces.assert_called()
         m_get_resource_details.assert_called()
-        calls = [mock.call(self._sg_id, 'ingress', port_range_min=1,
-                           port_range_max=65535, cidr=subnet_cidr,
-                           namespace=namespace),
-                 mock.call(self._sg_id, 'egress', port_range_min=1,
-                           port_range_max=65535, cidr=subnet_cidr,
-                           namespace=namespace)]
+        calls = [mock.call('ingress', port_range_min=1,
+                           port_range_max=65535, cidr=subnet_cidr,
+                           namespace=namespace),
+                 mock.call('egress', port_range_min=1,
+                           port_range_max=65535, cidr=subnet_cidr,
+                           namespace=namespace)]
         m_create.assert_has_calls(calls)

-    def test_knps_on_namespace(self):
-        self.kubernetes.get.return_value = {'items': ['not-empty']}
-        namespace = 'test1'
-        resp = self._driver.knps_on_namespace(namespace)
-        self.assertTrue(resp)
-
-    def test_knps_on_namespace_empty(self):
-        self.kubernetes.get.return_value = {'items': []}
-        namespace = 'test1'
-        resp = self._driver.knps_on_namespace(namespace)
-        self.assertFalse(resp)
-
     @mock.patch.object(network_policy.NetworkPolicyDriver, 'namespaced_pods')
     def test_affected_pods(self, m_namespaced):
         self._driver.affected_pods(self._policy)
@@ -509,19 +380,10 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         self.assertEqual([], resp)

     @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_del_kuryrnetpolicy_crd', return_value=False)
+                       '_del_knp_crd', return_value=False)
     def test_release_network_policy(self, m_del_crd):
-        self._driver.release_network_policy(self._crd)
-        self.neutron.delete_security_group.assert_called_once_with(
-            self._crd['spec']['securityGroupId'])
-        m_del_crd.assert_called_once_with(self._crd['metadata']['name'],
-                                          self._crd['metadata']['namespace'])
-
-    @mock.patch.object(network_policy.NetworkPolicyDriver,
-                       '_del_kuryrnetpolicy_crd', return_value=False)
-    def test_release_network_policy_removed_crd(self, m_del_crd):
-        self._driver.release_network_policy(None)
-        m_del_crd.assert_not_called()
+        self._driver.release_network_policy(self.crd)
+        m_del_crd.assert_called_once_with(self.crd)
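The release test above no longer expects the driver to delete the Neutron security group: per the commit message, finalizers on the NetworkPolicy and KuryrNetworkPolicy objects defer cleanup to KuryrNetworkPolicyHandler. A minimal sketch of that narrowed release path, with class and path names made up for illustration (only the test's `_del_knp_crd` name comes from the diff):

```python
# Illustrative sketch only: with finalizers in play, the driver's release
# path reduces to removing the KuryrNetworkPolicy CRD; security group
# deletion happens elsewhere once finalizers are cleared.
class NetworkPolicyDriverSketch:
    def __init__(self, k8s_client):
        self.k8s = k8s_client

    def release_network_policy(self, knp):
        # No Neutron calls here anymore, just CRD removal.
        return self._del_knp_crd(knp)

    def _del_knp_crd(self, knp):
        # Assumed API path; the real resource path may differ.
        path = ('/apis/openstack.org/v1/namespaces/%s/kuryrnetworkpolicies/%s'
                % (knp['metadata']['namespace'], knp['metadata']['name']))
        self.k8s.delete(path)
        return False  # mirrors the mocked return_value=False in the test


class FakeK8s:
    """Records delete calls instead of talking to an API server."""
    def __init__(self):
        self.deleted = []

    def delete(self, path):
        self.deleted.append(path)


k8s = FakeK8s()
driver = NetworkPolicyDriverSketch(k8s)
driver.release_network_policy(
    {'metadata': {'name': 'np-a', 'namespace': 'default'}})
```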
     @mock.patch.object(network_policy.NetworkPolicyDriver,
                        '_create_sg_rules_with_container_ports')
@@ -543,8 +405,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_pods.return_value = {'items': [pod]}
         m_get_ports.return_value = container_ports

-        self._driver._create_sg_rule_body_on_text_port(self._sg_id,
-                                                       direction,
+        self._driver._create_sg_rule_body_on_text_port(direction,
                                                        port,
                                                        resources,
                                                        crd_rules,
@@ -577,8 +438,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_pods.return_value = {'items': [pod]}
         m_get_ports.return_value = container_ports

-        self._driver._create_sg_rule_body_on_text_port(self._sg_id,
-                                                       direction,
+        self._driver._create_sg_rule_body_on_text_port(direction,
                                                        port,
                                                        resources,
                                                        crd_rules,
@@ -600,7 +460,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                       m_create_sgr):
         def _create_sgr_cont(container_ports, allow_all, resource,
-                             matched_pods, crd_rules, sg_id, direction, port,
+                             matched_pods, crd_rules, direction, port,
                              pod_selector=None, policy_namespace=None):
             matched_pods[container_ports[0][1]] = 'foo'
@@ -617,8 +477,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_pods.return_value = {'items': [pod]}
         m_get_ports.return_value = container_ports

-        self._driver._create_sg_rule_body_on_text_port(self._sg_id,
-                                                       direction,
+        self._driver._create_sg_rule_body_on_text_port(direction,
                                                        port,
                                                        resources,
                                                        crd_rules,
@@ -629,7 +488,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_pods.assert_called_with(pod_selector, namespace)
         m_get_ports.assert_called_with(pod, port)
-        calls = [mock.call(self._sg_id, direction, container_ports[0][1],
+        calls = [mock.call(direction, container_ports[0][1],
                            protocol=port['protocol'], ethertype=e,
                            pods='foo') for e in ('IPv4', 'IPv6')]
@@ -656,8 +515,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_pods.return_value = {'items': [pod]}
         m_get_ports.return_value = container_ports

-        self._driver._create_sg_rule_body_on_text_port(self._sg_id,
-                                                       direction,
+        self._driver._create_sg_rule_body_on_text_port(direction,
                                                        port,
                                                        resources,
                                                        crd_rules,
@@ -685,8 +543,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_ports.return_value = container_ports

-        self._driver._create_sg_rule_body_on_text_port(self._sg_id,
-                                                       direction,
+        self._driver._create_sg_rule_body_on_text_port(direction,
                                                        port,
                                                        resources,
                                                        crd_rules,
@@ -695,8 +552,8 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                                                        allow_all=True)
         m_get_ports.assert_called_with(resources[0], port)
-        m_create_sgr.assert_called_once_with(self._sg_id, 'egress', None,
-                                             cidr=mock.ANY, protocol='TCP')
+        m_create_sgr.assert_called_once_with('egress', None, cidr=mock.ANY,
+                                             protocol='TCP')
         self.assertEqual(len(crd_rules), 1)

     @mock.patch('kuryr_kubernetes.utils.get_subnet_cidr')
@@ -731,8 +588,7 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         m_get_pods.return_value = {'items': [pod]}
         m_get_ports.return_value = container_ports

-        self._driver._create_sg_rule_body_on_text_port(self._sg_id,
-                                                       direction,
+        self._driver._create_sg_rule_body_on_text_port(direction,
                                                        port,
                                                        resources,
                                                        crd_rules,
@@ -741,10 +597,10 @@ class TestNetworkPolicyDriver(test_base.TestCase):
                                                        allow_all=True)
         m_get_ports.assert_called_with(resources[0], port)
-        calls = [mock.call(self._sg_id, direction, container_ports[0][1],
+        calls = [mock.call(direction, container_ports[0][1],
                            protocol=port['protocol'], ethertype=e,
                            pods='foo') for e in ('IPv4', 'IPv6')]
-        calls.append(mock.call(self._sg_id, direction, container_ports[0][1],
+        calls.append(mock.call(direction, container_ports[0][1],
                                protocol=port['protocol'],
                                cidr='10.0.0.128/26'))
         m_create_sgr.assert_has_calls(calls)
@@ -758,19 +614,18 @@ class TestNetworkPolicyDriver(test_base.TestCase):
         direction = 'ingress'
         rules = []

-        self._driver._create_all_pods_sg_rules(port, self._sg_id, direction,
-                                               rules, '', None)
+        self._driver._create_all_pods_sg_rules(port, direction, rules, '',
+                                               None)
         self.assertEqual(len(rules), 2)

     def test__create_default_sg_rule(self):
         for direction in ('ingress', 'egress'):
             rules = []
-            self._driver._create_default_sg_rule(self._sg_id, direction, rules)
+            self._driver._create_default_sg_rule(direction, rules)
             self.assertEqual(len(rules), 2)
-            self.assertListEqual(rules, [{'security_group_rule': {
+            self.assertListEqual(rules, [{'sgRule': {
                 'ethertype': e,
-                'security_group_id': self._sg_id,
                 'direction': direction,
                 'description': 'Kuryr-Kubernetes NetPolicy SG rule'
             }} for e in ('IPv4', 'IPv6')])

View File

@@ -66,7 +66,7 @@ def get_sg_rule():
     pod_ip = get_match_crd_pod_obj()['status'].get('podIP')
     return {
         "namespace": 'dev',
-        "security_group_rule": {
+        "sgRule": {
             "description": "Kuryr-Kubernetes NetPolicy SG rule",
             "direction": "ingress",
             "ethertype": "IPv4",
@@ -80,7 +80,7 @@ def get_sg_rule():

 def get_matched_crd_obj():
     return {
-        "kind": "KuryrNetPolicy",
+        "kind": "KuryrNetworkPolicy",
         "metadata": {"name": "np-test-network-policy",
                      "namespace": "default"},
         "spec": {
@@ -159,7 +159,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
                          'selfLink': mock.sentinel.selfLink},
             'spec': {
                 'egressSgRules': [
-                    {'security_group_rule':
+                    {'sgRule':
                      {'description': 'Kuryr-Kubernetes NetPolicy SG rule',
                       'direction': 'egress',
                       'ethertype': 'IPv4',
@@ -170,7 +170,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
                       'id': mock.sentinel.id
                       }}],
                 'ingressSgRules': [
-                    {'security_group_rule':
+                    {'sgRule':
                      {'description': 'Kuryr-Kubernetes NetPolicy SG rule',
                       'direction': 'ingress',
                       'ethertype': 'IPv4',
@@ -189,16 +189,18 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
                                   'production']}],
                 'matchLabels': {
                     'run': 'demo'
-                }},
+                }}},
+            'status': {
                 'securityGroupId': self._sg_id,
-                'securityGroupName': mock.sentinel.sg_name}}
+            },
+        }
         self._crd2 = {
             'metadata': {'name': mock.sentinel.name3,
                          'selfLink': mock.sentinel.selfLink},
             'spec': {
                 'ingressSgRules': [
-                    {'security_group_rule':
+                    {'sgRule':
                      {'description': 'Kuryr-Kubernetes NetPolicy SG rule',
                       'direction': 'ingress',
                       'ethertype': 'IPv4',
@@ -208,25 +210,14 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
                       'security_group_id': self._sg_id2,
                       'id': mock.sentinel.id
                       }}],
-                'podSelector': {},
+                'podSelector': {}},
+            'status': {
                 'securityGroupId': self._sg_id2,
                 'securityGroupName': mock.sentinel.sg_name}}

-        self._crds = {
-            "apiVersion": "v1",
-            "items": [self._crd],
-            "kind": "List",
-            "metadata": {
-                "resourceVersion": "",
-                "selfLink": mock.sentinel.selfLink}}
+        self._crds = [self._crd]

-        self._multiple_crds = {
-            "apiVersion": "v1",
-            "items": [self._crd, self._crd2],
-            "kind": "List",
-            "metadata": {
-                "resourceVersion": "",
-                "selfLink": mock.sentinel.selfLink}}
+        self._multiple_crds = [self._crd, self._crd2]
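The fixture changes above show the new split the commit message describes: `spec` carries the rules and `podSelector`, while the Neutron-side state moves under `status`. A condensed sketch assembled from the fixture fields (not the CRD schema itself; values are placeholders):

```python
# Sketch of the new KuryrNetworkPolicy layout as reflected by these
# fixtures. Note 'sgRule' replaces the old 'security_group_rule' key.
crd = {
    'metadata': {'name': 'example', 'namespace': 'default'},
    'spec': {
        'ingressSgRules': [{'sgRule': {
            'description': 'Kuryr-Kubernetes NetPolicy SG rule',
            'direction': 'ingress',
            'ethertype': 'IPv4',
        }}],
        'egressSgRules': [],
        'podSelector': {'matchLabels': {'run': 'demo'}},
    },
    # status is owned by KuryrNetworkPolicyHandler, not NetworkPolicyHandler.
    'status': {
        'securityGroupId': 'sg-uuid',
    },
}

# Helpers now hand back plain lists of such CRDs, so callers can read the
# security group IDs straight out of status.
crds = [crd]
sg_ids = [c['status']['securityGroupId'] for c in crds]
```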
         self._pod = {
             'apiVersion': 'v1',
@@ -304,7 +295,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
         self._crd_sg_id = mock.sentinel.crd_sg_id
         self._sg_rule_body = {
-            'security_group_rule': {
+            'sgRule': {
                 'direction': 'ingress',
                 'protocol': 'tcp',
                 'description': 'Kuryr-Kubernetes NetPolicy SG rule',
@@ -323,7 +314,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
             "spec": {
                 "egressSgRules": [],
                 "ingressSgRules": [{
-                    "security_group_rule": {
+                    "sgRule": {
                         "description": "Kuryr-Kubernetes NetPolicy SG rule",
                         "direction": "ingress",
                         "ethertype": "IPv4",
@@ -348,20 +339,12 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
                 "podSelector": {"matchLabels": {"app": "demo"}},
                 "securityGroupId": self._crd_sg_id}}

-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'create_security_group_rule')
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'create_security_group_rule_body')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'match_selector', return_value=True)
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pod_ip')
     def test__create_sg_rules(self, m_get_pod_ip,
-                              m_match_selector,
-                              m_create_sg_rule_body,
-                              m_create_sg_rule):
-        m_create_sg_rule_body.return_value = self._sg_rule_body
+                              m_match_selector):
         sgr_id = mock.sentinel.sgr_id
-        m_create_sg_rule.return_value = sgr_id
         crd = get_crd_obj_with_all_selectors()
         pod = get_match_crd_pod_obj()
         m_get_pod_ip.return_value = pod['status'].get('podIP')
@@ -370,80 +353,58 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
         policy = crd['spec']['networkpolicy_spec']
         rule_list = policy.get('ingress', None)
-        crd_rules = crd['spec'].get('ingressSgRules')
         pod_ns = pod['metadata']['namespace']

         for rule_block in rule_list:
             for rule in rule_block.get('from', []):
                 pod_selector = rule.get('podSelector')
                 matched = network_policy_security_groups._create_sg_rules(
-                    crd, pod, pod_selector, rule_block,
-                    crd_rules, 'ingress', matched, pod_ns)
+                    crd, pod, pod_selector, rule_block, 'ingress', matched)
                 new_sg_rule['namespace'] = pod_ns
-                new_sg_rule['security_group_rule']['id'] = sgr_id
+                new_sg_rule['sgRule']['id'] = sgr_id
                 m_match_selector.assert_called_once_with(
                     pod_selector, pod['metadata']['labels'])
                 m_get_pod_ip.assert_called_once_with(pod)
-                m_create_sg_rule_body.assert_called_once()
-                m_create_sg_rule.assert_called_once()
-                self.assertEqual([new_sg_rule], crd_rules)
                 self.assertEqual(matched, True)

     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'get_pod_ip')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'match_selector', return_value=False)
-    def test__create_sg_rules_no_match(self, m_match_selector,
-                                       m_get_pod_ip):
+    def test__create_sg_rules_no_match(self, m_match_selector, m_get_pod_ip):
         crd = get_crd_obj_with_all_selectors()
         pod = self._pod2

         policy = crd['spec']['networkpolicy_spec']
         rule_list = policy.get('ingress', None)
-        crd_rules = crd['spec'].get('ingressSgRules')

         for rule_block in rule_list:
             for rule in rule_block.get('from', []):
                 pod_selector = rule.get('podSelector')
                 matched = network_policy_security_groups._create_sg_rules(
-                    crd, pod, pod_selector, rule_block,
-                    crd_rules, 'ingress', False, self._namespace)
+                    crd, pod, pod_selector, rule_block, 'ingress', False)
                 self.assertEqual(matched, False)
+    @mock.patch('kuryr_kubernetes.controller.drivers.'
+                'network_policy_security_groups._bump_networkpolicy')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'patch_kuryrnetworkpolicy_crd')
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'delete_security_group_rule')
+                'get_kuryrnetworkpolicy_crds')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pod_ip')
-    def test_delete_sg_rules(self, m_get_pod_ip, m_delete_sg_rule,
-                             m_get_knp_crds, m_patch_kuryrnetworkpolicy_crd):
+    def test_delete_sg_rules(self, m_get_pod_ip, m_get_knp_crds, m_bump):
         crd = self._crd_with_rule
-        i_rule = crd['spec'].get('ingressSgRules')[0]
-        sgr_id = i_rule['security_group_rule'].get('id')
         m_get_pod_ip.return_value = self._pod_ip
-        m_get_knp_crds.return_value = {
-            "apiVersion": "v1",
-            "items": [crd],
-            "kind": "List",
-            "metadata": {
-                "resourceVersion": "",
-                "selfLink": mock.sentinel.selfLink}}
-        i_rules = e_rules = []
+        m_get_knp_crds.return_value = [crd]
         pod = self._pod_dev_namespace

         self._driver.delete_sg_rules(pod)

         m_get_knp_crds.assert_called_once()
         m_get_pod_ip.assert_called_once_with(pod)
-        m_delete_sg_rule.assert_called_once_with(sgr_id)
-        m_patch_kuryrnetworkpolicy_crd.assert_called_with(
-            crd, i_rules, e_rules, crd['spec'].get('podSelector'))
+        m_bump.assert_called_once()
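The test above now asserts `_bump_networkpolicy` instead of direct rule deletion: per the commit message, Pod and Service events trigger SG rule recalculation by "bumping" the NetworkPolicy with a dummy annotation. A hedged sketch of that mechanism; the annotation key and client interface are invented for illustration:

```python
import time


class FakeK8s:
    """Records annotation patches instead of calling an API server."""
    def __init__(self):
        self.patches = []

    def annotate(self, path, annotations):
        self.patches.append((path, annotations))


def _bump_networkpolicy(k8s, np):
    """Patch a throwaway annotation onto the NetworkPolicy.

    Any changing value works: the point is to generate a MODIFIED event so
    the policy handler re-runs and recalculates the SG rules.
    """
    path = ('/apis/networking.k8s.io/v1/namespaces/%s/networkpolicies/%s'
            % (np['metadata']['namespace'], np['metadata']['name']))
    k8s.annotate(path, {'dummy-bump': str(time.time())})


k8s = FakeK8s()
_bump_networkpolicy(k8s, {'metadata': {'name': 'np', 'namespace': 'ns'}})
```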
     @mock.patch('kuryr_kubernetes.config.CONF')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
+                'get_kuryrnetworkpolicy_crds')
     def test_get_sgs_for_pod_without_label(self, m_get_crds, m_cfg):
         m_get_crds.return_value = self._crds
         sg_list = [str(mock.sentinel.sg_id)]
@@ -460,7 +421,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'match_labels')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
+                'get_kuryrnetworkpolicy_crds')
     def test_get_sgs_for_pod_with_label(self, m_get_crds, m_match_labels,
                                         m_match_expressions):
         m_get_crds.return_value = self._crds
@@ -474,7 +435,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
             self._crd['spec']['podSelector']['matchExpressions'], pod_labels)
         m_match_labels.assert_called_once_with(
             self._crd['spec']['podSelector']['matchLabels'], pod_labels)
-        self.assertEqual(resp, [str(self._sg_id)])
+        self.assertEqual(resp, [self._sg_id])

     @mock.patch('kuryr_kubernetes.config.CONF')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
@@ -482,7 +443,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'match_labels')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
+                'get_kuryrnetworkpolicy_crds')
     def test_get_sgs_for_pod_with_label_no_match(self, m_get_crds,
                                                  m_match_labels,
                                                  m_match_expressions, m_cfg):
@@ -503,9 +464,9 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
         self.assertEqual(sg_list, sgs)

     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
+                'get_kuryrnetworkpolicy_crds')
     def test_get_sgs_no_crds(self, m_get_crds):
-        m_get_crds.return_value = {"items": []}
+        m_get_crds.return_value = []
         cfg.CONF.set_override('pod_security_groups', [],
                               group='neutron_defaults')
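Several of these assertion changes come from the same underlying shift: as mocked in these tests, `get_kuryrnetworkpolicy_crds` returns a plain Python list rather than a Kubernetes `List` object. A tiny sketch of the caller-side difference:

```python
# Before: callers unwrapped a Kubernetes List object to reach the CRDs.
old_style = {"apiVersion": "v1", "kind": "List", "items": []}
crds_old = old_style["items"]

# After: the helper (as mocked in this diff) hands back the list directly.
crds_new = []

# Either way, the "no CRDs" emptiness check reads the same.
no_policies_old = not crds_old
no_policies_new = not crds_new
```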
@@ -519,7 +480,7 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'match_labels')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
+                'get_kuryrnetworkpolicy_crds')
     def test_get_sgs_multiple_crds(self, m_get_crds, m_match_labels,
                                    m_match_expressions):
         m_match_expressions.return_value = True
@@ -529,87 +490,64 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
         resp = self._driver.get_security_groups(self._pod, self._project_id)
         m_get_crds.assert_called_once_with(namespace=self._namespace)
-        self.assertEqual([str(self._sg_id), str(self._sg_id2)], resp)
+        self.assertEqual([self._sg_id, self._sg_id2], resp)

+    @mock.patch('kuryr_kubernetes.controller.drivers.'
+                'network_policy_security_groups._bump_networkpolicy')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'patch_kuryrnetworkpolicy_crd')
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'delete_security_group_rule')
-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
-    def test_delete_namespace_sg_rule(self, m_get_knp_crd, m_delete_sg_rule,
-                                      m_patch_kuryrnetworkpolicy_crd):
+                'get_kuryrnetworkpolicy_crds')
+    def test_delete_namespace_sg_rule(self, m_get_knp_crd, m_bump):
         cls = network_policy_security_groups.NetworkPolicySecurityGroupsDriver
         m_driver = mock.MagicMock(spec=cls)
-        i_rule = get_matched_crd_obj()['spec']['ingressSgRules'][0]
-        sg_rule_id = i_rule.get('security_group_rule')['id']
-        m_get_knp_crd.return_value = {"items": [get_matched_crd_obj()]}
+        m_get_knp_crd.return_value = [get_matched_crd_obj()]

         cls.delete_namespace_sg_rules(m_driver, get_match_crd_namespace_obj())

         m_get_knp_crd.assert_called_once()
-        m_delete_sg_rule.assert_called_once_with(sg_rule_id)
-        m_patch_kuryrnetworkpolicy_crd.assert_called_once()
+        m_bump.assert_called_once()

-    @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'patch_kuryrnetworkpolicy_crd')
+    @mock.patch('kuryr_kubernetes.controller.drivers.'
+                'network_policy_security_groups._bump_networkpolicy')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
                 'delete_security_group_rule')
     @mock.patch('kuryr_kubernetes.controller.drivers.utils.'
-                'get_kuryrnetpolicy_crds')
-    def test_delete_namespace_sg_rule_no_match(self, m_get_knp_crd,
+                'get_kuryrnetworkpolicy_crds')
+    def test_delete_namespace_sg_rule_no_match(
m_delete_sg_rule, self, m_get_knp_crd, m_delete_sg_rule, m_bump):
m_patch_kuryrnetworkpolicy_crd):
cls = network_policy_security_groups.NetworkPolicySecurityGroupsDriver cls = network_policy_security_groups.NetworkPolicySecurityGroupsDriver
m_driver = mock.MagicMock(spec=cls) m_driver = mock.MagicMock(spec=cls)
m_get_knp_crd.return_value = {"items": [get_matched_crd_obj()]} m_get_knp_crd.return_value = [get_matched_crd_obj()]
cls.delete_namespace_sg_rules(m_driver, cls.delete_namespace_sg_rules(m_driver,
get_no_match_crd_namespace_obj()) get_no_match_crd_namespace_obj())
m_get_knp_crd.assert_called_once() m_get_knp_crd.assert_called_once()
m_delete_sg_rule.assert_not_called() m_delete_sg_rule.assert_not_called()
m_patch_kuryrnetworkpolicy_crd.assert_not_called() m_bump.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.' @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pods')
'get_pods') @mock.patch('kuryr_kubernetes.controller.drivers.utils.match_selector')
@mock.patch('kuryr_kubernetes.controller.drivers.' def test__parse_rules(self, m_match_selector, m_get_pods):
'network_policy_security_groups._create_sg_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace_subnet_cidr')
def test__parse_rules(self, m_get_ns_subnet_cidr, m_match_selector,
m_create_sg_rule, m_get_pods):
crd = get_crd_obj_no_match() crd = get_crd_obj_no_match()
policy = crd['spec']['networkpolicy_spec'] policy = crd['spec']['networkpolicy_spec']
i_rule = policy.get('ingress')[0] i_rule = policy.get('ingress')[0]
ns_selector = i_rule['from'][0].get('namespaceSelector') ns_selector = i_rule['from'][0].get('namespaceSelector')
ns = get_match_crd_namespace_obj() ns = get_match_crd_namespace_obj()
m_get_ns_subnet_cidr.return_value = '10.0.2.0/26'
m_match_selector.return_value = True m_match_selector.return_value = True
m_create_sg_rule.return_value = get_sg_rule()
matched, rules = network_policy_security_groups._parse_rules( matched = network_policy_security_groups._parse_rules(
'ingress', crd, namespace=ns) 'ingress', crd, policy, namespace=ns)
m_get_ns_subnet_cidr.assert_called_once_with(ns)
m_match_selector.assert_called_once_with(ns_selector, m_match_selector.assert_called_once_with(ns_selector,
ns['metadata']['labels']) ns['metadata']['labels'])
m_create_sg_rule.assert_called_once()
self.assertEqual(matched, True) self.assertEqual(matched, True)
self.assertEqual(rules, [get_sg_rule()])
@mock.patch('kuryr_kubernetes.controller.drivers.' @mock.patch('kuryr_kubernetes.controller.drivers.utils.match_selector')
'network_policy_security_groups._create_sg_rule') def test__parse_rules_no_match(self, m_match_selector):
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector')
def test__parse_rules_no_match(self, m_match_selector,
m_create_sg_rule):
crd = get_crd_obj_no_match() crd = get_crd_obj_no_match()
policy = crd['spec']['networkpolicy_spec'] policy = crd['spec']['networkpolicy_spec']
i_rule = policy.get('ingress')[0] i_rule = policy.get('ingress')[0]
@@ -618,26 +556,19 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
m_match_selector.return_value = False m_match_selector.return_value = False
matched, rules = network_policy_security_groups._parse_rules( matched = network_policy_security_groups._parse_rules(
'ingress', crd, namespace=ns) 'ingress', crd, policy, namespace=ns)
m_match_selector.assert_called_once_with(ns_selector, m_match_selector.assert_called_once_with(ns_selector,
ns['metadata']['labels']) ns['metadata']['labels'])
m_create_sg_rule.assert_not_called()
self.assertEqual(matched, False) self.assertEqual(matched, False)
self.assertEqual(rules, [])
@mock.patch('kuryr_kubernetes.controller.drivers.' @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pods')
'network_policy_security_groups._create_sg_rule') @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_pod_ip')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.' @mock.patch('kuryr_kubernetes.controller.drivers.utils.match_selector')
'get_pod_ip') def test__parse_rules_all_selectors(self, m_match_selector, m_get_pod_ip,
@mock.patch('kuryr_kubernetes.controller.drivers.utils.' m_get_pods):
'get_pods')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'match_selector')
def test__parse_rules_all_selectors(self, m_match_selector, m_get_pods,
m_get_pod_ip, m_create_sg_rule):
crd = get_crd_obj_with_all_selectors() crd = get_crd_obj_with_all_selectors()
policy = crd['spec']['networkpolicy_spec'] policy = crd['spec']['networkpolicy_spec']
i_rule = policy.get('ingress')[0] i_rule = policy.get('ingress')[0]
@@ -647,22 +578,19 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
pod = get_match_crd_pod_obj() pod = get_match_crd_pod_obj()
m_match_selector.return_value = True m_match_selector.return_value = True
m_get_pods.return_value = {"items": [pod]}
m_get_pod_ip.return_value = pod['status']['podIP'] m_get_pod_ip.return_value = pod['status']['podIP']
m_create_sg_rule.return_value = get_sg_rule() m_get_pods.return_value = {"items": [pod]}
matched, rules = network_policy_security_groups._parse_rules( matched = network_policy_security_groups._parse_rules(
'ingress', crd, namespace=ns) 'ingress', crd, policy, namespace=ns)
m_match_selector.assert_called_once_with(ns_selector, m_match_selector.assert_called_once_with(ns_selector,
ns['metadata']['labels']) ns['metadata']['labels'])
m_get_pods.assert_called_once_with(pod_selector, m_get_pods.assert_called_once_with(pod_selector,
ns['metadata']['name']) ns['metadata']['name'])
m_get_pod_ip.assert_called_once_with(pod) m_get_pod_ip.assert_called_once_with(pod)
m_create_sg_rule.assert_called_once()
self.assertEqual(matched, True) self.assertEqual(matched, True)
self.assertEqual(rules, [get_sg_rule()])
@mock.patch('kuryr_kubernetes.controller.drivers.' @mock.patch('kuryr_kubernetes.controller.drivers.'
'network_policy_security_groups._parse_selectors_on_pod') 'network_policy_security_groups._parse_selectors_on_pod')
@@ -670,124 +598,26 @@ class TestNetworkPolicySecurityGroupsDriver(test_base.TestCase):
no_selector = None no_selector = None
matched_selector = True matched_selector = True
pod = mock.sentinel.pod pod = mock.sentinel.pod
final_crd_rules = [mock.sentinel.crd_rules] m_parse_selectors_on_pod.side_effect = [matched_selector]*2
m_parse_selectors_on_pod.side_effect = [
(matched_selector, final_crd_rules)]*2
initial_crd_rules = []
direction = "ingress" direction = "ingress"
pod_selector = mock.sentinel.pod_selector pod_selector = mock.sentinel.pod_selector
namespace_selector = mock.sentinel.namespace_selector namespace_selector = mock.sentinel.namespace_selector
rule_block = {'from': [{'podSelector': pod_selector}, rule_block = {'from': [{'podSelector': pod_selector},
{'namespaceSelector': namespace_selector}]} {'namespaceSelector': namespace_selector}]}
crd = {"spec": { policy = {
"ingressSgRules": initial_crd_rules, "ingress": [rule_block],
"networkpolicy_spec": { "policyTypes": ["Ingress"]
"ingress": [rule_block], }
"policyTypes": [ crd = {"spec": {"ingressSgRules": []}}
"Ingress"
]}, }}
matched, rules = network_policy_security_groups._parse_rules( matched = network_policy_security_groups._parse_rules(
direction, crd, pod=pod) direction, crd, policy, pod=pod)
calls = [mock.call(crd, pod, pod_selector, no_selector, rule_block, calls = [mock.call(crd, pod, pod_selector, no_selector, rule_block,
initial_crd_rules, direction, not matched_selector), direction, not matched_selector),
mock.call(crd, pod, no_selector, namespace_selector, mock.call(crd, pod, no_selector, namespace_selector,
rule_block, final_crd_rules, direction, rule_block, direction, matched_selector)]
matched_selector)]
m_parse_selectors_on_pod.assert_has_calls(calls) m_parse_selectors_on_pod.assert_has_calls(calls)
self.assertEqual(matched, matched_selector) self.assertEqual(matched, matched_selector)
self.assertEqual(rules, final_crd_rules)
class TestNetworkPolicySecurityGroupsFunctions(test_base.TestCase):
def setUp(self):
super().setUp()
self.kubernetes = self.useFixture(k_fix.MockK8sClient()).client
self.npsg = network_policy_security_groups
self.sg_id = mock.sentinel.sg_id
self.crd = {
'spec': {
'ingressSgRules': [],
'networkpolicy_spec': {
'ingress': [],
'policyTypes': ['Ingress']
}
},
'metadata': {'namespace': 'ns'}
}
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule_body')
def test__apply_sg_rules_on_matched_pods_empty_match(self, m_create_sgrb,
m_create_sgr):
self.npsg._apply_sg_rules_on_matched_pods({}, self.sg_id, 'ingress',
'ns', 'port', 'crd_rules')
m_create_sgrb.assert_not_called()
m_create_sgr.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace_subnet_cidr')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule_body')
def test__apply_sg_rules_on_matched_pods_not_all(self, m_create_sgrb,
m_create_sgr, m_get_ns,
m_get_ns_sub_cidr):
pod = mock.sentinel.pod
ns = mock.sentinel.ns
port = {'protocol': 'TCP', 'port': 22}
matched_pods = {'container_port': [pod]}
m_get_ns.return_value = ns
m_create_sgrb.return_value = {'security_group_rule': {}}
crd_rules = []
direction = 'ingress'
self.npsg._apply_sg_rules_on_matched_pods(matched_pods, self.sg_id,
direction, 'ns', port,
crd_rules)
m_get_ns_sub_cidr.assert_called_once_with(ns)
m_create_sgrb.assert_called_once_with(self.sg_id, direction,
'container_port',
protocol=mock.ANY, cidr=mock.ANY,
pods=[pod])
m_create_sgr.assert_called_once()
self.assertEqual(len(crd_rules), 1)
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace_subnet_cidr')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'get_namespace')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.'
'create_security_group_rule')
def test__apply_sg_rules_on_matched_pods_all(self, m_create_sgr, m_get_ns,
m_get_ns_sub_cidr):
pod = mock.sentinel.pod
ns = mock.sentinel.ns
port = {'protocol': 'TCP', 'port': 22}
matched_pods = {'container_port': [pod]}
m_get_ns.return_value = ns
crd_rules = []
direction = 'ingress'
self.npsg._apply_sg_rules_on_matched_pods(matched_pods, self.sg_id,
direction, 'ns', port,
crd_rules, allow_all=True)
self.assertEqual(m_create_sgr.call_count, 2)
self.assertEqual(len(crd_rules), 2)
self.assertListEqual([r['security_group_rule']['ethertype']
for r in crd_rules], ['IPv4', 'IPv6'])


@@ -0,0 +1,112 @@
# Copyright 2020 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest import mock
from kuryr_kubernetes.controller.drivers import base as drivers
from kuryr_kubernetes.controller.handlers import kuryrnetworkpolicy
from kuryr_kubernetes.tests import base as test_base
class TestPolicyHandler(test_base.TestCase):
@mock.patch.object(drivers.LBaaSDriver, 'get_instance')
@mock.patch.object(drivers.NetworkPolicyDriver, 'get_instance')
@mock.patch('kuryr_kubernetes.clients.get_kubernetes_client')
@mock.patch('kuryr_kubernetes.clients.get_network_client')
@mock.patch('kuryr_kubernetes.clients.get_loadbalancer_client')
def setUp(self, m_get_os_lb, m_get_os_net, m_get_k8s, m_get_np,
m_get_lbaas):
super(TestPolicyHandler, self).setUp()
self._project_id = mock.sentinel.project_id
self._policy_name = 'np-test'
self._policy_uid = mock.sentinel.policy_uid
self._policy_link = mock.sentinel.policy_link
self._policy = {
'apiVersion': 'networking.k8s.io/v1',
'kind': 'NetworkPolicy',
'metadata': {
'name': self._policy_name,
'resourceVersion': '2259309',
'generation': 1,
'creationTimestamp': '2018-09-18T14:09:51Z',
'namespace': 'default',
'annotations': {},
'selfLink': self._policy_link,
'uid': self._policy_uid
},
'spec': {
'egress': [{'ports': [{'port': 5978, 'protocol': 'TCP'}]}],
'ingress': [{'ports': [{'port': 6379, 'protocol': 'TCP'}]}],
'policyTypes': ['Ingress', 'Egress']
}
}
self.k8s = mock.Mock()
m_get_k8s.return_value = self.k8s
self.m_get_k8s = m_get_k8s
self.os_net = mock.Mock()
m_get_os_net.return_value = self.os_net
self.m_get_os_net = m_get_os_net
self.np_driver = mock.Mock()
m_get_np.return_value = self.np_driver
self.m_get_np = m_get_np
self.lbaas_driver = mock.Mock()
m_get_lbaas.return_value = self.lbaas_driver
self.m_get_lbaas = m_get_lbaas
self.k8s.get.return_value = {}
self.handler = kuryrnetworkpolicy.KuryrNetworkPolicyHandler()
def _get_knp_obj(self):
knp_obj = {
'apiVersion': 'openstack.org/v1',
'kind': 'KuryrNetworkPolicy',
'metadata': {
'name': 'np-test-network-policy',
'namespace': 'test-1',
},
'spec': {
'securityGroupId': 'c1ac16f5-e198-4628-9d84-253c6001be8e',
'securityGroupName': 'sg-test-network-policy'
}}
return knp_obj
def test_init(self):
self.m_get_k8s.assert_called_once()
self.m_get_np.assert_called_once()
self.assertEqual(self.np_driver, self.handler._drv_policy)
self.assertEqual(self.k8s, self.handler.k8s)
self.assertEqual(self.os_net, self.handler.os_net)
self.assertEqual(self.lbaas_driver, self.handler._drv_lbaas)
def test_convert(self):
self.k8s.get.return_value = {'items': [{
'metadata': {
'selfLink': mock.sentinel.old_self_link,
'namespace': 'ns',
}
}]}
self.np_driver.get_from_old_crd.return_value = mock.sentinel.new_crd
self.handler._convert_old_crds()
self.k8s.post.assert_called_once_with(mock.ANY, mock.sentinel.new_crd)
self.k8s.delete.assert_called_once_with(mock.sentinel.old_self_link)


@@ -47,8 +47,8 @@ class TestPodLabelHandler(test_base.TestCase):
self._get_project = self._handler._drv_project.get_project self._get_project = self._handler._drv_project.get_project
self._get_security_groups = self._handler._drv_sg.get_security_groups self._get_security_groups = self._handler._drv_sg.get_security_groups
self._set_vif_driver = self._handler._drv_vif_pool.set_vif_driver self._set_vif_driver = self._handler._drv_vif_pool.set_vif_driver
self._get_pod_labels = self._handler._get_pod_labels self._get_pod_info = self._handler._get_pod_info
self._set_pod_labels = self._handler._set_pod_labels self._set_pod_info = self._handler._set_pod_info
self._has_vifs = self._handler._has_vifs self._has_vifs = self._handler._has_vifs
self._update_vif_sgs = self._handler._drv_vif_pool.update_vif_sgs self._update_vif_sgs = self._handler._drv_vif_pool.update_vif_sgs
@@ -81,16 +81,16 @@ class TestPodLabelHandler(test_base.TestCase):
def test_on_present(self, m_get_services): def test_on_present(self, m_get_services):
m_get_services.return_value = {"items": []} m_get_services.return_value = {"items": []}
self._has_vifs.return_value = True self._has_vifs.return_value = True
self._get_pod_labels.return_value = {'test1': 'test'} self._get_pod_info.return_value = ({'test1': 'test'}, '192.168.0.1')
p_label.PodLabelHandler.on_present(self._handler, self._pod) p_label.PodLabelHandler.on_present(self._handler, self._pod)
self._has_vifs.assert_called_once_with(self._pod) self._has_vifs.assert_called_once_with(self._pod)
self._get_pod_labels.assert_called_once_with(self._pod) self._get_pod_info.assert_called_once_with(self._pod)
self._get_project.assert_called_once() self._get_project.assert_called_once()
self._get_security_groups.assert_called_once() self._get_security_groups.assert_called_once()
self._update_vif_sgs.assert_called_once_with(self._pod, [self._sg_id]) self._update_vif_sgs.assert_called_once_with(self._pod, [self._sg_id])
self._set_pod_labels.assert_called_once_with(self._pod, None) self._set_pod_info.assert_called_once_with(self._pod, (None, None))
def test_on_present_no_state(self): def test_on_present_no_state(self):
self._has_vifs.return_value = False self._has_vifs.return_value = False
@@ -99,27 +99,29 @@ class TestPodLabelHandler(test_base.TestCase):
self.assertIsNone(resp) self.assertIsNone(resp)
self._has_vifs.assert_called_once_with(self._pod) self._has_vifs.assert_called_once_with(self._pod)
self._get_pod_labels.assert_not_called() self._get_pod_info.assert_not_called()
self._set_pod_labels.assert_not_called() self._set_pod_info.assert_not_called()
def test_on_present_no_labels(self): @mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
def test_on_present_no_labels(self, m_get_services):
self._has_vifs.return_value = True self._has_vifs.return_value = True
self._get_pod_labels.return_value = None self._get_pod_info.return_value = None, None
p_label.PodLabelHandler.on_present(self._handler, self._pod) p_label.PodLabelHandler.on_present(self._handler, self._pod)
self._has_vifs.assert_called_once_with(self._pod) self._has_vifs.assert_called_once_with(self._pod)
self._get_pod_labels.assert_called_once_with(self._pod) self._get_pod_info.assert_called_once_with(self._pod)
self._set_pod_labels.assert_not_called() self._set_pod_info.assert_not_called()
def test_on_present_no_changes(self): def test_on_present_no_changes(self):
self._has_vifs.return_value = True self._has_vifs.return_value = True
pod_with_label = self._pod.copy() pod_with_label = self._pod.copy()
pod_with_label['metadata']['labels'] = {'test1': 'test'} pod_with_label['metadata']['labels'] = {'test1': 'test'}
self._get_pod_labels.return_value = {'test1': 'test'} pod_with_label['status']['podIP'] = '192.168.0.1'
self._get_pod_info.return_value = ({'test1': 'test'}, '192.168.0.1')
p_label.PodLabelHandler.on_present(self._handler, pod_with_label) p_label.PodLabelHandler.on_present(self._handler, pod_with_label)
self._has_vifs.assert_called_once_with(pod_with_label) self._has_vifs.assert_called_once_with(pod_with_label)
self._get_pod_labels.assert_called_once_with(pod_with_label) self._get_pod_info.assert_called_once_with(pod_with_label)
self._set_pod_labels.assert_not_called() self._set_pod_info.assert_not_called()


@@ -21,14 +21,15 @@ from kuryr_kubernetes.tests import base as test_base
class TestPolicyHandler(test_base.TestCase): class TestPolicyHandler(test_base.TestCase):
def setUp(self): @mock.patch.object(drivers.NetworkPolicyDriver, 'get_instance')
@mock.patch('kuryr_kubernetes.clients.get_kubernetes_client')
def setUp(self, m_get_k8s, m_get_np):
super(TestPolicyHandler, self).setUp() super(TestPolicyHandler, self).setUp()
self._project_id = mock.sentinel.project_id self._project_id = mock.sentinel.project_id
self._policy_name = 'np-test' self._policy_name = 'np-test'
self._policy_uid = mock.sentinel.policy_uid self._policy_uid = mock.sentinel.policy_uid
self._policy_link = mock.sentinel.policy_link self._policy_link = mock.sentinel.policy_link
self._pod_sg = mock.sentinel.pod_sg
self._policy = { self._policy = {
'apiVersion': 'networking.k8s.io/v1', 'apiVersion': 'networking.k8s.io/v1',
@@ -50,198 +51,31 @@ class TestPolicyHandler(test_base.TestCase):
} }
} }
self._handler = mock.MagicMock(spec=policy.NetworkPolicyHandler) self.k8s = mock.Mock()
m_get_k8s.return_value = self.k8s
self.m_get_k8s = m_get_k8s
self._handler._drv_project = mock.Mock( self.np_driver = mock.Mock()
spec=drivers.NetworkPolicyProjectDriver) m_get_np.return_value = self.np_driver
self._handler._drv_policy = mock.MagicMock( self._m_get_np = m_get_np
spec=drivers.NetworkPolicyDriver)
self._handler._drv_pod_sg = mock.Mock(
spec=drivers.PodSecurityGroupsDriver)
self._handler._drv_svc_sg = mock.Mock(
spec=drivers.ServiceSecurityGroupsDriver)
self._handler._drv_vif_pool = mock.MagicMock(
spec=drivers.VIFPoolDriver)
self._handler._drv_lbaas = mock.Mock(
spec=drivers.LBaaSDriver)
self._get_project = self._handler._drv_project.get_project self.handler = policy.NetworkPolicyHandler()
self._get_project.return_value = self._project_id
self._get_security_groups = (
self._handler._drv_pod_sg.get_security_groups)
self._set_vifs_driver = self._handler._drv_vif_pool.set_vif_driver
self._set_vifs_driver.return_value = mock.Mock(
spec=drivers.PodVIFDriver)
self._update_vif_sgs = self._handler._drv_vif_pool.update_vif_sgs
self._update_vif_sgs.return_value = None
self._update_lbaas_sg = self._handler._drv_lbaas.update_lbaas_sg
self._update_lbaas_sg.return_value = None
self._remove_sg = self._handler._drv_vif_pool.remove_sg_from_pools
self._remove_sg.return_value = None
def _get_knp_obj(self): def test_init(self):
knp_obj = { self.m_get_k8s.assert_called_once()
'apiVersion': 'openstack.org/v1', self._m_get_np.assert_called_once()
'kind': 'KuryrNetPolicy',
'metadata': {
'name': 'np-test-network-policy',
'namespace': 'test-1'
},
'spec': {
'securityGroupId': 'c1ac16f5-e198-4628-9d84-253c6001be8e',
'securityGroupName': 'sg-test-network-policy'
}}
return knp_obj
@mock.patch.object(drivers.LBaaSDriver, 'get_instance') self.assertEqual(self.np_driver, self.handler._drv_policy)
@mock.patch.object(drivers.ServiceSecurityGroupsDriver, 'get_instance') self.assertEqual(self.k8s, self.handler.k8s)
@mock.patch.object(drivers.PodSecurityGroupsDriver, 'get_instance')
@mock.patch.object(drivers.VIFPoolDriver, 'get_instance')
@mock.patch.object(drivers.NetworkPolicyDriver, 'get_instance')
@mock.patch.object(drivers.NetworkPolicyProjectDriver, 'get_instance')
def test_init(self, m_get_project_driver, m_get_policy_driver,
m_get_vif_driver, m_get_pod_sg_driver, m_get_svc_sg_driver,
m_get_lbaas_driver):
handler = policy.NetworkPolicyHandler()
m_get_project_driver.assert_called_once() def test_on_finalize(self):
m_get_policy_driver.assert_called_once() self.handler.on_finalize(self._policy)
m_get_vif_driver.assert_called_once() self.np_driver.release_network_policy.assert_called_once_with(
m_get_pod_sg_driver.assert_called_once() self._policy)
m_get_svc_sg_driver.assert_called_once()
m_get_lbaas_driver.assert_called_once()
self.assertEqual(m_get_project_driver.return_value, def test_on_present(self):
handler._drv_project) self.handler.on_present(self._policy)
self.assertEqual(m_get_policy_driver.return_value, handler._drv_policy) self.k8s.add_finalizer.assert_called_once_with(
self._policy, 'kuryr.openstack.org/networkpolicy-finalizer')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services') self.np_driver.ensure_network_policy.assert_called_once_with(
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network') self._policy)
def test_on_present(self, m_host_network, m_get_services):
modified_pod = mock.sentinel.modified_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
knp_on_ns = self._handler._drv_policy.knps_on_namespace
knp_on_ns.return_value = True
namespaced_pods = self._handler._drv_policy.namespaced_pods
ensure_nw_policy = self._handler._drv_policy.ensure_network_policy
ensure_nw_policy.return_value = [modified_pod]
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
sg1 = [mock.sentinel.sg1]
sg2 = [mock.sentinel.sg2]
self._get_security_groups.side_effect = [sg1, sg2]
m_get_services.return_value = {'items': []}
policy.NetworkPolicyHandler.on_present(self._handler, self._policy)
namespaced_pods.assert_not_called()
ensure_nw_policy.assert_called_once_with(self._policy,
self._project_id)
affected_pods.assert_called_once_with(self._policy)
calls = [mock.call(modified_pod, self._project_id),
mock.call(match_pod, self._project_id)]
self._get_security_groups.assert_has_calls(calls)
calls = [mock.call(modified_pod, sg1), mock.call(match_pod, sg2)]
self._update_vif_sgs.assert_has_calls(calls)
self._update_lbaas_sg.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_present_without_knps_on_namespace(self, m_host_network,
m_get_services):
modified_pod = mock.sentinel.modified_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
ensure_nw_policy = self._handler._drv_policy.ensure_network_policy
ensure_nw_policy.return_value = [modified_pod]
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
sg2 = [mock.sentinel.sg2]
sg3 = [mock.sentinel.sg3]
self._get_security_groups.side_effect = [sg2, sg3]
m_get_services.return_value = {'items': []}
policy.NetworkPolicyHandler.on_present(self._handler, self._policy)
ensure_nw_policy.assert_called_once_with(self._policy,
self._project_id)
affected_pods.assert_called_once_with(self._policy)
calls = [mock.call(modified_pod, self._project_id),
mock.call(match_pod, self._project_id)]
self._get_security_groups.assert_has_calls(calls)
calls = [mock.call(modified_pod, sg2),
mock.call(match_pod, sg3)]
self._update_vif_sgs.assert_has_calls(calls)
self._update_lbaas_sg.assert_not_called()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_present_with_services(self, m_host_network, m_get_services):
modified_pod = mock.sentinel.modified_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
self._handler._is_egress_only_policy.return_value = False
self._handler._is_service_affected.return_value = True
knp_on_ns = self._handler._drv_policy.knps_on_namespace
knp_on_ns.return_value = True
namespaced_pods = self._handler._drv_policy.namespaced_pods
ensure_nw_policy = self._handler._drv_policy.ensure_network_policy
ensure_nw_policy.return_value = [modified_pod]
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
sg1 = [mock.sentinel.sg1]
sg2 = [mock.sentinel.sg2]
self._get_security_groups.side_effect = [sg1, sg2]
service = {'metadata': {'name': 'service-test'},
'spec': {'selector': mock.sentinel.selector}}
m_get_services.return_value = {'items': [service]}
policy.NetworkPolicyHandler.on_present(self._handler, self._policy)
namespaced_pods.assert_not_called()
ensure_nw_policy.assert_called_once_with(self._policy,
self._project_id)
affected_pods.assert_called_once_with(self._policy)
calls = [mock.call(modified_pod, self._project_id),
mock.call(match_pod, self._project_id)]
self._get_security_groups.assert_has_calls(calls)
calls = [mock.call(modified_pod, sg1), mock.call(match_pod, sg2)]
self._update_vif_sgs.assert_has_calls(calls)
self._handler._is_service_affected.assert_called_once_with(
service, [modified_pod, match_pod])
self._update_lbaas_sg.assert_called_once()
@mock.patch('kuryr_kubernetes.controller.drivers.utils.get_services')
@mock.patch('kuryr_kubernetes.controller.drivers.utils.is_host_network')
def test_on_deleted(self, m_host_network, m_get_services):
namespace_pod = mock.sentinel.namespace_pod
match_pod = mock.sentinel.match_pod
m_host_network.return_value = False
affected_pods = self._handler._drv_policy.affected_pods
affected_pods.return_value = [match_pod]
get_knp_crd = self._handler._drv_policy.get_kuryrnetpolicy_crd
knp_obj = self._get_knp_obj()
get_knp_crd.return_value = knp_obj
sg1 = [mock.sentinel.sg1]
sg2 = [mock.sentinel.sg2]
self._get_security_groups.side_effect = [sg1, sg2]
m_get_services.return_value = {'items': []}
release_nw_policy = self._handler._drv_policy.release_network_policy
knp_on_ns = self._handler._drv_policy.knps_on_namespace
knp_on_ns.return_value = False
ns_pods = self._handler._drv_policy.namespaced_pods
ns_pods.return_value = [namespace_pod]
policy.NetworkPolicyHandler.on_deleted(self._handler, self._policy)
release_nw_policy.assert_called_once_with(knp_obj)
self._get_security_groups.assert_called_once_with(match_pod,
self._project_id)
self._update_vif_sgs.assert_called_once_with(match_pod, sg1)
self._update_lbaas_sg.assert_not_called()
self._remove_sg.assert_called_once()


@@ -104,7 +104,7 @@ kuryr_kubernetes.controller.handlers =
namespace = kuryr_kubernetes.controller.handlers.namespace:NamespaceHandler namespace = kuryr_kubernetes.controller.handlers.namespace:NamespaceHandler
policy = kuryr_kubernetes.controller.handlers.policy:NetworkPolicyHandler policy = kuryr_kubernetes.controller.handlers.policy:NetworkPolicyHandler
pod_label = kuryr_kubernetes.controller.handlers.pod_label:PodLabelHandler pod_label = kuryr_kubernetes.controller.handlers.pod_label:PodLabelHandler
kuryrnetpolicy = kuryr_kubernetes.controller.handlers.kuryrnetpolicy:KuryrNetPolicyHandler kuryrnetworkpolicy = kuryr_kubernetes.controller.handlers.kuryrnetworkpolicy:KuryrNetworkPolicyHandler
kuryrnetwork = kuryr_kubernetes.controller.handlers.kuryrnetwork:KuryrNetworkHandler kuryrnetwork = kuryr_kubernetes.controller.handlers.kuryrnetwork:KuryrNetworkHandler
kuryrnetwork_population = kuryr_kubernetes.controller.handlers.kuryrnetwork_population:KuryrNetworkPopulationHandler kuryrnetwork_population = kuryr_kubernetes.controller.handlers.kuryrnetwork_population:KuryrNetworkPopulationHandler
test_handler = kuryr_kubernetes.tests.unit.controller.handlers.test_fake_handler:TestHandler test_handler = kuryr_kubernetes.tests.unit.controller.handlers.test_fake_handler:TestHandler


@@ -38,6 +38,7 @@ sudo chown ${USER}:${USER} ${HOME}/.kube/config
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get endpoints -o yaml --all-namespaces >> ${K8S_LOG_DIR}/endpoints.txt /usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get endpoints -o yaml --all-namespaces >> ${K8S_LOG_DIR}/endpoints.txt
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrnetpolicy -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrnetpolicy_crds.txt /usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrnetpolicy -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrnetpolicy_crds.txt
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrport -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrport_crds.txt /usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrport -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrport_crds.txt
/usr/local/bin/kubectl --kubeconfig=${HOME}/.kube/config get kuryrnetworkpolicy -o yaml --all-namespaces >> ${K8S_LOG_DIR}/kuryrnetworkpolicy_crds.txt
# Kubernetes pods logs # Kubernetes pods logs
mkdir -p ${K8S_LOG_DIR}/pod_logs mkdir -p ${K8S_LOG_DIR}/pod_logs
while read -r line while read -r line