Retire kuryr-tempest-plugin: remove repo content

The kuryr-tempest-plugin repository is being retired [1],
and this commit removes the contents of the repo.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/923072
[1] https://review.opendev.org/c/openstack/governance/+/922507

Change-Id: Ia98b722be28b197cfea069396a1ca9b612a30aa9
Ghanshyam Mann 2024-06-28 14:41:59 -07:00
parent 115d6bd490
commit 82bfa1f624
49 changed files with 8 additions and 6113 deletions

.gitignore

@@ -1,23 +0,0 @@
*.egg-info
*.egg[s]
*.log
*.py[co]
.coverage
.testrepository
.tox
.venv
AUTHORS
ChangeLog
build
cover
develop-eggs
dist
doc/build
doc/html
eggs
sdist
target
doc/source/sample.config
# Files created by releasenotes build
releasenotes/build


@@ -1,57 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- project:
templates:
- check-requirements
- publish-openstack-docs-pti
- tempest-plugin-jobs
- kuryr-kubernetes-tempest-jobs
check:
jobs:
- kuryr-kubernetes-tempest-2023-1
- kuryr-kubernetes-tempest-zed
- kuryr-kubernetes-tempest-yoga
- kuryr-kubernetes-tempest-xena
- kuryr-kubernetes-tempest-wallaby
- job:
name: kuryr-kubernetes-tempest-2023-1
parent: kuryr-kubernetes-tempest
nodeset: openstack-single-node-jammy
override-checkout: stable/2023.1
- job:
name: kuryr-kubernetes-tempest-zed
parent: kuryr-kubernetes-tempest
nodeset: openstack-single-node-focal
override-checkout: stable/zed
- job:
name: kuryr-kubernetes-tempest-yoga
parent: kuryr-kubernetes-tempest
nodeset: openstack-single-node-focal
override-checkout: stable/yoga
- job:
name: kuryr-kubernetes-tempest-xena
parent: kuryr-kubernetes-tempest
nodeset: openstack-single-node-focal
override-checkout: stable/xena
- job:
name: kuryr-kubernetes-tempest-wallaby
parent: kuryr-kubernetes-tempest
nodeset: openstack-single-node-focal
override-checkout: stable/wallaby


@@ -1,20 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
You can find more Kuryr-Kubernetes specific info in our guide:
https://docs.openstack.org/kuryr-kubernetes/latest/index.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
https://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/kuryr-tempest-plugin


@@ -1,50 +0,0 @@
Kuryr Tempest Plugin Style Commandments
=======================================
- Step 1: Read the OpenStack Style Commandments
https://docs.openstack.org/hacking/latest/
- Step 2: Read on
Commit Messages
---------------
Using a common format for commit messages will help keep our git history
readable. Follow these guidelines:
- [S365] First, provide a brief summary of 50 characters or less. Summaries
longer than 72 characters will be rejected by the gate.
- [S364] The first line of the commit message should provide an accurate
description of the change, not just a reference to a bug or blueprint.
Imports
-------
- [S366, S367] Organize your imports according to the ``Import order``
Dictionaries/Lists
------------------
- [S360] Ensure default arguments are not mutable.
- [S368] Must use a dict comprehension instead of a dict constructor with a
sequence of key-value pairs. For more information, please refer to
http://legacy.python.org/dev/peps/pep-0274/
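A minimal illustration of the two rules above (the names are illustrative, not taken from the plugin):

```python
# [S360] A mutable default is shared across calls; use None as a sentinel.
def add_port(port, pool=None):
    if pool is None:
        pool = []
    pool.append(port)
    return pool

# [S368] Prefer a dict comprehension over dict() fed a sequence of pairs.
ports = ["eth0", "eth1"]
flagged = dict([(name, add_port(name)) for name in ports])   # flagged style
preferred = {name: add_port(name) for name in ports}         # preferred style

print(preferred == flagged)  # same result, but the comprehension avoids
                             # building an intermediate list of tuples
```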
Logs
----
- [S369] Check LOG.info translations
- [S370] Check LOG.error translations
- [S371] Check LOG.warning translations
- [S372] Check LOG.critical translation
- [S373] LOG.debug never used for translations
- [S374] You used a deprecated log level
Importing json
--------------
- [S375] Prefer ``jsonutils`` from ``oslo_serialization`` over the standard
``json`` module when operating on ``json`` objects.

LICENSE

@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.


@@ -1,60 +1,10 @@
========================
Team and repository tags
========================
This project is no longer maintained.
.. image:: https://governance.openstack.org/tc/badges/kuryr-tempest-plugin.svg
:target: https://governance.openstack.org/tc/reference/tags/index.html
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
.. Change things from this point on
============================
Tempest Integration of Kuryr
============================
Overview
========
This project defines a tempest plugin containing tests used to verify the
functionality of a kuryr installation. The plugin will automatically load
these tests into tempest.
For more information about Kuryr see:
https://docs.openstack.org/kuryr/latest/
For more information about Kuryr-kubernetes see:
https://docs.openstack.org/kuryr-kubernetes/latest/
For more information about Tempest plugins see:
https://docs.openstack.org/tempest/latest/plugin.html
* Free software: Apache license
* Documentation: https://docs.openstack.org/kuryr-tempest-plugin/latest/
* Source: https://opendev.org/openstack/kuryr-tempest-plugin
* Bugs: https://bugs.launchpad.net/kuryr
Installing
----------
Clone this repository and call from the repo::
$ pip install -e .
Running the tests
-----------------
To verify the functionality of Kuryr by running tests from this plugin,
first initialize stestr from the tempest repo::
$ stestr init
Then, to run all the tests from this plugin, call::
$ tempest run -r 'kuryr_tempest_plugin.*'
To run a single test case, call with full path, for example::
$ tempest run -r 'kuryr_tempest_plugin.tests.scenario.test_cross_ping.TestCrossPingScenario.test_vm_pod_ping*'
To retrieve a list of all tempest tests, run::
$ tempest run -l
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.


@@ -1,39 +0,0 @@
# install tempest plugin
function build_test_container {
pushd "${DEST}/kuryr-tempest-plugin/test_container"
# FIXME(dulek): Until https://github.com/containers/buildah/issues/1206 is
# resolved instead of podman we need to use buildah directly,
# hence this awful if clause.
if [[ ${CONTAINER_ENGINE} == 'crio' ]]; then
sudo buildah bud -t quay.io/kuryr/demo -f Dockerfile .
sudo buildah bud -t quay.io/kuryr/sctp-demo -f \
kuryr_sctp_demo/Dockerfile .
else
docker build -t quay.io/kuryr/demo . -f Dockerfile
docker build -t quay.io/kuryr/sctp-demo . -f \
kuryr_sctp_demo/Dockerfile
fi
popd
}
function install_kuryr_tempest_plugin {
setup_dev_lib "kuryr-tempest-plugin"
}
if [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Building kuryr/demo test container"
build_test_container
elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
# (gmann): Install Kuryr Tempest Plugin on the system only
# if INSTALL_TEMPEST is True. Irrespective of whether the plugin
# is installed system-wide by this file, Tempest and all enabled
# plugins will be installed and tested via venv. INSTALL_TEMPEST
# is False on stable branches, as master Tempest and its plugins'
# deps do not match stable branch deps.
if [[ "$INSTALL_TEMPEST" == "True" ]]; then
echo_summary "Installing Kuryr Tempest Plugin"
install_kuryr_tempest_plugin
fi
fi


@@ -1,6 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
sphinx>=2.0.0,!=2.1.0 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0
reno>=3.1.0 # Apache-2.0


@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'openstackdocstheme',
'reno.sphinxext'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'kuryr'
copyright = u'2017, OpenStack Foundation'
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/kuryr-tempest-plugin'
openstackdocs_auto_name = False
openstackdocs_bug_project = 'kuryr-kubernetes'
openstackdocs_bug_tag = ''
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
html_theme = 'openstackdocs'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}


@@ -1,9 +0,0 @@
.. include:: ../../README.rst
Scenario Tests
==============
.. toctree::
:maxdepth: 1
scenario_tests/scenario


@@ -1 +0,0 @@
../../../kuryr_tempest_plugin/tests/scenario/README.rst


@@ -1,6 +0,0 @@
===============================================
Tempest Integration of kuryr-tempest-plugin
===============================================
This directory contains Tempest tests to cover the kuryr-tempest-plugin project.


@@ -1,125 +0,0 @@
# Copyright 2015
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
service_option = cfg.BoolOpt("kuryr",
default=True,
help="Whether or not kuryr is expected to be "
"available")
ports_pool_batch = cfg.IntOpt("ports_pool_batch",
default=10,
help="The size of pool batch when "
"KURYR_USE_PORT_POOLS is enabled")
ports_pool_max = cfg.IntOpt("ports_pool_max",
default=0,
help="Maximum number of ports when "
"KURYR_USE_PORT_POOLS is enabled")
ports_pool_min = cfg.IntOpt("ports_pool_min",
default=5,
help="Minimum number of ports when "
"KURYR_USE_PORT_POOLS is enabled")
kuryr_k8s_opts = [
cfg.BoolOpt("port_pool_enabled", default=False,
help="Whether or not port pool feature is enabled"),
cfg.IntOpt("lb_build_timeout", default=1000,
help="The max time (in seconds) it should take to create LB"),
cfg.BoolOpt("namespace_enabled", default=False,
help="Whether or not namespace handler and driver are "
"enabled"),
cfg.BoolOpt("network_policy_enabled", default=False,
help="Whether or not network policy handler and driver are "
"enabled"),
cfg.BoolOpt("service_tests_enabled", default=True,
help="Whether or not service tests will be running"),
cfg.BoolOpt("containerized", default=False,
help="Whether or not kuryr-controller and kuryr-cni are "
"containerized"),
cfg.StrOpt("kube_system_namespace", default="kube-system",
help="Namespace where kuryr-controllers and kuryr-cnis run"),
cfg.BoolOpt("run_tests_serial", default=False,
help="Whether or not tests run serially or in parallel"),
cfg.StrOpt("kubernetes_project_name", default="k8s",
help="The OpenStack project name for Kubernetes"),
cfg.BoolOpt("npwg_multi_vif_enabled", default=False,
help="Whether or not NPWG multi-vif feature is enabled"),
cfg.StrOpt("ocp_router_fip", default=None, help="OCP Router floating IP"),
cfg.BoolOpt("kuryr_daemon_enabled", default=True, help="Whether or not "
"kuryr-kubernetes is configured to run with kuryr-daemon"),
cfg.BoolOpt("ap_ha", default=False,
help='Whether or not A/P HA of kuryr-controller is enabled'),
cfg.StrOpt("controller_deployment_name", default="kuryr-controller",
help="Name of Kubernetes Deployment running kuryr-controller "
"Pods"),
cfg.BoolOpt("test_udp_services", default=False,
help="Whether or not service UDP tests will be running"),
cfg.BoolOpt("test_sctp_services", default=False,
help="Whether or not service SCTP tests will be running"),
cfg.BoolOpt("multi_worker_setup", default=False, help="Whether or not we "
"have a multi-worker setup"),
cfg.BoolOpt("cloud_provider", default=False, help="Whether or not a "
"cloud provider is set"),
cfg.BoolOpt("validate_crd", default=False, help="Whether or not kuryr "
"CRDs should be validated"),
cfg.BoolOpt("kuryrloadbalancers", default=False, help="Whether or not "
"kuryrloadbalancers CRDs are used"),
cfg.BoolOpt("kuryrnetworks", default=False, help="Whether or not "
"kuryrnetworks CRDs are used"),
cfg.BoolOpt("new_kuryrnetworkpolicy_crd", default=False,
help="Whether or not KuryrNetworkPolicy CRDs are used instead "
"of KuryrNetPolicy"),
cfg.BoolOpt("configmap_modifiable", default=False, help="Whether config "
"map can be changed"),
cfg.StrOpt("controller_label", default="name=kuryr-controller",
help="The label is used to identify the Kuryr controller pods"),
cfg.BoolOpt("prepopulation_enabled", default=False,
help="Whether prepopulation of ports is enabled"),
cfg.BoolOpt("subnet_per_namespace", default=False,
help="Whether there is a subnet per each namespace"),
cfg.BoolOpt("ipv6", default=False,
help="True if Kuryr is configured to use IPv6 subnets as pod "
"and service subnets."),
cfg.BoolOpt("test_services_without_selector", default=False,
help="Whether or not service without selector tests will be "
"running"),
cfg.BoolOpt("test_endpoints_object_removal", default=True,
help="Whether to check that LB members are deleted upon "
"endpoints object removal or not"),
cfg.BoolOpt("test_configurable_listener_timeouts", default=False,
help="Whether or not listener timeout values are "
"configurable"),
cfg.IntOpt("lb_members_change_timeout", default=1200,
help="The max time (in seconds) it should take to adjust the "
"number of LB members"),
cfg.BoolOpt("enable_reconciliation", default=False,
help="Whether or not reconciliation is enabled"),
cfg.BoolOpt("enable_listener_reconciliation", default=False,
help="Whether or not listener reconciliation is enabled"),
cfg.IntOpt("lb_reconcile_timeout", default=600,
help="The max time (in seconds) it should take for LB "
"reconciliation. It doesn't include the LB build time."),
cfg.BoolOpt("trigger_namespace_upon_pod", default=False,
help="Whether or not Namespace should be handled upon Pod "
"creation"),
cfg.BoolOpt("annotation_project_driver", default=False,
help="Whether or not annotation project tests will be "
"running"),
cfg.BoolOpt("set_pod_security_context", default=False,
help="Whether or not to set security context for Kuryr demo "
"pods"),
]


@@ -1,49 +0,0 @@
# Copyright 2015
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from tempest.test_discover import plugins
from kuryr_tempest_plugin import config as project_config
class KuryrTempestPlugin(plugins.TempestPlugin):
def load_tests(self):
base_path = os.path.split(os.path.dirname(
os.path.abspath(__file__)))[0]
test_dir = "kuryr_tempest_plugin/tests"
full_test_dir = os.path.join(base_path, test_dir)
return full_test_dir, base_path
def register_opts(self, conf):
conf.register_opt(project_config.service_option,
group='service_available')
conf.register_opt(project_config.ports_pool_batch,
group='vif_pool')
conf.register_opt(project_config.ports_pool_min,
group='vif_pool')
conf.register_opt(project_config.ports_pool_max,
group='vif_pool')
conf.register_opts(project_config.kuryr_k8s_opts,
group='kuryr_kubernetes')
def get_opt_lists(self):
return [('service_available', [project_config.service_option]),
('kuryr_kubernetes', project_config.kuryr_k8s_opts),
('vif_pool', [project_config.ports_pool_batch,
project_config.ports_pool_min,
project_config.ports_pool_max])]
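For context, Tempest discovers a plugin class like this through a ``tempest.test_plugins`` entry point declared in the package's ``setup.cfg``. A sketch of that stanza (the entry-point name and module path here are assumptions, not copied from this repo):

```ini
[entry_points]
tempest.test_plugins =
    kuryr_tempest_tests = kuryr_tempest_plugin.plugin:KuryrTempestPlugin
```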


@@ -1,13 +0,0 @@
What are these tests?
---------------------
As stated in the tempest developer guide, scenario tests are meant to test
the interaction between several OpenStack services in a real-life use case.
In the case of the Kuryr Tempest Plugin it also involves interaction with
Kubernetes pods, so its manager class includes handlers to its python bindings.
A developer using this manager is able to perform, among other things, CRUD
operations on pods, alongside the added Kuryr-K8s functionality.
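As a purely illustrative, stdlib-only sketch of that manager pattern (the real manager drives a live cluster through the Kubernetes Python bindings; every name below is invented for illustration):

```python
class FakePodManager:
    """Illustrative CRUD surface, backed by a dict instead of a cluster."""

    def __init__(self):
        self._pods = {}

    def create_pod(self, name, spec):
        # A real manager would POST a pod manifest to the Kubernetes API.
        self._pods[name] = {"name": name, "spec": spec, "phase": "Running"}
        return self._pods[name]

    def get_pod(self, name):
        return self._pods[name]

    def delete_pod(self, name):
        return self._pods.pop(name)

mgr = FakePodManager()
mgr.create_pod("demo", {"image": "quay.io/kuryr/demo"})
print(mgr.get_pod("demo")["phase"])  # prints "Running"
mgr.delete_pod("demo")
```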

File diff suppressed because it is too large.


@@ -1,634 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import time
import kubernetes
from kubernetes import client as k8s_client
import netaddr
from oslo_log import log as logging
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
TIMEOUT_PERIOD = 180
class TestNetworkPolicyScenario(base.BaseKuryrScenarioTest,
metaclass=abc.ABCMeta):
@classmethod
def skip_checks(cls):
super(TestNetworkPolicyScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.network_policy_enabled:
raise cls.skipException('Network Policy driver and handler must '
'be enabled to run these tests')
def get_sg_rules_for_np(self, namespace, network_policy_name):
start = time.time()
sg_id = None
ready = False
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
sg_id, _, ready = self.get_np_crd_info(
name=network_policy_name, namespace=namespace)
if sg_id and ready:
break
except kubernetes.client.rest.ApiException:
continue
self.assertIsNotNone(sg_id)
self.assertTrue(ready)
return self.list_security_group_rules(sg_id)
def check_sg_rules_for_np(self, namespace, np,
ingress_cidrs_should_exist=(),
egress_cidrs_should_exist=(),
ingress_cidrs_shouldnt_exist=(),
egress_cidrs_shouldnt_exist=()):
ingress_cidrs_should_exist = set(ingress_cidrs_should_exist)
egress_cidrs_should_exist = set(egress_cidrs_should_exist)
ingress_cidrs_shouldnt_exist = set(ingress_cidrs_shouldnt_exist)
egress_cidrs_shouldnt_exist = set(egress_cidrs_shouldnt_exist)
rules_match = False
start = time.time()
while not rules_match and (time.time() - start) < TIMEOUT_PERIOD:
ingress_cidrs_found = set()
egress_cidrs_found = set()
sg_rules = self.get_sg_rules_for_np(namespace, np)
for rule in sg_rules:
if rule['direction'] == 'ingress':
ingress_cidrs_found.add(rule['remote_ip_prefix'])
elif rule['direction'] == 'egress':
egress_cidrs_found.add(rule['remote_ip_prefix'])
if (ingress_cidrs_should_exist.issubset(ingress_cidrs_found)
and (not ingress_cidrs_shouldnt_exist
or not ingress_cidrs_shouldnt_exist.issubset(
ingress_cidrs_found))
and egress_cidrs_should_exist.issubset(egress_cidrs_found)
and (not egress_cidrs_shouldnt_exist
or not egress_cidrs_shouldnt_exist.issubset(
egress_cidrs_found))):
rules_match = True
time.sleep(consts.NP_CHECK_SLEEP_TIME)
if not rules_match:
msg = 'Timed out waiting sg rules for np %s to match' % np
raise lib_exc.TimeoutException(msg)
@abc.abstractmethod
def get_np_crd_info(self, name, namespace='default', **kwargs):
pass
@decorators.idempotent_id('a9db5bc5-e921-4719-8201-5431537c86f8')
def test_ipblock_network_policy_sg_rules(self):
ingress_ipblock = "5.5.5.0/24"
egress_ipblock = "4.4.4.0/24"
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
np = self.create_network_policy(namespace=namespace_name,
ingress_ipblock_cidr=ingress_ipblock,
egress_ipblock_cidr=egress_ipblock,
ingress_port=2500)
LOG.debug("Creating network policy %s", np)
self.addCleanup(self.delete_network_policy, np.metadata.name,
namespace_name)
network_policy_name = np.metadata.name
sg_id = None
ready = False
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
sg_id, _, ready = self.get_np_crd_info(
network_policy_name, namespace=namespace_name)
if sg_id and ready:
break
except kubernetes.client.rest.ApiException:
continue
self.assertIsNotNone(sg_id)
self.assertTrue(ready)
sec_group_rules = self.list_security_group_rules(sg_id)
ingress_block_found, egress_block_found = False, False
for rule in sec_group_rules:
if (rule['direction'] == 'ingress' and
rule['remote_ip_prefix'] == ingress_ipblock):
ingress_block_found = True
if (rule['direction'] == 'egress' and
rule['remote_ip_prefix'] == egress_ipblock):
egress_block_found = True
self.assertTrue(ingress_block_found)
self.assertTrue(egress_block_found)
@decorators.idempotent_id('a9db5bc5-e921-4819-8301-5431437c76f8')
def test_ipblock_network_policy_allow_except(self):
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
pod_name, pod = self.create_pod(namespace=namespace_name)
self.addCleanup(self.delete_pod, pod_name, pod,
namespace=namespace_name)
if CONF.kuryr_kubernetes.kuryrnetworks:
cidr = self.get_kuryr_network_crds(
namespace_name)['status']['subnetCIDR']
else:
crd_name = 'ns-' + namespace_name
cidr = self.get_kuryr_net_crds(
crd_name)['spec']['subnetCIDR']
ipn = netaddr.IPNetwork(cidr)
max_prefixlen = "/32"
if ipn.version == 6:
max_prefixlen = "/128"
curl_tmpl = self.get_curl_template(cidr, extra_args='-m 5', port=True)
allow_all_cidr = cidr
pod_ip_list = []
pod_name_list = []
cmd_list = []
for i in range(4):
pod_name, pod = self.create_pod(namespace=namespace_name)
self.addCleanup(self.delete_pod, pod_name, pod,
namespace=namespace_name)
pod_name_list.append(pod_name)
pod_ip = self.get_pod_ip(pod_name, namespace=namespace_name)
pod_ip_list.append(pod_ip)
cmd = ["/bin/sh", "-c", curl_tmpl.format(pod_ip_list[i], ':8080')]
cmd_list.append(cmd)
# Check connectivity from pod4 to other pods before creating NP
for i in range(3):
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[3], cmd_list[i], namespace=namespace_name))
# Check connectivity from pod1 to pod4 before creating NP
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[0], cmd_list[3], namespace=namespace_name))
# Create NP allowing all besides first pod on ingress
# and second pod on egress
np = self.create_network_policy(
namespace=namespace_name, ingress_ipblock_cidr=allow_all_cidr,
ingress_ipblock_except=[pod_ip_list[0] + max_prefixlen],
egress_ipblock_cidr=allow_all_cidr,
egress_ipblock_except=[pod_ip_list[1] + max_prefixlen])
LOG.debug("Creating network policy %s", np)
network_policy_name = np.metadata.name
sg_id = None
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
sg_id, _, _ = self.get_np_crd_info(network_policy_name,
namespace=namespace_name)
if sg_id:
break
except kubernetes.client.rest.ApiException:
continue
if not sg_id:
msg = ('Timed out waiting for knp %s creation' %
network_policy_name)
raise lib_exc.TimeoutException(msg)
        # Wait for the security group rules of the policy to be applied
time.sleep(consts.TIME_TO_APPLY_SGS)
# Check that http connection from pod1 to pod4 is blocked
# after creating NP
self.assertNotIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[0], cmd_list[3], namespace=namespace_name))
# Check that http connection from pod4 to pod2 is blocked
# after creating NP
self.assertNotIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[3], cmd_list[1], namespace=namespace_name))
# Check that http connection from pod4 to pod1 is not blocked
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[3], cmd_list[0], namespace=namespace_name))
# Check that there is still http connection to pod3
# from pod4 as it's not blocked by IPblock rules
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[3], cmd_list[2], namespace=namespace_name))
# Delete network policy and check that there is still http connection
# between pods
self.delete_network_policy(np.metadata.name, namespace_name)
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
self.get_np_crd_info(network_policy_name,
namespace=namespace_name)
except kubernetes.client.rest.ApiException as e:
if e.status == 404:
break
else:
continue
else:
msg = ('Timed out waiting for knp %s deletion' %
network_policy_name)
raise lib_exc.TimeoutException(msg)
for i in range(3):
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[3], cmd_list[i], namespace=namespace_name))
for i in range(1, 4):
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
pod_name_list[0], cmd_list[i], namespace=namespace_name))
@decorators.idempotent_id('24577a9b-1d29-409b-8b60-da3b49d776b1')
def test_create_delete_network_policy(self):
np = self.create_network_policy()
        LOG.debug("Creating network policy %s", np)
network_policy_name = np.metadata.name
network_policies = self.list_network_policies()
sg_id = None
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
sg_id, _, _ = self.get_np_crd_info(network_policy_name)
if sg_id:
break
except kubernetes.client.rest.ApiException:
continue
self.assertIsNotNone(sg_id)
sgs = self.list_security_groups(fields='id')
sg_ids = [sg['id'] for sg in sgs]
self.assertIn(network_policy_name, network_policies)
self.assertIn(sg_id, sg_ids)
self.delete_network_policy(network_policy_name)
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
if network_policy_name in self.list_network_policies():
continue
try:
self.get_np_crd_info(network_policy_name)
except kubernetes.client.rest.ApiException:
break
sgs_after = self.list_security_groups(fields='id')
sg_ids_after = [sg['id'] for sg in sgs_after]
self.assertNotIn(sg_id, sg_ids_after)
@decorators.idempotent_id('44daf8fe-6a4b-4ca9-8685-97fb1f573e5e')
def test_update_network_policy(self):
        """Update a Network Policy with a new pod selector.

        This method creates a Network Policy with a specific pod selector
        and updates it with a new pod selector. The CRD should always have
        the same pod selector as the Policy.
        """
match_labels = {'app': 'demo'}
np = self.create_network_policy(match_labels=match_labels)
self.addCleanup(self.delete_network_policy, np.metadata.name)
network_policy_name = np.metadata.name
crd_pod_selector = None
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
_, crd_pod_selector, _ = self.get_np_crd_info(
network_policy_name)
if crd_pod_selector:
break
except kubernetes.client.rest.ApiException:
continue
self.assertIsNotNone(crd_pod_selector)
self.assertEqual(crd_pod_selector.get('matchLabels'), match_labels)
label_key = 'context'
match_labels = {label_key: 'demo'}
np = self.read_network_policy(np)
np.spec.pod_selector.match_labels = match_labels
np = self.update_network_policy(np)
labels = {}
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
_, crd_pod_selector, _ = self.get_np_crd_info(
network_policy_name)
labels = crd_pod_selector.get('matchLabels')
if labels.get(label_key):
break
except kubernetes.client.rest.ApiException:
continue
        if not labels.get(label_key):
            raise lib_exc.TimeoutException(
                'Timed out waiting for the pod selector of np %s to be '
                'updated' % network_policy_name)
self.assertEqual(labels, match_labels)
@decorators.idempotent_id('24577a9b-1d29-409b-8b60-da3c49d777c2')
def test_delete_namespace_with_network_policy(self):
ns_name, ns = self.create_namespace()
match_labels = {'role': 'db'}
np = self.create_network_policy(match_labels=match_labels,
namespace=ns_name)
        LOG.debug("Creating network policy %s", np)
network_policy_name = np.metadata.name
network_policies = self.list_network_policies(namespace=ns_name)
sg_id = None
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
sg_id, _, _ = self.get_np_crd_info(network_policy_name,
namespace=ns_name)
if sg_id:
break
except kubernetes.client.rest.ApiException:
continue
sgs = self.list_security_groups(fields='id')
sg_ids = [sg['id'] for sg in sgs]
self.assertIn(network_policy_name, network_policies)
self.assertIn(sg_id, sg_ids)
# delete namespace
self.delete_namespace(ns_name)
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
if network_policy_name in self.list_network_policies(
namespace=ns_name):
continue
try:
self.get_np_crd_info(network_policy_name, namespace=ns_name)
except kubernetes.client.rest.ApiException:
break
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
sgs_after = self.list_security_groups(fields='id')
sg_ids_after = [sg['id'] for sg in sgs_after]
if sg_id not in sg_ids_after:
break
time.sleep(consts.NP_CHECK_SLEEP_TIME)
if time.time() - start >= TIMEOUT_PERIOD:
raise lib_exc.TimeoutException('Sec group ID still exists')
@decorators.idempotent_id('24577a9b-1d46-419b-8b60-da3c49d777c3')
def test_create_delete_network_policy_non_default_ns(self):
ns_name, ns = self.create_namespace()
self.addCleanup(self.delete_namespace, ns_name)
match_labels = {'role': 'db'}
np = self.create_network_policy(match_labels=match_labels,
namespace=ns_name)
        LOG.debug("Creating network policy %s", np)
network_policy_name = np.metadata.name
network_policies = self.list_network_policies(namespace=ns_name)
sg_id = None
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
sg_id, _, _ = self.get_np_crd_info(network_policy_name,
namespace=ns_name)
if sg_id:
break
except kubernetes.client.rest.ApiException:
continue
sgs = self.list_security_groups(fields='id')
sg_ids = [sg['id'] for sg in sgs]
self.assertIn(network_policy_name, network_policies)
self.assertIn(sg_id, sg_ids)
self.delete_network_policy(network_policy_name, namespace=ns_name)
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
time.sleep(consts.NP_CHECK_SLEEP_TIME)
if network_policy_name in self.list_network_policies(
namespace=ns_name):
continue
try:
self.get_np_crd_info(network_policy_name, namespace=ns_name)
except kubernetes.client.rest.ApiException:
break
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
sgs_after = self.list_security_groups(fields='id')
sg_ids_after = [sg['id'] for sg in sgs_after]
if sg_id not in sg_ids_after:
break
time.sleep(consts.NP_CHECK_SLEEP_TIME)
if time.time() - start >= TIMEOUT_PERIOD:
raise lib_exc.TimeoutException('Sec group ID still exists')
@decorators.idempotent_id('a93b5bc5-e931-4719-8201-54315c5c86f8')
def test_network_policy_add_remove_pod(self):
np_name_server = 'allow-all-server'
np_name_client = 'allow-all-client'
server_label = {'app': 'server'}
client_label = {'app': 'client'}
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
self.create_setup_for_service_test(label='server',
namespace=namespace_name,
cleanup=False)
LOG.debug("A service %s and two pods were created in namespace %s",
self.service_name, namespace_name)
service_pods = self.get_pod_list(namespace=namespace_name,
label_selector='app=server')
service_pods_cidrs = [pod.status.pod_ip+'/32' for pod in service_pods]
(first_server_pod_cidr, first_server_pod_name) = (
service_pods[0].status.pod_ip+"/32",
service_pods[0].metadata.name)
client_pod_name = self.check_service_internal_connectivity(
namespace=namespace_name,
labels=client_label,
cleanup=False)
client_pod_ip = self.get_pod_ip(client_pod_name,
namespace=namespace_name)
client_pod_cidr = client_pod_ip + "/32"
LOG.debug("Client pod %s was created", client_pod_name)
LOG.debug("Connection to service %s from %s was successful",
self.service_name, client_pod_name)
# Check connectivity in the same namespace
connect_to_service_cmd = ["/bin/sh", "-c", "curl {dst_ip}".format(
dst_ip=self.service_ip)]
blocked_pod, _ = self.create_pod(namespace=namespace_name)
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(blocked_pod,
connect_to_service_cmd,
namespace_name))
pods_server_match_expression = {'key': 'app',
'operator': 'In',
'values': ['server']}
pods_client_match_expression = {'key': 'app',
'operator': 'In',
'values': ['client']}
np_server = self.create_network_policy(
name=np_name_server,
namespace=namespace_name,
match_labels=server_label,
ingress_match_expressions=[pods_client_match_expression],
egress_match_expressions=[pods_client_match_expression])
        LOG.debug("Network policy %s with match expression %s was created",
                  np_server, pods_client_match_expression)
self.addCleanup(self.delete_network_policy, np_server.metadata.name,
namespace_name)
np_client = self.create_network_policy(
name=np_name_client,
namespace=namespace_name,
match_labels=client_label,
ingress_match_expressions=[pods_server_match_expression],
egress_match_expressions=[pods_server_match_expression])
        LOG.debug("Network policy %s with match expression %s was created",
                  np_client, pods_server_match_expression)
self.addCleanup(self.delete_network_policy, np_client.metadata.name,
namespace_name)
self.check_sg_rules_for_np(
namespace_name, np_name_server,
ingress_cidrs_should_exist=[client_pod_cidr],
egress_cidrs_should_exist=[client_pod_cidr],
ingress_cidrs_shouldnt_exist=[],
egress_cidrs_shouldnt_exist=[])
self.check_sg_rules_for_np(
namespace_name, np_name_client,
ingress_cidrs_should_exist=service_pods_cidrs,
egress_cidrs_should_exist=service_pods_cidrs,
ingress_cidrs_shouldnt_exist=[],
egress_cidrs_shouldnt_exist=[])
self.check_service_internal_connectivity(namespace=namespace_name,
pod_name=client_pod_name)
LOG.debug("Connection to service %s from %s was successful after "
"network policy was applied",
self.service_name, client_pod_name)
# Check no connectivity from a pod not in the NP
self.assertNotIn(consts.POD_OUTPUT,
self.exec_command_in_pod(blocked_pod,
connect_to_service_cmd,
namespace_name))
self.delete_pod(first_server_pod_name, namespace=namespace_name)
LOG.debug("Deleted pod %s from service %s",
first_server_pod_name, self.service_name)
self.verify_lbaas_endpoints_configured(self.service_name,
1, namespace_name)
self.check_service_internal_connectivity(namespace=namespace_name,
pod_name=client_pod_name,
pod_num=1)
LOG.debug("Connection to service %s with one pod from %s was "
"successful", self.service_name, client_pod_name)
# Check that the deleted pod is removed from SG rules
self.check_sg_rules_for_np(
namespace_name, np_name_client,
ingress_cidrs_shouldnt_exist=[
first_server_pod_cidr],
egress_cidrs_shouldnt_exist=[
first_server_pod_cidr])
pod_name, pod = self.create_pod(labels=server_label,
namespace=namespace_name)
LOG.debug("Pod server %s with label %s was created",
pod_name, server_label)
self.verify_lbaas_endpoints_configured(self.service_name,
2, namespace_name)
service_pods = self.get_pod_list(namespace=namespace_name,
label_selector='app=server')
service_pods_cidrs = [pod.status.pod_ip+'/32' for pod in service_pods]
self.check_sg_rules_for_np(
namespace_name, np_name_server,
ingress_cidrs_should_exist=[client_pod_cidr],
egress_cidrs_should_exist=[client_pod_cidr],
ingress_cidrs_shouldnt_exist=[],
egress_cidrs_shouldnt_exist=[])
self.check_sg_rules_for_np(
namespace_name, np_name_client,
ingress_cidrs_should_exist=service_pods_cidrs,
egress_cidrs_should_exist=service_pods_cidrs)
self.check_service_internal_connectivity(namespace=namespace_name,
pod_name=client_pod_name)
LOG.debug("Connection to service %s from %s was successful",
self.service_name, client_pod_name)
# Check no connectivity from a pod not in the NP
self.assertNotIn(consts.POD_OUTPUT,
self.exec_command_in_pod(blocked_pod,
connect_to_service_cmd,
namespace_name))
@decorators.idempotent_id('ee018bf6-2d5d-4c4e-8c79-793f4772852f')
def test_network_policy_hairpin_traffic(self):
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
svc_name, svc_pods = self.create_setup_for_service_test(
namespace=namespace_name, cleanup=False, save=False, pod_num=1)
self.check_service_internal_connectivity(
namespace=namespace_name, pod_num=1, service_name=svc_name,
pod_name=svc_pods[0])
policy_name = data_utils.rand_name(prefix='kuryr-policy')
np = k8s_client.V1NetworkPolicy(
kind='NetworkPolicy',
api_version='networking.k8s.io/v1',
metadata=k8s_client.V1ObjectMeta(
name=policy_name,
namespace=namespace_name),
spec=k8s_client.V1NetworkPolicySpec(
pod_selector=k8s_client.V1LabelSelector(),
policy_types=['Egress', 'Ingress'],
ingress=[
k8s_client.V1NetworkPolicyIngressRule(
_from=[
k8s_client.V1NetworkPolicyPeer(
pod_selector=k8s_client.V1LabelSelector(),
),
],
),
],
egress=[
k8s_client.V1NetworkPolicyEgressRule(
to=[
k8s_client.V1NetworkPolicyPeer(
pod_selector=k8s_client.V1LabelSelector(),
),
],
),
],
),
)
k8s_client.NetworkingV1Api().create_namespaced_network_policy(
namespace=namespace_name, body=np)
# Just to wait for SGs.
self.get_sg_rules_for_np(namespace_name, policy_name)
self.check_service_internal_connectivity(
namespace=namespace_name, pod_num=1, service_name=svc_name,
pod_name=svc_pods[0])
@@ -1,35 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
POD_OUTPUT = 'HELLO! I AM ALIVE!!!'
HA_ENDPOINT_NAME = 'kuryr-controller'
POD_AFFINITY = {'requiredDuringSchedulingIgnoredDuringExecution': [
{'labelSelector': {'matchExpressions': [
{'operator': 'In', 'values': ['demo'], 'key': 'type'}]},
'topologyKey': 'kubernetes.io/hostname'}]}
TIME_TO_APPLY_SGS = 30
POD_STATUS_RETRIES = 240
POD_CHECK_TIMEOUT = 240
POD_CHECK_SLEEP_TIME = 5
NP_CHECK_SLEEP_TIME = 10
NS_TIMEOUT = 600
LB_TIMEOUT = 1200
LB_RECONCILE_TIMEOUT = 600
REPETITIONS_PER_BACKEND = 10
KURYR_RESOURCE_CHECK_TIMEOUT = 300
KURYR_PORT_CRD_PLURAL = 'kuryrports'
KURYR_LOAD_BALANCER_CRD_PLURAL = 'kuryrloadbalancers'
KURYR_NETWORK_POLICY_CRD_PLURAL = 'kuryrnetworkpolicies'
K8s_ANNOTATION_PROJECT = 'openstack.org/kuryr-project'
LOADBALANCER = 'loadbalancer'
LISTENER = 'listeners'
@@ -1,99 +0,0 @@
# Copyright 2022 Troila.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempest import config
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
CONF = config.CONF
class TestAnnotationProjectScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestAnnotationProjectScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.annotation_project_driver:
raise cls.skipException('Annotation project driver must be '
'enabled to run these tests')
@classmethod
def resource_setup(cls):
super(TestAnnotationProjectScenario, cls).resource_setup()
cls.project_id = cls.os_primary.projects_client.project_id
@decorators.idempotent_id('edb10a26-4572-4565-80e4-af16af4186d3')
def test_create_namespace_and_pod(self):
annotations = {consts.K8s_ANNOTATION_PROJECT: self.project_id}
namespace_name, namespace = self.create_namespace(
annotations=annotations)
# Ensure the namespace can be cleaned up upon tests finishing
self.namespaces.append(namespace)
pod_name, _ = self.create_pod(labels={"app": 'pod-label'},
namespace=namespace_name)
self.wait_for_kuryr_resource(
namespace_name, consts.KURYR_PORT_CRD_PLURAL,
pod_name, status_key='vifs')
if CONF.kuryr_kubernetes.kuryrnetworks:
kuryr_net_crd = self.get_kuryr_network_crds(namespace_name)
subnet = self.os_admin.subnets_client.show_subnet(
subnet_id=kuryr_net_crd['status']['subnetId'])
self.assertEqual(subnet['subnet']['project_id'], self.project_id)
network = self.os_admin.networks_client.show_network(
kuryr_net_crd['status']['netId'])
self.assertEqual(network['network']['project_id'], self.project_id)
ports = self.os_admin.ports_client.list_ports(
**{'project_id': self.project_id, 'device_owner': 'compute:kuryr'})
self.assertTrue(len(ports['ports']) > 0)
def test_create_service(self):
if not CONF.kuryr_kubernetes.kuryrloadbalancers:
raise self.skipException("Kuryrloadbalancers CRD should be "
"used to run this test.")
annotations = {consts.K8s_ANNOTATION_PROJECT: self.project_id}
namespace_name, namespace = self.create_namespace(
annotations=annotations)
self.namespaces.append(namespace)
service_name, pods = self.create_setup_for_service_test(
namespace=namespace_name)
kuryr_loadbalancer_crd = self.wait_for_kuryr_resource(
namespace_name, consts.KURYR_LOAD_BALANCER_CRD_PLURAL,
service_name, status_key='loadbalancer')
lb = self.lbaas.show_loadbalancer(
kuryr_loadbalancer_crd['status']['loadbalancer']['id'])
self.assertEqual(lb['project_id'], self.project_id)
def test_create_network_policy(self):
if not CONF.kuryr_kubernetes.network_policy_enabled:
raise self.skipException("Network policy handler and driver "
"should be used to run this test.")
annotations = {consts.K8s_ANNOTATION_PROJECT: self.project_id}
namespace_name, namespace = self.create_namespace(
annotations=annotations)
self.namespaces.append(namespace)
self.create_network_policy(
name='network-policy', namespace=namespace_name)
kuryr_network_policy_crd = self.wait_for_kuryr_resource(
namespace_name, consts.KURYR_NETWORK_POLICY_CRD_PLURAL,
'network-policy', status_key="securityGroupId")
sg_id = kuryr_network_policy_crd['status']['securityGroupId']
security_group = \
self.os_admin.security_groups_client.show_security_group(sg_id)
self.assertEqual(security_group['security_group']['project_id'],
self.project_id)
@@ -1,87 +0,0 @@
# Copyright 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions
from kuryr_tempest_plugin.tests.scenario import base
LOG = logging.getLogger(__name__)
CONF = config.CONF
class TestCrossPingScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestCrossPingScenario, cls).skip_checks()
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fddfb1a1a8')
def test_vm_pod_ping(self):
if CONF.kuryr_kubernetes.ipv6:
raise self.skipException('FIPs are not supported with IPv6')
pod_name, pod = self.create_pod()
self.addCleanup(self.delete_pod, pod_name, pod)
pod_fip = self.assign_fip_to_pod(pod_name)
ssh_client, fip = self.create_vm_for_connectivity_test()
# check connectivity from VM to Pod
cmd = ("ping -c4 -w4 %s &> /dev/null; echo $?" %
pod_fip['floatingip']['floating_ip_address'])
try:
            result = ssh_client.exec_command(cmd)
            if result.rstrip('\n') != '0':
                msg = ('Failed while trying to ping. Could not ping '
                       'from host "%s" to "%s".' % (
                           fip['floating_ip_address'],
                           pod_fip['floatingip']['floating_ip_address']))
                LOG.error(msg)
            self.assertEqual('0', result.rstrip('\n'))
except exceptions.SSHExecCommandFailed:
LOG.error("Couldn't ping server")
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fddfb1a1a8')
def test_pod_vm_ping(self):
if CONF.kuryr_kubernetes.ipv6:
raise self.skipException('FIPs are not supported with IPv6')
_, fip = self.create_vm_for_connectivity_test()
pod_name, pod = self.create_pod()
self.addCleanup(self.delete_pod, pod_name, pod)
# check connectivity from Pod to VM
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=fip['floating_ip_address'])]
self.assertEqual(self.exec_command_in_pod(pod_name, cmd), '0')
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fddfb1a2a9')
def test_pod_pod_ping(self):
pod_name_list = []
for i in range(2):
pod_name, pod = self.create_pod()
self.addCleanup(self.delete_pod, pod_name, pod)
pod_name_list.append(pod_name)
pod_ip = self.get_pod_ip(pod_name_list[1])
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=pod_ip)]
self.assertEqual(self.exec_command_in_pod(pod_name_list[0], cmd), '0')
@@ -1,58 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from tempest import config
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
class TestCrossPingScenarioMultiWorker(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestCrossPingScenarioMultiWorker, cls).skip_checks()
if not CONF.kuryr_kubernetes.multi_worker_setup:
raise cls.skipException("Multi node workers are not available")
def _test_cross_ping_multi_worker(self, same_node=True):
if same_node:
pod_name_list = self.create_two_pods_affinity_setup(
labels={'type': 'demo'},
affinity={'podAffinity': consts.POD_AFFINITY})
self.assertEqual(self.get_pod_node_name(pod_name_list[0]),
self.get_pod_node_name(pod_name_list[1]))
else:
pod_name_list = self.create_two_pods_affinity_setup(
labels={'type': 'demo'},
affinity={'podAntiAffinity': consts.POD_AFFINITY})
self.assertNotEqual(self.get_pod_node_name(pod_name_list[0]),
self.get_pod_node_name(pod_name_list[1]))
pod_ip = self.get_pod_ip(pod_name_list[1])
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=pod_ip)]
self.assertEqual(self.exec_command_in_pod(pod_name_list[0], cmd), '0')
@decorators.idempotent_id('7d036b6d-b5cf-47e9-a0c0-7696240a1c5e')
def test_pod_pod_ping_different_host(self):
self._test_cross_ping_multi_worker(same_node=False)
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fddfb1a2a9')
def test_pod_pod_ping_same_node(self):
self._test_cross_ping_multi_worker(same_node=True)
@@ -1,46 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tempest import config
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
CONF = config.CONF
class TestKuryrDaemon(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestKuryrDaemon, cls).skip_checks()
if (not CONF.kuryr_kubernetes.containerized) or (
not CONF.kuryr_kubernetes.kuryr_daemon_enabled):
raise cls.skipException("Kuryr cni should be containerized "
"and Kuryr Daemon should be enabled "
"to run this test.")
@decorators.idempotent_id('bddf5441-1244-a49d-a125-b5fd3fb111a7')
def test_kuryr_cni_daemon(self):
namespace = CONF.kuryr_kubernetes.kube_system_namespace
kube_system_pods = self.get_pod_name_list(
namespace=namespace)
cmd = ['cat', '/proc/1/cmdline']
for kuryr_pod_name in kube_system_pods:
if kuryr_pod_name.startswith('kuryr-cni'):
self.assertIn(
'kuryr-daemon --config-file',
self.exec_command_in_pod(kuryr_pod_name, cmd, namespace,
container='kuryr-cni'))
@@ -1,209 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import json
import threading
import time
import uuid
import kubernetes
from oslo_log import log as logging
from tempest import config
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
TIMEOUT = 120
class TestHighAvailabilityScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestHighAvailabilityScenario, cls).skip_checks()
if not (CONF.kuryr_kubernetes.ap_ha and
CONF.kuryr_kubernetes.containerized):
raise cls.skipException("kuryr-controller A/P HA must be enabled "
"and kuryr-kubernetes must run in "
"containerized mode.")
def get_kuryr_leader_annotation(self):
try:
endpoint = self.k8s_client.CoreV1Api().read_namespaced_endpoints(
consts.HA_ENDPOINT_NAME,
CONF.kuryr_kubernetes.kube_system_namespace)
annotation = endpoint.metadata.annotations[
'control-plane.alpha.kubernetes.io/leader']
return json.loads(annotation)
except kubernetes.client.rest.ApiException:
return None
def wait_for_deployment_scale(self, desired_replicas,
desired_state='Running'):
def has_scaled():
pods = self.k8s_client.CoreV1Api().list_namespaced_pod(
CONF.kuryr_kubernetes.kube_system_namespace,
label_selector='name=kuryr-controller')
return (len(pods.items) == desired_replicas and
all([pod.status.phase == desired_state
for pod in pods.items]))
self.assertTrue(test_utils.call_until_true(has_scaled, TIMEOUT, 5),
'Timed out waiting for deployment to scale')
def scale_controller_deployment(self, replicas):
self.k8s_client.AppsV1Api().patch_namespaced_deployment(
'kuryr-controller', CONF.kuryr_kubernetes.kube_system_namespace,
{'spec': {'replicas': replicas}})
self.wait_for_deployment_scale(replicas)
@decorators.idempotent_id('3f09e7d1-0897-46b1-ba9d-ea4116523025')
def test_scale_up_controller(self):
controller_deployment = (
self.k8s_client.AppsV1Api().read_namespaced_deployment(
CONF.kuryr_kubernetes.controller_deployment_name,
CONF.kuryr_kubernetes.kube_system_namespace))
# On cleanup scale to original number of replicas
self.addCleanup(self.scale_controller_deployment,
controller_deployment.spec.replicas)
# Scale to just a single replica
self.scale_controller_deployment(1)
# Create a pod and check connectivity
pod = self.create_and_ping_pod()
# Get current leader annotation
annotation = self.get_kuryr_leader_annotation()
self.assertIsNotNone(annotation)
transitions = annotation['leaderTransitions']
# Scale the controller up and wait until it starts
self.scale_controller_deployment(2)
# Check that the leader hasn't switched
annotation = self.get_kuryr_leader_annotation()
self.assertEqual(transitions, annotation['leaderTransitions'])
# Create another pod and check connectivity
self.create_and_ping_pod()
# Check connectivity to an existing pod
self.assertTrue(self.ping_ip_address(self.get_pod_ip(
pod.metadata.name)))
@decorators.idempotent_id('afe75fa5-e9ca-4f7d-bc16-8f1dd7884eea')
def test_scale_down_controller(self):
controller_deployment = (
self.k8s_client.AppsV1Api().read_namespaced_deployment(
CONF.kuryr_kubernetes.controller_deployment_name,
CONF.kuryr_kubernetes.kube_system_namespace))
# On cleanup scale to original number of replicas
self.addCleanup(self.scale_controller_deployment,
controller_deployment.spec.replicas)
# Scale to 2 replicas
self.scale_controller_deployment(2)
# Create a pod and check connectivity
pod = self.create_and_ping_pod()
# Scale the controller down and wait until it stops
self.scale_controller_deployment(1)
# Create another pod and check connectivity
self.create_and_ping_pod()
# Check connectivity to an existing pod
self.assertTrue(self.ping_ip_address(self.get_pod_ip(
pod.metadata.name)))
@decorators.idempotent_id('3b218c11-c77b-40a8-ba09-5dd5ae0f8ae3')
def test_auto_fencing(self):
controller_deployment = (
self.k8s_client.AppsV1Api().read_namespaced_deployment(
CONF.kuryr_kubernetes.controller_deployment_name,
CONF.kuryr_kubernetes.kube_system_namespace))
# On cleanup scale to original number of replicas
self.addCleanup(self.scale_controller_deployment,
controller_deployment.spec.replicas)
# Scale to 2 replicas
self.scale_controller_deployment(2)
# Create a pod and check connectivity
self.create_and_ping_pod()
def hostile_takeover():
"""Malform endpoint annotation to takeover the leadership
This method runs for 3 minutes and for that time it malforms the
endpoint annotation to simulate another kuryr-controller taking
over the leadership. This should make the other kuryr-controllers
step down and stop processing any events for those 3 minutes.
"""
timeout = datetime.datetime.utcnow() + datetime.timedelta(
minutes=3)
fake_name = str(uuid.uuid4())
while datetime.datetime.utcnow() < timeout:
current = datetime.datetime.utcnow()
renew = current + datetime.timedelta(seconds=5)
malformed = {
"holderIdentity": fake_name,
"leaseDurationSeconds": 5,
"acquireTime": current.strftime("%Y-%m-%dT%H:%M:%SZ"),
"renewTime": renew.strftime("%Y-%m-%dT%H:%M:%SZ"),
"leaderTransitions": 0,
}
self.k8s_client.CoreV1Api().patch_namespaced_endpoints(
consts.HA_ENDPOINT_NAME,
CONF.kuryr_kubernetes.kube_system_namespace,
{'metadata': {'annotations': {
'control-plane.alpha.kubernetes.io/leader':
json.dumps(malformed)}}})
time.sleep(2)
t = threading.Thread(target=hostile_takeover)
t.start()
# Create another pod and check that it's not getting wired.
time.sleep(15) # We need to wait a bit for controller to autofence.
name, pod = self.create_pod(wait_for_status=False)
def is_pod_running():
pod_obj = self.k8s_client.CoreV1Api().read_namespaced_pod(
name, 'default')
return pod_obj.status.phase == 'Running'
self.addCleanup(self.delete_pod, name)
self.assertFalse(test_utils.call_until_true(is_pod_running, TIMEOUT,
5))
# Wait 120 seconds more, malformed annotation should get cleared
time.sleep(TIMEOUT)
# Now pod should have the IP and be pingable
ip = self.get_pod_ip(name)
self.assertIsNotNone(ip)
self.assertTrue(self.ping_ip_address(ip, ping_timeout=TIMEOUT))
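The leader record that `get_kuryr_leader_annotation` decodes (and that `hostile_takeover` forges) is a JSON document stored in the `control-plane.alpha.kubernetes.io/leader` endpoint annotation. A minimal standalone sketch of parsing it and computing lease expiry, with an illustrative (assumed) holder name and timestamps:

```python
import datetime
import json

# Hypothetical annotation value, shaped like the record written by the
# Kubernetes leader-election machinery used in the test above.
raw = json.dumps({
    "holderIdentity": "kuryr-controller-7d5f9c",
    "leaseDurationSeconds": 5,
    "acquireTime": "2024-06-28T14:00:00Z",
    "renewTime": "2024-06-28T14:00:03Z",
    "leaderTransitions": 2,
})


def parse_leader_record(annotation):
    """Decode a leader-election record and compute when its lease expires."""
    record = json.loads(annotation)
    renew = datetime.datetime.strptime(
        record["renewTime"], "%Y-%m-%dT%H:%M:%SZ")
    expiry = renew + datetime.timedelta(
        seconds=record["leaseDurationSeconds"])
    return record, expiry


record, expiry = parse_leader_record(raw)
```

The auto-fencing test exploits exactly this format: by continually rewriting `holderIdentity` and `renewTime`, it keeps the forged lease "fresh" so the real controllers believe another instance holds the leadership.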


@@ -1,88 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from tempest import config
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
CONF = config.CONF
class TestKuryrRestartScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestKuryrRestartScenario, cls).skip_checks()
if (not CONF.kuryr_kubernetes.containerized or
not CONF.kuryr_kubernetes.run_tests_serial):
raise cls.skipException(
"CNI and controller should be containerized and this test "
"should run on gate, configured to run sequentially.")
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fdcfb1a1a7')
def test_kuryr_pod_delete(self):
# find kuryr CNI and controller pods, delete them one by one and create
# a regular pod just after removal
client_pod_name, pod = self.create_pod()
self.addCleanup(self.delete_pod, client_pod_name)
kube_system_pods = self.get_pod_name_list(
namespace=CONF.kuryr_kubernetes.kube_system_namespace)
for kuryr_pod_name in kube_system_pods:
if kuryr_pod_name.startswith('kuryr'):
self.delete_pod(
pod_name=kuryr_pod_name,
body={"kind": "DeleteOptions",
"apiVersion": "v1",
"gracePeriodSeconds": 0},
namespace=CONF.kuryr_kubernetes.kube_system_namespace)
# make sure the kuryr pod was deleted
pod_delete_retries = 6
while self.get_pod_status(
kuryr_pod_name,
namespace=CONF.kuryr_kubernetes.kube_system_namespace):
time.sleep(5)
pod_delete_retries -= 1
if pod_delete_retries == 0:
raise lib_exc.TimeoutException()
# Create a new pod while kuryr CNI or kuryr controller Pods are
# not in the running state.
# Check once for controller kuryr pod and once for CNI pod
pod_name, pod = self.create_pod()
self.addCleanup(self.delete_pod, pod_name)
dst_pod_ip = self.get_pod_ip(pod_name)
curl_tmpl = self.get_curl_template(dst_pod_ip,
extra_args='-m 10',
port=8080)
cmd = ["/bin/sh", "-c", curl_tmpl.format(dst_pod_ip, ':8080')]
self.assertIn(consts.POD_OUTPUT, self.exec_command_in_pod(
client_pod_name, cmd),
"Connectivity from %s to pod with ip %s failed." % (
client_pod_name, dst_pod_ip))
# Check that both kuryr-pods are up and running
# The newly created pods are running because create_pod is written
# that way. Will be refactored in another patch
for new_kuryr_pod in self.get_pod_name_list(
namespace=CONF.kuryr_kubernetes.kube_system_namespace):
self.assertEqual("Running", self.get_pod_status(
new_kuryr_pod,
namespace=CONF.kuryr_kubernetes.kube_system_namespace))


@@ -1,444 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import kubernetes
import time
from oslo_log import log as logging
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
TIMEOUT_PERIOD = 120
class TestNamespaceScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestNamespaceScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.subnet_per_namespace:
raise cls.skipException('Namespace handler and namespace subnet '
'driver must be enabled to run these '
'tests')
@classmethod
def setup_clients(cls):
super(TestNamespaceScenario, cls).setup_clients()
@decorators.idempotent_id('bddd5441-1244-429d-a125-b53ddfb132a9')
def test_namespace(self):
# Check resources are created
namespace_name, namespace = self.create_namespace()
self.namespaces.append(namespace)
ns_uid = namespace.metadata.uid
existing_namespaces = [ns.metadata.name
for ns in self.list_namespaces().items]
self.assertIn(namespace_name, existing_namespaces)
pod_name, pod = self.create_pod(labels={"app": 'pod-label'},
namespace=namespace_name)
kuryr_net_crd_name = 'ns-' + namespace_name
seen_subnets = self.os_admin.subnets_client.list_subnets()
seen_subnet_names = [n['name'] for n in seen_subnets['subnets']]
subnet_name = f"{ns_uid}/{namespace_name}"
if subnet_name not in seen_subnet_names:
subnet_name = 'ns/' + namespace_name + '-subnet'
if subnet_name not in seen_subnet_names:
subnet_name = namespace_name
self.assertIn(subnet_name, seen_subnet_names)
subnet_id = [n['id'] for n in seen_subnets['subnets']
if n['name'] == subnet_name]
net_id = [n['network_id'] for n in seen_subnets['subnets']
if n['name'] == subnet_name]
if CONF.kuryr_kubernetes.kuryrnetworks:
kuryr_net_crd = self.get_kuryr_network_crds(namespace_name)
self.assertIn(namespace_name,
kuryr_net_crd['metadata']['name'])
self.assertIn(kuryr_net_crd['status']['subnetId'], subnet_id)
self.assertIn(kuryr_net_crd['status']['netId'], net_id)
else:
kuryr_net_crd = self.get_kuryr_net_crds(kuryr_net_crd_name)
self.assertIn(kuryr_net_crd_name,
kuryr_net_crd['metadata']['name'])
self.assertIn(kuryr_net_crd['spec']['subnetId'], subnet_id)
self.assertIn(kuryr_net_crd['spec']['netId'], net_id)
# Check namespace pod connectivity
self.create_setup_for_service_test(namespace=namespace_name,
cleanup=False)
self.check_service_internal_connectivity(namespace=namespace_name,
cleanup=False)
# Check resources are deleted
self._delete_namespace_resources(namespace_name, kuryr_net_crd,
subnet_name)
@decorators.idempotent_id('bdde5441-1b44-449d-a125-b5fdbfb1a2a9')
def test_namespace_sg_isolation(self):
if not CONF.kuryr_kubernetes.namespace_enabled:
raise self.skipException('No need to run Namespace Isolation when '
'the security group driver is not '
'namespace')
# Check security group resources are created
ns1_name, ns1 = self.create_namespace()
ns2_name, ns2 = self.create_namespace()
existing_namespaces = [ns.metadata.name
for ns in self.list_namespaces().items]
seen_sgs = self.list_security_groups()
seen_sg_ids = [sg['id'] for sg in seen_sgs]
subnet_ns1_name, net_crd_ns1 = self._get_and_check_ns_resources(
ns1, existing_namespaces, seen_sg_ids)
subnet_ns2_name, net_crd_ns2 = self._get_and_check_ns_resources(
ns2, existing_namespaces, seen_sg_ids)
self.assertIn('default', existing_namespaces)
# Create pods in different namespaces
pod_ns1_name, pod_ns1 = self.create_pod(labels={"app": 'pod-label'},
namespace=ns1_name)
pod_ns2_name, pod_ns2 = self.create_pod(labels={"app": 'pod-label'},
namespace=ns2_name)
pod_nsdefault_name, pod_nsdefault = self.create_pod(
labels={"app": 'pod-label'}, namespace='default')
self.addCleanup(self.delete_pod, pod_nsdefault_name)
# Check namespace pod connectivity and isolation
pod_ns2_ip = self.get_pod_ip(pod_ns2_name, ns2_name)
pod_nsdefault_ip = self.get_pod_ip(pod_nsdefault_name)
# check connectivity from NS1 to default
cmd = ["/bin/sh", "-c", "curl {dst_ip}:8080".format(
dst_ip=pod_nsdefault_ip)]
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_ns1_name, cmd, ns1_name))
# check no connectivity from NS1 to NS2
cmd = ["/bin/sh", "-c", "curl {dst_ip}:8080".format(
dst_ip=pod_ns2_ip)]
self.assertNotIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_ns1_name, cmd, ns1_name))
# check connectivity from default to NS2
cmd = ["/bin/sh", "-c", "curl {dst_ip}:8080".format(
dst_ip=pod_ns2_ip)]
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_nsdefault_name, cmd))
seen_subnets = self.os_admin.subnets_client.list_subnets()
seen_subnet_names = [n['name'] for n in seen_subnets['subnets']]
if subnet_ns1_name not in seen_subnet_names:
subnet_ns1_name = f'ns/{ns1_name}-subnet'
if subnet_ns1_name not in seen_subnet_names:
subnet_ns1_name = ns1_name
if subnet_ns2_name not in seen_subnet_names:
subnet_ns2_name = f'ns/{ns2_name}-subnet'
if subnet_ns2_name not in seen_subnet_names:
subnet_ns2_name = ns2_name
self._delete_namespace_resources(ns1_name, net_crd_ns1,
subnet_ns1_name)
self._delete_namespace_resources(ns2_name, net_crd_ns2,
subnet_ns2_name)
def _get_and_check_ns_resources(self, ns, existing_namespaces,
existing_sgs):
ns_name = ns.metadata.name
ns_uid = ns.metadata.uid
subnet_ns_name = f'{ns_uid}/{ns_name}'
net_crd_ns_name = 'ns-' + ns_name
self.assertIn(ns_name, existing_namespaces)
net_crd_ns = self.get_kuryr_net_crds(net_crd_ns_name)
self.assertIn(net_crd_ns_name, net_crd_ns['metadata']['name'])
self.assertIn(net_crd_ns['spec']['sgId'], existing_sgs)
return subnet_ns_name, net_crd_ns
def _create_ns_resources(self, namespace, labels=None,
spec_type='ClusterIP', checking_pod=None):
pod_name, pod_ns = self.create_pod(labels=labels, namespace=namespace)
svc_name, _ = self.create_service(pod_label=pod_ns.metadata.labels,
spec_type=spec_type,
namespace=namespace)
svc_ip = self.get_service_ip(service_name=svc_name,
spec_type=spec_type,
namespace=namespace)
# Wait for service to be ready
if checking_pod:
self.assert_backend_amount_from_pod(
svc_ip, 1, checking_pod,
namespace_name='default')
else:
self.assert_backend_amount_from_pod(
svc_ip, 1, pod_name,
namespace_name=namespace)
return pod_name, svc_ip
@decorators.unstable_test(bug='1853603')
@decorators.idempotent_id('b43f5421-1244-449d-a125-b5fddfb1a2a9')
def test_namespace_sg_svc_isolation(self):
if not CONF.kuryr_kubernetes.namespace_enabled:
raise self.skipException('No need to run Namespace Isolation when '
'Namespace driver is not enabled')
# Check security group resources are created
ns1_name, ns1 = self.create_namespace()
ns2_name, ns2 = self.create_namespace()
existing_namespaces = [ns.metadata.name
for ns in self.list_namespaces().items]
seen_sgs = self.list_security_groups()
seen_sg_ids = [sg['id'] for sg in seen_sgs]
subnet_ns1_name, net_crd_ns1 = self._get_and_check_ns_resources(
ns1, existing_namespaces, seen_sg_ids)
subnet_ns2_name, net_crd_ns2 = self._get_and_check_ns_resources(
ns2, existing_namespaces, seen_sg_ids)
self.assertIn('default', existing_namespaces)
pod_nsdefault_name, pod_nsdefault = self.create_pod(
labels={"app": 'pod-label'}, namespace='default')
self.addCleanup(self.delete_pod, pod_nsdefault_name)
# Create pods and services in different namespaces
pod_ns1_name, svc_ns1_ip = self._create_ns_resources(
ns1_name, labels={"app": 'pod-label'},
checking_pod=pod_nsdefault_name)
pod_ns2_name, svc_ns2_ip = self._create_ns_resources(
ns2_name, labels={"app": 'pod-label'}, spec_type='LoadBalancer',
checking_pod=pod_nsdefault_name)
# Check namespace svc connectivity and isolation
# check connectivity from NS1 pod to NS1 service
cmd = ["/bin/sh", "-c", "curl {dst_ip}".format(
dst_ip=svc_ns1_ip)]
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_ns1_name, cmd, ns1_name))
# check no connectivity from NS2 pod to NS1 service
cmd = ["/bin/sh", "-c", "curl {dst_ip}".format(
dst_ip=svc_ns1_ip)]
self.assertNotIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_ns2_name, cmd, ns2_name))
# check connectivity from default pod to NS1 service
cmd = ["/bin/sh", "-c", "curl {dst_ip}".format(
dst_ip=svc_ns1_ip)]
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_nsdefault_name, cmd))
# check connectivity from NS1 pod to NS2 LoadBalancer type service
cmd = ["/bin/sh", "-c", "curl {dst_ip}".format(
dst_ip=svc_ns2_ip)]
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(pod_ns1_name, cmd, ns1_name))
# Check resources are deleted
seen_subnets = self.os_admin.subnets_client.list_subnets()
seen_subnet_names = [n['name'] for n in seen_subnets['subnets']]
if subnet_ns1_name not in seen_subnet_names:
subnet_ns1_name = f'ns/{ns1_name}-subnet'
if subnet_ns1_name not in seen_subnet_names:
subnet_ns1_name = ns1_name
if subnet_ns2_name not in seen_subnet_names:
subnet_ns2_name = f'ns/{ns2_name}-subnet'
if subnet_ns2_name not in seen_subnet_names:
subnet_ns2_name = ns2_name
self._delete_namespace_resources(ns1_name, net_crd_ns1,
subnet_ns1_name)
self._delete_namespace_resources(ns2_name, net_crd_ns2,
subnet_ns2_name)
@decorators.idempotent_id('bddd5441-1244-429d-a125-b53ddfb132a9')
def test_host_to_namespace_pod_connectivity(self):
# Create namespace and pod in that namespace
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
# Check host to namespace pod and pod to host connectivity
pod_name, pod = self.create_pod(labels={"app": 'pod-label'},
namespace=namespace_name)
pod_ip = self.get_pod_ip(pod_name, namespace=namespace_name)
host_ip_of_pod = self.get_host_ip_for_pod(
pod_name, namespace=namespace_name)
# Check connectivity to pod in the namespace from host pod resides on
self.assertTrue(self.ping_ip_address(pod_ip))
# check connectivity from Pod to host pod resides on
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=host_ip_of_pod)]
self.assertEqual(self.exec_command_in_pod(
pod_name, cmd, namespace_name), '0')
def _delete_namespace_resources(self, namespace, net_crd, subnet):
# Check resources are deleted
self.delete_namespace(namespace)
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
time.sleep(10)
try:
if CONF.kuryr_kubernetes.kuryrnetworks:
self.get_kuryr_network_crds(namespace)
else:
self.get_kuryr_net_crds(net_crd['metadata']['name'])
except kubernetes.client.rest.ApiException:
break
# Also wait for the namespace removal
while time.time() - start < TIMEOUT_PERIOD:
time.sleep(10)
try:
self.get_namespace(namespace)
except kubernetes.client.rest.ApiException:
break
existing_namespaces = [ns.metadata.name
for ns in self.list_namespaces().items]
self.assertNotIn(namespace, existing_namespaces)
seen_subnets = self.os_admin.subnets_client.list_subnets()
seen_subnet_names = [n['name'] for n in seen_subnets['subnets']]
self.assertNotIn(subnet, seen_subnet_names)
@decorators.idempotent_id('90b7cb81-f80e-4ff3-9892-9e5fdcd08289')
def test_create_kuryrnet_crd_without_net_id(self):
if not CONF.kuryr_kubernetes.validate_crd:
raise self.skipException('CRD validation must be enabled to run '
'this test.')
if CONF.kuryr_kubernetes.kuryrnetworks:
raise self.skipException('Kuryrnetworks CRD should not be used '
'to run this test.')
kuryrnet = dict(self._get_kuryrnet_obj())
del kuryrnet['spec']['netId']
error_msg = 'spec.netId in body is required'
field = 'spec.netId'
self._create_kuryr_net_crd_obj(kuryrnet, error_msg, field)
@decorators.idempotent_id('94641749-9fdf-4fb2-a46d-064f75eac113')
def test_create_kuryrnet_crd_with_populated_as_string(self):
if not CONF.kuryr_kubernetes.validate_crd:
raise self.skipException('CRD validation must be enabled to run '
'this test.')
if CONF.kuryr_kubernetes.kuryrnetworks:
raise self.skipException('Kuryrnetworks CRD should not be used '
'to run this test.')
kuryrnet = dict(self._get_kuryrnet_obj())
kuryrnet['spec']['populated'] = 'False'
error_msg = 'spec.populated in body must be of type boolean'
field = 'populated'
self._create_kuryr_net_crd_obj(kuryrnet, error_msg, field)
def _get_kuryrnet_obj(self):
return {
"apiVersion": "openstack.org/v1",
"kind": "KuryrNet",
"metadata": {
"annotations": {
"namespaceName": "kube-system"
},
"name": "ns-test",
},
"spec": {
"netId": "",
"routerId": "",
"subnetCIDR": "",
"subnetId": ""
}
}
def _create_kuryr_net_crd_obj(self, crd_manifest, error_msg, field):
if CONF.kuryr_kubernetes.kuryrnetworks:
raise self.skipException('Kuryrnetworks CRD should not be used '
'to run this test.')
version = 'v1'
group = 'openstack.org'
plural = 'kuryrnets'
custom_obj_api = self.k8s_client.CustomObjectsApi()
try:
custom_obj_api.create_cluster_custom_object(
group, version, plural, crd_manifest)
except kubernetes.client.rest.ApiException as e:
self.assertEqual(e.status, 422)
error_body = json.loads(e.body)
error_causes = error_body['details']['causes']
err_msg_cause = error_causes[0].get('message', "")
err_field_cause = error_causes[0].get('field', "[]")
if err_field_cause != "[]":
self.assertIn(field, err_field_cause)
else:
self.assertTrue(error_msg in err_msg_cause)
else:
body = self.k8s_client.V1DeleteOptions()
self.addCleanup(custom_obj_api.delete_cluster_custom_object,
group, version, plural,
crd_manifest['metadata']['name'], body)
raise Exception('{} for Kuryr Net CRD'.format(error_msg))
@decorators.idempotent_id('9e3ddb2d-d765-4ac5-8ab0-6a404adddd49')
def test_recreate_pod_in_namespace(self):
ns_name = data_utils.rand_name(prefix='kuryr-ns')
ns_name, ns = self.create_namespace(
name=ns_name, wait_for_crd=False)
# Allow controller manager to create a token for the service account
time.sleep(1)
self.addCleanup(self.delete_namespace, ns_name)
pod_name, pod = self.create_pod(
namespace=ns_name, wait_for_status=False)
self.delete_namespace(ns_name)
# wait for namespace to be deleted
# FIXME(itzikb) Set retries to 24 when BZ#1997120 is fixed
retries = 120
while True:
try:
time.sleep(5)
self.k8s_client.CoreV1Api().read_namespace(ns_name)
retries -= 1
self.assertNotEqual(0, retries,
"Timed out waiting for namespace %s to"
" be deleted" % ns_name)
except kubernetes.client.rest.ApiException as e:
if e.status == 404:
break
ns_name, ns = self.create_namespace(
name=ns_name, wait_for_crd=False)
# Allow controller manager to create a token for the service account
time.sleep(1)
pod_name, pod = self.create_pod(
namespace=ns_name, wait_for_status=False)
self.wait_for_pod_status(pod_name, namespace=ns_name,
pod_status='Running', retries=180)


@@ -1,369 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import time
import kubernetes
from oslo_log import log as logging
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import base_network_policy as base_np
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
TIMEOUT_PERIOD = 120
KURYR_NET_POLICY_CRD_PLURAL = 'kuryrnetpolicies'
KURYR_NETWORK_POLICY_CRD_PLURAL = 'kuryrnetworkpolicies'
class OldNetworkPolicyScenario(base_np.TestNetworkPolicyScenario):
@classmethod
def skip_checks(cls):
super(OldNetworkPolicyScenario, cls).skip_checks()
if CONF.kuryr_kubernetes.new_kuryrnetworkpolicy_crd:
raise cls.skipException(
'Old KuryrNetPolicy NP CRDs must be used to run these tests')
@decorators.idempotent_id('09a24a0f-322a-40ea-bb89-5b2246c8725d')
def test_create_knp_crd_without_ingress_rules(self):
if not CONF.kuryr_kubernetes.validate_crd:
raise self.skipException('CRD validation must be enabled to run '
'this test.')
np_name = 'test'
knp_obj = dict(self._get_knp_obj(np_name))
del knp_obj['spec']['ingressSgRules']
error_msg = 'ingressSgRules in body is required'
field = 'ingressSgRules'
self._create_kuryr_net_policy_crd_obj(knp_obj, error_msg, field)
@decorators.idempotent_id('f036d26e-f603-4d00-ad92-b409b5a3ee6c')
def test_create_knp_crd_without_sg_rule_id(self):
if not CONF.kuryr_kubernetes.validate_crd:
raise self.skipException('CRD validation must be enabled to run '
'this test.')
np_name = 'test'
sg_rule = dict(self._get_sg_rule())
del sg_rule['id']
knp_obj = self._get_knp_obj(np_name, sg_rule)
error_msg = 'security_group_rule.id in body is required'
field = 'security_group_rule.id'
self._create_kuryr_net_policy_crd_obj(knp_obj, error_msg, field)
@decorators.idempotent_id('47f0e412-3e13-40b2-93e5-503790df870b')
def test_create_knp_crd_with_networkpolicy_spec_wrong_type(self):
if not CONF.kuryr_kubernetes.validate_crd:
raise self.skipException('CRD validation must be enabled to run '
'this test.')
np_name = 'test'
knp_obj = dict(self._get_knp_obj(np_name))
knp_obj['spec']['networkpolicy_spec'] = []
error_msg = 'networkpolicy_spec in body must be of type object'
field = 'networkpolicy_spec'
self._create_kuryr_net_policy_crd_obj(knp_obj, error_msg, field)
def _get_sg_rule(self):
return {
'description': 'kuryr-kubernetes netpolicy sg rule',
'direction': 'egress',
'ethertype': 'ipv4',
'id': '',
'security_group_id': ''
}
def _get_knp_obj(self, name, sg_rule=None, namespace='default'):
if not sg_rule:
sg_rule = self._get_sg_rule()
return {
'apiVersion': 'openstack.org/v1',
'kind': 'KuryrNetPolicy',
'metadata':
{
'name': "np-" + name,
'annotations': {
'networkpolicy_name': name,
'networkpolicy_namespace': namespace,
'networkpolicy_uid': ''
}
},
'spec': {
'egressSgRules': [{'security_group_rule': sg_rule}],
'ingressSgRules': [],
'networkpolicy_spec': {
'policyTypes': ['Ingress'],
'podSelector': {}},
'podSelector': {},
'securityGroupId': '',
'securityGroupName': "sg-" + name}}
def _create_kuryr_net_policy_crd_obj(self, crd_manifest, error_msg,
field, namespace='default'):
version = 'v1'
group = 'openstack.org'
plural = 'kuryrnetpolicies'
custom_obj_api = self.k8s_client.CustomObjectsApi()
try:
custom_obj_api.create_namespaced_custom_object(
group, version, namespace, plural, crd_manifest)
except kubernetes.client.rest.ApiException as e:
self.assertEqual(e.status, 422)
error_body = json.loads(e.body)
error_causes = error_body['details']['causes']
err_msg_cause = error_causes[0].get('message', "")
err_field_cause = error_causes[0].get('field', "[]")
if err_field_cause != "[]":
self.assertTrue(field in err_field_cause)
else:
self.assertTrue(error_msg in err_msg_cause)
else:
body = self.k8s_client.V1DeleteOptions()
self.addCleanup(custom_obj_api.delete_namespaced_custom_object,
group, version, namespace, plural,
crd_manifest['metadata']['name'], body)
raise Exception('{} for Kuryr Net Policy CRD'.format(error_msg))
def get_np_crd_info(self, name, namespace='default', **kwargs):
name = 'np-' + name
crd = self.k8s_client.CustomObjectsApi().get_namespaced_custom_object(
group=base.KURYR_CRD_GROUP, version=base.KURYR_CRD_VERSION,
namespace=namespace, plural=KURYR_NET_POLICY_CRD_PLURAL,
name=name, **kwargs)
return (crd['spec'].get('securityGroupId'),
crd['spec'].get('podSelector'),
True)
class NetworkPolicyScenario(base_np.TestNetworkPolicyScenario):
@classmethod
def skip_checks(cls):
super(NetworkPolicyScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.new_kuryrnetworkpolicy_crd:
raise cls.skipException(
'New KuryrNetworkPolicy NP CRDs must be used to run these '
'tests')
def get_np_crd_info(self, name, namespace='default', **kwargs):
crd = self.k8s_client.CustomObjectsApi().get_namespaced_custom_object(
group=base.KURYR_CRD_GROUP, version=base.KURYR_CRD_VERSION,
namespace=namespace, plural=KURYR_NETWORK_POLICY_CRD_PLURAL,
name=name, **kwargs)
expected = len(crd['spec'].get('egressSgRules', []) +
crd['spec'].get('ingressSgRules', []))
existing = len(crd['status']['securityGroupRules'])
# Third result tells us if all the SG rules are created.
return (crd['status'].get('securityGroupId'),
crd['status'].get('podSelector'),
expected == existing)
class ServiceWOSelectorsNPScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(ServiceWOSelectorsNPScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.network_policy_enabled:
raise cls.skipException('Network Policy driver and handler must '
'be enabled to run these tests')
if not CONF.kuryr_kubernetes.test_services_without_selector:
raise cls.skipException("Service without selectors tests are not "
"enabled")
if not CONF.kuryr_kubernetes.new_kuryrnetworkpolicy_crd:
raise cls.skipException('New KuryrNetworkPolicy NP CRDs must be '
'used to run these tests')
def get_np_crd_info(self, name, namespace='default', **kwargs):
crd = self.k8s_client.CustomObjectsApi().get_namespaced_custom_object(
group=base.KURYR_CRD_GROUP, version=base.KURYR_CRD_VERSION,
namespace=namespace, plural=KURYR_NETWORK_POLICY_CRD_PLURAL,
name=name, **kwargs)
expected = len(crd['spec'].get('egressSgRules', []) +
crd['spec'].get('ingressSgRules', []))
existing = len(crd['status']['securityGroupRules'])
# Third result tells us if all the SG rules are created.
return (crd['status'].get('securityGroupId'),
crd['status'].get('podSelector'),
expected == existing)
@decorators.idempotent_id('abcfa34d-078c-485f-a80d-765c173d7652')
def test_egress_np_to_service_wo_selectors(self):
# create namespace for client
client_ns_name = data_utils.rand_name(prefix='client-ns')
client_label = {'app': data_utils.rand_name('client')}
self.create_namespace(name=client_ns_name)
self.addCleanup(self.delete_namespace, client_ns_name)
# create client pod in client ns
client_pod_name = data_utils.rand_name(prefix='client-pod')
self.create_pod(namespace=client_ns_name, name=client_pod_name,
labels=client_label)
# create ns for server
server_ns_name = data_utils.rand_name(prefix='server-ns')
server_label = {'app': data_utils.rand_name('server')}
self.create_namespace(name=server_ns_name, labels=server_label)
self.addCleanup(self.delete_namespace, server_ns_name)
# create server pod under it
server_pod_name = data_utils.rand_name(prefix='server-pod')
self.create_pod(namespace=server_ns_name, name=server_pod_name,
labels=server_label)
server_pod_addr = self.get_pod_ip(server_pod_name,
namespace=server_ns_name)
# create another server pod with different label
server2_label = {'app': data_utils.rand_name('server2')}
server2_pod_name = data_utils.rand_name(prefix='server2-pod')
self.create_pod(namespace=server_ns_name, name=server2_pod_name,
labels=server2_label)
server2_pod_addr = self.get_pod_ip(server2_pod_name,
namespace=server_ns_name)
# create service w/o selectors
service_name, _ = self.create_service(namespace=server_ns_name,
pod_label=None)
# manually create endpoint for the service
endpoint = self.k8s_client.V1Endpoints()
endpoint.metadata = self.k8s_client.V1ObjectMeta(name=service_name)
addresses = [self.k8s_client.V1EndpointAddress(ip=server_pod_addr)]
try:
ports = [self.k8s_client.V1EndpointPort(
name=None, port=8080, protocol='TCP')]
except AttributeError:
# FIXME(dulek): kubernetes==21.7.0 renamed V1EndpointPort to
# CoreV1EndpointPort, probably mistakenly. Bugreport:
# https://github.com/kubernetes-client/python/issues/1661
ports = [self.k8s_client.CoreV1EndpointPort(
name=None, port=8080, protocol='TCP')]
endpoint.subsets = [self.k8s_client.V1EndpointSubset(
addresses=addresses,
ports=ports)]
self.k8s_client.CoreV1Api().create_namespaced_endpoints(
namespace=server_ns_name, body=endpoint)
# create another service
service2_name, _ = self.create_service(namespace=server_ns_name,
pod_label=None)
# manually create endpoint for the service
endpoint = self.k8s_client.V1Endpoints()
endpoint.metadata = self.k8s_client.V1ObjectMeta(name=service2_name)
addresses = [self.k8s_client.V1EndpointAddress(ip=server2_pod_addr)]
try:
ports = [self.k8s_client.V1EndpointPort(
name=None, port=8080, protocol='TCP')]
except AttributeError:
# FIXME(dulek): kubernetes==21.7.0 renamed V1EndpointPort to
# CoreV1EndpointPort, probably mistakenly. Bugreport:
# https://github.com/kubernetes-client/python/issues/1661
ports = [self.k8s_client.CoreV1EndpointPort(
name=None, port=8080, protocol='TCP')]
endpoint.subsets = [self.k8s_client.V1EndpointSubset(
addresses=addresses,
ports=ports)]
self.k8s_client.CoreV1Api().create_namespaced_endpoints(
namespace=server_ns_name, body=endpoint)
# check endpoints configured
service_ip = self.get_service_ip(service_name,
namespace=server_ns_name)
service2_ip = self.get_service_ip(service2_name,
namespace=server_ns_name)
self.verify_lbaas_endpoints_configured(service_name, 1, server_ns_name)
self.verify_lbaas_endpoints_configured(service2_name, 1,
server_ns_name)
self.wait_until_service_LB_is_active(service_name, server_ns_name)
self.wait_until_service_LB_is_active(service2_name, server_ns_name)
# check connectivity
curl_tmpl = self.get_curl_template(service_ip, extra_args='-m 10')
cmd = ["/bin/sh", "-c", curl_tmpl.format(service_ip)]
cmd2 = ["/bin/sh", "-c", curl_tmpl.format(service2_ip)]
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(client_pod_name, cmd,
namespace=client_ns_name),
"Connectivity from %s to service %s (%s) failed." %
(client_pod_name, service_ip, service_name))
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(client_pod_name, cmd2,
namespace=client_ns_name),
"Connectivity from %s to service2 %s (%s) failed." %
(client_pod_name, service2_ip, service2_name))
# create NP for client to be able to reach server
np_name = data_utils.rand_name(prefix='kuryr-np')
np = self.k8s_client.V1NetworkPolicy()
np.kind = 'NetworkPolicy'
np.api_version = 'networking.k8s.io/v1'
np.metadata = self.k8s_client.V1ObjectMeta(name=np_name,
namespace=client_ns_name)
to = self.k8s_client.V1NetworkPolicyPeer(
pod_selector=self.k8s_client.V1LabelSelector(
match_labels=server_label),
namespace_selector=self.k8s_client.V1LabelSelector(
match_labels=server_label))
np.spec = self.k8s_client.V1NetworkPolicySpec(
pod_selector=self.k8s_client.V1LabelSelector(
match_labels=client_label),
policy_types=['Egress'],
egress=[self.k8s_client.V1NetworkPolicyEgressRule(to=[to])])
np = (self.k8s_client.NetworkingV1Api()
.create_namespaced_network_policy(namespace=client_ns_name,
body=np))
self.addCleanup(self.delete_network_policy, np.metadata.name,
client_ns_name)
start = time.time()
while time.time() - start < TIMEOUT_PERIOD:
try:
time.sleep(5)
_, _, ready = self.get_np_crd_info(np_name, client_ns_name)
if ready:
break
except kubernetes.client.rest.ApiException as e:
LOG.info("ApiException occurred: %s", e.body)
continue
else:
msg = "Timed out waiting for %s %s CRD pod selector" % (
np_name, KURYR_NETWORK_POLICY_CRD_PLURAL)
raise lib_exc.TimeoutException(msg)
# Even though the SG rules are up it might still take a moment until
# they're enforced.
time.sleep(10)
# after applying NP, we still should have an access from client to the
# service with matched labels,
self.assertIn(consts.POD_OUTPUT,
self.exec_command_in_pod(client_pod_name, cmd,
namespace=client_ns_name))
# while for the other service, we should not.
self.assertNotIn(consts.POD_OUTPUT,
self.exec_command_in_pod(client_pod_name, cmd2,
namespace=client_ns_name))


@@ -1,84 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_log import log as logging
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
LOG = logging.getLogger(__name__)
CONF = config.CONF
K8S_ANNOTATION_PREFIX = 'openstack.org/kuryr'
NAD_CRD_NAME = "network-attachment-definitions.k8s.cni.cncf.io"
class TestNpwgMultiVifScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestNpwgMultiVifScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.npwg_multi_vif_enabled:
raise cls.skipException(
"NPWG Multi-VIF feature should be enabled to run this test.")
@decorators.idempotent_id('bddf3211-1244-449d-a125-b5fddfb1a3aa')
def test_npwg_multi_vif(self):
nad_name, nad = self._create_network_crd_obj()
# create a pod with additional interfaces
annotations = {'k8s.v1.cni.cncf.io/networks': nad_name}
pod_name, pod = self.create_pod(annotations=annotations)
command = ['/bin/ip', 'a']
output = self.exec_command_in_pod(pod_name, command)
self.assertIn('eth1', output)
self.addCleanup(self.delete_pod, pod_name, pod)
def _create_network_crd_obj(self, name=None, namespace='default'):
if not name:
name = data_utils.rand_name(prefix='net')
self.new_net = self._create_network()
self.new_subnet = self.create_subnet(network=self.new_net)
subnet_id = self.new_subnet['id']
self.nad_obj_manifest = {
'apiVersion': 'k8s.cni.cncf.io/v1',
'kind': 'NetworkAttachmentDefinition',
'metadata':
{
'name': name,
'annotations': {
'openstack.org/kuryr-config':
'{"subnetId": "' + subnet_id + '"}'
}
}
}
version = 'v1'
group = 'k8s.cni.cncf.io'
plural = 'network-attachment-definitions'
custom_obj_api = self.k8s_client.CustomObjectsApi()
obj = custom_obj_api.create_namespaced_custom_object(
group, version, namespace, plural, self.nad_obj_manifest
)
body = self.k8s_client.V1DeleteOptions()
self.addCleanup(custom_obj_api.delete_namespaced_custom_object,
group, version, namespace, plural, name, body)
return name, obj


@@ -1,432 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import testtools
import time
from oslo_concurrency import lockutils
from oslo_log import log as logging
from tempest import config
from tempest.lib import decorators
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
PREPOPULAION_RETRIES = 60
NON_PREPOPULAION_RETRIES = 5
class TestPortPoolScenario(base.BaseKuryrScenarioTest):
CONFIG_MAP_NAME = 'kuryr-config'
CONF_TO_UPDATE = 'kuryr.conf'
VIF_POOL_SECTION = 'vif_pool'
@classmethod
def skip_checks(cls):
super(TestPortPoolScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.subnet_per_namespace:
raise cls.skipException('Subnet per namespace must be '
'enabled to run these tests')
if not CONF.kuryr_kubernetes.port_pool_enabled:
raise cls.skipException(
"Port pool feature should be enabled to run these tests.")
@classmethod
def resource_setup(cls):
super(TestPortPoolScenario, cls).resource_setup()
cls.PORTS_POOL_DEFAULT_DICT = cls.get_config_map_ini_value(
name=cls.CONFIG_MAP_NAME, conf_for_get=cls.CONF_TO_UPDATE,
section=cls.VIF_POOL_SECTION, keys=[
'ports_pool_batch', 'ports_pool_min', 'ports_pool_max',
'ports_pool_update_frequency'])
def get_subnet_id_for_ns(self, namespace):
subnets_list = self.os_admin.subnets_client.list_subnets()
ns_name = namespace.metadata.name
ns_uid = namespace.metadata.uid
for subnet_name in (f'{ns_uid}/{ns_name}',
f'{ns_name}/{ns_uid}',
f'{ns_name}-subnet',
f'ns/{ns_name}-subnet',
ns_name):
subnet_id = [n['id'] for n in subnets_list['subnets']
if n['name'] == subnet_name]
if subnet_id:
return subnet_id[0]
return None
def check_initial_ports_num(self, subnet_id, namespace_name, pool_batch):
# check the original length of list of ports for new ns
num_nodes = len(self.k8s_client.CoreV1Api().list_node().items)
if CONF.kuryr_kubernetes.prepopulation_enabled:
num_ports_initial = num_nodes * (int(pool_batch) // 2 + 1)
retries = PREPOPULAION_RETRIES
else:
num_ports_initial = 1
retries = NON_PREPOPULAION_RETRIES
port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
while port_list_num != num_ports_initial:
retries -= 1
if retries == 0:
self.assertNotEqual(0, retries, "Timed out waiting for port "
"prepopulation for namespace %s to end. "
"Expected %d ports, "
"found %d" % (namespace_name,
num_ports_initial,
port_list_num))
time.sleep(3)
port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
return port_list_num
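The wait-and-recheck loop in `check_initial_ports_num` (and the similar one in `check_ports_num_increase` below) shares a shape that could be factored into a generic poller; a minimal sketch, where `wait_for` and its parameters are illustrative rather than part of the plugin:

```python
import time

def wait_for(predicate, retries=5, interval=3):
    # Re-evaluate predicate() until it returns a truthy value or the
    # retry budget runs out; return the last observed value either way.
    value = predicate()
    while not value and retries > 0:
        retries -= 1
        time.sleep(interval)
        value = predicate()
    return value
```

A caller would pass a closure such as `lambda: port_count(subnet_id) == expected` and assert on the returned value.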
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fddfb1a3aa')
@lockutils.synchronized('port-pool-restarts')
def test_port_pool(self):
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
pool_batch = self.PORTS_POOL_DEFAULT_DICT['ports_pool_batch']
if CONF.kuryr_kubernetes.trigger_namespace_upon_pod:
port_list_num = 1
# create a pod to test the port pool increase
pod_name1, _ = self.create_pod(namespace=namespace_name,
labels={'type': 'demo'})
subnet_id = self.get_subnet_id_for_ns(namespace)
else:
subnet_id = self.get_subnet_id_for_ns(namespace)
port_list_num = self.check_initial_ports_num(subnet_id,
namespace_name,
pool_batch)
# create a pod to test the port pool increase
pod_name1, _ = self.create_pod(namespace=namespace_name,
labels={'type': 'demo'})
# port number should increase by ports_pool_batch value
updated_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
LOG.info("New_port_list_num = {} while pool_batch_conf = {}".format(
updated_port_list_num, self.PORTS_POOL_DEFAULT_DICT[
'ports_pool_batch']))
num_to_compare = updated_port_list_num - int(
self.PORTS_POOL_DEFAULT_DICT['ports_pool_batch'])
self.assertEqual(num_to_compare, port_list_num)
# create additional pod
self.create_pod(namespace=namespace_name,
affinity={'podAffinity': consts.POD_AFFINITY})
# the port pool should stay the same
updated2_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
self.assertEqual(updated_port_list_num, updated2_port_list_num)
# to test the reload of the pools, we will also test the restart of the
# kuryr-controller
self.restart_kuryr_controller()
port_list_num_after_restart = len(
self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
self.assertEqual(updated_port_list_num, port_list_num_after_restart,
"Number of Neutron ports on namespace %s subnet "
"changed after kuryr-controller "
"restart" % namespace_name)
# create additional pod
pod_name3, _ = self.create_pod(
namespace=namespace_name,
affinity={'podAffinity': consts.POD_AFFINITY})
# Check the number of ports based on the values of ports_pool_batch
# and ports_pool_min. If the difference between them is smaller than
# three, creating another pod triggers allocation of another
# ports_pool_batch worth of ports; otherwise no new ports are created
# when the new pod is scheduled.
ports_pool_batch = int(self.PORTS_POOL_DEFAULT_DICT[
'ports_pool_batch'])
ports_pool_min = int(self.PORTS_POOL_DEFAULT_DICT['ports_pool_min'])
population_threshold = ports_pool_batch - ports_pool_min
if population_threshold < 3:
num_ports_expected = ports_pool_batch * 2 + 1
else:
num_ports_expected = ports_pool_batch + 1
updated3_port_list_num = self.check_ports_num_increase(
num_ports_expected, subnet_id)
self.assertEqual(num_ports_expected, updated3_port_list_num)
# check connectivity between pods
pod_ip = self.get_pod_ip(pod_name1, namespace=namespace_name)
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=pod_ip)]
self.assertEqual(self.exec_command_in_pod(
pod_name3, cmd, namespace=namespace_name), '0')
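The population-threshold branching above can be read as a small pure function over the pool settings; a sketch of the arithmetic (the helper name `expected_port_count` is ours, not the plugin's):

```python
def expected_port_count(ports_pool_batch, ports_pool_min):
    # If fewer than 3 ports would remain free above the minimum, the
    # third pod triggers allocation of another full batch; the +1 is
    # the port consumed by the first pod.
    if ports_pool_batch - ports_pool_min < 3:
        return ports_pool_batch * 2 + 1
    return ports_pool_batch + 1
```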
@decorators.idempotent_id('bddd5441-1244-429d-a125-b55ddfb134a9')
@testtools.skipUnless(
CONF.kuryr_kubernetes.configmap_modifiable,
"Config map must be modifiable")
@lockutils.synchronized('port-pool-restarts')
def test_port_pool_update(self):
UPDATED_POOL_BATCH = int(self.PORTS_POOL_DEFAULT_DICT[
'ports_pool_batch']) + 2
# Check resources are created
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
subnet_id = self.get_subnet_id_for_ns(namespace)
self.update_config_map_ini_section_and_restart(
name=self.CONFIG_MAP_NAME,
conf_to_update=self.CONF_TO_UPDATE,
section=self.VIF_POOL_SECTION,
ports_pool_max=0,
ports_pool_batch=UPDATED_POOL_BATCH,
ports_pool_min=1)
self.addCleanup(
self.update_config_map_ini_section_and_restart,
self.CONFIG_MAP_NAME, self.CONF_TO_UPDATE, self.VIF_POOL_SECTION,
ports_pool_batch=self.PORTS_POOL_DEFAULT_DICT['ports_pool_batch'],
ports_pool_max=self.PORTS_POOL_DEFAULT_DICT['ports_pool_max'],
ports_pool_min=self.PORTS_POOL_DEFAULT_DICT['ports_pool_min'])
# check the original length of list of ports for new ns
initial_port_list_num = self.check_initial_ports_num(
subnet_id,
namespace_name,
UPDATED_POOL_BATCH)
# create a pod to test the port pool increase by updated value
pod_name1, pod1 = self.create_pod(namespace=namespace_name,
labels={'type': 'demo'})
# port number should increase by updated ports_pool_batch value
updated_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
if not CONF.kuryr_kubernetes.prepopulation_enabled:
num_to_compare = updated_port_list_num - initial_port_list_num
else:
num_to_compare = UPDATED_POOL_BATCH
self.assertEqual(num_to_compare, UPDATED_POOL_BATCH)
# create additional pod
pod_name2, pod2 = self.create_pod(
namespace=namespace_name,
affinity={'podAffinity': consts.POD_AFFINITY})
# the total number of ports should stay the same
updated2_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
self.assertEqual(updated_port_list_num, updated2_port_list_num)
# check connectivity between pods
pod_ip = self.get_pod_ip(pod_name1, namespace=namespace_name)
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=pod_ip)]
self.assertEqual(self.exec_command_in_pod(
pod_name2, cmd, namespace=namespace_name), '0')
@decorators.idempotent_id('bddd5441-1244-459d-a133-b56ddfb147a6')
@testtools.skipUnless(
CONF.kuryr_kubernetes.configmap_modifiable,
"Config map must be modifiable")
@lockutils.synchronized('port-pool-restarts')
def test_port_pool_noop_update(self):
KUBERNETES_SECTION = 'kubernetes'
VIF_POOL_SECTION = 'vif_pool'
# Check resources are created
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
subnet_id = self.get_subnet_id_for_ns(namespace)
# Read the value of the drivers
update_pools_vif_drivers = self.get_config_map_ini_value(
name=self.CONFIG_MAP_NAME, conf_for_get=self.CONF_TO_UPDATE,
section=VIF_POOL_SECTION,
keys=['pools_vif_drivers'])['pools_vif_drivers']
vif_pool_driver = self.get_config_map_ini_value(
name=self.CONFIG_MAP_NAME, conf_for_get=self.CONF_TO_UPDATE,
section=KUBERNETES_SECTION,
keys=['vif_pool_driver'])['vif_pool_driver']
if update_pools_vif_drivers:
self.update_config_map_ini_section(
name=self.CONFIG_MAP_NAME,
conf_to_update=self.CONF_TO_UPDATE,
section=VIF_POOL_SECTION,
pools_vif_drivers='')
self.addCleanup(
self.update_config_map_ini_section,
self.CONFIG_MAP_NAME, self.CONF_TO_UPDATE, VIF_POOL_SECTION,
pools_vif_drivers=update_pools_vif_drivers)
self.update_config_map_ini_section_and_restart(
name=self.CONFIG_MAP_NAME,
conf_to_update=self.CONF_TO_UPDATE,
section=KUBERNETES_SECTION,
vif_pool_driver='noop')
self.addCleanup(
self.update_config_map_ini_section_and_restart,
self.CONFIG_MAP_NAME, self.CONF_TO_UPDATE, KUBERNETES_SECTION,
vif_pool_driver=vif_pool_driver)
# check the original length of list of ports for new ns
initial_ports_num = self.check_initial_ports_num(subnet_id,
namespace_name,
1)
# create a pod to test the port pool increase by 1
self.create_pod(namespace=namespace_name, labels={'type': 'demo'})
# port number should increase by 1
new_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
self.assertEqual(initial_ports_num+1, new_port_list_num)
# update pools_vif_drivers and vif_pool_driver to the previous values
if update_pools_vif_drivers:
self.update_config_map_ini_section(
name=self.CONFIG_MAP_NAME,
conf_to_update=self.CONF_TO_UPDATE,
section=VIF_POOL_SECTION,
pools_vif_drivers='')
self.update_config_map_ini_section_and_restart(
name=self.CONFIG_MAP_NAME,
conf_to_update=self.CONF_TO_UPDATE,
section=KUBERNETES_SECTION,
vif_pool_driver=vif_pool_driver)
# check that everything works as before when returning back from noop
# configuration for vif_pool_driver
self.create_pod(namespace=namespace_name,
affinity={'podAffinity': consts.POD_AFFINITY})
# port number should increase by default ports_pool_batch value
updated_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
num_to_compare = updated_port_list_num - int(
self.PORTS_POOL_DEFAULT_DICT['ports_pool_batch'])
self.assertEqual(num_to_compare, new_port_list_num)
def check_ports_num_increase(self, expected_num_ports,
subnet_id, retries=5):
for i in range(retries):
port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
if port_list_num == expected_num_ports:
break
time.sleep(5)
return port_list_num
@decorators.idempotent_id('bddd5441-1244-429d-a123-b55ddfb137a6')
@testtools.skipUnless(
CONF.kuryr_kubernetes.configmap_modifiable,
"Config map must be modifiable")
@lockutils.synchronized('port-pool-restarts')
def test_port_pool_min_max_update(self):
POOL_BATCH = 2
POOL_MAX = 2
POOL_MIN = 1
# Check resources are created
namespace_name, namespace = self.create_namespace()
self.addCleanup(self.delete_namespace, namespace_name)
subnet_id = self.get_subnet_id_for_ns(namespace)
self.update_config_map_ini_section_and_restart(
name=self.CONFIG_MAP_NAME,
conf_to_update=self.CONF_TO_UPDATE,
section=self.VIF_POOL_SECTION,
ports_pool_max=POOL_MAX,
ports_pool_batch=POOL_BATCH,
ports_pool_min=POOL_MIN)
self.addCleanup(
self.update_config_map_ini_section_and_restart,
self.CONFIG_MAP_NAME, self.CONF_TO_UPDATE, self.VIF_POOL_SECTION,
ports_pool_batch=self.PORTS_POOL_DEFAULT_DICT['ports_pool_batch'],
ports_pool_max=self.PORTS_POOL_DEFAULT_DICT['ports_pool_max'],
ports_pool_min=self.PORTS_POOL_DEFAULT_DICT['ports_pool_min'])
initial_ports_num = self.check_initial_ports_num(subnet_id,
namespace_name,
POOL_BATCH)
# create a pod to test the port pool increase by updated batch value
pod_name1, pod1 = self.create_pod(namespace=namespace_name,
labels={'type': 'demo'})
# port number should increase by updated ports_pool_batch value
updated_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
updated_port_list_num = self.check_ports_num_increase(
initial_ports_num + POOL_BATCH, subnet_id)
self.assertEqual(updated_port_list_num, initial_ports_num + POOL_BATCH)
# need to wait till ports_pool_update_frequency expires so new batch
# creation could be executed in order to create additional pod
time.sleep(int(
self.PORTS_POOL_DEFAULT_DICT['ports_pool_update_frequency']))
pod_name2, pod2 = self.create_pod(
namespace=namespace_name,
affinity={'podAffinity': consts.POD_AFFINITY})
# the total number of ports should increase by 2 as there is only 1
# port free and POOL_MIN=1, so new port batch will be created
# We wait a bit because sometimes it takes a while for the ports to be
# created
updated2_port_list_num = self.check_ports_num_increase(
initial_ports_num + 2 * POOL_BATCH, subnet_id)
self.assertEqual(updated2_port_list_num,
initial_ports_num + 2 * POOL_BATCH)
# create additional pod
pod_name3, pod3 = self.create_pod(
namespace=namespace_name,
affinity={'podAffinity': consts.POD_AFFINITY})
# We wait to make sure no additional ports are created
time.sleep(30)
# the total number of ports should stay the same
updated3_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
self.assertEqual(updated2_port_list_num, updated3_port_list_num)
# check connectivity between pods
pod_ip = self.get_pod_ip(pod_name1, namespace=namespace_name)
cmd = [
"/bin/sh", "-c", "ping -c 4 {dst_ip}>/dev/null ; echo $?".format(
dst_ip=pod_ip)]
self.assertEqual(self.exec_command_in_pod(
pod_name2, cmd, namespace=namespace_name), '0')
# delete all pods and make sure the number of new ports added during
# pod creation is equal to POOL_MAX
for pod_name in (pod_name1, pod_name2, pod_name3):
self.delete_pod(pod_name, namespace=namespace_name)
# timeout is needed as it takes time for ports to be deleted
time.sleep(30)
updated4_port_list_num = len(self.os_admin.ports_client.list_ports(
fixed_ips='subnet_id=%s' % subnet_id)['ports'])
LOG.info("Number of ports after pods deletion and timeout = {}".format(
updated4_port_list_num))
self.assertEqual(updated4_port_list_num - initial_ports_num, POOL_MAX)


@@ -1,384 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import testtools
import time
import kubernetes
from oslo_log import log as logging
from tempest import config
from tempest.lib.common.utils import data_utils
from tempest.lib.common.utils import test_utils
from tempest.lib import decorators
from tempest.lib import exceptions as lib_exc
from kuryr_tempest_plugin.tests.scenario import base
from kuryr_tempest_plugin.tests.scenario import consts
LOG = logging.getLogger(__name__)
CONF = config.CONF
class TestServiceScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestServiceScenario, cls).skip_checks()
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
if not CONF.kuryr_kubernetes.service_tests_enabled:
raise cls.skipException("Service tests are not enabled")
@classmethod
def resource_setup(cls):
super(TestServiceScenario, cls).resource_setup()
name = data_utils.rand_name(prefix='pod-service-curl')
cls.create_setup_for_service_test(service_name=name)
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fdcfa1a7a9')
def test_pod_service_curl(self):
LOG.info("Trying to curl the service IP %s" % self.service_ip)
self.check_service_internal_connectivity()
class TestLoadBalancerServiceScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestLoadBalancerServiceScenario, cls).skip_checks()
if not CONF.network_feature_enabled.floating_ips:
raise cls.skipException("Floating ips are not available")
if not CONF.kuryr_kubernetes.service_tests_enabled:
raise cls.skipException("Service tests are not enabled")
if CONF.kuryr_kubernetes.ipv6:
raise cls.skipException('FIPs are not supported with IPv6')
@classmethod
def resource_setup(cls):
super(TestLoadBalancerServiceScenario, cls).resource_setup()
cls.create_setup_for_service_test(spec_type="LoadBalancer")
@decorators.idempotent_id('bddf5441-1244-449d-a175-b5fdcfc2a1a9')
def test_lb_service_http(self):
retries = 10
self.check_service_internal_connectivity()
LOG.info("Trying to curl the service IP %s" % self.service_ip)
for i in range(retries):
self.assert_backend_amount(self.service_ip, self.pod_num)
time.sleep(30)
# TODO(yboaron): Use multi threads for 'test_vm_service_http' test
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fdcfa1b5a9')
def test_vm_service_http(self):
self.check_service_internal_connectivity()
ssh_client, fip = self.create_vm_for_connectivity_test()
LOG.info("Trying to curl the service IP %s from VM" % self.service_ip)
cmd = ("curl {dst_ip}".format(dst_ip=self.service_ip))
def curl():
return ssh_client.exec_command(cmd)
self._run_and_assert_fn(curl)
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fdbfc1b2a7')
@testtools.skipUnless(
CONF.kuryr_kubernetes.containerized,
"test_unsupported_service_type only runs on containerized setups")
def test_unsupported_service_type(self):
# Test that the kuryr controller does not crash for 100 seconds after
# creation of a service with an unsupported type
self.check_service_internal_connectivity()
self.create_setup_for_service_test(spec_type="NodePort", get_ip=False)
self.check_controller_pod_status_for_time_period()
class TestUdpServiceScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestUdpServiceScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.service_tests_enabled:
raise cls.skipException("Service tests are not enabled")
if not CONF.kuryr_kubernetes.test_udp_services:
raise cls.skipException("Service UDP tests are not enabled")
@decorators.idempotent_id('bddf5441-1244-449d-a125-b5fda1670781')
def test_service_udp_ping(self):
# NOTE(ltomasbo): Using LoadBalancer type to avoid namespace isolation
# restrictions as this test targets svc udp testing and not the
# isolation
self.create_setup_for_service_test(protocol="UDP", port=90,
target_port=9090)
# NOTE(ltomasbo): Ensure usage of svc clusterIP IP instead of the FIP
# as the focus of this test is not to check FIP connectivity.
self.check_service_internal_connectivity(service_port='90',
protocol='UDP')
class TestServiceWithoutSelectorScenario(base.BaseKuryrScenarioTest):
credentials = ['admin', 'primary', ['lb_admin', 'load-balancer_admin']]
@classmethod
def skip_checks(cls):
super(TestServiceWithoutSelectorScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.test_services_without_selector:
raise cls.skipException("Service without selectors tests"
" are not enabled")
@classmethod
def setup_clients(cls):
super(TestServiceWithoutSelectorScenario, cls).setup_clients()
cls.lbaas = cls.os_roles_lb_admin.load_balancer_v2.LoadbalancerClient()
cls.member_client = cls.os_admin.load_balancer_v2.MemberClient()
cls.pool_client = cls.os_roles_lb_admin.load_balancer_v2.PoolClient()
@decorators.idempotent_id('bb8cc977-c867-4766-b623-133d8495ee50')
def test_service_without_selector(self):
# Create a service without selector
timeout = 300
ns_name, ns_obj = self.create_namespace()
self.addCleanup(self.delete_namespace, ns_name)
self.service_without_selector_base(namespace=ns_name)
self.check_service_internal_connectivity(namespace=ns_name)
klb_crd_id = self.wait_for_status(timeout, 15, self.get_klb_crd_id,
service_name=self.service_name,
svc_namespace=ns_name)
pool_query = "loadbalancer_id=%s" % klb_crd_id
pool = self.wait_for_status(timeout, 15, self.pool_client.list_pools,
query_params=pool_query)
if CONF.kuryr_kubernetes.test_endpoints_object_removal:
# Check that there are no pool members after endpoint deletion
pool_id = pool[0].get('id')
self.delete_endpoint(ep_name=self.endpoint.metadata.name,
namespace=ns_name)
self.check_lb_members(pool_id, 0)
class TestSCTPServiceScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestSCTPServiceScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.service_tests_enabled:
raise cls.skipException("Service tests are not enabled")
if not CONF.kuryr_kubernetes.test_sctp_services:
raise cls.skipException("Service SCTP tests are not enabled")
@decorators.idempotent_id('bb8cc977-c867-4766-b623-137d8395cb60')
def test_service_sctp_ping(self):
self.create_setup_for_service_test(
protocol="SCTP", port=90, target_port=9090)
self.check_service_internal_connectivity(
service_port='90', protocol='SCTP')
class TestListenerTimeoutScenario(base.BaseKuryrScenarioTest):
@classmethod
def skip_checks(cls):
super(TestListenerTimeoutScenario, cls).skip_checks()
if not CONF.kuryr_kubernetes.service_tests_enabled:
raise cls.skipException("Service tests are not enabled")
if not CONF.kuryr_kubernetes.test_configurable_listener_timeouts:
raise cls.skipException("Listener timeout tests are not enabled")
@decorators.idempotent_id('ca9bd886-d776-5675-b532-228c92a4da7f')
def test_updated_listener_timeouts(self):
self.create_setup_for_service_test(
service_name="kuryr-listener-demo")
self.check_updated_listener_timeout(
service_name="kuryr-listener-demo")
class TestDeployment(base.BaseKuryrScenarioTest):
credentials = ['admin', 'primary', ['lb_admin', 'load-balancer_admin']]
@classmethod
def skip_checks(cls):
super(TestDeployment, cls).skip_checks()
if not CONF.kuryr_kubernetes.service_tests_enabled:
raise cls.skipException("Service tests are not enabled")
@classmethod
def setup_clients(cls):
super(TestDeployment, cls).setup_clients()
cls.lbaas = cls.os_roles_lb_admin.load_balancer_v2.LoadbalancerClient()
cls.member_client = cls.os_admin.load_balancer_v2.MemberClient()
cls.pool_client = cls.os_roles_lb_admin.load_balancer_v2.PoolClient()
@testtools.skipUnless(
CONF.kuryr_kubernetes.kuryrloadbalancers,
"kuryrloadbalancers CRDs should be used to run this test")
@decorators.idempotent_id('bbacc377-c861-4766-b123-133d8195ee50')
def test_deployment_scale(self):
"""Deploys a deployment, rescales it and checks LB members
Deploys a deployment with 3 pods and a service. Checks the number
of LB members, scales to 5, checks again and also verifies
connectivity to the service. Finally scales to 0 and checks that
the number of LB members is also 0.
"""
timeout = CONF.kuryr_kubernetes.lb_build_timeout
deployment_name, _ = self.create_deployment()
service_name, _ = self.create_service(pod_label={"app": "demo"},
spec_type='ClusterIP')
service_ip = self.get_service_ip(service_name, spec_type='ClusterIP')
self.addCleanup(self.delete_service, service_name)
klb_crd_id = self.wait_for_status(timeout, 15, self.get_klb_crd_id,
service_name=service_name)
pool_query = "loadbalancer_id=%s" % klb_crd_id
self.wait_for_status(timeout, 15, self.lbaas.show_loadbalancer,
klb_crd_id)
pool = self.wait_for_status(timeout, 15, self.pool_client.list_pools,
query_params=pool_query)
pool_id = pool[0].get('id')
self.check_lb_members(pool_id, 3)
self.scale_deployment(5, deployment_name)
pod_name, _ = self.create_pod()
self.addCleanup(self.delete_pod, pod_name)
self.check_lb_members(pool_id, 5)
self.assert_backend_amount_from_pod(service_ip, 5, pod_name)
self.scale_deployment(0, deployment_name)
self.check_lb_members(pool_id, 0)
class TestLoadBalancerReconciliationScenario(
base.BaseReconciliationScenarioTest):
@decorators.idempotent_id('da9bd886-e895-4869-b356-228c92a4da7f')
def test_loadbalancers_reconciliation(self):
service_name = data_utils.rand_name(prefix='kuryr-loadbalancer')
namespace = "default"
resource = consts.LOADBALANCER
_, svc_pods = self.create_setup_for_service_test(
service_name=service_name)
self.check_service_internal_connectivity(service_name=service_name)
# now that connectivity is confirmed
LOG.info("Retrieving the LoadBalancer ID from KuryrLoadBalancer CRD")
try:
klb_crd_id = self.get_kuryr_loadbalancer_crds(service_name,
namespace).get(
'status', {}).get(
'loadbalancer',
{}).get('id')
except kubernetes.client.rest.ApiException:
raise lib_exc.ServerFault()
# NOTE(digitalsimboja): We need to await for DELETE to
# complete on Octavia
self.lbaas.delete_loadbalancer(klb_crd_id, cascade=True)
LOG.debug("Waiting for loadbalancer to be completely gone")
self.check_for_resource_reconciliation(service_name, svc_pods,
resource, klb_crd_id,
self.lbaas.show_loadbalancer,
namespace)


class TestListenerReconciliationScenario(base.BaseReconciliationScenarioTest):

    @classmethod
    def skip_checks(cls):
        super(TestListenerReconciliationScenario, cls).skip_checks()
        if not CONF.kuryr_kubernetes.enable_listener_reconciliation:
            raise cls.skipException("Listener reconciliation is not enabled")

    @decorators.idempotent_id('da9bd886-e895-4869-b356-230c92a5da8c')
    def test_listeners_reconciliation(self):
        service_name = data_utils.rand_name(
            prefix='kuryr-loadbalancer-listener')
        namespace = "default"
        resource = consts.LISTENER
        _, svc_pods = self.create_setup_for_service_test(
            service_name=service_name)
        # Make sure there is connectivity before checking reconciliation
        self.check_service_internal_connectivity(service_name=service_name)
        LOG.info("Retrieving the Listener ID from KuryrLoadBalancer CRD")
        try:
            klb_lsnr_id = self.get_kuryr_loadbalancer_crds(
                service_name, namespace).get('status', {}).get(
                    'listeners', [])[0].get('id')
        except kubernetes.client.rest.ApiException:
            raise lib_exc.ServerFault()
        self.lsnr.delete_listener(klb_lsnr_id)
        LOG.debug("Waiting for listener to be completely gone")
        self.check_for_resource_reconciliation(service_name, svc_pods,
                                               resource, klb_lsnr_id,
                                               self.lsnr.show_listener,
                                               namespace)


class TestServiceWithNotReadyEndpoints(base.BaseKuryrScenarioTest):

    @classmethod
    def skip_checks(cls):
        super(TestServiceWithNotReadyEndpoints, cls).skip_checks()
        if not CONF.kuryr_kubernetes.service_tests_enabled:
            raise cls.skipException("Service tests are not enabled")
        if not CONF.kuryr_kubernetes.containerized:
            raise cls.skipException("Only runs on containerized setups")

    @decorators.idempotent_id('bddf5441-1244-450d-a125-b5fdcfa1a7b0')
    def test_service_with_not_ready_endpoints(self):
        # Create a deployment with a failing probe
        deployment_name, _ = self.create_deployment(failing_probe=True)

        # Wait until the deployment's pods are running
        res = test_utils.call_until_true(
            self.check_pods_status_num, consts.POD_CHECK_TIMEOUT * 3,
            consts.POD_CHECK_SLEEP_TIME, namespace='default',
            label='app=demo', num_pods=3)
        self.assertTrue(res, 'Timed out waiting for pods to be running')

        # Wait until the pods are not ready
        self.wait_for_status(
            consts.POD_CHECK_TIMEOUT * 3, consts.POD_CHECK_SLEEP_TIME,
            self.check_pods_ready_num, namespace='default', label='app=demo',
            num_pods=0)

        # Get current Kuryr pods restart count (for a later comparison)
        controller_pods = self.get_controller_pod_names()
        container_restarts_before = self.get_pod_containers_restarts(
            pod_names=controller_pods,
            namespace=CONF.kuryr_kubernetes.kube_system_namespace)

        # Create a service
        service_name, _ = self.create_service(pod_label={"app": "demo"},
                                              spec_type='ClusterIP')
        self.addCleanup(self.delete_service, service_name)

        # Check Kuryr pods are not restarted
        self.check_controller_pod_status_for_time_period(
            retry_attempts=10,
            time_between_attempts=3)

        # Get current Kuryr pods restart count
        container_restarts_after = self.get_pod_containers_restarts(
            pod_names=controller_pods,
            namespace=CONF.kuryr_kubernetes.kube_system_namespace)

        # Compare Kuryr pods restart count with previously stored data
        self.assertEqual(container_restarts_before, container_restarts_after,
                         "Kuryr controller pod(s) were restarted during the "
                         "service creation, expected: %s, obtained: %s" %
                         (container_restarts_before, container_restarts_after))

View File

@ -1,6 +0,0 @@
---
upgrade:
- |
  Python 2.7 support has been dropped. The last release of
  kuryr-tempest-plugin to support Python 2.7 is OpenStack Train. The minimum
  version of Python now supported by kuryr-tempest-plugin is Python 3.6.

View File

@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
oslotest>=3.2.0 # Apache-2.0
os-testr>=1.0.0 # Apache-2.0
six>=1.10.0 # MIT
tempest>=17.1.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testtools>=2.2.0 # MIT
kubernetes>=5.0.0 # Apache-2.0
oslo.concurrency>=3.26.0 # Apache-2.0
netaddr>=0.7.19 # BSD

View File

@ -1,28 +0,0 @@
[metadata]
name = kuryr-tempest-plugin
summary = Kuryr Tempest Plugin
description_file = README.rst
license = Apache Software License
classifiers =
Programming Language :: Python
Programming Language :: Python :: 3
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
author = OpenStack
author_email = openstack-discuss@lists.openstack.org
home_page = https://docs.openstack.org/kuryr-tempest-plugin/latest/
[files]
packages =
kuryr_tempest_plugin
[entry_points]
tempest.test_plugins =
kuryr_tempest_tests = kuryr_tempest_plugin.plugin:KuryrTempestPlugin

View File

@ -1,20 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import setuptools

setuptools.setup(
    setup_requires=['pbr>=2.0.0'],
    pbr=True)

View File

@ -1,9 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=3.0.1,<3.1.0 # Apache-2.0
bashate>=0.5.1 # Apache-2.0
coverage!=4.4,>=4.0 # Apache-2.0
pylint==1.4.5 # GNU GPL v2

View File

@ -1,11 +0,0 @@
FROM quay.io/kuryr/alpine:3.12
ADD rootfs.tar.xz /
RUN apk update && apk add iputils
RUN adduser -S kuryr --uid 100
USER 100
WORKDIR /home/kuryr
EXPOSE 8080
CMD ["/usr/bin/helloserver"]

View File

@ -1,163 +0,0 @@
FROM quay.io/kuryr/alpine:3.12
RUN apk add --no-cache \
bash \
bzip2 \
coreutils \
curl \
gcc \
go \
linux-headers \
make \
musl-dev \
perl \
tzdata
ENV BUSYBOX_VERSION 1.31.1
RUN set -ex; \
tarball="busybox-${BUSYBOX_VERSION}.tar.bz2"; \
curl -fL -o "${tarball}" "https://busybox.net/downloads/$tarball"; \
curl -fL -o "${tarball}.sha256" "https://busybox.net/downloads/$tarball.sha256"; \
sha256sum -c "$tarball.sha256"; \
mkdir -p /usr/src/busybox; \
tar -xjf "$tarball" -C /usr/src/busybox --strip-components 1; \
rm "${tarball}" "${tarball}.sha256"
WORKDIR /usr/src/busybox
# https://www.mail-archive.com/toybox@lists.landley.net/msg02528.html
# https://www.mail-archive.com/toybox@lists.landley.net/msg02526.html
RUN sed -i 's/^struct kconf_id \*$/static &/g' scripts/kconfig/zconf.hash.c_shipped
# CONFIG_LAST_SUPPORTED_WCHAR: see https://github.com/docker-library/busybox/issues/13 (UTF-8 input)
# see http://wiki.musl-libc.org/wiki/Building_Busybox
RUN set -ex; \
\
setConfs=' \
CONFIG_FEATURE_SUID=y \
CONFIG_AR=y \
CONFIG_FEATURE_AR_CREATE=y \
CONFIG_FEATURE_AR_LONG_FILENAMES=y \
CONFIG_LAST_SUPPORTED_WCHAR=0 \
CONFIG_STATIC=y \
CONFIG_BBCONFIG=y \
'; \
\
unsetConfs=' \
CONFIG_FEATURE_SYNC_FANCY \
\
CONFIG_FEATURE_HAVE_RPC \
CONFIG_FEATURE_INETD_RPC \
CONFIG_FEATURE_UTMP \
CONFIG_FEATURE_WTMP \
'; \
\
make defconfig; \
\
for conf in $unsetConfs; do \
sed -i \
-e "s!^$conf=.*\$!# $conf is not set!" \
.config; \
done; \
\
for confV in $setConfs; do \
conf="${confV%=*}"; \
sed -i \
-e "s!^$conf=.*\$!$confV!" \
-e "s!^# $conf is not set\$!$confV!" \
.config; \
if ! grep -q "^$confV\$" .config; then \
echo "$confV" >> .config; \
fi; \
done; \
\
make oldconfig; \
\
# trust, but verify
for conf in $unsetConfs; do \
! grep -q "^$conf=" .config; \
done; \
for confV in $setConfs; do \
grep -q "^$confV\$" .config; \
done;
RUN set -ex \
&& make -j "$(nproc)" \
busybox \
&& ./busybox --help \
&& mkdir -p rootfs/bin \
&& cp busybox rootfs/bin/ \
&& chroot rootfs /bin/busybox --install -s /bin
# grab a simplified getconf port from Alpine we can statically compile
RUN set -x \
&& aportsVersion="v$(cat /etc/alpine-release)" \
&& curl -fsSL \
"http://git.alpinelinux.org/cgit/aports/plain/main/musl/getconf.c?h=${aportsVersion}" \
-o /usr/src/getconf.c \
&& gcc -o rootfs/bin/getconf -static -Os /usr/src/getconf.c \
&& chroot rootfs /bin/getconf _NPROCESSORS_ONLN
# download a few extra files from buildroot (/etc/passwd, etc)
RUN set -ex; \
buildrootVersion='2017.11.1'; \
mkdir -p rootfs/etc; \
for f in passwd shadow group; do \
curl -fL -o "rootfs/etc/$f" "https://git.busybox.net/buildroot/plain/system/skeleton/etc/$f?id=$buildrootVersion"; \
done; \
# set expected permissions, etc too (https://git.busybox.net/buildroot/tree/system/device_table.txt)
curl -fL -o buildroot-device-table.txt "https://git.busybox.net/buildroot/plain/system/device_table.txt?id=$buildrootVersion"; \
awk ' \
!/^#/ { \
if ($2 != "d" && $2 != "f") { \
printf "error: unknown type \"%s\" encountered in line %d: %s\n", $2, NR, $0 > "/dev/stderr"; \
exit 1; \
} \
sub(/^\/?/, "rootfs/", $1); \
if ($2 == "d") { \
printf "mkdir -p %s\n", $1; \
} \
printf "chmod %s %s\n", $3, $1; \
} \
' buildroot-device-table.txt | sh -eux; \
rm buildroot-device-table.txt
# create missing home directories
RUN set -ex \
&& cd rootfs \
&& for userHome in $(awk -F ':' '{ print $3 ":" $4 "=" $6 }' etc/passwd); do \
user="${userHome%%=*}"; \
home="${userHome#*=}"; \
home="./${home#/}"; \
if [ ! -d "$home" ]; then \
mkdir -p "$home"; \
chown "$user" "$home"; \
chmod 755 "$home"; \
fi; \
done
# test and make sure it works
RUN chroot rootfs /bin/sh -xec 'true'
# ensure correct timezone (UTC)
RUN set -ex; \
ln -vL /usr/share/zoneinfo/UTC rootfs/etc/localtime; \
[ "$(chroot rootfs date +%Z)" = 'UTC' ]
# test and make sure DNS works too
RUN cp -L /etc/resolv.conf rootfs/etc/ \
&& chroot rootfs /bin/sh -xec 'nslookup google.com' \
&& rm rootfs/etc/resolv.conf
ADD ./curl_builder.sh .
RUN mkdir -p rootfs/usr/bin; \
./curl_builder.sh; \
cp /usr/local/bin/curl rootfs/usr/bin/curl
ADD ./server.go .
ADD ./udp_client.go .
RUN go build -ldflags "-linkmode external -extldflags -static" -o rootfs/usr/bin/helloserver server.go
RUN go build -ldflags "-linkmode external -extldflags -static" -o rootfs/usr/bin/udp_client udp_client.go
RUN mkdir -p rootfs/etc/ssl/certs \
&& cp /etc/ssl/certs/ca-certificates.crt rootfs/etc/ssl/certs/ca-certificates.crt

View File

@ -1,71 +0,0 @@
======================================
Kuryr Testing container infrastructure
======================================
This directory is the official source for building the Quay.io kuryr/demo
images.

The build consists of two parts:
Builder container
-----------------
The builder container is based on the `musl`_ compiled Alpine distribution. In
the process of building the image, it downloads and compiles:
* busybox
* musl
* curl and its dependencies
It also includes golang so that we can use it in our test web server:
* server.go
Everything that is to be included in the kuryr/demo image is put in
``/usr/src/busybox/rootfs``.
The reason for this is that this build is based on Docker's busybox build
system, and the rootfs will not contain any libraries, so everything you want
to add must be statically compiled there.
kuryr/demo container
--------------------
This is the actual container used in the tests. It includes:
* Busybox: It gives us a very lightweight userspace that provides things like
the ip command, vi, etc.
* curl: Useful for testing HTTP/HTTPS connectivity to the API and other
services.
* helloserver: An HTTP server that binds to 8080 and prints out a message
that includes the hostname, so it can be used to see which pod replies to a
service request.
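Because helloserver prefixes every reply with the pod's hostname, a test can
estimate how many members sit behind a service by collecting replies and
counting distinct prefixes. A minimal sketch of that idea (the helper names
and sample pod names below are hypothetical, not part of the plugin):

.. code-block:: python

    def backend_from_reply(reply):
        # helloserver replies look like "<hostname>: HELLO! I AM ALIVE!!!"
        return reply.split(":", 1)[0].strip()

    def count_backends(replies):
        # distinct hostnames seen across repeated requests to the service
        return len({backend_from_reply(r) for r in replies})

    replies = [
        "demo-7f5c9bd9d5-abcde: HELLO! I AM ALIVE!!!",
        "demo-7f5c9bd9d5-fghij: HELLO! I AM ALIVE!!!",
        "demo-7f5c9bd9d5-abcde: HELLO! I AM ALIVE!!!",
    ]
    print(count_backends(replies))  # 2 distinct pods answered

This is roughly what the tempest scenario tests do when they assert the
number of load balancer members reachable through a service.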
When and how to build
---------------------
builder container + kuryr/demo
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You should only need to build the whole set if you want to change the library
or app version of something in kuryr/demo, or add another tool like bind9
dig. The way to do this is:

.. code-block:: console

    $ sudo ./mkrootfs.sh
kuryr/demo
~~~~~~~~~~
Every time you want to run the tests, you should build the kuryr/demo
container locally instead of pulling it from quay.io, to make sure you run
the latest authoritative version.
Note that the kuryr-tempest-plugin devstack will build it for you.
.. _musl: https://musl.libc.org

View File

@ -1,117 +0,0 @@
#!/usr/bin/env bash
# Names of latest versions of each package
export VERSION_MUSL=musl-1.2.0
export VERSION_ZLIB=zlib-1.2.11
export VERSION_LIBRESSL=libressl-3.2.1
export VERSION_CURL=curl-7.72.0
# URLs to the source directories
export SOURCE_MUSL=http://www.musl-libc.org/releases/
export SOURCE_LIBRESSL=http://ftp.openbsd.org/pub/OpenBSD/LibreSSL/
export SOURCE_CURL=https://curl.haxx.se/download/
export SOURCE_ZLIB=http://zlib.net/
# Path to local build
export BUILD_DIR=/tmp/curl-static-libressl/build
# Path for libressl
export STATICLIBSSL="${BUILD_DIR}/${VERSION_LIBRESSL}"
function setup() {
    # create and clean build directory
    mkdir -p ${BUILD_DIR}
    rm -Rf ${BUILD_DIR}/*
    # install build environment tools
    apk add linux-headers perl
}

function download_sources() {
    # todo: verify checksum / integrity of downloads!
    echo "Download sources"
    pushd ${BUILD_DIR}
    curl -sSLO "${SOURCE_MUSL}${VERSION_MUSL}.tar.gz"
    curl -sSLO "${SOURCE_ZLIB}${VERSION_ZLIB}.tar.gz"
    curl -sSLO "${SOURCE_LIBRESSL}${VERSION_LIBRESSL}.tar.gz"
    curl -sSLO "${SOURCE_CURL}${VERSION_CURL}.tar.gz"
    popd
}

function extract_sources() {
    echo "Extracting sources"
    pushd ${BUILD_DIR}
    tar -xf "${VERSION_MUSL}.tar.gz"
    tar -xf "${VERSION_LIBRESSL}.tar.gz"
    tar -xf "${VERSION_CURL}.tar.gz"
    tar -xf "${VERSION_ZLIB}.tar.gz"
    popd
}

function compile_musl() {
    echo "Configure & build static musl"
    pushd "${BUILD_DIR}/${VERSION_MUSL}"
    make clean
    ./configure --prefix=/usr/local --disable-shared
    make -j4
    make install
    popd
}

function compile_zlib() {
    echo "Configure & build static zlib"
    pushd "${BUILD_DIR}/${VERSION_ZLIB}"
    make clean
    ./configure --static --prefix=/usr/local
    make -j4
    make install
    popd
}

function compile_libressl() {
    echo "Configure & build static libressl"
    pushd "${BUILD_DIR}/${VERSION_LIBRESSL}"
    make clean
    ./configure --prefix=/usr/local --enable-shared=no
    make -j4
    make install
    popd
}

function compile_curl() {
    echo "Configure & Build curl"
    pushd "${BUILD_DIR}/${VERSION_CURL}"
    make clean
    LIBS="-ldl -lpthread" LDFLAGS="-static" CFLAGS="-no-pie" PKG_CONFIG_FLAGS="--static" PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/ ./configure --disable-shared --enable-static
    make -j4
    make install
    popd
}
echo "Building ${VERSION_CURL} with static ${VERSION_LIBRESSL}, and ${VERSION_ZLIB} ..."
setup && download_sources && extract_sources && compile_musl && compile_zlib && compile_libressl && compile_curl
retval=$?
echo ""
if [ $retval -eq 0 ]; then
    echo "Your curl binary is located at ${BUILD_DIR}/${VERSION_CURL}/src/curl."
    echo "Listing dynamically linked libraries ..."
    ldd ${BUILD_DIR}/${VERSION_CURL}/src/curl
    echo ""
    ${BUILD_DIR}/${VERSION_CURL}/src/curl --version
else
    echo "Oops, build failed. Check output!"
fi

View File

@ -1,29 +0,0 @@
FROM quay.io/kuryr/alpine:3.12
RUN apk add --no-cache \
bash \
gcc \
g++ \
libstdc++ \
linux-headers \
lksctp-tools \
lksctp-tools-dev \
openssh-client \
net-tools \
python3 \
py3-pip \
python3-dev
ENV BUSYBOX_VERSION 1.31.1
RUN adduser -S kuryr
USER kuryr
WORKDIR /home/kuryr
COPY kuryr_sctp_demo/sctp_server.py /sctp_server.py
COPY kuryr_sctp_demo/sctp_client.py /home/kuryr/sctp_client.py
RUN pip3 --no-cache-dir install -U pip \
&& python3 -m pip install pysctp
EXPOSE 9090
ENTRYPOINT ["python3", "/sctp_server.py"]

View File

@ -1,46 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
import sys

import sctp


def connect_plus_message(out_ip, out_port):
    for res in socket.getaddrinfo(out_ip, out_port, socket.AF_UNSPEC,
                                  socket.SOCK_STREAM, 0, socket.AI_PASSIVE):
        addr_fam, socktype, proto, canonname, sa = res
        try:
            sock = sctp.sctpsocket_tcp(addr_fam)
        except OSError:
            sock = None
            continue
        try:
            sock.connect(sa)
        except OSError:
            sock.close()
            sock = None
            continue
        break
    if sock:
        print("Sending Message")
        sock.sctp_send(msg='HELLO, I AM ALIVE!!!')
        msg_from_server = sock.recvfrom(1024)
        print(msg_from_server[0].decode('utf-8'))
        sock.shutdown(0)
        sock.close()


if __name__ == '__main__':
    connect_plus_message(sys.argv[1], int(sys.argv[2]))

View File

@ -1,43 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import platform
import socket

import sctp

host = '::'
port = 9090

sock = sctp.sctpsocket_tcp(socket.AF_INET6)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind((host, port))
sock.listen(1)

while True:
    # wait for a connection
    connection, client_address = sock.accept()
    try:
        while True:
            data = connection.recv(1024)
            if data:
                # send response to client.
                response = '%s: HELLO, I AM ALIVE!!!' % platform.node()
                sent = connection.send(response.encode('utf-8'))
            else:
                # no more data -- quit the loop
                break
    finally:
        # Clean up the connection
        connection.close()

View File

@ -1,15 +0,0 @@
#!/bin/sh
BUILDER_NAME=$(uuidgen)
docker build -t kuryr/demo_builder . -f Dockerfile.builder
docker run --name ${BUILDER_NAME} kuryr/demo_builder
rm -fr rootfs
rm -fr rootfs.tar.xz
docker cp ${BUILDER_NAME}:/usr/src/busybox/rootfs rootfs
docker rm ${BUILDER_NAME}
# In order for ping and traceroute to work, we need to give suid to busybox
chmod +s rootfs/bin/busybox
tar -J -f rootfs.tar.xz --numeric-owner --exclude='dev/*' -C rootfs -c .
rm -fr rootfs
docker build -t quay.io/kuryr/demo . -f Dockerfile

Binary file not shown.

View File

@ -1,104 +0,0 @@
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"os"
	"runtime"
	"strconv"
	"strings"
	"sync"
)

func handler(w http.ResponseWriter, r *http.Request) {
	hostname, err := os.Hostname()
	log.Println("Received request")
	if err == nil {
		fmt.Fprintf(w, "%s: HELLO! I AM ALIVE!!!\n", hostname)
	}
}

func send_udp_response(conn *net.UDPConn, addr *net.UDPAddr) {
	hostname, err := os.Hostname()
	if err == nil {
		resp_str := fmt.Sprintf("%s: HELLO! I AM ALIVE!!!\n", hostname)
		_, err := conn.WriteToUDP([]byte(resp_str), addr)
		if err != nil {
			log.Println("Failed to reply to client")
		}
	}
}

func run_udp_server(port int) {
	p := make([]byte, 2048)
	log.Println("Running UDP server")
	ser, _ := net.ListenUDP("udp", &net.UDPAddr{IP: []byte{0, 0, 0, 0}, Port: port, Zone: ""})
	defer ser.Close()
	for {
		_, remoteaddr, err := ser.ReadFromUDP(p)
		if err != nil {
			log.Println("We got an Error on reading")
			continue
		}
		log.Println("Received UDP request")
		send_udp_response(ser, remoteaddr)
	}
}

// The WaitGroup is passed by pointer: copying a sync.WaitGroup would make
// wg.Done() decrement a copy and wg.Wait() block forever.
func udp_handling(wg *sync.WaitGroup) {
	defer wg.Done()
	udpPort, udpPortPresent := os.LookupEnv("UDP_PORT")
	var port_num int = 9090
	if udpPortPresent {
		port_num, _ = strconv.Atoi(strings.TrimSpace(udpPort))
	}
	run_udp_server(port_num)
}

func http_handling(wg *sync.WaitGroup) {
	defer wg.Done()
	http.HandleFunc("/", handler)
	httpsPort, httpsPortPresent := os.LookupEnv("HTTPS_PORT")
	var port string
	if httpsPortPresent {
		port = ":" + strings.TrimSpace(httpsPort)
		cert, certPresent := os.LookupEnv("HTTPS_CERT_PATH")
		key, keyPresent := os.LookupEnv("HTTPS_KEY_PATH")
		if !certPresent || !keyPresent {
			log.Fatal("HTTPS_PORT configured but missing HTTPS_CERT_PATH and/or HTTPS_KEY_PATH")
		}
		log.Println("Running HTTPS server")
		log.Fatal(http.ListenAndServeTLS(port, cert, key, nil))
	} else {
		httpPort, confPresent := os.LookupEnv("HTTP_PORT")
		if confPresent {
			port = ":" + strings.TrimSpace(httpPort)
		} else {
			port = ":8080"
		}
		log.Println("Running HTTP server")
		log.Fatal(http.ListenAndServe(port, nil))
	}
}

func main() {
	runtime.GOMAXPROCS(2)
	var wg sync.WaitGroup
	wg.Add(2)
	go http_handling(&wg)
	go udp_handling(&wg)
	wg.Wait()
}

View File

@ -1,30 +0,0 @@
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
)

// udp_client.go syntax: udp_client <server_IP> <server_port>
func main() {
	server_ip_port := os.Args[1] + ":" + os.Args[2]
	p := make([]byte, 2048)
	conn, err := net.Dial("udp", server_ip_port)
	if err != nil {
		fmt.Printf("Some error %v", err)
		return
	}
	fmt.Fprintf(conn, "Hi UDP Server, How are you?")
	_, err = bufio.NewReader(conn).Read(p)
	if err == nil {
		fmt.Printf("%s\n", p)
	} else {
		fmt.Printf("Some error %v\n", err)
	}
	conn.Close()
}

36
tox.ini
View File

@ -1,36 +0,0 @@
[tox]
envlist = py3,pep8
minversion = 3.1.1
skipsdist = True
ignore_basepython_conflict = True
[testenv]
basepython = python3
usedevelop = True
setenv =
VIRTUAL_ENV={envdir}
deps =
-c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
commands = ostestr {posargs}
passenv = http_proxy,HTTP_PROXY,https_proxy,HTTPS_PROXY,no_proxy,NO_PROXY,OS_*
[testenv:venv]
commands = {posargs}
passenv = OS_*
[testenv:docs]
deps =
-c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/doc/requirements.txt
commands = sphinx-build -W -b html doc/source doc/build/html
[testenv:pep8]
commands =
flake8 {posargs}
[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,*lib/python*,*egg,tools,doc