docs: add test strategy and feature classification
This effort is trying to ensure we better document what is currently tested and known to work, and what is not currently tested.

This renames the Hypervisor support matrix to the Feature support matrix. The vision is to move the support matrix ticks to appear only for features that have tests passing. To enable this, the column will change from being the virt driver to being a specific combination of technologies (such as libvirt + KVM + ceph + neutron ML2 with ovs).

The second step is to include information about the maturity of the specific feature that is being tested. This means the matrix rows will instead reference a feature group, which has an associated list of tempest test uuids and links to detailed API docs.

Change-Id: Ia2d489cb4e1fd57737468df4f9fc10e9ad8c011c
parent b8804e68eb
commit 2f0b8df9bf
doc/source/feature_classification.rst | 172 lines (new file)
@@ -0,0 +1,172 @@

..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

======================
Feature Classification
======================

This document aims to define how we describe features listed in the
:doc:`support-matrix`.

Aims
====

Our users want the features they rely on to be reliable and to keep
solving their use cases. When things break, users ask us to solve their
issues quickly; it would be better if we never had those regressions in
the first place.

We are taking a two-pronged approach:

* Tell our users which features are complete, well-documented, and kept
  stable by good tests. They will get a good experience if they stick to
  using those features.
  Please note that the tests are specific to particular combinations of
  technologies. A deployment's choice of storage, networking and
  hypervisor makes a big difference to which features will work.

* Get help for the features that are not in the above state, and warn our
  users about the risks of using those features before they are ready.
  This should make it much clearer how to help improve each feature.

Concepts
========

Some definitions to help understand the later parts of this document.

Users
-----

These are the users we will talk about in this document:

* application deployer: creates/deletes servers, directly or indirectly
  via the API
* application developer: creates images and applications that run on the cloud
* cloud operator: administers the cloud
* self-service administrator: both runs and uses the cloud

In reality the picture is far more complex. Specifically, the application
developer is likely to be split into separate observer, creator and admin
roles. Similarly, there are likely to be various levels of cloud operator
permission: some read-only, some limited to a subset of tenants, and so on.

Note: this is not attempting to be an exhaustive set of personas that
considers the various facets of the different users; it instead aims to be
a minimal set of users, so that we use consistent terminology throughout
this document.

Feature Group
-------------

To reduce the size of the matrix, we organize the features into groups.
Each group maps to a set of user stories that can be validated by a set of
scenario tests. Typically, this means a set of tempest tests.

This list focuses on API concepts, such as attaching and detaching volumes,
rather than deployment-specific concepts, such as attaching an iSCSI volume
to a KVM-based VM.

Deployment
----------

A deployment maps to a specific test environment. A full description of the
environment should be provided, so it is possible to reproduce the test
results that are reported for each of the feature groups.

Note: this description includes all aspects of the deployment:
the hypervisor, the number of nova-compute services, the storage being used,
the network driver being used, the types of images being tested, and so on.

Feature Group Maturity
----------------------

The Feature Group Maturity rating is specific to the API concepts, rather
than to a particular deployment. That detail is covered in the deployment
rating for each feature group.

We are starting out with these Feature Group ratings:

* Incomplete
* Experimental
* Complete
* Complete and Required
* Deprecated (scheduled to be removed in a future release)

Incomplete features are those that don't have enough functionality to
satisfy real world use cases.

Experimental features should be used with extreme caution. They are likely
to have little or no upstream testing, and with so little testing there are
likely to be many unknown bugs.

For a feature to be considered complete, we must have:

* Complete API docs (concept and REST call definition)
* Complete Administrator docs
* Tempest tests that define whether the feature works correctly
* Enough functionality, working reliably enough, to be useful in real
  world scenarios
* No likely reason to ever drop support for the feature

There are various reasons why a feature, once complete, becomes required,
but currently it is largely when a feature is supported by all drivers.
Note that any new driver needs to prove it supports all required features
before it will be allowed into upstream Nova.
Please note that this list is technically unrelated to the DefCore effort,
despite there being obvious parallels that could be drawn.

Required features are those that any new technology must support before
being allowed into the tree. The larger the list, the more features can be
expected to be available on all Nova based clouds.

Deprecated features are those that are scheduled to be removed in a future
major release of Nova. If a feature is marked as complete, it should never
be deprecated.
If a feature is incomplete or experimental for several releases, it runs
the risk of being deprecated, and later removed from the code base.

Deployment Rating for a Feature Group
-------------------------------------

The deployment rating is purely about the state of the tests for each
Feature Group on a particular deployment.

There will be the following ratings:

* unknown
* not implemented
* implemented: self-declared that the tempest tests pass
* regularly tested: tested by third party CI
* checked: tested as part of the check or gate queue

The eventual goal is to automate this list from some third party CI
reporting system but, so we can make progress now, this will start as a
manual inspection documented in a hand-written ini file. Ideally, this
will be reviewed every milestone.
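
As a purely hypothetical sketch (the section names, keys and values below
are illustrative assumptions rather than an agreed format), an entry in
such an ini file might look like::

    # One feature group, with the docs and tempest tests that back it.
    [feature.attach-volume]
    title = Attach and detach volumes
    api_doc_link = http://developer.openstack.org/api-ref-compute-v2.1.html
    maturity = complete
    # Tempest test UUIDs that validate this feature group.
    tempest_test_uuids = 11111111-2222-3333-4444-555555555555

    # One deployment rating per tested combination of technologies.
    [deployment.libvirt-kvm-ceph-ovs]
    feature.attach-volume = regularly tested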

Feature Group Definitions
=========================

This is a look at features targeted at application developers, and the
current state of each feature, independent of the specific deployment.

Please note: this is still a work in progress!

Key TODOs:

* use the new API docs as a template for the feature groups, in an ini file
* add lists of tempest test UUIDs for each group
* link from the hypervisor support matrix into the feature group maturity
  ratings
* add a maturity rating to each feature group, with a justification, which
  is likely to include links to API docs, etc.
* replace the ticks and crosses in the support matrix with "deployment
  ratings"
* eventually generate the ticks and crosses from live, historical CI results

doc/source/index.rst

@@ -72,16 +72,26 @@ There was a session on the v2.1 API at the Liberty summit which you can watch

-Hypervisor Support Matrix
-=========================
+Feature Status
+==============

-The hypervisor support matrix is how we document what features we require
-hypervisor drivers to implement, as well as the level of support for optional
-features that we currently have. You can see the support matrix here:
+Nova aims to have a single compute API that works the same across
+all deployments of Nova.
+While many features are well-tested, well-documented, support live upgrade,
+and are ready for production, some are not. Also the choice of underlying
+technology affects the list of features that are ready for production.
+
+Our first attempt to communicate this is the feature support matrix
+(previously called the hypervisor support matrix).
+Over time we hope to evolve that to include a classification of each feature's
+maturity and exactly what technology combinations are covered by current
+integration testing efforts.

 .. toctree::
    :maxdepth: 1

+   test_strategy
+   feature_classification
    support-matrix

 Developer Guide

doc/source/support-matrix.rst

@@ -1,6 +1,11 @@

-Hypervisor Support Matrix
-=========================
+Feature Support Matrix
+======================
+
+.. warning::
+   Please note, while this document is still being maintained, it is slowly
+   being updated to re-group and classify features using the definitions
+   described in :doc:`feature_classification`.

 When considering which capabilities should be marked as mandatory the
 following general guiding principles were applied

doc/source/test_strategy.rst | 110 lines (new file)

@@ -0,0 +1,110 @@

..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

==============
Test Strategy
==============

A key part of the "four opens" is ensuring that OpenStack delivers
well-tested and usable software. For more details see:
http://docs.openstack.org/project-team-guide/introduction.html#the-four-opens

Experience has shown that untested features are frequently broken, in part
due to the velocity of upstream changes. As we aim to keep all features
working across upgrades, we must aim to test all features.

Reporting Test Coverage
=======================

For details on plans to report the current test coverage, please see:
:doc:`feature_classification`

Running tests and reporting results
===================================

Voting in Gerrit
----------------

On every review in gerrit, check tests are run on every patch set and
report a +1 or -1 vote.
For more details, please see:
http://docs.openstack.org/infra/manual/developers.html#automated-testing

Before merging any code, there is an integrated gate test queue, to ensure
master is always passing all tests.
For more details, please see:
http://docs.openstack.org/infra/zuul/gating.html

Infra vs Third-Party
--------------------

Tests that use fully open source components are generally run by the
OpenStack Infra teams. Test setups that use non-open technology must
be run outside of that infrastructure, but should still report their
results upstream.

For more details, please see:
http://docs.openstack.org/infra/system-config/third_party.html

Ad-hoc testing
--------------

It is particularly common for people to run ad-hoc tests on each released
milestone, such as RC1, to stop regressions.
While these efforts can help stabilize the release, as a community we have a
much stronger preference for continuous integration testing. Partly this is
because we encourage users to deploy master, and we generally have to assume
that any upstream commit may already have been deployed in production.

Types of tests
==============

Unit tests
----------

Unit tests help document and enforce the contract for each component.
Without good unit test coverage it is hard to continue to quickly evolve the
codebase.
The correct level of unit test coverage is very subjective, and as such we
are not aiming for a particular percentage of coverage; rather, we are
aiming for good coverage.
Generally, every code change should have a related unit test:
http://docs.openstack.org/developer/hacking/#creating-unit-tests
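
As a minimal, self-contained sketch of the style this implies (the helper
function is hypothetical, and nova's real tests build on its own base
classes rather than bare ``unittest``)::

    import unittest


    def sanitize_flavor_name(name):
        # Hypothetical helper, defined here only to give the test a target.
        return name.strip().lower()


    class TestSanitizeFlavorName(unittest.TestCase):
        """The tests document and enforce the helper's contract."""

        def test_strips_whitespace_and_lowercases(self):
            self.assertEqual("small", sanitize_flavor_name("  Small "))

        def test_empty_input_stays_empty(self):
            self.assertEqual("", sanitize_flavor_name(""))


    if __name__ == "__main__":
        unittest.main()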

Integration tests
-----------------

Today, our integration tests involve running the Tempest test suite on a
variety of Nova deployment scenarios.

In addition, we have third parties running the tests on their preferred Nova
deployment scenario.

Functional tests
----------------

Nova has a set of in-tree functional tests that focus on things that are out
of scope for tempest testing and unit testing.
Tempest tests run against a full live OpenStack deployment, generally deployed
using devstack. At the other extreme, unit tests typically use mock to test a
unit of code in isolation.
Functional tests don't run an entire stack; they are isolated to nova code,
and have no reliance on external services. They do have a WSGI app, nova
services and a database, with minimal stubbing of nova internals.
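
Nova's functional tests use its own in-tree fixtures, but the general
pattern can be illustrated with a self-contained sketch (everything below
is a stand-in, not nova code): start a WSGI app in-process and drive it
over HTTP, with no external services involved::

    import threading
    import unittest
    from urllib import request
    from wsgiref.simple_server import make_server


    def app(environ, start_response):
        # Stand-in WSGI app; a real functional test would load the
        # service's actual API application backed by a test database.
        start_response("200 OK", [("Content-Type", "application/json")])
        return [b'{"status": "ok"}']


    class TestApiEndToEnd(unittest.TestCase):
        """Drive the API over HTTP without any external services."""

        def setUp(self):
            self.server = make_server("127.0.0.1", 0, app)
            self.port = self.server.server_port
            threading.Thread(
                target=self.server.serve_forever, daemon=True).start()
            self.addCleanup(self.server.shutdown)

        def test_status_endpoint(self):
            url = "http://127.0.0.1:%d/" % self.port
            with request.urlopen(url) as resp:
                self.assertEqual(200, resp.status)
                self.assertIn(b'"ok"', resp.read())


    if __name__ == "__main__":
        unittest.main()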

Interoperability tests
----------------------

The DefCore committee maintains a list that contains a subset of Tempest
tests. These are used to verify whether a particular Nova deployment's API
responds as expected. For more details, see:
https://github.com/openstack/defcore
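
Such lists can reference individual tests because tempest tests carry
stable UUIDs via the ``idempotent_id`` decorator. As an illustrative
sketch (the class, test body and UUID below are placeholders)::

    from tempest.api.compute import base
    from tempest.lib import decorators


    class ServersSampleTest(base.BaseV2ComputeTest):

        @decorators.idempotent_id('11111111-2222-3333-4444-555555555555')
        def test_create_server(self):
            # The UUID above is a stable handle for this test,
            # independent of its name or location in the source tree.
            ...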