Add developer documentation for plugins/drivers contributions

This is the initial step to provide documentation and a
how-to for developers interested in contributing plugins and
drivers according to the core-vendor-decomp proposal.

Partially-implements: blueprint core-vendor-decomposition

Change-Id: Ib8b6cc5fd72eb1b8b4b4b2bdbda132062c81cbc1
armando-migliaccio 2014-12-11 13:18:40 -08:00
parent db6201db79
commit 9391526a58
3 changed files with 360 additions and 0 deletions


@@ -0,0 +1,256 @@
Contributing new extensions to Neutron
======================================

Neutron has a pluggable architecture, with a number of extension points.
This documentation covers aspects relevant to contributing new Neutron
v2 core (aka monolithic) plugins, ML2 mechanism drivers, and L3 service
plugins. It first covers a number of process-oriented aspects of the
contribution workflow, and then provides a how-to guide that shows how to
go from zero lines of code to successfully contributing new extensions to
Neutron. In the remainder of this guide, we will use practical examples
as much as we can, so that people have working solutions they can start
from.

This guide is for a developer who wants to have a degree of visibility
within the OpenStack Networking project. If you are a developer who
wants to provide a Neutron-based solution without interacting with the
Neutron community, you are free to do so, but you can stop reading now,
as this guide is not for you.

In fact, from the Kilo release onwards, the Neutron core team proposes that
additions to the codebase adopt a structure where the *monolithic plugins*,
*ML2 MechanismDrivers*, and *L3 service plugins* become integration-only
layers (called "vendor integration" hereinafter) on top of code that lives
outside the tree (called "vendor library" hereinafter); the same applies to
any vendor-specific agents. The only part that stays in the tree is the
agent 'main' (a small Python file that imports agent code from the vendor
library and starts it). 'Outside the tree' can be anything that is publicly
available: it may be a stackforge repo for instance, a tarball, a PyPI
package, etc. A plugin/driver maintainer team self-governs in order to
promote sharing, reuse, innovation, and release of the 'out-of-tree'
deliverable. No member of the core team is required to be involved in this
process, although core members of the Neutron team can participate in
whichever capacity is deemed necessary to facilitate out-of-tree development.
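
To illustrate the pattern, the snippet below is a minimal sketch of what
such an in-tree agent 'main' could look like; the package
``networking_foo`` and its ``l2_agent.main`` entry point are hypothetical
and only stand in for an actual vendor library.

::

    # Hypothetical in-tree agent 'main': all vendor-specific agent logic
    # lives in the out-of-tree 'networking_foo' library; this stub only
    # imports the agent code and starts it.
    import sys

    from networking_foo.agent import l2_agent  # out-of-tree vendor library


    def main():
        # Delegate entirely to the vendor library.
        sys.exit(l2_agent.main())


    if __name__ == '__main__':
        main()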

Below, the following strategies will be documented:

* Design and Development;
* Testing and Continuous Integration;
* Defect Management;
* Documentation.

This document will then provide a working example on how to contribute
new additions to Neutron.

Blueprint Spec Submission Strategy
----------------------------------

Provided contributors adhere to the above-mentioned development footprint,
they should not be required to follow the spec process for changes that
only affect their vendor integration and library. New contributions can
simply be submitted for code review, with the proviso that adequate
documentation and 3rd party CI is supplied at the time of the code
submission. For tracking purposes, the review itself can be tagged
with a Launchpad bug report. The bug should be marked as wishlist to
avoid complicating tracking of Neutron's primary deliverables. Design
documents can still be supplied in the form of RST documents, within the
same vendor library repo. If substantial changes to the common Neutron code
are required, a spec that targets common Neutron code will be required;
however, every case is different and a contributor is invited to seek
guidance from the Neutron core team as to what steps to follow, and whether
a spec or a bug report is better suited for what they need to deliver.
Once again, for submitting the integration module to the Neutron codebase,
no spec is required.

Development Strategy
--------------------

* The following elements are suggested to be contributed in the tree
  for plugins and drivers (called vendor integration hereinafter):

  * Data models
  * Extension definitions
  * Configuration files
  * Requirements file targeting vendor code

* Things that do not remain in the tree (called vendor library hereinafter):

  * Vendor specific logic
  * Associated unit tests

The idea here would be to provide in-tree the plugin/driver code that
implements an API, but have it delegate to out-of-tree code for
backend-specific interactions. The vendor integration will then typically
involve minor passthrough/parsing of parameters, minor handling of DB objects
as well as handling of responses, whereas the vendor library will do the
heavy lifting and implement the vendor-specific logic. The boundary between
the in-tree layer and the out-of-tree one should be defined by the contributor
while asking these types of questions:

* If something changes in my backend, do I need to alter the integration
  layer drastically? Clearly, the less impact there is, the better the
  separation being achieved.
* If I expose vendor details (e.g. protocols, auth, etc.), can I easily swap
  and replace the targeted backend (e.g. hardware with a newer version
  being supplied) without affecting the integration too much? Clearly, the
  more reusable the integration, the better the separation.

As mentioned above, the vendor code *must* be available publicly, and a git
repository makes the most sense. By doing so, the module itself can be made
accessible using a pip requirements file. This file should not be confused
with the Neutron requirements file that lists all common dependencies. Instead
it should be a file 'requirements.txt' that is located in neutron/plugins/pluginXXX/,
whose content is something along the lines of 'my_plugin_xxx_library>=X.Y.Z'.
Vendors are responsible for ensuring that their library does not depend on
libraries conflicting with global requirements, but it could depend on
libraries not included in the global requirements. Just as in Neutron's
main requirements.txt, it will be possible to pin the version of the vendor
library.
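
For example, a hypothetical plugin living under neutron/plugins/foo could
ship a requirements file along these lines (the package name and version
are placeholders)::

    # neutron/plugins/foo/requirements.txt
    networking-foo>=1.0.0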

For instance, a vendor integration module can become as simple as one that
performs only the following:

* Registering config options
* Registering the plugin class
* Registering the models
* Registering the extensions

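The sketch below shows what such a thin integration layer could look like
for a hypothetical ML2 mechanism driver: it registers config options and
delegates everything backend-specific to the out-of-tree library. The
package ``networking_foo``, its ``backend.FooClient`` class and the option
names are all made up and only illustrate the delegation pattern.

::

    # Hypothetical in-tree vendor integration for an ML2 mechanism driver.
    # All vendor-specific logic lives in the out-of-tree 'networking_foo'
    # library; this module only registers options and delegates calls.
    from oslo.config import cfg

    from neutron.plugins.ml2 import driver_api as api

    from networking_foo import backend  # out-of-tree vendor library

    foo_opts = [
        cfg.StrOpt('controller_address',
                   help='Address of the Foo backend controller'),
    ]
    cfg.CONF.register_opts(foo_opts, 'ml2_foo')


    class FooMechanismDriver(api.MechanismDriver):
        """Thin in-tree shim that delegates to the vendor library."""

        def initialize(self):
            self.client = backend.FooClient(
                cfg.CONF.ml2_foo.controller_address)

        def create_network_postcommit(self, context):
            # Pass-through: parameter handling here, heavy lifting in the
            # vendor library.
            self.client.create_network(context.current)
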
Testing Strategy
----------------

The testing process will be as follows:

* There will be no unit tests for plugins and drivers in the tree; the
  expectation is that contributors would run unit tests in their own external
  library (e.g. on StackForge, where a Jenkins setup comes for free). For unit
  tests that validate the vendor library, it is the responsibility of the
  vendor to choose whatever CI system they see fit to run them. There is no
  need or requirement to use OpenStack CI resources if they do not want to.
* 3rd Party CI will continue to validate vendor integration with Neutron via
  functional testing. 3rd Party CI is a communication mechanism. The objective
  of this mechanism is as follows:

  * it communicates to plugin/driver contributors when someone has contributed
    a change that is potentially breaking. It is then up to a given
    contributor maintaining the affected plugin to determine whether the
    failure is transient or real, and resolve the problem if it is.
  * it communicates to a patch author that they may be breaking a plugin/driver.
    If they have the time/energy/relationship with the maintainer of the
    plugin/driver in question, then they can (at their discretion) work to
    resolve the breakage.
  * it communicates to the community at large whether a given plugin/driver
    is being actively maintained.
  * A maintainer that is perceived to be responsive to failures in their
    3rd party CI jobs is likely to generate community goodwill.

Review and Defect Management Strategies
---------------------------------------

The usual process applies to the code that is part of OpenStack Neutron. More
precisely:

* Bugs that affect vendor code can be filed against the Neutron integration,
  if the integration code is at fault. Otherwise, the code maintainer may
  decide to fix a bug without oversight, and update their requirements file
  to target a new version of their vendor library. It makes sense to
  require 3rd party CI for a given plugin/driver to pass when changing their
  dependency before merging to any branch (i.e. both master and stable
  branches).
* Vendor specific code should follow the same review guidelines as any other
  code in the tree. However, the maintainer has flexibility to choose who
  can approve/merge changes in this repo.

Documentation Strategies
------------------------

It is the duty of the new contributor to provide working links that can be
referenced from the OpenStack upstream documentation.

#TODO(armax): provide more info, when available.

How-to
------

The how-to below assumes that the vendor library will be hosted on StackForge.
StackForge lets you tap into the entire OpenStack CI infrastructure and can be
a great place to start from to contribute your new or existing driver/plugin.
The list of steps below is somewhat the tl;dr version of what you can find
on http://docs.openstack.org/infra/manual/creators.html. They are meant to
be the bare minimum you have to complete in order to get off the ground.

* Create a public repository: this can be a personal github.com repo or any
  publicly available git repo, e.g. https://github.com/john-doe/foo.git. This
  would be a temporary buffer used to feed the StackForge one.
* Initialize the repository: if you are starting afresh, you may *optionally*
  want to use cookiecutter to get a skeleton project. You can learn how to use
  cookiecutter on https://github.com/openstack-dev/cookiecutter.
  If you want to build the repository from an existing Neutron module, you may
  want to skip this step now, build the history first (next step), and come
  back here to initialize the remainder of the repository with the other files
  generated by cookiecutter (like tox.ini, setup.cfg, setup.py, etc.).
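
  A possible invocation, assuming cookiecutter is not installed yet, is
  sketched below; the interactive prompts then generate the skeleton files
  mentioned above::

    pip install cookiecutter
    cookiecutter https://github.com/openstack-dev/cookiecutter.git
    # answer the prompts (project name, description, etc.)
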
* Building the history: if you are contributing an existing driver/plugin,
  you may want to preserve the existing history. If not, you can go to the
  next step. To import the history from an existing project this is what
  you need to do:

  * Clone a copy of the neutron repository to be manipulated.
  * Go into the Neutron repo to be changed.
  * Execute the script split.sh, available in ./tools, and follow the
    instructions.

  ::

    git clone https://github.com/openstack/neutron.git
    cd neutron
    ./tools/split.sh
    # Sit and wait for a while, or grab a cup of your favorite drink
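
  Note that split.sh expects two arguments, a file listing the files to
  export and the new project name (see the header of tools/split.sh). A
  hypothetical invocation for code living under neutron/plugins/foo could
  look like this::

    find neutron/plugins/foo -type f > /tmp/files_to_export
    ./tools/split.sh /tmp/files_to_export networking-foo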

  At this point you will have the project pruned of everything else but
  the files you want to export, with their history. The next steps are:

  * Add a remote that points to the repository created before.
  * (Optional) If the repository has already been initialized with
    cookiecutter, you need to pull first; if not, you can either push
    the existing commits/tags or apply and commit further changes to fix
    up the structure of the repo the way you see fit.
  * Finally, push commits and tags to the public repository.

  ::

    git remote add foo https://github.com/john-doe/foo.git
    git pull foo master # OPTIONAL, if foo is non-empty
    git push --all foo && git push --tags foo
* Create a StackForge repository: for this you need the help of the OpenStack
  infra team. It is worth noting that you only get one shot at creating the
  StackForge repository. This is the time you get to choose whether you want
  to start from a clean slate, or whether you want to import the repo created
  during the previous step. In the latter case, you can do so by specifying
  the upstream section for your project in project-config/gerrit/projects.yaml.
  Steps are documented on the
  `Project Creators Manual <http://docs.openstack.org/infra/manual/creators.html>`_.
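
  As a rough illustration only (the authoritative schema is defined by the
  project-config repository, and the project name below is made up), such an
  entry might look like::

    - project: stackforge/networking-foo
      description: Foo backend integration for Neutron
      upstream: https://github.com/john-doe/foo.git
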
* Ask for a Launchpad user to be assigned to the core team created. Steps are
  documented in
  `this section <http://docs.openstack.org/infra/manual/creators.html#update-the-gerrit-group-members>`_.
* Fix, fix, fix: at this point you have an external base to work on. You
  can develop against the new StackForge project, the same way you work
  with any other OpenStack project: you have pep8, docs, and python27 CI
  jobs that validate your patches when posted to Gerrit. For instance, one
  thing you would need to do is to define an entry point for your plugin
  or driver in your own setup.cfg, similar to how it is done
  `here <https://github.com/stackforge/networking-odl/blob/master/setup.cfg#L31>`_.
* Define an entry point for your plugin or driver in setup.cfg
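
  For example, a hypothetical ML2 mechanism driver shipped by a made-up
  networking_foo library could be exposed like this::

    [entry_points]
    neutron.ml2.mechanism_drivers =
        foo = networking_foo.ml2.mech_driver:FooMechanismDriver
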
* Create 3rd Party CI account: if you do not already have one, follow
  instructions for
  `3rd Party CI <http://ci.openstack.org/third_party.html>`_ to get one.
* TODO(armax): ...
The 'ODL ML2 Mechanism Driver' - example 1
------------------------------------------
* Create the StackForge repo: https://review.openstack.org/#/c/136854/
* TODO(armax): continue with adding meat on the bone here
The 'OVSvAPP Mechanism Driver' - example 2
------------------------------------------
* Create the StackForge repo: https://review.openstack.org/#/c/136091/
* Cookiecutter initial commit: https://review.openstack.org/#/c/141268/
* TODO(armax): continue with adding meat on the bone here


@@ -47,3 +47,5 @@ Grab the code::
.. include:: ../../../TESTING.rst
.. include:: ./contribute.rst

tools/split.sh (new executable file, 102 lines)

@@ -0,0 +1,102 @@
#!/bin/sh
#
# This script has been shamelessly copied and tweaked from original copy:
#
# https://github.com/openstack/oslo-incubator/blob/master/tools/graduate.sh
#
# Use this script to export a Neutron module to a separate git repo.
#
# You can call this script like so:
#
# ./split.sh <path to file containing list of files to export> <project name>
#
# The file should be a text file like the one below:
#
# /path/to/file/file1
# /path/to/file/file2
# ...
# /path/to/file/fileN
#
# Such a list can be generated with a command like this:
#
# find $path -type f # path is the base dir you want to list files for
set -ex
file_list_path="$1"
project_name="$2"
files_to_keep=$(cat $file_list_path)
# Build the grep pattern for ignoring files that we want to keep
keep_pattern="\($(echo $files_to_keep | sed -e 's/^/\^/' -e 's/ /\\|\^/g')\)"
# Prune all other files in every commit
pruner="git ls-files | grep -v \"$keep_pattern\" | git update-index --force-remove --stdin; git ls-files > /dev/stderr"
# Find all first commits with listed files and find a subset of them that
# predates all others
roots=""
for file in $files_to_keep; do
    file_root="$(git rev-list --reverse HEAD -- $file | head -n1)"
    fail=0
    for root in $roots; do
        if git merge-base --is-ancestor $root $file_root; then
            fail=1
            break
        elif ! git merge-base --is-ancestor $file_root $root; then
            new_roots="$new_roots $root"
        fi
    done
    if [ $fail -ne 1 ]; then
        roots="$new_roots $file_root"
    fi
done
# Purge all parents for those commits
set_roots="
if [ 1 -eq 0 $(for root in $roots; do echo " -o \"\$GIT_COMMIT\" = '$root' "; done) ]; then
echo '';
else
cat;
fi"
# Enhance git_commit_non_empty_tree to skip merges with:
# a) either two equal parents (commit that was about to land got purged as well
# as all commits on mainline);
# b) or with second parent being an ancestor to the first one (just as with a)
# but when there are some commits on mainline).
# In both cases drop second parent and let git_commit_non_empty_tree to decide
# if commit worth doing (most likely not).
skip_empty=$(cat << \EOF
if [ $# = 5 ] && git merge-base --is-ancestor $5 $3; then
    git_commit_non_empty_tree $1 -p $3
else
    git_commit_non_empty_tree "$@"
fi
EOF
)
# Filter out commits for unrelated files
echo "Pruning commits for unrelated files..."
git filter-branch \
    --index-filter "$pruner" \
    --parent-filter "$set_roots" \
    --commit-filter "$skip_empty" \
    --tag-name-filter cat \
    -- --all
# Generate the new .gitreview file
echo "Generating new .gitreview file..."
cat > .gitreview <<EOF
[gerrit]
host=review.openstack.org
port=29418
project=stackforge/${project_name}.git
EOF
git add . && git commit -m "Generated new .gitreview file for ${project_name}."
echo "Done."