Retire openstack-specs repo

As discussed in the TC meeting[1], the TC is retiring the
openstack-specs repo.

[1] https://meetings.opendev.org/meetings/tc/2021/tc.2021-06-17-15.00.log.html#l-98

Change-Id: Ieb37227e6b80a64ead680ece315973e2f040da6e
changes/63/796963/4
Ghanshyam Mann 2021-06-17 19:04:04 -05:00 committed by Ghanshyam Mann
parent 70a7c6d7dd
commit 51789f3b4f
33 changed files with 12 additions and 3607 deletions

.gitignore

@@ -1,51 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp


@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>


@@ -1,3 +0,0 @@
- project:
    templates:
      - openstack-specs-jobs


@@ -1,20 +0,0 @@
==================================
Contributing to: openstack-specs
==================================
If you would like to contribute to the development of OpenStack,
you must follow the steps in this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/openstack


@@ -1,3 +0,0 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode


@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc


@@ -1,23 +1,15 @@
 =====================================================
  OpenStack Cross-Project Specifications and Policies
 =====================================================
+This project is no longer maintained.
-This repository contains specifications and policies that apply to
-OpenStack as a whole.
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
-.. note:: The OpenStack Cross-Project specification process has been
-   deprecated in favor of `OpenStack-wide Goals
-   <https://governance.openstack.org/tc/goals/index.html>`__ and
-   `OpenStack SIGs <https://wiki.openstack.org/wiki/OpenStack_SIGs>`__.
-   The documents found here are still useful as historical artifacts,
-   but at this time the specifications are not actionable.
+The OpenStack Cross-Project specification process has been deprecated
+in favor of `OpenStack-wide Goals <https://governance.openstack.org/tc/goals/index.html>`__,
+`OpenStack SIGs <https://wiki.openstack.org/wiki/OpenStack_SIGs>`__, and
+`OpenStack Popup-teams <https://governance.openstack.org/tc/reference/popup-teams.html>`__.
-This work is licensed under a `Creative Commons Attribution 3.0
-Unported License
-<http://creativecommons.org/licenses/by/3.0/legalcode>`__.
-The source files are available via the openstack/openstack-specs git
-repository at http://git.openstack.org/cgit/openstack/openstack-specs.
-Published versions of approved specifications and policies can be
-found at http://specs.openstack.org/openstack/openstack-specs.
+For any further questions, please email
+openstack-discuss@lists.openstack.org or join #openstack-dev on
+OFTC.


@@ -1,97 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'openstackdocstheme',
    'yasfb',
]
# Feed configuration for yasfb
feed_base_url = 'https://specs.openstack.org/openstack/openstack-specs'
feed_author = 'OpenStack Development Team'
exclude_patterns = [
    'template.rst',
]
# Optionally allow the use of sphinxcontrib.spelling to verify the
# spelling of the documents.
try:
    import sphinxcontrib.spelling
    extensions.append('sphinxcontrib.spelling')
except ImportError:
    pass
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'openstack-specs'
copyright = u'%s, OpenStack Foundation' % datetime.date.today().year
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- openstackdocstheme configuration -----------------------------------------
repository_name = 'openstack/openstack-specs'
html_theme = 'openstackdocs'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     u'%s Documentation' % project,
     u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}


@@ -1 +0,0 @@
.. include:: ../../CONTRIBUTING.rst


@@ -1,45 +0,0 @@
=====================================================
OpenStack Cross-Project Specifications and Policies
=====================================================
This repository contains specifications and policies that apply to
OpenStack as a whole.
This work is licensed under a `Creative Commons Attribution 3.0
Unported License
<http://creativecommons.org/licenses/by/3.0/legalcode>`__.
.. note:: The OpenStack Cross-Project specification process has been
   deprecated in favor of `OpenStack-wide Goals
   <https://governance.openstack.org/tc/goals/index.html>`__ and
   `OpenStack SIGs <https://wiki.openstack.org/wiki/OpenStack_SIGs>`__.
   The documents found here are still useful as historical artifacts,
   but at this time the specifications are not actionable.
Specifications
==============
.. toctree::
   :glob:
   :maxdepth: 1

   specs/*
Repository Information
======================
.. toctree::
   :maxdepth: 1

   readme
   contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@@ -1 +0,0 @@
.. include:: ../../README.rst


@@ -1 +0,0 @@
../../specs


@@ -1,3 +0,0 @@
pbr!=2.1.0,>=2.0.0 # Apache-2.0
openstackdocstheme>=2.0
yasfb>=0.5.1


@@ -1,13 +0,0 @@
[metadata]
name = openstack-specs
summary = OpenStack Cross-Project Specifications and Policies
description-file =
    README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Developers
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux


@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)


@@ -1,403 +0,0 @@
==========================================
Chronicles of a distributed lock manager
==========================================
No blueprint, this is intended as a reference/consensus document.
The various OpenStack projects have an ongoing requirement to perform
some set of actions in an atomic manner by some distributed set of
applications on some set of distributed resources **without** having those
resources end up in some corrupted state due to those actions being performed
on them without the traditional concept of `locking`_.
A `DLM`_ is one such concept/solution that can help (but not entirely
solve) these types of common resource manipulation patterns in distributed
systems. This specification will be an attempt at defining the problem
space, understanding what each project *currently* has done in regards of
creating its own `DLM`_-like entity and how we can make the situation better
by coming to consensus on a common solution that we can benefit from to
make everyone's lives (developers, operators and users of OpenStack
projects) that much better. Such a consensus being built will also
influence the future functionality and capabilities of OpenStack at large
so we need to be **especially** careful, thoughtful, and explicit here.
.. _DLM: https://en.wikipedia.org/wiki/Distributed_lock_manager
.. _locking: https://en.wikipedia.org/wiki/Lock_%28computer_science%29
Problem description
===================
Building distributed systems is **hard**. It is especially hard when the
distributed system (and the applications ``[X, Y, Z...]`` that compose the
parts of that system) manipulate mutable resources without the ability to do
so in a conflict-free, highly available, and
scalable manner (for example, application ``X`` on machine ``1`` resizes
volume ``A``, while application ``Y`` on machine ``2`` is writing files to
volume ``A``). Typically in local applications (running on a single
machine) these types of conflicts are avoided by using primitives provided
by the operating system (`pthreads`_ for example, or filesystem locks, or
other similar `CAS`_ like operations provided by the `processor instruction`_
set). In distributed systems these types of solutions do **not** work, so
alternatives have to either be invented or provided by some
other service (for example one of the many academia has created, such
as `raft`_ and/or other `paxos`_ variants, or services created
from these papers/concepts such as `zookeeper`_ or `chubby`_ or one of the
many `raft implementations`_ or the redis `redlock`_ algorithm). Sadly in
OpenStack this has meant that there are now multiple implementations/inventions
of such concepts (most using some variation of database locking), using
different techniques to achieve the defined goal (conflict-free, highly
available, and scalable manipulation of resources). To make things worse
some projects still desire to have this concept and have not reached the
point where it is needed (or they have reached this point but have been
unable to achieve consensus around an implementation and/or
direction). Overall this diversity, while nice for inventors and people
that like to explore these concepts, does **not** appear to be the best
solution we can provide to operators, developers inside the
community, deployers and other users of the now (and ever expanding) diverse
set of `OpenStack projects`_.
.. _redlock: http://redis.io/topics/distlock
.. _pthreads: http://man7.org/linux/man-pages/man7/pthreads.7.html
.. _CAS: https://en.wikipedia.org/wiki/Compare-and-swap
.. _processor instruction: http://www.felixcloutier.com/x86/CMPXCHG.html
.. _paxos: https://en.wikipedia.org/wiki/Paxos_%28computer_science%29
.. _raft: http://raftconsensus.github.io/
.. _zookeeper: https://en.wikipedia.org/wiki/Apache_ZooKeeper
.. _chubby: http://research.google.com/archive/chubby.html
.. _raft implementations: http://raftconsensus.github.io/#implementations
.. _OpenStack projects: http://git.openstack.org/cgit/openstack/\
governance/tree/reference/projects.yaml
What has been created
---------------------
To show the current diversity let's dive slightly into what *some* of the
projects have created and/or used to resolve the problems mentioned above.
Cinder
******
**Problem:**
Avoid multiple entities from manipulating the same volume resource(s)
at the same time while still being scalable and highly available.
**Solution:**
Currently limited to file locks and basic volume state transitions. This
limits the scalability and reliability of Cinder under failure/load; work has
been ongoing for a while to attempt to create a solution that will fix some of
these fundamental issues.
**Notes:**
- For further reading/details these links can/may offer more insight.
- https://review.openstack.org/#/c/149894/
- https://review.openstack.org/#/c/202615/
- https://etherpad.openstack.org/p/mitaka-cinder-volmgr-locks
- https://etherpad.openstack.org/p/mitaka-cinder-cvol-aa
- (and more)
Ironic
******
**Problem:**
Avoid multiple conductors from manipulating the same bare-metal
instances and/or nodes at the same time while still being scalable and
highly available.
Other required/implemented functionality:
* Track what services are running, supporting what drivers, and rebalance
work when service state changes (service discovery and rebalancing).
* Sync state of temporary agents instead of polling or heartbeats.
**Solution:**
Partition resources onto a hash-ring to allow for ownership to be scaled
out among many conductors as needed. To avoid entities in that hash-ring
from manipulating the same resource/node that they both may co-own a database
lock is used to ensure single ownership. Actions taken on nodes are performed
after the lock (shared or exclusive) has been obtained (a `state machine`_
built using `automaton`_ also helps ensure only valid transitions
are performed).
**Notes:**
- Has logic for shared and exclusive locks and provisions for upgrading
a shared lock to an exclusive lock as needed (only one exclusive lock
on a given row/key may exist at the same time).
- Reclaim/take over lock mechanism via periodic heartbeats into the
database (reclaiming is apparently a manual and clunky process).
**Code/doc references:**
- Some of the current issues listed at `pluggable-locking`_.
- `Etcd`_ proposed @ `179965`_. I believe this further validates the view
  that we need a consensus on a uniform solution around DLM (vs continually
  having projects implement whatever suits their fancy/flavor of the week).
- https://github.com/openstack/ironic/blob/master/ironic/conductor/task_manager.py#L20
- https://github.com/openstack/ironic/blob/master/ironic/conductor/task_manager.py#L222
.. _state machine: http://docs.openstack.org/developer/ironic/dev/states.html
.. _automaton: http://docs.openstack.org/developer/automaton/
.. _179965: https://review.openstack.org/#/c/179965
.. _Etcd: https://github.com/coreos/etcd
.. _pluggable-locking: https://blueprints.launchpad.net/ironic/+spec/pluggable-locking
Heat
****
**Problem:**
Multiple engines working on the same stack (or nested stack of). The
ongoing convergence rework may change this state of the world (so in the
future the problem space might be slightly different, but the concept
of requiring locks on resources will still exist).
**Solution:**
Lock a stack using a database lock and disallow other engines
from working on that same stack (or a stack nested inside of it),
using expiry/staleness to allow other engines to claim a potentially
lost lock after a period of time.
**Notes:**
- Liveness of stack lock not easy to determine? For example is an engine
just taking a long time working on a stack, has the engine had a network
partition from the database but is still operational, or has the engine
really died?
- To resolve this, a combination of an ``oslo.messaging`` ping is used to
  determine when a lock may be dead (or the owner of it is dead); if an
  engine is non-responsive to pings/pongs after a period of time (and its
  associated database entry has expired) then stealing is allowed to occur.
- Lacks *simple* introspection capabilities? For example it is necessary
to examine the database or log files to determine who is trying to acquire
the lock, how long they have waited and so on.
- Lock releasing may fail (which is highly undesirable, *IMHO* it should
**never** be possible to fail releasing a lock); implementation does not
automatically release locks on application crash/disconnect/other but relies
on ping/pongs and database updating (each operation in this
complex 'stealing dance' may fail or be problematic, and therefore is not
especially simple).
**Code/doc references:**
- http://docs.openstack.org/developer/heat/_modules/heat/engine/stack_lock.html
- https://github.com/openstack/heat/blob/master/heat/engine/resource.py#L1307
Ceilometer and Sahara
*********************
**Problem:**
Distributing tasks across central agents.
**Solution:**
Token ring based on `tooz`_.
**Notes:**
Your project here
*****************
Solution analysis
=================
The proposed change would be to choose one of the following:

- Select a distributed lock manager (one that is opensource) and integrate
  it *deeply* into openstack, work with the community that owns it to
  resolve any issues (or fix any found bugs) and use it for lock management
  functionality and service discovery...

- Select an API (likely `tooz`_) that will be backed by capable
  distributed lock manager(s) and integrate it *deeply* into openstack and
  use it for lock management functionality and service discovery...

  * `zookeeper`_ (`community respected
    analysis <https://aphyr.com/posts/291-call-me-maybe-zookeeper>`__)
  * `consul`_ (`community respected
    analysis <https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul>`__)
  * `etcd`_ (`community respected
    analysis <https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul>`__)
Zookeeper
---------
Summary:
Age: around 8 years
* Changelog was created in svn repository on aug 27, 2007.
License: Apache License 2.0
Approximate community size:
Features (overview):
- `Zab`_ based (paxos variant)
- Reliable filesystem like-storage (see `zk data model`_)
- Mature (and widely used) python client (via `kazoo`_)
- Mature shell/REPL interface (via `zkshell`_)
- Ephemeral nodes (filesystem entries that are tied to presence
of their creator)
- Self-cleaning trees (implemented in 3.5.0 via
https://issues.apache.org/jira/browse/ZOOKEEPER-2163)
- Dynamic reconfiguration (making upgrades/membership changes that
much easier to get right)
- https://zookeeper.apache.org/doc/trunk/zookeeperReconfig.html
Operability:
- Rolling restarts < 3.5.0 (to allow for upgrades to happen)
- Starting >= 3.5.0, 'rolling restarts' are no longer needed (see
mention of dynamic reconfiguration above)
- Java stack experience required
Language written in: java
.. _kazoo: http://kazoo.readthedocs.org/
.. _zkshell: https://pypi.python.org/pypi/zk_shell/
.. _zk data model: http://zookeeper.apache.org/doc/\
trunk/zookeeperProgrammers.html#ch_zkDataModel
.. _Zab: https://web.stanford.edu/class/cs347/reading/zab.pdf
Packaged: yes (at least on ubuntu and fedora)
* http://packages.ubuntu.com/trusty/java/zookeeperd
* https://apps.fedoraproject.org/packages/zookeeper
Consul
------
Summary:
Age: around 1.5 years
* Repository changelog denotes added in april 2014.
License: Mozilla Public License, version 2.0
Approximate community size:
Features (overview):
- Raft based
- DNS interface
- HTTP interface
- Reliable K/V storage
- Suited for multi-datacenter usage
- Python client (via `python-consul`_)
.. _python-consul: https://pypi.python.org/pypi/python-consul
.. _consul: https://www.consul.io/
Operability:
* Go stack experience required
Language written in: go
Packaged: somewhat (at least on ubuntu and fedora)
* Ppa at https://launchpad.net/~bcandrea/+archive/ubuntu/consul
* https://admin.fedoraproject.org/pkgdb/package/consul/ (?)
etcd
-----
Summary:
Age: Around 1.09 years old
License: Apache License 2.0
Approximate community size:
Features (overview):
Language written in: go
Operability:
* Go stack experience required
Packaged: ?
Proposed change
===============
Place all functionality behind `tooz`_ (as much as possible) and let the
operator choose which implementation to use. Do note that functionality that
is not possible in all backends (for example consul provides a `DNS`_ interface
that complements its HTTP REST interface) will not be able to be exposed
through a `tooz`_ API, so this may limit the features a developer using
`tooz`_ can implement.
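For illustration, a minimal sketch of taking a lock through `tooz`_ (the
backend URL, member id, and lock name here are arbitrary examples)::

  from tooz import coordination

  coordinator = coordination.get_coordinator(
      'zookeeper://127.0.0.1:2181', b'cinder-volume-host-1')
  coordinator.start()
  # Only one member may hold the named lock at a time, regardless of
  # which process or host that member runs on; the operator changes
  # the DLM by changing the backend URL, not this code.
  with coordinator.get_lock(b'volume-A'):
      pass  # manipulate the shared resource here
  coordinator.stop()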
Compliance: further details about what each `tooz`_ driver must
conform to (in regard to how it operates, what functionality it must support,
and under what consistency, availability, and partition tolerance scheme
it must operate under) will be detailed at: `240645`_
It is expected as the result of `240645`_ that
certain existing `tooz`_ drivers will be deprecated and eventually removed
after a given number of cycles (due to their inherent inability to meet the
policy constraints created by that specification) so that the quality
and consistency of their operating policy can be guaranteed (this guarantee
reduces the divergence in implementations that makes plugins that much
harder to diagnose, debug, and validate).
.. Note::

   Do note that the `tooz`_ alternative which needs to be understood
   is that `tooz`_ is a tiny layer around the solutions mentioned above, which
   is an admirable goal (I guess I can say this since I helped make that
   library) but it does favor pluggability over picking one solution and
   making it better. This is obviously a trade-off that must IMHO **not** be
   ignored (since ``X`` solutions mean that it becomes that much harder to
   diagnose and fix upstream issues because ``X - Y`` solutions may not have
   the issue in the first place); TLDR: pluggability comes at a cost.
.. _DNS: http://www.consul.io/docs/agent/dns.html
.. _tooz: http://docs.openstack.org/developer/tooz/
.. _240645: https://review.openstack.org/#/c/240645/
Implementation
==============
Assignee(s)
-----------
- All the reviewers, code creators, PTL(s) of OpenStack?
Work Items
----------
Dependencies
============
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Mitaka
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode


@@ -1,123 +0,0 @@
==========================================
CLI Sorting Argument Guidelines
==========================================
To varying degrees, the REST APIs for various projects support sort keys
and sort directions; these sorting options are exposed as python client
arguments. This specification defines the syntax for these arguments so
that there is consistency across clients.
Problem description
===================
Different projects have implemented the CLI sorting options in different
ways. For example:
- Nova: --sort key1[:dir1],key2[:dir2]
- Cinder: --sort_key <key> --sort_dir <dir>
- Ironic: --sort-key <key> --sort-dir <dir>
- Neutron: --sort-key <key1> --sort-dir <dir1>
  --sort-key <key2> --sort-dir <dir2>
- Glance (under review): --sort-key <key1> --sort-key <key2> --sort-dir <dir>
Proposed change
===============
Based on mailing list feedback (see References sections), the consensus is to
follow the syntax that nova currently implements: --sort <key>[:<direction>]
Where the --sort parameter is comma-separated and used to specify one or more
sort keys and directions. A sort direction is optionally appended to each key
and is either 'asc' for ascending or 'desc' for descending.
For example:
* nova list --sort display_name
* nova list --sort display_name,vm_state
* nova list --sort display_name:asc,vm_state:desc
* nova list --sort display_name,vm_state:asc
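A sketch of how a client might parse such a ``--sort`` value into
(key, direction) pairs follows; the 'asc' fallback is an assumption here,
since each project may choose its own default direction::

  def parse_sort_param(sort):
      """Parse 'key1[:dir1],key2[:dir2]' into (key, direction) pairs."""
      pairs = []
      for part in sort.split(','):
          key, _, direction = part.partition(':')
          pairs.append((key, direction or 'asc'))
      return pairs

  # parse_sort_param('display_name:asc,vm_state:desc')
  # => [('display_name', 'asc'), ('vm_state', 'desc')]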
Unfortunately, the REST APIs for each project support sorting to different
degrees:
- Nova and Neutron: Multiple sort keys and multiple sort directions
- Cinder and Ironic: Single sort key and single sort direction (Note: an
  approved Kilo spec in Cinder adds multiple key and direction
  support)
- Glance: Multiple sort keys and single sort direction
In the event that the corresponding REST APIs do not support multiple sort
keys and multiple sort directions, the client may:
- Support a single key and direction
- Support multiple keys and directions and implement any remaining sorting
in the client
Alternatives
------------
Each sort key and associated direction could be supplied independently, for
example::

  --sort-key key1 --sort-dir dir1 --sort-key key2 --sort-dir dir2
Implementation
==============
Assignee(s)
-----------
Primary assignee:
* Cinder: Steven Kaufer (kaufer)
* Glance: Mike Fedosin (mfedosin)
Work Items
----------
Cinder:

* Deprecate --sort_key and --sort_dir and add support for --sort
* Note that the Cinder REST API currently supports only a single sort key
  and direction so the CLI will have the same restriction; this restriction
  can be lifted once the following is implemented:
  https://blueprints.launchpad.net/cinder/+spec/cinder-pagination
Ironic/Neutron:
* Deprecate --sort-key and --sort-dir and add support for --sort
Glance:
* Modify the existing patch set to adopt the --sort parameter:
https://review.openstack.org/#/c/120777/
* Note that Glance supports multiple sort keys but only a single sort
direction.
Dependencies
============
- Cinder BP for multiple sort keys and directions:
https://blueprints.launchpad.net/cinder/+spec/cinder-pagination
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Kilo
- Introduced
References
==========
- Nova review that implemented the --sort argument:
https://review.openstack.org/#/c/117591/
- Glance client review: https://review.openstack.org/#/c/120777/
- http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg42854.html
- http://www.mail-archive.com/openstack-dev%40lists.openstack.org/msg42954.html
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode


@@ -1,146 +0,0 @@
================================
clouds.yaml support in clients
================================
`clouds.yaml` is a config file that facilitates consuming multiple
OpenStack clouds in a reasonable way. It is processed by the `os-client-config`
library, and is currently supported by `python-openstackclient`, `shade`,
`nodepool` and Ansible 2.0.
It should be supported across the board in our client utilities.
Problem description
===================
One of the goals of our efforts in OpenStack is interoperability between
clouds. Although there are several reasons that this is important, one of
them is to allow consumers to spread their workloads across multiple clouds.
Once a user has more than one cloud, dealing with credentials for the tasks of
selecting a specific cloud to operate on, or of performing actions across all
available clouds, becomes important.
Because the only auth information mechanism the OpenStack project has provided
so far, `openrc`, is targeted towards a single cloud, projects have
attempted to deal with the problem in a myriad of different ways that do not
carry over to each other.
Although `python-openstackclient` supports `clouds.yaml` cloud definitions,
there are still some functions not yet exposed in `python-openstackclient` and
cloud users sometimes have to fall back to the legacy client utilities. That
means that even though `python-openstackclient` allows the user to manage
their clouds simply, the problem of dealing with piles of `openrc` files
remains, making it a net loss complexity-wise.
Proposed change
===============
Each of the python client utilities that exist should use `os-client-config` to
process their input parameters. New projects that do not yet have a CLI
utility should use `python-openstackclient` instead, and should not write new
CLI utilities.
An example of migrating an existing utility to `os-client-config` can be seen
in https://review.openstack.org/#/c/236325/ which adds the support to
`python-neutronclient`. Since all of those need to migrate to `keystoneauth1`
anyway, and since `os-client-config` is well integrated with `keystoneauth1`,
it makes sense to do it as a single change.
This change will also add `OS_CLOUD` and `--os-cloud` as options supported
everywhere for selecting a named cloud from a collection of configured
cloud configurations.
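For reference, a minimal `clouds.yaml` defining one named cloud looks roughly
like this (all values are illustrative)::

  clouds:
    mycloud:
      auth:
        auth_url: https://identity.example.com:5000/v3
        username: demo
        password: secretpassword
        project_name: demo
        user_domain_name: Default
        project_domain_name: Default
      region_name: RegionOne

A client supporting this change could then be invoked as ``openstack
--os-cloud mycloud server list``, or with ``OS_CLOUD=mycloud`` exported in
the environment.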
Horizon should add a 'Download clouds.yaml' link where the 'Download openrc'
link is.
Reach out to the ecosystem of client utilities and libraries to suggest adding
support for consuming `clouds.yaml` files.
`gophercloud` https://github.com/rackspace/gophercloud/issues/487 has been
contacted already, but `libcloud`, `fog`, `jclouds`, or any other
framework that is in the Getting Started guide should at least be contacted
about adding support.
It should be pointed out that `os-client-config` does not require the use of
or existence of `clouds.yaml` and the traditional `openrc` environment
variables will continue to work as always.
http://inaugust.com/posts/multi-cloud-with-python-openstackclient.html is
a walkthrough on what life looks like in a world of `os-client-config` and
`python-openstackclient`.
Alternatives
------------
Using `envdir` has been suggested and is a good fit for direct user
consumption. However, some calling environments like `tox` and `ansible` make
communicating information from the calling context to the execution context
via environment variables clunkier than one would like. `python-neutronclient`
for instance has a `functional-creds.conf` that it writes out to avoid the
problems with environment variables and `tox`.
Just focus on `python-openstackclient`. While this is a wonderful future, it's
still the future. Adding `clouds.yaml` support to the existing clients gets us
a stronger bridge to the future state of everyone using
`python-openstackclient` for everything.
Use `oslo.config` as the basis of credentials configuration instead of yaml.
This was originally considered when `os-client-config` was being written, but
due to the advent of keystone auth plugins, it becomes important for some
use cases to have nested data structures, which is not particularly clean
to express in ini format.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
mordred
mordred is happy to do all of the work - but is also not territorial and if
elements of the work magically get done by happy friends, the world would be
a lovely place.
Work Items
----------
Not exhaustive, but should be close. Many projects provide openstackclient
extensions rather than their own client, so are covered already.
* Add support to python-barbicanclient
* Add support to python-ceilometerclient
* Add support to python-cinderclient
* Add support to python-designateclient
* Add support to python-glanceclient
* Add support to python-heatclient
* Add support to python-ironicclient
* Add support to python-keystoneclient
* Add support to python-magnumclient
* Add support to python-manilaclient
* Add support to python-neutronclient
* Add support to python-novaclient
* Add support to python-saharaclient
* Add support to python-swiftclient
* Add download link to horizon
Dependencies
============
None
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Mitaka
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode


@@ -1,130 +0,0 @@
============
CORS Support
============
The W3C has released a Technical Recommendation (TR) by which an API may
permit a user agent - usually a web browser - to selectively break the
`same-origin policy`_. This permits javascript running in the user agent to
access the API from domains, protocols, and ports that do not match the API
itself. This TR is called Cross Origin Resource Sharing (CORS_).
This specification details how CORS_ is implemented and supported across
OpenStack's services.
Problem description
===================
User Agents (browsers), in order to limit Cross-Site Scripting exploits, do
not permit access to an API that does not match the hostname, protocol, and
port from which the javascript itself is hosted. For example, if a user
agent's javascript is hosted at `https://example.com:443/`, and tries to access
openstack ironic at `https://example.com:6354/`, it would not be permitted to
do so because the ports do not match. This is called the `same-origin policy`_.
The `default ports`_ for most openstack services (excluding horizon) are not
the ports commonly used by user agents to access websites (80, 443). As such,
even if the services were hosted on the same domain and protocol, it would be
impossible for any user agent's application to access these services
directly, as any request would violate the above policy.
The current method of addressing this is to provide an API proxy, currently
part of the horizon project, which is accessible from the same location as
any javascript that might wish to access it. This additional code requires
additional maintenance for both upstream and downstream teams, and is largely
unnecessary.
This specification does *not* presume to require an additional configuration
step for operators for a 'default' install of OpenStack and its user
interface. Horizon currently maintains, and shall continue to maintain, its
own installation requirements.
This specification does *not* presume to set front-end application design
standards- rather it exists to expand the options that front-end teams have,
and allow them to make whatever choice makes the most sense for them.
This specification *does* provide a method by which teams, whether upstream or
downstream, can choose to implement additional user interfaces of their own. An
example use case may be Ironic, which may wish to ship an interface that can
live independently of horizon, for such users who do not want to install
additional components.
Proposed change
===============
All OpenStack APIs should implement a common middleware that implements CORS
in a reusable, optional fashion. This middleware must be well documented,
with security concerns highlighted, in order to properly educate the operator
community on their choices. The middleware must default to inactive, unless
it is activated either explicitly, or implicitly via a provided configuration.
`CORS Middleware`_ is available in oslo_middleware version 0.3.0. This
particular implementation defaults to inactive, unless appropriate configuration
options are detected in oslo_config, and its documentation already covers key
security concerns. Additional work would be required to add this middleware
to the appropriate services, and to add the necessary documentation to the
docs repository.
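For example, an operator might activate the middleware for a service with an
oslo_config stanza along these lines (the origin and header values are
examples only)::

  [cors]
  allowed_origin = https://ui.example.com
  allow_credentials = true
  allow_methods = GET,PUT,POST,DELETE
  allow_headers = X-Auth-Token,Content-Type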
Note that improperly implemented CORS_ support is a security concern, and
this should be highlighted in the documentation.
Alternatives
------------
One alternative is to provide a proxy, much like horizon's implementation,
or a well configured Apache mod_proxy. It would require additional documentation
that teaches UI development teams how to implement and build on it. These
options are already available and well documented, however they do not enable
experimentation or deployment of alternative UIs in the same way that CORS can,
since they require the UI to be hosted in the same endpoint. This requires
either close deployment cooperation, or deployment of a proxy-per-UI. CORS can
permit UIs to be deployed using static files, allowing much lower cost-of-entry
overheads.
Implementation
==============
Assignee
--------
Primary assignee:
Michael Krotscheck (krotscheck)
Work Items
----------
- Update Global Requirements to use oslo_middleware version 1.2.0 (complete)
- Propose `CORS Middleware`_ to OpenStack APIs that do not already support it.
This includes, but is not restricted to: Nova, Glance, Neutron, Cinder,
Keystone, Ceilometer, Heat, Trove, Sahara, and Ironic.
- Propose a refactor to use `CORS Middleware`_ for OpenStack APIs that already
support it via other means. This includes, but is not restricted to: Swift.
- Write documentation for CORS configuration.
- The authoritative content will live in the Cloud Admin Guide.
- The Security Guide will contain a comment and link to the Cloud Admin Guide.
Dependencies
============
- Depends on oslo_middleware version 1.2.0 (already in Global Requirements)
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Liberty
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
.. _CORS: http://www.w3.org/TR/cors/
.. _`default ports`: http://docs.openstack.org/juno/config-reference/content/firewalls-default-ports.html
.. _`Same-origin Policy`: http://en.wikipedia.org/wiki/Same-origin_policy
.. _`CORS Middleware`: http://docs.openstack.org/developer/oslo.middleware/cors.html


@@ -1,192 +0,0 @@
=========================
Deprecate Individual CLIs
=========================
https://blueprints.launchpad.net/+spec/deprecate-clis
Historically, each service has offered a CLI application that is included with
the python-\*client that provides administrative and user control over the
service. With the popularity of OpenStack Client and the majority of functions
having been implemented there, we should look to officially deprecate the
individual CLI applications.
This does not imply that the entire python-\*client will be deprecated, just
the CLI portion of the library. The python bindings are expected to continue to
work.
Problem description
===================
There is currently no standard workflow for interacting with OpenStack services
on the command line. In the beginning it made sense that there was a nova CLI
for working with nova. As keystone, glance and cinder split out they cloned
novaclient and adapted it to their needs. By the time neutron and the deluge of
big tent services came along there was a clear pattern that each service would
provide a CLI along with their library.
Given the common base and some strong persuasion there is at least a common
subset of parameters and environment variables that are accepted by all CLI
applications. However as new features come up such as YAML based configuration,
keystone's v3 authentication or SSL handling issues these must be addressed in
each project individually and the way these parameters are handled have
drifted or are supported to various levels.
This also creates a horrible user experience for those trying to interact with
the CLI as you have to continually switch between different formatting, command
structures, capabilities and requires a deep knowledge of which service is
responsible for different tasks - a pattern we have been trying to break in
favour of a unified OpenStack interface.
To deal with this the OpenStack client project has now been around for nearly 2
years. It provides a pluggable way to register CLI tasks and a common place to
fix security and usability issues.
Proposed change
===============
Whilst there has been general support for the OpenStack Client project and
support from individual services (it is the primary/supported CLI for keystone
and several newer services) there is no clear direction on whether our users
should use it or continue using the project specific CLIs. Similarly there is
no clear direction on whether developers should contribute new features in
services to the service specific CLI or to OpenStack client.
This blueprint proposes that as a community we ratify that OpenStack Client is
to be the default supported CLI application going forward. This will give
services the direction to deprecate the project CLIs and start pushing their
new features to OpenStack Client. It will give our documentation teams the
direction to start using OpenStack Client as the command for setting up
functionality.
Given that various projects currently have different needs from their CLI I do
not expect that we will be immediately able to deprecate all CLIs. There may be
certain tasks for which there will always need to be a project specific CLI.
The intent of this blueprint initially is not to provide a timeline or force
projects away from their own CLIs. Instead, it is to provide direction to start
deprecating the CLIs for which OpenStack Client already has functional
compatibility and properly start the deprecation process.
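To illustrate the kind of parity that already exists, here are a few
project-specific commands alongside their OpenStack Client equivalents (a
sampling, not an exhaustive mapping)::

  nova list           ->  openstack server list
  cinder list         ->  openstack volume list
  keystone user-list  ->  openstack user list
  glance image-list   ->  openstack image list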
Alternatives
------------
We could look at an oslo project that handles the common components of CLI
generation such that we could standardize parameters and handle client creation
in a cross service way. There may be an advantage to doing this anyway as there
will likely always be tools that want to provide a CLI interface to an
OpenStack API that do not belong in OpenStack Client and these should remain
consistent.
Doing nothing is always an option. OpenStack client is steadily gaining
adoption naturally because it can quickly provide new features across a range
of services and so CLI deprecation may happen naturally over time. However
until then we must duplicate the effort of supporting features in multiple
places.
Implementation
==============
As with all OpenStack applications there will have to be a 2 cycle deprecation
period for all these tools.
There are multiple components to this spec and much of the work required will
have to be performed individually in each of the services and documentation
projects. The intention of this spec is to indicate to projects that this is
the direction of the community so we can figure out the project specific
requirements in those groups.
Assignee(s)
-----------
Primary assignee:
jamielennox
dtroyer
Work Items
----------
- Add a deprecation warning to clients that have OpenStack Client equivalent
functionality.
- Update documentation to use OpenStack Client commands instead of project
specific CLIs (see Documentation Impact).
- Remove CLI components from CLIs after deprecation period complete.
Service Impact
--------------
For most CLI applications we must first start emitting deprecation warnings for
the CLI tools that ship with the deployment libraries.
For core services most functionality is already present and maintained in the
OpenStack Client repository so they would need to ensure feature parity however
they would typically not require any additional code.
As part of core functionality OSC currently supports:
- Nova
- Glance
- Cinder
- Swift
- Neutron
- Keystone
A number of additional projects have already implemented their CLI as an
OpenStack Client plugin. These projects will not be affected. Projects that
have not created plugins would need to implement a plugin that handles the
features they wish to expose via CLI.
Services that currently include an OpenStack Client plugin in their repository
include (but not limited to):
- Zaqar
- Sahara
- Designate
Documentation impact
--------------------
This will be a fairly fundamental change in the way we have communicated with
users to consume openstack and so will require significant documentation
changes.
This will include (but not limited to):
- Install Guides
- Admin Guide
- Ops Guide
- CLI Reference can be deprecated or redirected to OpenStack Client
documentation.
The OpenStack Client is already in use and deployed as a part of most
installations (it is required for keystone). Therefore changes to documentation
would not be dependent on any work happening in the services. The spec attempts
to ratify that this is the correct approach.
Dependencies
============
There have been many required steps to this goal such as os-client-config,
keystoneauth, cliff, stevedore and the work that has already gone into
OpenStack client. We are now at the point where we can move forward with the
change.
The OpenStack SDK is not listed as a dependency here because it is not
currently a dependency of OpenStack Client. It is intended that when OpenStack
SDK is released it will be consumed by OpenStack Client however that can be
considered an implementation detail.
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Mitaka
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode


@@ -1,242 +0,0 @@
========================================================
Enabling Python 3 for Integration and Functional Tests
========================================================
The 2.x series of the C Python interpreter on which OpenStack releases
through Kilo are built is reaching the end of its extended support
period, defined by the upstream developers. This spec describes
motivation for porting fully to Python 3 and some of the work we will
need to enable testing applications as they move to Python 3.
Problem description
===================
There are a lot of small motivations for moving to Python 3, including
better unicode support and new features in the language and standard
library. The primary motivation, however, is that Python 2 is reaching
its end-of-life for support from its developers.
Just as we expect our users to update to new versions of OpenStack in
order to continue to receive support, the python-dev team expects
users of the language to update to reasonably modern and supported
versions of the interpreter in order to receive bug and security
fixes. When Python 3 was introduced, the support period for Python 2
was extended beyond the normal length of time to allow projects plenty
of time to migrate, and to allow the python-dev team to receive
feedback to make changes to the language so that migration is
easier. That period is coming to an end, and we need to consider
migration seriously.
"Python 3.0 was released in 2008. The final 2.x version 2.7 release
came out in mid-2010, with a statement of extended support for this
end-of-life release. The 2.x branch will see no new major releases
after that. 3.x is under active development and has already seen
over five years of stable releases, including version 3.3 in 2012
and 3.4 in 2014. This means that all recent standard library
improvements, for example, are only available by default in Python
3.x." -- Python2orPython3_
That said, we cannot expect all of OpenStack to be ported at one
time. It's likely that we could not port everything in a single
release cycle, given the other work going on. So we need a way to
stage the porting work so that projects can port when they are ready,
without having to wait for any other projects to finish their ports.
Proposed change
===============
Our services communicate through REST APIs and the message bus. This
means they are decoupled enough that we can port them one at a time,
if our tools support running some services on Python 2 and some on
Python 3. Our unit test tool, tox, supports multiple Python versions
already, and in fact most of our library projects are testing under
Python 2.6, 2.7, and 3.4 today. Our integration tests, however, do not
yet support multiple Python versions, so that's the next step to take.
General Strategy
----------------
#. Update devstack to install apps with the "right" version of the
   interpreter.

   * Use the version declared to be supported by the project through
     its trove classifiers (an example appears after this list).

   * Allowing apps to be installed with the right version of the
     interpreter independently of other apps means we can port one
     app at a time.

#. Port each application to 3.4, but support both 2.7 and 3.4.

   * Set up an appropriate devstack-gate job using Python 3 as
     non-voting for projects when they start to port.

   * Make incremental changes to the applications until the non-voting
     job passes reliably, then update it to make it a voting job.

   * Technically there is no need to run integration tests for an
     application under both versions, since they only need to be
     deployed under one version at a time. However, different
     packagers and deployers may want to choose to wait to move to
     Python 3 and so we can continue to run the tests under both
     versions.
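As an example of the classifier-based declaration used in step 1, a project's
``setup.cfg`` might advertise (versions shown match the target discussed
below)::

  [metadata]
  classifier =
      Programming Language :: Python :: 2.7
      Programming Language :: Python :: 3.4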
.. note::

   Even after all applications are on 3.x, we need to maintain some
   python 2.7 support for client libraries and the Oslo libraries they
   use. We should consider the deprecation policy of Python 2 for the
   client libraries independently of porting the applications to 3.
Which version of Python to use?
-------------------------------
We have discussed this before, and it continues to be a moving
target. Version 3.4 seems to be our best goal for now.
- 3.0 - 3.2 are no longer actively supported
- 3.3 is not available on all distros
- **3.4 is (or soon will be) available on all distros**
- 3.5 is in beta and so is not ready for us to use, yet
Functional Tests for Libraries
------------------------------
Besides the functional and integration tests for applications, we also
have functional tests for libraries. I propose that we configure the
test jobs to run those only under Python 3, to avoid duplication and
expose porting issues that would have an impact on applications as
early as possible.
Alternatives
------------
Stay with C Python 2
~~~~~~~~~~~~~~~~~~~~
Commercial support is likely to be available from distros for longer
than it is available upstream, but even that will stop at some point.
Use PyPy or Another Implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some applications may benefit from PyPy's JIT compiler. It currently
supports 2.7.8 and 3.2.5, which means our Python 2 code would probably
run but code designed for Python 3.4 will not. I'm not aware of any
large deployments using PyPy to run services, so I'm not sure this is
really a problem. Given the expected long time frame for porting to
Python 3, it is likely that PyPy will be able to catch up to the
language level needed to run OpenStack by the time we are fully moved
to Python 3.
Wait for Python 3.5
~~~~~~~~~~~~~~~~~~~
Moving from 3.4 to 3.5 should require much less work than moving from
2.7 to 3.4. We can therefore start now, and monitor adoption of 3.5 by
distributions to decide whether to ultimately use 3.4 or a later
version.
Use Separate Virtualenvs in devstack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We have discussed installing applications into virtualenvs a couple of
times. Doing that is orthogonal to these proposed changes, since we
would still need to use the correct version of Python within the
virtualenv.
Functional tests for libraries on 2 and 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We could run parallel test jobs configured to run the functional tests
for libraries under both Python 2 and 3. This would largely duplicate
effort, though it might uncover some inconsistent handling of bytes
vs. strings. We shouldn't start out trying to do this, but if we do
uncover problems we can add more test jobs.
Implementation
==============
Assignee(s)
-----------
Primary assignee: Doug Hellmann
Work Items
----------
1. Update devstack to install pip for both Python 2 and Python 3.

2. Update devstack to look at the supported Python versions for a
   project, and choose the correct copy of pip to install it and its
   dependencies.

   This may be as simple as::

     python setup.py --classifiers | grep 'Language' | cut -f5 -d: | grep '\.'

3. When installing libraries from source using the ``LIBS_FROM_GIT``
   feature of devstack, ensure that the libraries are installed for
   both Python 2 and Python 3.

4. Begin porting applications to Python 3.

   * Unit tests can be run under Python 3 for applications just as
     they are for libraries, by enabling the appropriate job. Having
     the unit tests working with Python 3 is a good first step, before
     enabling the integration tests.

   * Integration tests can be run by submitting a patch updating the
     trove classifier.

   * Some projects will have dependencies blocking them from moving to
     Python 3 at first, and those should be tracked separately from
     this proposal.
Some functions in Oslo libraries have been identified as having
incompatibilities with Python 3. As these cases are reported, we will
need to decide, on a case-by-case basis, whether it is feasible to
create versions of those functions that work for both Python 2 and 3,
or if we will need to create some new APIs for use under Python 3 (see
``oslo_utils.encodeutils.safe_decode``,
``oslo_utils.strutils.mask_password``, and
``oslo_concurrency.processutils.execute`` as examples).
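As a reminder of how subtle these helpers are, ``mask_password`` scrubs
secrets out of free-form text, and its bytes-versus-text handling is exactly
where Python 2 and 3 diverge (output shown is approximate)::

  from oslo_utils import strutils

  # Recognized secret values are replaced with '***'.
  print(strutils.mask_password("'adminPass': 'hunter2'"))
  # expected: 'adminPass': '***'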
References
==========
- A proof-of-concept patch to devstack: https://review.openstack.org/181165
- Our notes about the state of Python 3 support:
https://wiki.openstack.org/wiki/Python3
- Advice from the python-dev community about choosing a Python
version: Python2orPython3_
- Summit discussions
- `Havana <https://etherpad.openstack.org/p/havana-python3>`__
- `Icehouse <https://etherpad.openstack.org/p/IcehousePypyPy3>`__
- `Juno <https://etherpad.openstack.org/p/juno-cross-project-future-of-python>`__
- Project-specific specs related to Python 3
- `Heat <http://specs.openstack.org/openstack/heat-specs/specs/liberty/heat-python34-support.html>`__
- `Keystone <https://review.openstack.org/#/c/177380/>`__
- `Neutron <https://review.openstack.org/#/c/172962/>`__
- `Nova <https://review.openstack.org/#/c/176868>`__
.. _Python2orPython3: https://wiki.python.org/moin/Python2orPython3
History
=======
.. list-table:: Revisions
:header-rows: 1
* - Release Name
- Description
* - Liberty
- Introduced
.. note::
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode


@@ -1,196 +0,0 @@
=========================
Eventlet Best Practices
=========================
No blueprint, this is intended as a reference document.
Eventlet is used in many of the OpenStack projects as the default concurrency
model, and there are some things we've learned about it over the years that
currently exist only as tribal knowledge. This is an attempt to codify those
in a central location for everyone's benefit.
It is worth noting that while there has been a push from some members of the
community to move away from eventlet entirely, there is currently no approved
plan to do so. Even if there were, it will likely take a long time to
implement, so eventlet will be something we have to care about for at least
the short and medium term.
Problem description
===================
In some ways eventlet behaves much differently from other concurrency models
and can even change the behavior of the Python standard library. This means
that scenarios exist where a bad interaction between eventlet and some other
code, often code that is not eventlet-aware, can cause problems. We need some
best practices that will minimize the potential for these issues to occur.
Proposed change
===============
Guidelines for using eventlet:
Monkey Patching
---------------
* When using eventlet.monkey_patch, do it first or not at all. In practice,
this means monkey patching in a top-level __init__.py which is guaranteed
to be run before any other project code. As an example, Nova monkey patches
in nova/cmd/__init__.py and nova/tests/unit/__init__.py so that in both the
runtime and test scenarios the monkey patching happens before any Nova code
executes.
The reasoning behind this is that unpatched stdlib modules may not play
nicely with eventlet monkey patched ones. For example, if thread A is
started, the application monkey patches, then starts thread B, now you've
mixed native threads and green threads and the results are undefined but
most likely bad.
It is not practical to expect developers to recognize all such
possible race conditions during development or review, and in fact it is
impossible because the race condition could be introduced by code we
consume from another library. Because of this, it is safest to
simply eliminate the races by monkey patching before any other code is run.
* Monkey patching should also be done in a way that allows services to run
without it, such as when an API service runs under Apache. This is the
reason for Nova not simply monkey patching in nova/__init__.py.
Another example is Keystone, which recommends running under Apache but also
supports eventlet. They have a separate eventlet binary 'keystone-all' which
handles monkey patching before running any other code. Note that
`eventlet is deprecated`_ in Keystone as of the Kilo cycle.
.. _`eventlet is deprecated`: http://lists.openstack.org/pipermail/openstack-dev/2015-February/057359.html
* Monkey patching with thread=False is likely to cause problems. This is done
conditionally in many services due to `problems running under a debugger`_
with the threading module monkey patched. Unfortunately, even simple
concurrency scenarios can result in deadlocks with this sort of setup. For
example, the following code provided by Josh Harlow will cause hangs::
  import eventlet
  eventlet.monkey_patch(os=False, thread=False)

  import threading
  import time

  thingy_lock = threading.Lock()

  def do_it():
      with thingy_lock:
          time.sleep(1)

  threads = []
  for i in range(0, 5):
      threads.append(eventlet.spawn(do_it))

  while threads:
      t = threads.pop()
      t.wait()
It is unclear at this time whether there is a way to enable debuggers and
also have a sane monkey patched environment. The `eventlet backdoor`_ was
mentioned as a possible alternative.
.. _`problems running under a debugger`: http://lists.openstack.org/pipermail/openstack-dev/2012-August/000693.html
.. _`eventlet backdoor`: http://lists.openstack.org/pipermail/openstack-dev/2012-August/000873.html
* Monkey patching can cause problems running flake8 with multiple workers.
If it does, the monkey patching can be made conditional based on an
environment variable that can be set during flake8 test runs. This should
not be a problem as monkey patching is not needed for flake8.
For example::
  import os

  if not os.environ.get('DISABLE_EVENTLET_PATCHING'):
      import eventlet
      eventlet.monkey_patch()
Even though os is being imported before monkey patching, this should be safe
as long as no other code is run before monkey patching occurs.
Greenthread-aware Modules
-------------------------
* There is a greenthread-aware subprocess module in eventlet, but it does
*not* get patched in by eventlet.monkey_patch. Code that has interactions
between green threads and the subprocess module must be sure to use the
green subprocess module explicitly. A simpler alternative is to use
processutils from oslo.concurrency, which selects the appropriate module
depending on the status of eventlet's monkey patching.
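A minimal sketch of the processutils alternative (the command run here is
arbitrary)::

    from oslo_concurrency import processutils

    # processutils picks eventlet's green subprocess module when the
    # environment is monkey patched, and the stdlib module otherwise.
    stdout, stderr = processutils.execute('uptime')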
Database Drivers
----------------
* Eventlet can cause deadlocks_ in some Python database drivers. The current
plan is to move our recommended and default driver_ to something that is more
eventlet-friendly.
.. _deadlocks: https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
.. _driver: https://wiki.openstack.org/wiki/PyMySQL_evaluation#MySQL_DB_Drivers_Comparison
Tools for Ensuring Monkey Patch Sanity
--------------------------------------
* The oslo.utils project has an eventletutils_ module that can help ensure
proper monkey patching for code that knows what it needs patched. This
could, for example, be used to raise a warning when a service is run under
a debugger without threading patched. At least that way the user will have
a clue what is wrong if deadlocks occur.
.. _eventletutils: http://docs.openstack.org/developer/oslo.utils/api/eventletutils.html
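For instance, a service could surface such a warning with a check along these
lines (the message wording is illustrative)::

    import warnings

    from oslo_utils import eventletutils

    # Warn early if the threading module was left unpatched, e.g. when
    # running under a debugger with thread=False.
    if not eventletutils.is_monkey_patched('thread'):
        warnings.warn('threading is not monkey patched; mixing green and '
                      'native locks may deadlock')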
Alternatives