Retire development-proposals (step 3)

The Product WG has been abandoned; this repo can be retired.
The content is available at
http://specs.openstack.org/openstack/development-proposals/

Depends-On: https://review.openstack.org/600644
Change-Id: I257aab9bfe8066131266ca4c431c0c257417a8aa
Author: Andreas Jaeger 2018-09-07 06:51:52 +02:00
parent 23ff6aea85
commit 2a733e13eb
134 changed files with 10 additions and 7088 deletions

.coveragerc

@@ -1,7 +0,0 @@
[run]
branch = True
source = openstack-user-stories
omit = openstack-user-stories/openstack/*
[report]
ignore_errors = True

.gitignore

@@ -1,54 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
.eggs
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?

.mailmap

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

.testr.conf

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
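The `${VAR:-default}` expansions in the `test_command` above substitute a fallback value when the variable is unset or empty, so the capture and timeout settings can be overridden from the environment. A quick illustration of that shell behavior:

```shell
# ${VAR:-default} falls back to the default when VAR is unset or empty.
unset OS_TEST_TIMEOUT
echo "timeout=${OS_TEST_TIMEOUT:-60}"      # prints "timeout=60"

# An exported value takes precedence over the default.
OS_TEST_TIMEOUT=120
echo "timeout=${OS_TEST_TIMEOUT:-60}"      # prints "timeout=120"
```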

CONTRIBUTING.rst

@@ -1,5 +0,0 @@
Contributing to openstack-user-stories
======================================
Please see `HACKING.rst <HACKING.rst>`_

HACKING.rst

@@ -1,78 +0,0 @@
openstack-user-stories Style Commandments
===============================================
- **Step 1**: Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/
- **Step 2**: Read the following Product WG documents in recommended order:
- `Product WG Wiki/Overview Page <https://wiki.openstack.org/wiki/ProductTeam#Mission>`_
- `Product WG Taxonomy <doc/source/workflow/taxonomy.rst>`_
- `Product WG Workflow <doc/source/workflow/workflow.rst>`_
- **Step 3**: Read HACKING.rst (this doc)
Product WG User Story Submission Process
----------------------------------------
- Follow instructions at `First Timers Documentation
<https://wiki.openstack.org/wiki/Documentation/HowTo/FirstTimers>`_
- This page will help you configure your local environment
- Clone openstack-user-stories using "git clone
https://github.com/openstack/openstack-user-stories" and create a branch
using "git checkout -b <branch>".
- Document your user story using our `approved template <user-story-template.rst>`_
- Include use cases that reference the `official OpenStack UX Personas <http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html>`_.
- Save the user story in the "user-stories/proposed" directory with a unique name
- For the initial submission of a user story, expect a robust discussion
around the intent, value, and content of your story
- Subsequent commits after general agreement should be more bite-sized in
conformance with `general OpenStack best practices <https://wiki.openstack.org/wiki/GitCommitMessages#Structural_split_of_changes>`_.
- Document relevant information about the submission in a JSON tracker. Here is
a link to the `tracker template <user-story-tracker.json>`_.
- For more information on the tracker format, you can read the `Tracker
Overview <doc/source/tracker_overview.rst>`_
- Save the tracker file (with as much information as you are able to provide)
in the "trackers" directory using the same unique name as the user-story but
ending with a .json extension
- A member of the Product WG might contact you to review the submission using
the email address associated with your Gerrit user.
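The submission steps above boil down to a small git workflow. A sketch against a local scratch repository so it runs offline; in practice you would start from the `git clone` URL given above, and the `unique-name` file names here are placeholders:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
# In practice: git clone https://github.com/openstack/openstack-user-stories
git init -q openstack-user-stories
cd openstack-user-stories
git config user.email you@example.com
git config user.name "Example User"

# Create a topic branch for the new submission.
git checkout -q -b my-user-story

# Story and tracker share a unique name, per the process above.
mkdir -p user-stories/proposed trackers
touch user-stories/proposed/unique-name.rst   # filled in from the template
touch trackers/unique-name.json               # filled in from the tracker template

git add -A
git commit -q -m "Add unique-name user story"
git rev-parse --abbrev-ref HEAD               # prints "my-user-story"
```

From here the real workflow would continue with `git review` to push the change to Gerrit.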
User Story Endorsement Process
------------------------------
A user story in the repository may be beneficial to multiple working
groups or community members. If you find a user story that applies to
you or your SIG (Special Interest Group), please endorse it.
Endorsement shows that the user story has importance to multiple
groups and can help it earn a higher priority.
- Follow instructions at `Documentation/HowTo/FirstTimers <https://wiki.openstack.org/wiki/Documentation/HowTo/FirstTimers>`_
- This page will help you configure your local environment
- Clone openstack-user-stories using "git clone
https://github.com/openstack/openstack-user-stories" and create a branch
using "git checkout -b <branch>"
- Find the associated tracker for the user story in the "trackers" directory
- Modify the 'supporting working groups' or 'supporting individual members'
fields accordingly
User Story Tracker
------------------------------
The tracker file for the user story has to be updated once a user story has
gone through the workflow and has blueprints associated with it.
- Follow instructions at `Documentation/HowTo/FirstTimers <https://wiki.openstack.org/wiki/Documentation/HowTo/FirstTimers>`_
- This page will help you configure your local environment
- Clone openstack-user-stories using "git
clone https://github.com/openstack/openstack-user-stories" and create a
branch using "git checkout -b <branch>"
- Find the associated tracker for the user story in the "trackers" directory
- Modify the 'blueprints/specs' section with the following information:
- Project Blueprint/Spec Pertains To
- Link to Blueprint/Spec
- Status of Blueprint/Spec
- The overall completion for a user story is determined by the number of
blueprints/specs in completed status versus the total number of
blueprints/specs submitted against the user story.
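That completion metric can be computed mechanically from a tracker file. A sketch against a hypothetical tracker fragment; the field names here are illustrative assumptions, not necessarily the WG's actual schema:

```shell
# Hypothetical tracker fragment; each "blueprints" entry carries a status.
cat > example-tracker.json <<'EOF'
{"blueprints": [
  {"project": "nova", "status": "completed"},
  {"project": "cinder", "status": "completed"},
  {"project": "swift", "status": "in-progress"},
  {"project": "glance", "status": "proposed"}
]}
EOF

# Completion = blueprints in completed status / all blueprints submitted.
completed=$(grep -c '"status": "completed"' example-tracker.json)
total=$(grep -c '"status":' example-tracker.json)
echo "completion: $((100 * completed / total))%"   # prints "completion: 50%"
```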

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

MANIFEST.in

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

README.rst

@@ -1,27 +1,13 @@
Removed:

OpenStack User Stories Status
=============================

This repository was the working space for the `OpenStack Product WG
<https://wiki.openstack.org/wiki/ProductTeam>`_ and contains user
stories and their associated trackers.

The OpenStack Product WG has been `abandoned in February 2018
<http://lists.openstack.org/pipermail/user-committee/2018-February/002599.html>`_,
this content is published for reference only.

Product WG / OpenStack User Stories Documentation
=================================================

The /doc/source/workflow directory contains details about the Product
WG process.

The `HACKING.rst <HACKING.rst>`_ file contains details on how to
contribute user stories.

:Product WG Taxonomy Overview: doc/source/workflow/taxonomy.rst
:Product WG Workflow: doc/source/workflow/workflow.rst
:Active User Story Template: user-story-template.rst
:Tracker Template: user-story-tracker.json

The rendered user stories are available in `OpenStack User Stories
<http://specs.openstack.org/openstack/development-proposals/>`_.

Added:

This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

The former content of this repository is published at:
http://specs.openstack.org/openstack/development-proposals/

For any further questions, please email
user-committee@lists.openstack.org or join #openstack-dev on
Freenode.
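The `git checkout HEAD^1` step in the retirement notice above can be tried safely on any repository. A self-contained sketch, with a local two-commit scratch repo standing in for the real clone:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "Example User"

# First commit: the pre-retirement content.
echo "full user-story content" > README.rst
git add README.rst && git commit -q -m "original content"

# Second commit: the retirement notice replaces the README.
echo "This project is no longer maintained." > README.rst
git add README.rst && git commit -q -m "Retire repository"

# Step back to the commit before retirement (HEAD^1 = first parent):
git checkout -q HEAD^1
cat README.rst                                # prints "full user-story content"
```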

babel.cfg

@@ -1,2 +0,0 @@
[python: **.py]

development-proposal-template.rst

@@ -1,192 +0,0 @@
.. This template should be in ReSTructured text. Please do not delete any of
.. the sections in this template. If you have nothing to say for a whole
.. section, just write: None. For help with syntax, see
.. http://sphinx-doc.org/rest.html You can also use an online RST editor at
.. rst.ninjs.org to generate proper RST.
The title of Development Proposal
=================================
.. In order to propose submitting a Development Proposal as a cross project
.. spec replace 'Cross Project Spec - None' with
.. 'Cross Project Spec - Ready for Submission',
.. after this change is accepted and merged then submit the Cross Project Spec
.. to the openstack/openstack-specs repository and replace 'Ready for
.. Submission' with a link to the review. After the Cross Project Spec is
.. merged, update this entry with the link to the spec as in
.. 'Cross Project Spec - <link>'.
.. Before proposing be sure to create and provide a link to the
.. Feature Tracker.
Cross Project Spec - None
Feature Tracker - None
Problem Overview
----------------
.. This section is mandatory.
.. Please use it to provide a detailed description of the problem that this
.. Development Proposal is trying to address. This should include the types of
.. functions that you expect to run on OpenStack and their interactions
.. both with OpenStack and with external systems.
Mandatory section. See rst comment in this document.
Opportunity/Justification
-------------------------
.. This section is mandatory.
.. Use this section to give opportunity details that support why
.. pursuing this development proposal would help address key barriers to
.. adoption or operation.
.. Some examples of information that might be included here are applicable
.. market segments, workloads, user bases, etc. and any associated data,
.. and any impact if such proposal/feature is not supported.
Mandatory section. See rst comment in this document.
Requirement Specification
-------------------------
Use Cases
+++++++++
.. This section is mandatory. You may submit multiple use cases in a single
.. submission as long as they are inter-related and can be associated with a
.. single epic and/or function. If the use cases are explaining goals that
.. fall under different epics/themes then please complete a separate submission
.. for each group of use cases.
.. Please provide a unique three character reference and three digit number for
.. each use case.
.. For example, CRM001, CRM002, etc, for use cases of Capacity Management.
.. Please describe as a list of use cases targeted at OpenStack UX Personas,
.. ideally in this or a similar format:
.. * XXX### As `<type of user>`_, I want to <goal> so that <benefit>
Mandatory section. See rst comment in this document.
This section utilizes the `OpenStack UX Personas`_.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _<type of user>: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/<type_of_user>
Usage Scenario Examples
+++++++++++++++++++++++
.. This section is mandatory.
.. In order to explain your use cases, if possible, provide an example in the
.. form of a scenario to show how the specified user type might interact with the
.. use case and what they might expect. An example of a usage scenario can be
.. found at http://agilemodeling.com/artifacts/usageScenario.htm of a currently
.. implemented or documented planned solution.
.. If you have multiple usage scenarios/examples (the more the merrier) you may
.. want to use a numbered list with a title for each one, like the following:
.. 1. Usage Scenario Title
.. i. 1st Step
.. ii. 2nd Step
.. 2. Usage Scenario Title
.. i. 1st Step
.. ii. 2nd Step
.. [...]
Mandatory section. See rst comment in this document.
Acceptance Criteria
+++++++++++++++++++
.. This section is mandatory
.. In order to define completed implementation of a development proposal,
.. provide detailed definitions of acceptance criteria for this proposal.
.. This should include where applicable the specific project set appropriate,
.. the user focused experience and in some cases references to types of
.. specific artifacts.
.. Please reference the use cases by three character and three number
.. references defined above.
.. Ex. CRM001 - All Interop Projects obtain tag "FOO"
Mandatory section. See rst comment in this document.
Related Development Proposals
+++++++++++++++++++++++++++++
.. If there are related Development Proposals that have some overlap in the
.. problem domain or that you perceive may partially share requirements or a
.. solution, reference them here.
.. N/A if there is none.
N/A.
Requirements
++++++++++++
.. It might be useful to specify additional requirements that should be
.. considered but may not be apparent through the use cases and usage examples.
.. This information will help the development be aware of any additional known
.. constraints that need to be met for adoption of the newly implemented
.. features/functionality.
.. Use this section to define the functions that must be available or any
.. specific technical requirements that exist in order to successfully support
.. your use case. If there are requirements that are external to OpenStack,
.. include them as such.
.. Please always add a comprehensible description to ensure that people
.. understand your need.
.. * 1st Requirement
.. * 2nd Requirement
.. * [...]
.. N/A if there is none.
N/A.
External References
+++++++++++++++++++
.. Please use this section to add references for standards or well-defined
.. mechanisms. You can also use this section to reference existing
.. functionality that fits the Development Proposal outside of OpenStack.
.. If any of your requirements specifically call for the implementation of a
.. standard or protocol or other well-defined mechanism,
.. use this section to list them.
.. N/A if there is none.
N/A.
Rejected Proposals
------------------
.. Please fill out this section after a Development Proposal has been submitted
.. as a cross project spec to highlight any proposal deemed out of scope of the
.. relevant cross project spec.
.. N/A if there is none.
N/A.
Glossary
--------
.. It is highly suggested that you define any terms, abbreviations that are not
.. commonly used in order to ensure that your Development Proposal is
.. understood properly.
.. Provide a list of acronyms, their expansions, and what they actually mean in
.. general language here. Define any terms that are specific to your problem
.. domain. If there are devices, appliances, or software stacks that you expect
.. to interact with OpenStack, list them here.
.. Remember: OpenStack is used for a large number of deployments, and the better
.. you communicate your Development Proposal, the more likely it is to be
.. considered by the community.
.. Examples:
.. **reST** reStructuredText is a simple markup language
.. **TLA** Three-Letter Abbreviation is an abbreviation consisting of three
.. letters
.. **xyz** Another example abbreviation
.. N/A if there is none.
N/A.

@@ -1,93 +0,0 @@
Add Use Cases to Code Snippets in Docs
======================================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
*Problem Definition*
++++++++++++++++++++
Operators have noted that it is easier to learn the commands associated with
the different OpenStack projects if examples, or usage snippets, were provided
for specific usage patterns. Some of the specific feedback we heard from
operators was:
* “You could add examples to the output of the CLI help commands, but I don't
think there was anything that was unclear enough, and that might actually
contribute to too much help clutter.”
* “Adding more examples in the documentation.”
* “It is a good client, adding examples to the help would be very helpful.”
The use case-based snippets provide two benefits to both operators and
application developers. First, the snippets help users understand how code
is structured for the various OpenStack projects. In addition, users are able
to re-use the snippets for their own needs. This is why the snippets should
be based on common use cases.
Opportunity/Justification
+++++++++++++++++++++++++
Use case-based snippets can significantly reduce the learning curve
associated with learning both the commands and structure associated for
the various projects. For example, operators mentioned during the
OpenStackClient usability studies in both Austin and Barcelona that it took
time to learn the command structure. However, they were able to efficiently
use the client once they became familiar with the structure.
Requirements Specification
--------------------------
Use Cases
+++++++++
These user stories utilize the standard `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I would like to quickly learn commands and
structure associated with each project. In addition, I would like to have
snippets for common use cases that I can modify for my own purposes.
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
Usage Scenario Examples
+++++++++++++++++++++++
#. Rey has decided to explore the OpenStackClient (OSC) as an alternative to
using the individual APIs
#. Rey opens the OpenStack documentation page
#. Rey opens the new Use Case chapter (page) and sees several snippets of
code based on common use cases
#. Rey now understands how the commands and structure are used to
complete common tasks
Note: the best analogy seems to be learning to read. A dictionary and an
overview of language structure are useful. However, actually seeing both
words and structure used in a sentence ties everything together in a
meaningful way.
Related User Stories
++++++++++++++++++++
None.
*Requirements*
++++++++++++++
None.
*External References*
+++++++++++++++++++++
* `Operator OpenStackClient Validation Study (Barcelona 2016)`_
* `Operator OpenStackClient Validation Study (Austin 2016)`_
.. _Operator OpenStackClient Validation Study (Barcelona 2016): https://docs.google.com/presentation/d/1K-XImqK4-ODUvA1dr9t2LiUGib54MMKh1ANJJ2pldhU/edit?usp=sharing
.. _Operator OpenStackClient Validation Study (Austin 2016): https://docs.google.com/presentation/d/19ef_3mG9p_G2ZsUcgTAj9hmOynxL5LAyQD7KlXIbYBU/edit?usp=sharing
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
None.
Glossary
--------
None.

@@ -1,129 +0,0 @@
Encrypted Storage
==================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
*Problem Definition*
++++++++++++++++++++
Enterprises typically have their own data classification strategies. The types
of data stored typically include (but are not limited to): financial, personal,
health, and confidential business data. Some enterprises (especially finance and
health care industries) have stringent data requirements in order to be
compliant with regional law and regulations.
For example, PCI DSS Requirement 3.4 states that credit card payment details must
be rendered unreadable anywhere they are persistently stored (including portable
digital media, backup media and logs). Applications (including database
back-ends) that interact with these classes of data need to be able to specify
encrypted storage requirements when an application is launched and interacts with
some of these data classes. The data must be encrypted both in flight as well as
at rest. The application should not require administrative privileges to specify
or access encrypted storage.
In addition, proper key management processes need to be in place. The keys used
to encrypt/decrypt the data must be rotated on a regular basis and the access of
keys are restricted to authorized personnel only.
Opportunity/Justification
+++++++++++++++++++++++++
None.
Requirements Specification
--------------------------
Use Cases
+++++++++
* As the Enterprise IT Manager, I must ensure the appropriate security for the
HR Department database containing employee records that services several
applications. I would like to migrate the database into our company's
OpenStack private cloud so that I do not need to maintain the system it
currently resides on. However, because of the sensitive nature of the
information in the database our company policy does not allow this
information to reside on any shared system in an unencrypted form. To be able
to move the database into the private cloud I need to ensure that the
persistently stored data and all data in flight to/from the instance is
encrypted. While the HR Department would like to have improved uptime for
their database, they are used to having to manually restart/reboot as needed
and can live with this in the cloud as well.
* I am the Enterprise IT manager for an insurance company. My company
maintains a database with insurers' credit card records for annual renewal
purposes. Our company would like to move the database into our OpenStack
private cloud. In order to comply with company data classification policy,
government law and financial regulations, I need to ensure that information
stored in the private cloud (including backups) is encrypted in flight and
at rest, and that keys used to encrypt the data are rotated annually.
Usage Scenarios Examples
++++++++++++++++++++++++
None.
Related User Stories
++++++++++++++++++++
* An application needs to be able to specify networking requirements
* An application needs to be able to specify workload isolation requirements
*Requirements*
++++++++++++++
* A block & object storage solution that enables encryption/decryption at the
instance source
* A block & object storage solution that enables encryption/decryption for
data at rest
* A method for the application to specify that it requires a block storage
system that includes encryption/decryption at the instance
* A method for rotating the key used to encrypt/decrypt the data after a
specific period of time
* OpenStack services to enforce the storage requirements for the application
* The application needs to be able to specify that it requires an encrypted
storage system that supports encryption/decryption at the instance, at
rest, or both.
*External References*
+++++++++++++++++++++
None.
*Gaps*
++++++
**Cinder issues:**
* The storage encryption functionality exists, but requires admin status.
Creating encrypted volumes should not require admin status.
* Encryption keys are set at creation time, however it is not clear how to
rotate the key(s), once a volume is in use. Rotating keys is a requirement of
many data storage standards.
**Swift issues:**
* At present an application is responsible for encrypting objects prior to
calling a PUT operation; Swift will store any object that is in the PUT
request. Swift should reject object PUTs that are tagged to be encrypted by
the application.
* A data-at-rest encryption solution is currently under development;
however, it does not address in-flight data.
*Affected By*
+++++++++++++
* At the Hong Kong summit there was `a talk`_ on barbican/cinder/nova for this
type of functionality. It is not clear whether it was ever successfully
integrated into OpenStack.
* There is a `volume encryption spec`_ for some early work and the
`current documentation`_ implies that admin privilege is required.
* There is an `object encryption spec`_ which documents the approach for object
encryption (at rest).
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
None.
Glossary
--------
* Data in Flight - Data in transit between an instance and storage system
* Data at Rest - Data stored persistently on a storage system
.. _a talk: https://www.openstack.org/summit/openstack-summit-hong-kong-2013/session-videos/presentation/encrypted-block-storage-technical-walkthrough
.. _current documentation: http://docs.openstack.org/liberty/config-reference/content/section_create-encrypted-volume-type.html
.. _volume encryption spec: https://wiki.openstack.org/wiki/VolumeEncryption
.. _object encryption spec: https://wiki.openstack.org/wiki/ObjectEncryption

View File

@ -1,67 +0,0 @@
Provide Config for External FW Appliance
========================================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
Problem Definition
++++++++++++++++++
As a deployer of an OpenStack cloud I have to provide a specific network
configuration file to my network security team in order to enable appropriate
traffic to my cloud. At the moment I have to cobble together this configuration
from my deployment configuration. It would be much easier to programmatically
generate the bulk of this information as part of the deployment process or from
an available OpenStack service on demand.
Opportunity/Justification
+++++++++++++++++++++++++
None.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I want to be able to access a configuration description that I
can provide to my network security team to properly configure any external
firewalls so that my users can quickly begin accessing the cloud.
* As the network security team for Rey, I want to be able to easily
consume the firewall configuration provided in a standard format, so that I
have minimal effort required to appropriately configure the firewall
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Usage Scenarios Examples
++++++++++++++++++++++++
1. Initial Cloud Deployment
a. Deploy cloud using deployment configuration
b. Access templated firewall configuration from OpenStack service
c. Provide configuration to internal network security team
d. Network security team easily interprets configuration and configures FW
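A generated description might look like the following hypothetical JSON fragment. The schema and field names are illustrative; no such standard format exists in OpenStack today, though the listed ports are the usual service defaults:

```json
{
    "cloud": "example-cloud",
    "generated_from": "deployment configuration",
    "rules": [
        {"service": "keystone", "port": 5000, "protocol": "tcp", "direction": "ingress"},
        {"service": "nova-api", "port": 8774, "protocol": "tcp", "direction": "ingress"},
        {"service": "horizon", "port": 443, "protocol": "tcp", "direction": "ingress"}
    ]
}
```

A machine-readable format like this is what would let the security team's own tooling consume the rules with minimal manual interpretation.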
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
None.
External References
+++++++++++++++++++
None.
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.

View File

@ -1,184 +0,0 @@
.. This template should be in ReSTructured text. Please do not delete any of
.. the sections in this template. If you have nothing to say for a whole
.. section, just write: None. For help with syntax, see
.. http://sphinx-doc.org/rest.html You can also use an online RST editor at
.. rst.ninjs.org to generate proper RST.
Lifecycle Management for VMs
============================
**Sections in** *italics* **are optional.**
.. In order to propose submitting a User Story as a cross project spec replace
.. 'Cross Project Spec - None' with 'Cross Project Spec - Ready for Submission'
.. after this change is accepted and merged then submit the Cross Project Spec
.. to the openstack/openstack-specs repository and replace 'Ready for
.. Submission' with a link to the review, and after merger of the Cross Project
.. spec with a link to the spec. Before proposing be sure to create and provide
.. a link to the User Story Tracker
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
*Problem Definition*
++++++++++++++++++++
.. This section is optional.
.. Please use it to provide additional details (if available) about your user story
.. (if warranted) for further expansion for clarity. A detailed description of the
.. problem. This should include the types of functions that you expect to run on
.. OpenStack and their interactions both with OpenStack and with external systems.
.. Please replace "None." with the problem description if you plan to use this
.. section.
From time to time, and for a variety of reasons, VMs that were created become
unused and linger in the system consuming resources. We need a mechanism to
detect that they are inactive and then clean them up.

By "clean them up" we mean: snapshot any needed data, send messages to any
applications as needed, log the appropriate information, then kill the VM and
free up its resources.

To address VM sprawl, apply a lifecycle model. When a tenant launches a VM,
they can specify the type of use case (e.g., development), which assigns a
lease length (e.g., 90 days); the tenant may have the ability to renew the
lease before it expires. If a lease expires, the VM is automatically deleted.
When deleted, all resources used by that VM are freed (CPU, memory, networks).

This will ensure efficient and valuable use of infrastructure resources.
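The lease model above can be sketched as a small piece of bookkeeping. The use cases, lease lengths, and class names are illustrative assumptions, not an existing OpenStack API:

```python
import datetime as dt

# Hypothetical lease lengths per declared use case (values are illustrative).
LEASE_DAYS = {"development": 90, "test": 30, "production": 365}

class Lease:
    def __init__(self, vm_id: str, use_case: str, now: dt.datetime):
        self.vm_id = vm_id
        self.expires = now + dt.timedelta(days=LEASE_DAYS[use_case])

    def renew(self, use_case: str, now: dt.datetime) -> None:
        """Extend the lease before it lapses."""
        self.expires = now + dt.timedelta(days=LEASE_DAYS[use_case])

def sweep(leases, now):
    """Return VMs whose lease has lapsed; a real reaper would snapshot,
    notify the application, log, then delete the VM and free its resources."""
    return [lease.vm_id for lease in leases if lease.expires <= now]

t0 = dt.datetime(2016, 1, 1)
leases = [Lease("vm-a", "development", t0), Lease("vm-b", "test", t0)]
leases[0].renew("development", t0 + dt.timedelta(days=80))  # vm-a renewed in time
expired = sweep(leases, t0 + dt.timedelta(days=100))
# vm-b's 30-day test lease lapsed; vm-a's renewed 90-day lease has not.
```

The sweep step is where the clean-up actions described above (snapshot, notify, log, delete) would hang off.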
Opportunity/Justification
+++++++++++++++++++++++++
.. This section is mandatory.
.. Use this section to give opportunity details that support why
.. pursuing these user stories would help address key barriers to adoption or
.. operation.
.. Some examples of information that might be included here are applicable market
.. segments, workloads, user bases, etc. and any associated data. Please replace
.. "None." with the appropriate data.
Elasticity is a key business driver for cloud and automating, or at
least simplifying, lifecycle management and resource reclamation will
provide enterprises with a cost-effective way to help maintain control
over resource sprawl. A second benefit will be that clouds running at
scale will operate more efficiently due to the cleanup of older database
records associated with instances.
Requirements Specification
--------------------------
Use Cases
+++++++++
.. This section is mandatory. You may submit multiple
.. user stories in a single submission as long as they are inter-related and can be
.. associated with a single epic and/or function. If the user stories are
.. explaining goals that fall under different epics/themes then please complete a
.. separate submission for each group of user stories. Please replace "None." with
.. the appropriate data.
.. A list of user stories ideally in this or a similar format:
.. * As a <type of user>, I want to <goal> so that <benefit>
* As the sys admin, I need to ensure the hygiene of VMs in my deployment. This
  includes cleaning up unused resources and dealing with stuck or orphaned VMs
  to free up capacity for other workloads to consume.
* As a Public Cloud operator, I have to be able to comply with Government
orders/investigations. This may require that I quarantine a VM (and
associated resources) or that I make a VM (and associated resources)
available to investigators for digital forensics.
Usage Scenario Examples
+++++++++++++++++++++++
.. This section is mandatory.
.. In order to explain your user stories, if possible, provide an example in the
.. form of a scenario to show how the specified user type might interact with the
.. user story and what they might expect. An example of a usage scenario can be
.. found at http://agilemodeling.com/artifacts/usageScenario.htm of a currently
.. implemented or documented planned solution. Please replace "None." with the
.. appropriate data.
.. If you have multiple usage scenarios/examples (the more the merrier) you may
.. want to use a numbered list with a title for each one, like the following:
.. 1. Usage Scenario Title a. 1st Step b. 2nd Step 2. Usage Scenario Title a. 1st
.. Step b. 2nd Step 3. [...]
TBD
Related User Stories
++++++++++++++++++++
.. This section is mandatory.
.. If there are related user stories that have some overlap in the problem domain or
.. that you perceive may partially share requirements or a solution, reference them
.. here.
* Life cycle management for Storage (does not exist yet)
* DB Hygiene (does not exist yet)
*Requirements*
++++++++++++++
.. This section is optional. It might be useful to specify
.. additional requirements that should be considered but may not be
.. apparent through the user story and usage examples. This information will help
.. the development be aware of any additional known constraints that need to be met
.. for adoption of the newly implemented features/functionality. Use this section
.. to define the functions that must be available or any specific technical
.. requirements that exist in order to successfully support your use case. If there
.. are requirements that are external to OpenStack, note them as such. Please
.. always add a comprehensible description to ensure that people understand your
.. need.
.. * 1st Requirement
.. * 2nd Requirement
.. * [...]
TBD
*External References*
+++++++++++++++++++++
.. This section is optional.
.. Please use this section to add references for standards or well-defined
.. mechanisms. You can also use this section to reference existing functionality
.. that fits your user story outside of OpenStack. If any of your requirements
.. specifically call for the implementation of a standard or protocol or other
.. well-defined mechanism, use this section to list them.
None.
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
.. This is optional
.. Please fill out this section after a User Story has been submitted as a
.. cross project spec to highlight any user stories deemed out of scope of the
.. relevant cross project spec.
None.
Glossary
--------
.. This section is optional.
.. It is highly suggested that you define any terms,
.. abbreviations that are not commonly used in order to ensure
.. that your user story is understood properly.
.. Provide a list of acronyms, their expansions, and what they actually mean in
.. general language here. Define any terms that are specific to your problem
.. domain. If there are devices, appliances, or software stacks that you expect to
.. interact with OpenStack, list them here.
.. Remember: OpenStack is used for a large number of deployments, and the better
.. you communicate your user story, the more likely it is to be considered by the
.. project teams and the product working group.
.. Examples:
.. **reST** reStructuredText is a simple markup language
.. **TLA** Three-Letter Abbreviation is an abbreviation consisting of three letters
.. **xyz** Another example abbreviation

View File

@ -1,247 +0,0 @@
.. This template should be in ReSTructured text. Please do not delete any of
.. the sections in this template. If you have nothing to say for a whole
.. section, just write: None. For help with syntax, see
.. http://sphinx-doc.org/rest.html You can also use an online RST editor at
.. rst.ninjs.org to generate proper RST.
Onboarding Management
=====================
**Sections in** *italics* **are optional.**
.. In order to propose submitting a User Story as a cross project spec replace
.. 'Cross Project Spec - None' with 'Cross Project Spec - Ready for Submission'
.. after this change is accepted and merged then submit the Cross Project Spec
.. to the openstack/openstack-specs repository and replace 'Ready for
.. Submission' with a link to the review, and after merger of the Cross Project
.. spec with a link to the spec. Before proposing be sure to create and provide
.. a link to the User Story Tracker
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
*Problem Definition*
++++++++++++++++++++
.. This section is optional.
.. Please use it to provide additional details (if available) about your user story
.. (if warranted) for further expansion for clarity. A detailed description of the
.. problem. This should include the types of functions that you expect to run on
.. OpenStack and their interactions both with OpenStack and with external systems.
.. Please replace "None." with the problem description if you plan to use this
.. section.
You have a number of physical hosts and virtual machines in your
infrastructure that you would like to manage with OpenStack:
* For each host that you wish to manage:
- Interrogate the hypervisor to obtain a list of all Virtual Machines
running in that host
- For each storage device attached to the host, obtain a list of all
volumes associated with each VM
- For each VM, obtain a list of all network interface addresses for
each VM
* The Onboarding process must be non-disruptive to the operation of the
host and the virtual machines running on the host
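The interrogation steps above can be sketched as a read-only inventory walk against a mocked hypervisor. All class and method names here are hypothetical, not a real driver interface:

```python
# Toy, read-only discovery pass; nothing here touches the running VMs,
# matching the non-disruptive requirement.
class MockHypervisor:
    def list_vms(self):
        return ["vm-1", "vm-2"]

    def list_volumes(self, vm):
        return {"vm-1": ["vol-a"], "vm-2": ["vol-b", "vol-c"]}[vm]

    def list_addresses(self, vm):
        return {"vm-1": ["10.0.0.5"], "vm-2": ["10.0.0.6"]}[vm]

def onboard(host_name, hypervisor):
    """Interrogate a host and collect, per VM, its volumes and network
    addresses so OpenStack database records can be created afterwards."""
    inventory = {"host": host_name, "vms": {}}
    for vm in hypervisor.list_vms():
        inventory["vms"][vm] = {
            "volumes": hypervisor.list_volumes(vm),
            "addresses": hypervisor.list_addresses(vm),
        }
    return inventory

inv = onboard("compute-01", MockHypervisor())
```

A real implementation would replace `MockHypervisor` with a driver for the legacy virtualization platform and feed the resulting inventory into Nova, Cinder, and Neutron records.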
Opportunity/Justification
+++++++++++++++++++++++++
.. This section is mandatory.
.. Use this section to give opportunity details that support why
.. pursuing these user stories would help address key barriers to adoption or
.. operation.
.. Some examples of information that might be included here are applicable market
.. segments, workloads, user bases, etc. and any associated data. Please replace
.. "None." with the appropriate data.
Enterprises with extensive legacy applications environments would like
to consolidate management of those environments through OpenStack. Due
to the nature of the applications and their value to the business, the
onboarding process needs to be non-disruptive and suitable for use at
scale.
The onboarding capability should work with any virtualization technology
that provides OpenStack APIs to manage the virtual machine configuration.

The ability to onboard legacy environments is widely desired by any
business that currently has a legacy IT environment and is using
OpenStack to manage new applications.
Support for onboarding legacy environments in a non-disruptive manner
will greatly increase the adoption of OpenStack.
Requirements Specification
--------------------------
Use Cases
+++++++++
.. This section is mandatory. You may submit multiple
.. user stories in a single submission as long as they are inter-related and can be
.. associated with a single epic and/or function. If the user stories are
.. explaining goals that fall under different epics/themes then please complete a
.. separate submission for each group of user stories. Please replace "None." with
.. the appropriate data.
.. A list of user stories ideally in this or a similar format:
.. * As a <type of user>, I want to <goal> so that <benefit>
* As the Enterprise IT manager that is deploying an OpenStack cloud
  alongside my existing infrastructure, I need to manage existing virtual
  machines' compute resources with OpenStack without disrupting or
  changing the virtual machines
* As the Enterprise IT manager that is deploying an OpenStack cloud
  alongside my existing infrastructure, I need to bring the block
  storage used by my existing virtual machines into Cinder without
  disrupting the operation of my existing virtual machines
* As the Enterprise IT manager that is deploying an OpenStack cloud
  alongside my existing infrastructure, I need to manage existing virtual
  machines' network resources without disrupting those virtual machines
Usage Scenario Examples
+++++++++++++++++++++++
.. This section is mandatory.
.. In order to explain your user stories, if possible, provide an example in the
.. form of a scenario to show how the specified user type might interact with the
.. user story and what they might expect. An example of a usage scenario can be
.. found at http://agilemodeling.com/artifacts/usageScenario.htm of a currently
.. implemented or documented planned solution. Please replace "None." with the
.. appropriate data.
.. If you have multiple usage scenarios/examples (the more the merrier) you may
.. want to use a numbered list with a title for each one, like the following:
.. 1. Usage Scenario Title a. 1st Step b. 2nd Step 2. Usage Scenario Title a. 1st
.. Step b. 2nd Step 3. [...]
1. Managing existing Virtual Machines
   a. For each physical host in a legacy virtualized server
      environment, obtain a list of all resources (compute, memory, block
      storage and network) for each virtual machine
   b. Create database entries for each virtual machine in OpenStack so
      that each of the legacy VMs is managed through OpenStack services
      such as Horizon, Nova, Cinder and Neutron.
Related User Stories
++++++++++++++++++++
.. This section is mandatory.
.. If there are related user stories that have some overlap in the problem domain or
.. that you perceive may partially share requirements or a solution, reference them
.. here.
* https://etherpad.openstack.org/p/kilo-cinder-summit-topics
* https://etherpad.openstack.org/p/kilo-neutron-summit-topics
* https://goo.gl/Y73xXS
* https://blueprints.launchpad.net/cinder/+spec/over-subscription-in-thin-provisioning
*Requirements*
++++++++++++++
.. This section is optional. It might be useful to specify
.. additional requirements that should be considered but may not be
.. apparent through the user story and usage examples. This information will help
.. the development be aware of any additional known constraints that need to be met
.. for adoption of the newly implemented features/functionality. Use this section
.. to define the functions that must be available or any specific technical
.. requirements that exist in order to successfully support your use case. If there
.. are requirements that are external to OpenStack, note them as such. Please
.. always add a comprehensible description to ensure that people understand your
.. need.
.. * 1st Requirement
.. * 2nd Requirement
.. * [...]
1. Onboarding must be non-disruptive to legacy environments such that
the applications, virtual machines and physical hosts should not need to
be restarted
2. OpenStack needs to at least tolerate Virtual Machine resource
configuration changes made by non-OpenStack management tools
after the VM has been onboarded into OpenStack.
The eventual goal is full synchronization between all resource
management tools.
Three phases of synchronization related to onboarding are:
Phase 1 - No synchronization - The move to OpenStack management is one
way only. No out-of-band/non-OpenStack management will be accommodated
by OpenStack.
Example: Nova would delete a VM that was migrated by a control
mechanism outside of OpenStack.
Phase 2 - OpenStack toleration - Management actions initiated outside of
OpenStack would be tolerated, and the OpenStack database would reflect
the changes in resources.
Example: in the case of a live migration, OpenStack would
accept that the VM had been moved to a different physical host.
Similar accommodation is needed for changes to storage volumes outside of
Cinder and networking changes outside of Neutron.
Phase 3 - Full synchronization - This would allow multiple management
control points to take action against managed resources and have the
changes reflected in all resource managers. Most important for VMware
environments.
Example: self-service provisioning initiated in OpenStack Horizon would
result in the new VMs also showing up in vCenter.
*External References*
+++++++++++++++++++++
.. This section is optional.
.. Please use this section to add references for standards or well-defined
.. mechanisms. You can also use this section to reference existing functionality
.. that fits your user story outside of OpenStack. If any of your requirements
.. specifically call for the implementation of a standard or protocol or other
.. well-defined mechanism, use this section to list them.
None.
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
.. This is optional
.. Please fill out this section after a User Story has been submitted as a
.. cross project spec to highlight any user stories deemed out of scope of the
.. relevant cross project spec.
None.
Glossary
--------
.. This section is optional.
.. It is highly suggested that you define any terms,
.. abbreviations that are not commonly used in order to ensure
.. that your user story is understood properly.
.. Provide a list of acronyms, their expansions, and what they actually mean in
.. general language here. Define any terms that are specific to your problem
.. domain. If there are devices, appliances, or software stacks that you expect to
.. interact with OpenStack, list them here.
.. Remember: OpenStack is used for a large number of deployments, and the better
.. you communicate your user story, the more likely it is to be considered by the
.. project teams and the product working group.
.. Examples:
.. **reST** reStructuredText is a simple markup language
.. **TLA** Three-Letter Abbreviation is an abbreviation consisting of three letters
.. **xyz** Another example abbreviation
**Hosts** Physical systems that contain a hypervisor allowing for
multiple virtual machines to run
**VM** Virtual Machines, each with their unique operating system,
processor, storage and network resources

View File

@ -1,101 +0,0 @@
Role Based Access
=================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
Problem Definition
++++++++++++++++++
OpenStack doesn't have a hierarchical permission structure that allows an
Operator to assign different permissions for different activities, or access
to resources, to different users.
Opportunity/Justification
+++++++++++++++++++++++++
Role Based Access is a basic Enterprise requirement. This capability enables
Enterprise IT Managers to set read and write permissions to different elements
of the IT infrastructure for different people/positions in the organization.
Enterprise security requires separate access UI/ API for Network, Security,
Storage management, User Management, and Instance management.
Requirements Specification
--------------------------
Use Cases
+++++++++
* As a cloud operator I want to enable my team to be able to see all Admin
level alerts, but not to be able to change their status. That requires review
and approval by the IT manager.
Usage Scenarios Examples
++++++++++++++++++++++++
None.
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
* Enterprise security requires separate access UI/ API for Network, Security,
Storage management, User Management, and Instance management.
* Includes grouping actions into roles, assigning roles to users, creating a
  hierarchy of roles, etc.
* OpenStack includes an enforcement piece of access control, but no management
piece.
* Very important that admin type roles need to be tested thoroughly because of
the code bleed-in where actions are exposed if "is-admin".
* As we expand the roles, we need some tool that can report on what access a
  newly defined role has - we want to make sure we don't inadvertently create
  a superuser problem (a newly created role inheriting such rights)
External References
+++++++++++++++++++
From looking at other solutions, generally there are 3 immutable system roles:
administrator, read-only and no-access, with support for specifying roles on
objects and their hierarchy. There is a notion of "folder", data center, host,
VMs on a host, disk, etc. Some actions are more or less atomic -- creating a
disk -- while others encompass multiple steps, needing a variety of
privileges. Thus the role that permits the complex action must contain the
full set of necessary privileges. For example, launching a VM needs access to
the datastore, OS image files and disks, and the ability to create them
and/or read an existing one.
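The observation that a complex action needs its full privilege set can be sketched as simple set containment. The privilege, role, and action names are illustrative, not Keystone's actual model:

```python
# Toy privilege model: a role is a set of fine-grained privileges, and an
# action is permitted only if the role covers every privilege it needs.
ACTIONS = {
    "read_disk": {"datastore.read"},
    "create_disk": {"datastore.write"},
    "launch_vm": {"datastore.read", "datastore.write", "image.read", "vm.create"},
}

ROLES = {
    "read_only": {"datastore.read", "image.read"},
    "administrator": {"datastore.read", "datastore.write",
                      "image.read", "vm.create"},
}

def allowed(role: str, action: str) -> bool:
    # Set containment: the role must hold every privilege the action requires.
    return ACTIONS[action] <= ROLES[role]

# read_only can inspect resources but cannot launch a VM, because launching
# is a multi-step action whose full privilege set it does not contain.
```

This is also why admin-type roles need careful testing: a role that happens to contain a superset of privileges silently permits every composite action built from them.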
Gaps
++++
**Keystone**
* Need to add a new role.
* Modify code that makes checks such as context admin, is-admin.
**All Projects**
* Need to review code in other projects to find hardcoded reference to Admin
and replace them with Keystone references.
* Modify the policy.json file to use this new role. Add test cases to confirm
behavior is as expected. Code in all projects need to be searched for
is-admin type tests and code modified to ensure that admin-read-only is
tested as necessary.
* One approach: Expose the policy.json files of each of the projects via
horizon and allow it to be modified.
* May need a bug to fix such as: is-admin evaluations to use only roles in the
code, towards making policy.json the true controller.
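As a hypothetical sketch of the policy.json change described above, assuming a new admin-read-only role exists in Keystone. The rule syntax follows the oslo.policy style; the volume:* targets resemble Cinder's policy targets but are illustrative:

```json
{
    "context_is_admin": "role:admin",
    "admin_or_readonly": "role:admin or role:admin-read-only",
    "volume:get_all": "rule:admin_or_readonly",
    "volume:delete": "rule:context_is_admin"
}
```

Read-style targets point at the shared alias, while mutating targets stay restricted to the full admin role, which is the separation the use case above calls for.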
**Horizon**
* To expose policy.json via Horizon will need to be allowed to only
cloud-admins, and any change checked for syntactic correctness at the least.
* Further Horizon today is "pulling" the policy files to determine which
buttons/links exposed to users to guide them down the correct path.
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.

View File

@ -1,71 +0,0 @@
Scheduler Simulator
===================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
Problem Definition
++++++++++++++++++
Cloud Operators are often confronted with the need to perform what-if
scenarios on proposed compute and block storage scheduler tweaks. As such they
often want access to a scheduler simulator that can make a series of "virtual"
requests against a specific scheduler configuration to see if the resulting
virtual machine placement matches their expected or desired outcome.
Opportunity/Justification
+++++++++++++++++++++++++
This user story is valuable to cloud operators because it allows them to tune
the scheduler without having to run the configurations in real world
environments.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I want to be able to simulate my cloud's scheduler with
a variety of virtual machine request loads under a given scheduler
configuration in order to determine the optimal configuration for my desired
outcome
* As Rey, I want to be able to visualize the simulated scheduling
of virtual machines onto hosts in my environment so that I can quickly
realize whether a given configuration will result in my desired outcome
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Usage Scenarios Examples
++++++++++++++++++++++++
1. Operator Runs Simulator
a. Operator defines scheduler configuration
b. Operator defines request load
c. Simulator places load on "virtual" cloud given configuration
d. Simulator provides results of that placement
e. Simulator provides visual output of placement
f. Operator determines if result is optimal and if not adjusts configuration
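The scenario above can be sketched as a minimal filter-and-weigh simulator. The real Nova scheduler has many more filters and weighers; this only shows the shape such a tool might take, and all names are illustrative:

```python
def simulate(hosts, requests, ram_weight=1.0):
    """hosts: {name: free_ram_mb}; requests: [ram_mb, ...].
    Returns (placement, remaining_free_ram) under the given configuration."""
    free = dict(hosts)
    placement = []
    for ram in requests:
        # Filter step: keep only hosts with enough free RAM.
        candidates = [h for h, r in free.items() if r >= ram]
        if not candidates:
            placement.append((ram, None))  # no valid host for this request
            continue
        # Weigh step: prefer the host with the most free RAM.
        best = max(candidates, key=lambda h: ram_weight * free[h])
        free[best] -= ram
        placement.append((ram, best))
    return placement, free

# "Virtual" request load against a two-node virtual cloud.
result, remaining = simulate({"node1": 4096, "node2": 8192},
                             [2048, 2048, 6144])
```

An operator would vary the weights and request mix, then inspect (or plot) the returned placement to judge whether the configuration spreads or packs load as intended.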
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
None.
External References
+++++++++++++++++++
None.
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.

View File

@ -1,200 +0,0 @@
Security Segregation
====================
Problem description
-------------------
Problem Definition
++++++++++++++++++
The goal of this use case is to present the need for a (partial) segregation
of physical resources to support the well-known classic separation of DMZ
and MZ, which is still needed by several applications (VNFs) and requested
by telco security rules.
The main driver therefore is that a vulnerability of a single system must not
affect further critical systems or endanger exposure of sensitive data. On the
one side the benefits of virtualization and automation techniques are mandatory
for telcos but on the other side telecommunication data and involved systems
must be protected by the highest level of security and comply with local
regulatory laws (which are often stricter than in the enterprise).
Placement Zones should act as multiple lines of defense against a security
breach. If a security breach happens in a placement zone, all other placement
zones and related VNFs must not be affected. This must be ensured by the design.
This use case affects all of the main OpenStack modules.
Opportunity/Justification
+++++++++++++++++++++++++
Separation of DMZ and MZ is a common requirement of VNFs to meet
communication service provider security requirements.
Requirements Specification
--------------------------
Use Cases
+++++++++
Current Situation
-----------------
Today the DMZ and MZ concept is an essential part of the security design
of nearly every telco application deployment. This separation is
achieved by a strict physical segregation including hosts, network and
management systems, which leads to high investment and
operational costs.
Enable the following
--------------------
Placement Zones should be used to reduce the risk that the whole cloud platform
is affected by a serious security breach.
If the hypervisor is breached, there is a risk to all of the VNFs on that
hypervisor. To reduce this risk, a physical separation of VMs not assigned to
the same security class is necessary. Placement Zones should be used to ensure
that only VMs following the same security classification will run on the same
group of physical hosts.
This should avoid a mix of VMs from different zones (e.g. DMZ and MZ), which
have different security requirements, running on the same group of hosts.
Therefore a host (or a group of hosts) must be classified and assigned to
only one placement zone. During the deployment process of a VM it must be
possible to assign it to one placement zone (or use the default one), which
automatically leads to a grouping of VMs.
The security separation within the network can be done on a logical layer
using the same physical elements but adding segregation through VLANs
(Virtual LAN), virtual firewalls and other overlay techniques.
The security separation for virtual machine storage can be done on a logical
layer. It must be ensured that a hypervisor belonging to a specific placement
zone can not access the storage of a different placement zone. Otherwise an
attacker could inject malicious code into the virtual disk of a VM.
Usage Scenarios Examples
++++++++++++++++++++++++
An application presentation layer (e.g. webserver) must be decoupled from
the systems containing sensitive data (e.g. database) through at least one
security enforcement device (e.g. virtual firewall) and a separation of
underlying infrastructure (hypervisor). The intent being to minimize the
likelihood of a breach of a component running in one zone resulting in a breach
of another component running in a separate zone.
*Potential candidate for Group Based Policy with Service Function Chaining from
a network perspective? GBP would need updates for this concept.*
https://wiki.openstack.org/wiki/File:TelcoWG_Placementzones.png
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
* One OpenStack installation must be capable of managing different placement
  zones. All resources (compute, network and storage) are assigned to one
  placement zone. By default, all resources are assigned to the "default"
  placement zone of OpenStack
* It must be possible to configure the allowed communication between
placement zones on the network layer
* SEC is a special placement zone - it provides the glue to connect the
placement zones on the network layer using VNFs. SEC VNFs may be attached to
resources of other placement zones
* Placement zone usage requires a permission (in SEC, tenants cannot start
  VMs; this zone supports only the deployment of XaaS services [FWaaS,
  LBaaS, ...])
* If placement zones are required in a cloud, VMs must be assigned to one
placement zone
* All resources, which are needed to run a VM must belong to the same placement
zone
* Physical hosts (compute nodes) must be able to be assigned to only one
  placement zone, and re-assigning should be possible due to changing
  utilization

  - Multiple simultaneous assignments must be restricted by the API
  - If a host is reassigned, it must first evacuate all existing VMs

* ...and the whole thing must be optional :-)
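A minimal sketch of enforcing the host-assignment rules above (one zone per host, zone-restricted scheduling, no re-assignment without evacuation). The registry API is illustrative, not an existing OpenStack interface:

```python
class ZoneRegistry:
    """Toy placement-zone bookkeeping: each host belongs to exactly one zone."""

    def __init__(self):
        self._host_zone = {}

    def assign_host(self, host: str, zone: str) -> None:
        if host in self._host_zone and self._host_zone[host] != zone:
            # Re-assignment is allowed in principle, but only after the host
            # has been evacuated; a bare re-assign must be rejected.
            raise ValueError("%s already in zone %s; evacuate before "
                             "re-assigning" % (host, self._host_zone[host]))
        self._host_zone[host] = zone

    def valid_hosts(self, vm_zone: str, hosts):
        """Scheduling filter: only hosts in the VM's own placement zone."""
        return [h for h in hosts
                if self._host_zone.get(h, "default") == vm_zone]

zones = ZoneRegistry()
zones.assign_host("host-1", "DMZ")
zones.assign_host("host-2", "MZ")
targets = zones.valid_hosts("DMZ", ["host-1", "host-2", "host-3"])
# Only host-1 qualifies; host-3 was never assigned and sits in "default".
```

In practice this filter would sit alongside the existing availability-zone/host-aggregate machinery discussed in the gaps below, with the API rejecting multiple assignments rather than leaving it to operator discipline.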
Gaps
++++
**Nova issues:**
* Usage of availability zones (AZ)/host aggregates to assign a VM to a
  placement zone is feasible [Ref. 1], but:
* By default a physical host can be assigned to multiple host aggregates
* It is up to the operator to ensure security using non OpenStack mechanisms
* Maybe Congress [Ref. 2] (Policy as a Service) could be a solution?
* Cells offer segregation of a compute environment in a manner that is
transparent to the end user, but do not in and of themselves allow the type of
explicit placement targeting a specific cell that would be required here.
**Neutron issues:**
* AZs or PZs, and Cells are not known to Neutron services
* It's up to the operator to ensure that the right networks are attached to VMs
**Cinder/Manila/Storage issues:**
* Storage can be segregated with volume-types
* AZs are not known to the storage services
* It must be ensured by the deployment tool that the right storage is accessible
**OpenStack regions** provide a segregation of all resources. The region concept
can be used to implement placement zones, but:
* Complex and resource consuming installation for the OpenStack management
systems
* Tenants must deal with additional regions
* No L2 network sharing for VMs in the SEC placement zone, which is required to
glue the zones together
* No real enforcement
* Complex operations
External References
+++++++++++++++++++
* [1]: http://docs.openstack.org/openstack-ops/content/scaling.html
* [2]: https://wiki.openstack.org/wiki/Congress
Glossary
--------
**AZ**
Availability Zone (OpenStack terminology)
**DMZ**
Demilitarized Zone provides access to the public network,
but adds an additional security layer (e.g. virtual firewall). Designed for
security critical customer facing services (e.g. customer control center).
**EHD**
Exposed Host Domain provides direct access from the public network (e.g.
Internet).
Designed for services which require a high traffic volume (e.g. CDN) and are
not security critical.
**MZ**
Militarized Zone is a logical network without any access from the public
network. Designed for systems without direct customer connectivity (e.g.
databases containing sensitive data) and high security demands.
**PZ**
Placement Zone is a concept to classify different security areas based on
different security requirements. PZs are separated on a per-host basis.
**SEC**
Secure Network Zone for all devices providing a security function including
devices providing connectivity between Placement Zones (e.g. virtual firewall
for DMZ-MZ traffic).
**VNF**
Virtual Network Function is an implementation of a functional building block
within a network infrastructure that can be deployed on a virtualization
infrastructure, such as an OpenStack-based cloud platform (a non-virtualized
network function is today often a physical appliance).
.. This template should be in ReSTructured text. Please do not delete any of
.. the sections in this template. If you have nothing to say for a whole
.. section, just write: None. For help with syntax, see
.. http://sphinx-doc.org/rest.html You can also use an online RST editor at
.. rst.ninjs.org to generate proper RST.
OpenStack Service Separation
============================
**Sections in** *italics* **are optional.**
.. In order to propose submitting a User Story as a cross project spec replace
.. 'Cross Project Spec - None' with 'Cross Project Spec - Ready for Submission'
.. after this change is accepted and merged then submit the Cross Project Spec
.. to the openstack/openstack-specs repository and replace 'Ready for
.. Submission' with a link to the review, and after merger of the Cross Project
.. spec with a link to the spec. Before proposing be sure to create and provide
.. a link to the User Story Tracker
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
*Problem Definition*
++++++++++++++++++++
.. This section is optional.
.. Please use it to provide additional details (if available) about your user story
.. (if warranted) for further expansion for clarity. A detailed description of the
.. problem. This should include the types of functions that you expect to run on
.. OpenStack and their interactions both with OpenStack and with external systems.
.. Please replace "None." with the problem description if you plan to use this
.. section.
We need separation between services to mitigate library conflicts. This way,
instead of doing an "openstack upgrade" we can do a "nova upgrade" today, a
"neutron upgrade" tomorrow, and so on. That approach simplifies things
tremendously, because a potential fallback is both easy and fast, and the
impact is much lower.
The biggest problems in upgradeable deployments are dependencies and the speed
of the upgrade. Dependencies mean that, in the case of shared libraries,
several services might require the same library, but often at different
versions between major OpenStack releases. This causes conflicts and forces an
“all or nothing” upgrade. Sometimes we would like to use several maintenance
windows to upgrade our deployment one service at a time. We also want these
maintenance windows to be as short as possible, hence the need for fast
deployment changes.
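The one-service-at-a-time upgrade with a cheap fallback can be sketched as a
toy model. ``upgrade_service`` and the health-check hook are invented for
illustration and are not part of any real deployment tool:

```python
def upgrade_service(deployed, service, new_version, health_check):
    """Upgrade a single service; roll back to the old version on failure.

    `deployed` maps service name -> running version.  Only the named service
    is touched, so other services (and their library versions) are unaffected.
    """
    old_version = deployed[service]
    deployed[service] = new_version
    if not health_check(service, new_version):
        deployed[service] = old_version   # easy, fast fallback; low impact
        return False
    return True


# Example: upgrade nova today; neutron stays untouched on its current version.
deployed = {"nova": "17.0.0", "neutron": "12.0.0"}
upgrade_service(deployed, "nova", "18.0.0", lambda svc, ver: True)
```

The rollback branch is what makes the per-service approach attractive: a failed
upgrade only reverts one service instead of the whole deployment.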
Opportunity/Justification
+++++++++++++++++++++++++
.. This section is mandatory.
.. Use this section to give opportunity details that support why
.. pursuing these user stories would help address key barriers to adoption or
.. operation.
.. Some examples of information that might be included here are applicable market
.. segments, workloads, user bases, etc. and any associated data. Please replace
.. "None." with the appropriate data.
None.
Requirements Specification
--------------------------
Use Cases
+++++++++
.. This section is mandatory. You may submit multiple
.. user stories in a single submission as long as they are inter-related and can be
.. associated with a single epic and/or function. If the user stories are
.. explaining goals that fall under different epics/themes then please complete a
.. separate submission for each group of user stories. Please replace "None." with
.. the appropriate data.
.. A list of user stories ideally in this or a similar format:
.. * As a <type of user>, I want to <goal> so that <benefit>
* As an OpenStack Operator, I select services to upgrade based upon the new
features that have been implemented and bugs fixed. This will lead to
different versions of services running together in my deployment. Different
services will be using different versions of system and 3rd-party libraries.
Usage Scenario Examples
+++++++++++++++++++++++
.. This section is mandatory.
.. In order to explain your user stories, if possible, provide an example in the
.. form of a scenario to show how the specified user type might interact with the
.. user story and what they might expect. An example of a usage scenario can be
.. found at http://agilemodeling.com/artifacts/usageScenario.htm of a currently
.. implemented or documented planned solution. Please replace "None." with the
.. appropriate data.
.. If you have multiple usage scenarios/examples (the more the merrier) you may
.. want to use a numbered list with a title for each one, like the following:
.. 1. Usage Scenario Title a. 1st Step b. 2nd Step 2. Usage Scenario Title a. 1st
.. Step b. 2nd Step 3. [...]
* Containers are a new way to deploy microservices. Ever heard of Docker? Who
didn't? We want to utilize containers' flexibility, speed of deployment, and,
most importantly, separation of services. With each service having its own
library base, we don't have conflicts, and we can upgrade just one service at
a time without disrupting the others.
* Kolla is a project which does exactly that: containerizing your OpenStack
services. By using Kolla you can configure the flexible containers the way you
have run your services for years, or, for those who want to start their
OpenStack adventure, use a quick and easy way to set up a production-ready,
upgradable OpenStack using Ansible.
Related User Stories
++++++++++++++++++++
.. This section is mandatory.
.. If there are related user stories that have some overlap in the problem domain or
.. that you perceive may partially share requirements or a solution, reference them
.. here.
None.
*Requirements*
++++++++++++++
.. This section is optional. It might be useful to specify
.. additional requirements that should be considered but may not be
.. apparent through the user story and usage examples. This information will help
.. the development be aware of any additional known constraints that need to be met
.. for adoption of the newly implemented features/functionality. Use this section
.. to define the functions that must be available or any specific technical
.. requirements that exist in order to successfully support your use case. If there
.. are requirements that are external to OpenStack, note them as such. Please
.. always add a comprehensible description to ensure that people understand your
.. need.
.. * 1st Requirement
.. * 2nd Requirement
.. * [...]
* Ability to upgrade 1 OpenStack service, without having to upgrade all OpenStack services
* Maintain overall system reliability when only 1 Service is upgraded
* Minimal or no downtime during upgrade
*External References*
+++++++++++++++++++++
.. This section is optional.
.. Please use this section to add references for standards or well-defined
.. mechanisms. You can also use this section to reference existing functionality
.. that fits your user story outside of OpenStack. If any of your requirements
.. specifically call for the implementation of a standard or protocol or other
.. well-defined mechanism, use this section to list them.
None.
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
.. This is optional
.. Please fill out this section after a User Story has been submitted as a
.. cross project spec to highlight any user stories deemed out of scope of the
.. relevant cross project spec.
None.
Glossary
--------
.. This section is optional.
.. It is highly suggested that you define any terms,
.. abbreviations that are not commonly used in order to ensure
.. that your user story is understood properly.
.. Provide a list of acronyms, their expansions, and what they actually mean in
.. general language here. Define any terms that are specific to your problem
.. domain. If there are devices, appliances, or software stacks that you expect to
.. interact with OpenStack, list them here.
.. Remember: OpenStack is used for a large number of deployments, and the better
.. you communicate your user story, the more likely it is to be considered by the
.. project teams and the product working group.
.. Examples:
.. **reST** reStructuredText is a simple markup language
.. **TLA** Three-Letter Abbreviation is an abbreviation consisting of three letters
.. **xyz** Another example abbreviation
Update Ask OpenStack
====================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
Problem description
+++++++++++++++++++
Operators of OpenStack environments lack a reliable source to find answers to
commonly encountered issues. The current ask.openstack.org site is not broadly
functional or well contributed to by the broader community.
Opportunity/Justification
+++++++++++++++++++++++++
A workable community knowledge base is a critical component of any successful
software. Even more so in free open-source projects. Having a simple place for
operators to go to acquire the collective community knowledge on a given topic
will significantly reduce the barrier to entry to utilizing OpenStack.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I want to be able to quickly find reliable
answers to common operational issues and questions so that I can continue to
support my users
* As Rey, I want to ask an uncommon question in a community forum and receive
a timely answer drawing on the community's expertise so that I can continue to
support my users
* As Rey, I want to see answers to previously asked uncommon questions and a
notion to the popularity of previous answers so that I can continue to
support my users
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Usage Scenarios Examples
++++++++++++++++++++++++
1. Common Question Usage - OpenStack Operator
a. Go to common question repository
b. Search for error code, topic, API call, etc.
c. Find most popular answer to common question
d. Add support for popular answer if it resolves issue
2. UnCommon Question Usage - OpenStack Operator
a. Go to common question repository
b. Search for error code, topic, API call, etc.
c. Discover no answers have been provided
d. Create new request, or add support for existing request
e. Receive notification that an answer has been submitted
f. Add support for answer if it resolves issue
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
None.
External References
+++++++++++++++++++
None.
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.
Bare Metal Service
==================
Cross Project Spec - None
Feature Tracker - None
Problem Overview
----------------
In order to support certain Enterprise Business Requirements, OpenStack must
be able to provision bare metal machines in a secure, multi-tenant, and
highly-available fashion, while providing the same integration with other
OpenStack services (such as volume storage, console access, etc) as it does
for virtual machines.
Some use cases for bare metal machines are:
#. Performance-sensitive applications that want to maximize efficiency, reduce
overhead from virtualization, and avoid CPU, Network, or IO fluctuations
from neighboring instances.
#. Security-sensitive applications, or applications with regulatory compliance
requirements that can not be run on shared hardware.
#. Applications whose licensing costs depend on the number of CPUs on the
host, regardless of whether virtualization is in play.
#. Applications that need direct IO access to specialized PCI devices which are
not yet virtualizable.
To support these use cases, we need:
#. Bare metal machine configuration: A bare metal machine can be configured
with a CPU specification, memory capacity, local storage drive type (such as
SATA or SSD) and its capacity, and network uplink bandwidth. InfiniBand or
RoCEE may be needed to achieve network performance.
#. Network Isolation: Networks for one tenant are isolated from other tenants.
#. Storage Service Integration: A bare metal machine can be connected with a
block device service such as Cinder. The bare metal machine connects to Cinder
backends dedicated to a single tenant. A tenant can also back up the internal
storage of a bare metal machine to an external block device managed by Cinder
and recover from it.
#. Console: A tenant can operate a bare metal machine from a console and see
the console log, integrated with the existing Horizon UI.
#. Unified VM/BM Management: Unified management of both VMs and BMs (Bare
metal machines) by software with the similar set of services/functionalities
can be provided to users such as FWaaS, LBaaS, VPNaaS, Security Group,
Block Storage, Backup, High Availability, Connection to VMs in virtual
network (VXLAN), and Console.
Opportunity/Justification
-------------------------
Cloud service providers want to support bare metal machine, but it is a tough
challenge to provide IaaS access to bare metal with the same elastic and
service-oriented properties as they do with virtual machines.
Requirement Specification
-------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Quinn the application developer: https://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/app-developer.html
*The best-matching persona seems to be `Quinn the application developer`_ at the time this proposal
is created.*
* BMT001 - As `Quinn the application developer`_, I want to use bare metal machine so that I
get consistent performance not affected by another machine, nor impacted by
hypervisor.
* BMT002 - As Quinn, I want to have a secure and clean bare
metal machine deployed no matter who used it before.
* BMT003 - As Quinn, I want to have secure and isolated networks so
that these networks are not affected by other tenants in the cloud.
* BMT004 - As Quinn, I want to back up the internal disk of a bare
metal machine and create a snapshot. This can be backed up to an external
storage managed by Cinder.
* BMT005 - As Quinn, I want to use bare metal machine integrated
with block storage service so that I can use external storage service.
* BMT006 - As Quinn, I want to see the bare metal machine's console
log and operate it from a console so that I can analyze problems at boot
time and so on.
* BMT007 - As Quinn, I want to continue my operation immediately
when a bare metal machine fails without any manual operations such as
switchover. Similar to `High Availability for Virtual Machines`_ user story,
the owner should not have to design the fail-over mechanism themselves.
The system should monitor and detect bare metal machine failure and
automatically fail-over to a spare bare metal machine.
* BMT008 - As Quinn, I want to use a bare metal machine with the
network services such as FWaaS, LBaaS, Security Group, VPNaaS, and
connection to VMs in virtual network(VXLAN) in the same manner of VMs.
Usage Scenario Examples
+++++++++++++++++++++++
1. Successful bare metal service
a. Quinn creates virtual network.
#. Quinn boots bare metal machine.
#. Quinn uses block storage from bare metal machine.
#. Quinn uses bare metal machine with consistent performance.
#. Analyze bare metal machine rebooting problem
a. Quinn can't connect to the bare metal machine remotely when
rebooting.
#. Quinn can see the state of the bare metal machine from the console log.
#. Quinn analyzes the boot problem and resolves the issue.
#. Quinn can boot successfully.
#. Bare metal machine data protection
a. Quinn backs up data in the bare metal machine.
#. Quinn restores from the backed-up data.
Related Development Proposals
++++++++++++++++++++++++++++++
* `High Availability for Virtual Machines <https://review.openstack.org/#/c/289469/>`_
Requirements
++++++++++++
N/A.
External References
+++++++++++++++++++
* `[RFE] [Ironic] Ironic Neutron ML2 Integration <https://bugs.launchpad.net/ironic/+bug/1526403>`_
* `[SPEC] [Ironic] Update of the Ironic Neutron Integration spec <https://review.openstack.org/#/c/188528/>`_
* `[RFE] [Ironic] VLAN Aware Baremetal Instances <https://bugs.launchpad.net/ironic/+bug/1543584>`_
* `[SPEC] [Ironic] VLAN Aware Baremetal Instances <https://review.openstack.org/#/c/277853>`_
* `[BP] [Nova] Tenant networking support for Ironic driver <https://blueprints.launchpad.net/nova/+spec/ironic-networks-support>`_
* `[SPEC] [Nova] Tenant networking support for Ironic driver <https://review.openstack.org/#/c/237067>`_
* `[RFE] [Ironic] Add volume connection information into ironic db <https://bugs.launchpad.net/ironic/+bug/1526231>`_
* `[SPEC] [Ironic] Volume connection information for Ironic nodes <https://review.openstack.org/#/c/200496/>`_
* `[BP] [Nova] Add support for Ironic nodes to boot from Cinder volume <https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume>`_
* `[RFE] [Ironic] Nova serial console support for Ironic <https://bugs.launchpad.net/ironic/+bug/1553083>`_
* `[SPEC] [Ironic] Nova serial console support <https://review.openstack.org/#/c/296869/>`_
* `[SPEC] [Ironic] Add nova-compatible-serial-console.rst to not-implemented <https://review.openstack.org/#/c/293827/>`_
* `[RFE] [Ironic] Bare metal node N+1 redundancy <https://bugs.launchpad.net/ironic/+bug/1526234>`_
* `[SPEC] [Ironic] Bare metal node N+1 redundancy <https://review.openstack.org/#/c/259320>`_
Rejected Proposals
------------------
N/A.
Glossary
--------
N/A.
Quotas, Usage Plans, and Capacity Management
============================================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
*Problem Definition*
++++++++++++++++++++
A canonical property of an IaaS system like OpenStack is “capacity on demand”.
Users expect to be able to allocate new resources via UI or API whenever needed,
and to release them when the need ends. By supporting a large number of users,
pooling resources, and maintaining some excess capacity, the cloud service provider
(CSP) presents the illusion of infinite capacity.
In practice, of course, the resources are not infinite, and the CSP must
institute measures to manage capacity so that resource exhaustion is minimized.
This is generally done by imposing a cap or quota on the resources that a
particular project may consume, and by managing the relationship between the
available physical resources and the aggregate quotas for all projects. When a
project requires more resources than its assigned quota, the user is typically
required to submit a request, which generally requires human approval. The CSP may
reject the request, or delay it until sufficient capacity is available. When
the request is approved, the quota for the project is modified to reflect the
new limit.
Other CSPs have introduced a number of mechanisms to provide them with
flexibility in managing capacity. These include group quotas (shared by related
projects), reserved instances, ephemeral instances (which may be reclaimed for
reallocation), and market-based allocation models. At the present time,
OpenStack does not support any of these.
One common factor in all these processes is that they do not reflect temporal
variations in resource usage. Yet in many cases the user knows how their usage
is going to vary over time, and such information would be useful to the CSP who
needs to decide how to handle each request. It might also facilitate the
automation of some of the processing. The following user stories capture the
possibilities here.
This user story is also applicable to Telcos / TSP (Telecommunication Service
Providers) users. There is a movement in the industry toward NFV (Network
Function Virtualization), where providers want to leverage the benefits of
cloud technologies and virtualization by deploying VNFs (virtual network
functions)
on industry standard high volume servers, switches and storage located in data
centers, network nodes and in end-user premises. The resource requirements
for these VNFs are described in the VNF Descriptor (VNFD) which is being
standardized under the aegis of ETSI NFV ISG [1] and OASIS TOSCA.
Opportunity/Justification
+++++++++++++++++++++++++
.. This section is mandatory.
.. Use this section to give opportunity details that support why
.. pursuing these user stories would help address key barriers to adoption or
.. operation.
.. Some examples of information that might be included here are applicable market
.. segments, workloads, user bases, etc. and any associated data. Please replace
.. "None." with the appropriate data.
CSP and TSP need to be able to efficiently manage and utilize the finite amount
of resources including their temporal characteristics. Current OpenStack
services do not allow for such flexible resource usage requests and scheduling
of resources for future usage. In particular:
* For high priority VNFs (e.g. mobile core network nodes) the TSP requires a
guarantee on the availability of the resources to run the VNFs in different
operational timing (e.g. in future) and scenarios.
.. * Further examples may be added by other stakeholders.
Requirements Specification
--------------------------
Use Cases
+++++++++
.. This section is mandatory. You may submit multiple
.. user stories in a single submission as long as they are inter-related and can be
.. associated with a single epic and/or function. If the user stories are
.. explaining goals that fall under different epics/themes then please complete a
.. separate submission for each group of user stories. Please replace "None." with
.. the appropriate data.
.. A list of user stories ideally in this or a similar format:
.. * XXX### As a <type of user>, I want to <goal> so that <benefit>
This section utilizes the `OpenStack UX Personas`_.
* CRM001 As `Wei the project owner of a Telco operator`_, I want to specify my
resource usage request (RUR) in a way that will enable automated processing by
the CSP, so that my RUR will be handled more quickly and accurately.
* CRM002 As `Adrian the infrastructure architect`_, I want to be able to
automate the processing of RURs so that I can meet my user SLAs and gain more
timely and accurate data input to my capacity management and planning systems.
* CRM003 As Wei, I want to be able to describe the temporal characteristics of
my RUR, so that the CSP can plan capacity more accurately and reduce the
chances of a resource request failure. My CSP may also offer me better pricing
for more accurate usage prediction. Some examples of time-based RURs:
a. I plan to use up to 60 vCPUs and 240GB of RAM from 6/1/2016 to 8/14/2016.
b. I plan to use 200GB of object storage starting on 8/14/2016, increasing by
100GB every calendar month thereafter.
c. I want guaranteed access to 30 vCPUs and 200GB of RAM for my project.
In addition, during October-December, I want to be able to increase my
usage to 150 vCPUs and 1TB of RAM.
d. I want guaranteed access to 4 instances with 1 vCPU and 1GB of RAM and 10GB
of disk and a guaranteed minimum bandwidth of 1Gbps between the instances.
This example is similar to what would be described in the VNFD.
* CRM004 As Wei, I want to be able to submit an updated version of a rolling RUR
for my project every month, so that my CSP has accurate information and can
give me the best price and SLA.
* CRM005 As Wei, I want to be able to take advantage of pricing and other offers
from my CSP in order to meet the business objectives for my project.
For example:
a. I want 60 vCPUs for a minimum of one hour. After that time, the CSP may
shut down all my instances if the resources are needed elsewhere. (I assume
that the price is lower on such instances.)
b. I want up to 100 vCPUs for the next 24 hours. Tell me how many I can have.
* CRM006 As Adrian, I want to be able to automate the construction and
interpretation of a time-based resource usage plan so that I can schedule the
most cost-effective actions to maintain my SLA. Some examples of actions:
a. Schedule the provisioning of additional infrastructure.
b. Repurpose existing allocated infrastructure.
c. Assign a new project to one of a number of regions based on usage
projections.
d. Add “burst capacity” from a federation partner or reseller.
e. Modify or defer another project.
* CRM007 As Wei, I want to be able to query/update/terminate a RUR at any point
in time.
* CRM008 As Wei, I want to receive an appropriate error message in case a
RUR is not successful. In case of a RUR failure, I want the environment to
be reverted to the pre-RUR state.
In other words, the RUR transaction should be atomic. In case of a RUR
failure, the error message should contain sufficient information such that
the user can take actions to modify the RUR.
* CRM009 As Adrian, I want to be able to automate the RUR with chargeback so
that only users meeting the following requirements are considered for resources:
a. whose account is up to date on payments
b. whose RUR is within a quota
c. whose cost of RUR plus current balance is below project/tenant threshold
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Wei the project owner of a Telco operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/project-owner.html
.. _Adrian the infrastructure architect: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/infrastructure-arch.html
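A time-based RUR like those in CRM003 can be checked against a fixed capacity
with a simple event sweep over reservation intervals. This is an illustrative
sketch only; ``fits`` is an invented helper, not an existing OpenStack service:

```python
def fits(capacity, reservations, start, end, amount):
    """Return True if a new reservation of `amount` units over [start, end)
    stays within `capacity`, given existing (start, end, amount) tuples."""
    events = []
    for s, e, a in reservations + [(start, end, amount)]:
        events.append((s, a))    # usage rises when a reservation begins
        events.append((e, -a))   # usage falls when it ends
    usage = 0
    # At equal timestamps, process releases (negative deltas) first so
    # back-to-back reservations are not double-counted.
    for _, delta in sorted(events, key=lambda ev: (ev[0], ev[1])):
        usage += delta
        if usage > capacity:
            return False
    return True


# CRM003(a)-style check: 60 vCPUs requested against a 100-vCPU pool that
# already has 50 vCPUs reserved for an overlapping window.
fits(100, [(1, 10, 50)], 5, 8, 60)   # -> False: the overlap needs 110 vCPUs
```

An automated RUR processor in the spirit of CRM002 could use such a check to
accept, reject, or defer a request without human intervention.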
Usage Scenarios Examples
++++++++++++++++++++++++
.. This section is mandatory.
.. In order to explain your user stories, if possible, provide an example in the
.. form of a scenario to show how the specified user type might interact with the
.. user story and what they might expect. An example of a usage scenario can be
.. found at http://agilemodeling.com/artifacts/usageScenario.htm of a currently
.. implemented or documented planned solution. Please replace "None." with the
.. appropriate data.
.. If you have multiple usage scenarios/examples (the more the merrier) you may
.. want to use a numbered list with a title for each one, like the following:
.. 1. Usage Scenario Title a. 1st Step b. 2nd Step 2. Usage Scenario Title a. 1st
.. Step b. 2nd Step 3. [...]
1. Reserve resources for upcoming events
a. `Wei the project owner of a Telco operator`_ is in charge of network
planning for big events, like mega-concerts and festivals, where local
traffic spikes are expected.
b. In order to ensure sufficient network capacity for the upcoming Fuji Rock
Festival on 22-24 July 2017, Wei reserves additional capacity by creating
a RUR which describes the aforementioned dates and the amount of
additional resources, e.g., 4 instances with 1 vCPU, 1GB of RAM, 10GB of
disk, and a guaranteed minimum bandwidth of 1Gbps between the instances
which are required for scaling the service.
c. After the RUR has been successfully processed, Wei is notified that the
appropriate resources are reserved for the event dates.
2. Reserve resources for maintenance works
a. Wei is responsible for updating his services and
`Rey the cloud operator`_ is responsible for maintaining the underlying
cloud environment, including its hardware. Now, the team plans a
maintenance window for several compute hosts for next Monday.
b. To avoid impact on the service, Wei plans to migrate all VMs running on
those hosts to other hosts that are not affected by the maintenance work
on Sunday, i.e., a day before the maintenance window.
c. In order to ensure that those other hosts are available from Sunday to the
end of the maintenance window, Wei reserves the required resources
through his frontend tools.
d. In the backend, the system creates respective RURs for this time window
to guarantee the availability of the resources and the system returns a
reservation ID to Wei.
e. On Sunday, Wei triggers the migration of the affected VMs referring to
the reservation ID. Rey then triggers the maintenance work on the cloud.
The work finishes earlier than expected, and after having migrated the VMs
back, Wei can release the reservation ahead of the planned reservation end
time.
3. Reserve resources for disaster recovery
a. Wei is in charge of ensuring core services are running in disaster cases.
In order to be able to react immediately to a disaster situation, the
service maintains a disaster configuration for its core services and
keeps respective resources reserved for such situations.
b. Just now, an earthquake has hit the country and an automated tsunami
warning was issued by the government. Wei has a short time window to
prepare for the tsunami hitting the coastlines and its effects, e.g. a
high volume of extraordinary communication such as emergency
communication, evacuation instructions, and safety confirmations.
c. Wei switches the service to a pre-configured disaster configuration.
Switching to the disaster configuration is supported by resources that
had been exclusively reserved for such situations.
4. Reserve resources for launching new services
a. Wei is in charge of introducing a new service that has complex
requirements on the infrastructure resources. In order to avoid the risk
that a requirement cannot be met during the allocation of the resources,
forcing the allocation to be rolled back or changed, Wei first creates a
reservation for the required resources, specifying in the request all
parameters and conditions the resources have to fulfil.
b. The reservation service tries to reserve the resources with the specified
criteria. After having successfully created the reservation, a reservation
ID is returned to Wei.
c. Wei then triggers the setup of the service referencing the reservation ID
knowing that all resource requirements can be met. The new service is
initialized without conflicts.
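The reservation flow running through these scenarios — create an RUR for a
time window, receive a reservation ID, reference it later, and release it
early when possible — can be sketched as a toy in-memory service. All class
and method names below are illustrative assumptions, not an existing
OpenStack API:

```python
import uuid
from datetime import datetime, timedelta

class ReservationService:
    """Toy model of the RUR-based reservation flow described above."""

    def __init__(self, capacity):
        self.capacity = capacity      # total schedulable units
        self.reservations = {}        # reservation ID -> (units, start, end)

    def reserve(self, units, start, end):
        """Create an RUR for a time window; return a reservation ID."""
        held = sum(u for u, s, e in self.reservations.values()
                   if s < end and e > start)    # overlapping reservations
        if held + units > self.capacity:
            raise RuntimeError("insufficient capacity in requested window")
        rid = str(uuid.uuid4())
        self.reservations[rid] = (units, start, end)
        return rid

    def release(self, rid):
        """Release a reservation early, e.g. when work finishes ahead of time."""
        self.reservations.pop(rid)

svc = ReservationService(capacity=10)
start = datetime(2024, 1, 7)                       # the Sunday in scenario 2
rid = svc.reserve(4, start, start + timedelta(days=2))
svc.release(rid)                                   # maintenance finished early
```

A real reservation service would also have to guard the reserved resources
against ordinary scheduling; the sketch only captures the ID-based
reserve/release contract visible in the scenarios.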
.. _Wei the project owner of a Telco operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/project-owner.html
.. _Rey the cloud operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html
Related User Stories
++++++++++++++++++++
.. This section is mandatory.
.. If there are related user stories that have some overlap in the problem domain or
.. that you perceive may partially share requirements or a solution, reference them
.. here.
This Use Case is related to the Infinite Elasticity use case. The latter focuses
on testing the capability of an OpenStack cloud to handle large-scale capacity
requests.
*Requirements*
++++++++++++++
.. This section is optional. It might be useful to specify
.. additional requirements that should be considered but may not be
.. apparent through the user story and usage examples. This information will help
.. the development be aware of any additional known constraints that need to be met
.. for adoption of the newly implemented features/functionality. Use this section
.. to define the functions that must be available or any specific technical
.. requirements that exist in order to successfully support your use case. If there
.. are requirements that are external to OpenStack, note them as such. Please
.. always add a comprehensible description to ensure that people understand your
.. need.
.. * 1st Requirement
.. * 2nd Requirement
.. * [...]
* The implementation of these capabilities will depend in part on the existence
of a more flexible and holistic quota scheme, so that the capacity management
system can adjust quotas programmatically.
* It will also require a rich monitoring, notification, and visualization
system, so that both user and CSP have accurate and timely data about the
behavior of the system.
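The first requirement can be illustrated with a toy capacity manager that
raises a project's quota when a reservation is granted and lowers it again on
release. This is purely a sketch under the assumption of a programmable quota
store; the class and method names are invented:

```python
class QuotaManager:
    """Illustrative quota store adjusted programmatically by reservations."""

    def __init__(self):
        self.quotas = {}                  # project -> cores quota

    def grant(self, project, cores):
        """Raise the project's quota so a new reservation can be consumed."""
        self.quotas[project] = self.quotas.get(project, 0) + cores

    def revoke(self, project, cores):
        """Lower the quota again when the reservation is released."""
        self.quotas[project] = max(0, self.quotas.get(project, 0) - cores)

qm = QuotaManager()
qm.grant("telco-core", 16)    # reservation created
qm.revoke("telco-core", 16)   # reservation released
```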
*External References*
+++++++++++++++++++++
.. This section is optional.
.. Please use this section to add references for standards or well-defined
.. mechanisms. You can also use this section to reference existing functionality
.. that fits your user story outside of OpenStack. If any of your requirements
.. specifically call for the implementation of a standard or protocol or other
.. well-defined mechanism, use this section to list them.
[1] ETSI NFV IFA has specified the concept and use cases of "resource reservation"
and **VNFD** in the following standard specifications:
<http://www.etsi.org/deliver/etsi_gs/NFV-IFA>
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
.. This is optional
.. Please fill out this section after a User Story has been submitted as a
.. cross project spec to highlight any user stories deemed out of scope of the
.. relevant cross project spec.
None.
Glossary
--------
.. This section is optional.
.. It is highly suggested that you define any terms,
.. abbreviations that are not commonly used in order to ensure
.. that your user story is understood properly.
.. Provide a list of acronyms, their expansions, and what they actually mean in
.. general language here. Define any terms that are specific to your problem
.. domain. If there are devices, appliances, or software stacks that you expect to
.. interact with OpenStack, list them here.
.. Remember: OpenStack is used for a large number of deployments, and the better
.. you communicate your user story, the more likely it is to be considered by the
.. project teams and the product working group.
.. Examples:
.. **reST** reStructuredText is a simple markup language
.. **TLA** Three-Letter Abbreviation is an abbreviation consisting of three letters
.. **xyz** Another example abbreviation
* **RUR** - Resource Usage Request
* **CSP** - Cloud service provider
* **VNFD** - Virtual Network Function (VNF) Descriptor describes resource
requirements for VNFs
Categorized Configuration
=========================
Cross Project Spec - https://review.openstack.org/#/c/295543 (WIP)
User Story Tracker - None
Problem description
-------------------
Problem Definition
++++++++++++++++++
Operators currently have to grapple with hundreds of configuration options
spread across tens of OpenStack services and multiple projects. In many
cases they also need to be aware of the relationships between the options.
This makes the initial deployment in particular a very daunting task.
Opportunity/Justification
+++++++++++++++++++++++++
One of the largest barriers to adoption of OpenStack is perceived complexity.
This surfaces in attempting to understand and properly set configurations
on initial deployment, and again after upgrading.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I should be able to understand how to use a configuration
option by reading the documentation. There should be no need to read the
code to understand what value to choose.
* As Rey doing an initial deployment, I want most options to have
a useful default value, so they do not need to be considered or specified
during the initial configuration of the system.
* As Rey doing an initial deployment, I should be able to quickly
identify what configurations I must consider, and how I should determine
their value for my deployment to be successful.
* As a Packager, I should be able to quickly identify what configuration
options the packaging must set, and what options the operator needs to
specify.
* As Rey scaling out a deployment, minimal configuration changes should be
required. For the small number of options that must be changed, it should
be clear which values require tuning, what symptoms are caused by incorrect
values, and how to determine correct values.
* As Rey, the documentation should be clear on the relationship
between configuration options. For example, if you select driver A by
setting configuration option X, it should be clear what values should be
set for any dependent configuration options within the same project.
* There may be some options that are very unlikely to be used, except by the
most advanced users. By default, these advanced options should be marked as
such and presented in a way that makes the documentation less daunting.
While remaining discoverable, such options may require Rey to review
additional documentation to fully understand their impact.
* As Rey doing an initial configuration, it should be easy to check
that you have a valid configuration file. Checking for any unrecognized
configuration options and any invalid values can help find typos.
* As Rey that has just completed an upgrade, it should be easy to
check if you are making use of any deprecated configuration options, and
determine what changes must be made before doing your next upgrade.
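The validity check described in the last two use cases — flagging
unrecognized options and invalid values — can be sketched with the standard
library. The option schema below is invented purely for illustration; a real
tool would derive it from each project's registered options:

```python
import configparser

# Hypothetical schema: known options per section, each with a validator.
KNOWN_OPTIONS = {
    "api": {"workers": lambda v: v.isdigit() and int(v) > 0},
    "database": {"connection": lambda v: "://" in v},
}

def check_config(text):
    """Return a list of problems: unknown sections/options, invalid values."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    problems = []
    for section in parser.sections():
        known = KNOWN_OPTIONS.get(section)
        if known is None:
            problems.append("unknown section [%s]" % section)
            continue
        for opt in parser.options(section):
            value = parser.get(section, opt)
            if opt not in known:
                problems.append("unknown option [%s] %s" % (section, opt))
            elif not known[opt](value):
                problems.append("invalid value [%s] %s = %s"
                                % (section, opt, value))
    return problems

sample = """
[api]
workers = 4
wrokers = 2

[database]
connection = mysql://db/nova
"""
print(check_config(sample))   # flags the 'wrokers' typo
```

A deprecation check would work the same way, with a second schema of
deprecated option names reported alongside the unknown ones.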
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Usage Scenario Examples
+++++++++++++++++++++++
1. Initial Configuration and Deployment
a. Deployer reviews centralized config documentation for services selected
for deployment
#. Deployer quickly finds the configurations that need setting prior to
deployment
#. Deployer makes adjustments to configs and deploys, leaving most configs
unset, for example, using their default values
#. Deployment is performed successfully
2. Scaling Re-Configuration
a. Deployer reviews centralized configuration documentation for services
selected for scaling
#. Deployer uses documentation to determine new configuration settings
required for optimal performance at their future scale
#. Deployer makes appropriate adjustments to configuration
#. Deployer scales deployment successfully
3. Scaling Troubleshooting
a. Deployer scales deployment and begins experiencing performance degradation
with a specific service
#. Deployer consults configuration documentation to determine if symptoms are
caused by mis-configuration
#. Deployer identifies configuration adjustment required
#. Deployer makes required configuration adjustments
#. Deployer successfully returns deployment to optimal performance
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
None.
External References
+++++++++++++++++++
* `Nova - Centralize Config Options Spec <http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/centralize-config-options.html>`_
* `Nova - Centralize Config Options Blueprint - Newton <https://blueprints.launchpad.net/nova/+spec/centralize-config-options-newton>`_
* `Oslo Config Generator <http://docs.openstack.org/developer/oslo.config/generator.html>`_
* `Nova - Sample Configuration <http://docs.openstack.org/developer/nova/sample_config.html>`_
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.
..
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode
Complex Instance Placement
==========================
Problem description
-------------------
Problem Definition
++++++++++++++++++
An IP Multimedia Subsystem (IMS) core [2] is a key element of Telco
infrastructure, handling VoIP device registration and call routing.
Specifically, it provides SIP-based call control for voice and video as well as
SIP based messaging apps.
An IMS core is mainly a compute application with modest demands on
storage and network - it provides the control plane, not the media plane
(packets typically travel point-to-point between the clients) so does not
require high packet throughput rates and is reasonably resilient to jitter and
latency.
As a core Telco service, the IMS core must be deployable as an HA service
capable of meeting strict Service Level Agreements (SLA) with users. Here
HA refers to the availability of the service for completing new call
attempts, not for continuity of existing calls. As a control plane rather
than media plane service the user experience of an IMS core failure is
typically that audio continues uninterrupted but any actions requiring
signalling (e.g. conferencing in a 3rd party) fail. However, it is not
unusual for clients to send periodic SIP "keep-alive" pings during a
call, and if the IMS core is not able to handle them the client may tear
down the call.
An IMS core must be highly scalable, and as an NFV function it will be
elastically scaled by an NFV orchestrator running on top of OpenStack.
The requirements that such an orchestrator places on OpenStack are not
addressed in this use case.
Opportunity/Justification
+++++++++++++++++++++++++
Currently OpenStack supports basic workload affinity/anti-affinity using a
concept called server groups. These allow for creation of groups of instances
whereby each instance in the group has either affinity or anti-affinity
(depending on the group policy) towards all other instances in the group. There
is however no concept of having two separate groups of instances where the
instances in the group have one policy towards each other, and a different
policy towards all instances in the other group.
Additionally there is no concept of expressing affinity rules that can control
how concentrated the members of a server group can be - that is, how tightly
packed members of a server group can be onto any given hosts. For some
applications it may be desirable to pack tightly, to minimise latency between
them; for others, it may be undesirable, as then the failure of any given host
can take out an unacceptably high percentage of the total application
resources. Such requirements can partially be met with so called "soft"
affinity and anti-affinity rules (if implemented) but may require more advanced
policy knobs to set how much packing or spread is too much.
Although this user story is written from a particular virtual IMS use case, it
is generally applicable to many other NFV applications and more broadly to any
applications which have some combination of:
* Performance requirements that are met by packing related workloads; or
* Resiliency requirements that are met by spreading related workloads
Requirements Specification
--------------------------
Use Cases
+++++++++
* As a communication service provider, I want to deploy a highly available
IMS core as a Virtual Network Function running on OpenStack so that I meet my
SLAs.
* As an enterprise operator, I want to deploy my traditional database server
shards such that they are not on the same physical nodes so that I avoid a
service outage due to failure of a single node.
Usage Scenarios Examples
++++++++++++++++++++++++
Project Clearwater [3] is an open-source implementation of an IMS core
designed to run in the cloud and be massively scalable. It provides
P/I/S-CSCF functions together with a BGCF and an HSS cache, and includes a
WebRTC gateway providing interworking between WebRTC & SIP clients.
Related User Stories
++++++++++++++++++++
* http://git.openstack.org/cgit/openstack/telcowg-usecases/tree/usecases/sec_segregation.rst
Requirements
++++++++++++
The problem statement above leads to the following requirements.
* Compute application
OpenStack already provides everything needed; in particular, there are no
requirements for an accelerated data plane, nor for core pinning nor NUMA.
* HA
Project Clearwater itself implements HA at the application level, consisting
of a series of load-balanced N+k pools with no single points of failure [4].
To meet typical SLAs, it is necessary that the failure of any given host
cannot take down more than k VMs in each N+k pool. More precisely, given
that those pools are dynamically scaled, it is a requirement that at no time
is there more than a certain proportion of any pool instantiated on the
same host. See Gaps below.
That by itself is insufficient for offering an SLA, though: to be deployable
in a single OpenStack cloud (even spread across availability zones or
regions), the underlying cloud platform must be at least as reliable as the
SLA demands. Those requirements will be addressed in a separate use case.
* Elastic scaling
An NFV orchestrator must be able to rapidly launch or terminate new
instances in response to applied load and service responsiveness. This is
basic OpenStack nova function.
* Placement zones
In the IMS architecture there is a separation between access and core
networks, with the P-CSCF component (Bono - see [4]) bridging the gap
between the two. Although Project Clearwater does not yet support this,
it would in future be desirable to support Bono being deployed in a
DMZ-like placement zone, separate from the rest of the service in the main
MZ.
Gaps
++++
The above requirements currently suffer from these gaps:
* Affinity for N+k pools
An N+k pool is a pool of identical, stateless servers, any of which can
handle requests for any user. N is the number required purely for
capacity; k is the additional number required for redundancy. k is
typically greater than 1 to allow for multiple failures. During normal
operation N+k servers should be running.
Affinity/anti-affinity can be expressed pair-wise between VMs, which is
sufficient for a 1:1 active/passive architecture, but an N+k pool needs
something more subtle. Specifying that all members of the pool should live
on distinct hosts is clearly wasteful. Instead, availability modelling shows
that the overall availability of an N+k pool is determined by the time to
detect and spin up new instances, the time between failures, and the
proportion of the overall pool that fails simultaneously. The OpenStack
scheduler needs to provide some way to control the last of these by limiting
the proportion of a group of related VMs that are scheduled on the same host.
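The missing control could look something like the following filter sketch,
which caps the fraction of a server group that may land on any single host.
This is a toy model under assumed names, not Nova's actual filter interface:

```python
def host_acceptable(members_on_host, group_size, max_fraction):
    """Would placing one more group member on this host stay within the cap?"""
    return (members_on_host + 1) / group_size <= max_fraction

def pick_host(hosts, placements, group_size, max_fraction=0.25):
    """Return the first host satisfying the max-proportion constraint.

    `placements` maps host name -> members of this group already placed there.
    """
    for host in hosts:
        if host_acceptable(placements.get(host, 0), group_size, max_fraction):
            return host
    raise RuntimeError("no host satisfies the max-proportion constraint")

# An N+k pool of 8 VMs with a 25% cap allows at most 2 members per host,
# so the failure of one host never takes out more than a quarter of the pool.
placements = {}
for _ in range(4):
    host = pick_host(["host-a", "host-b"], placements, group_size=8)
    placements[host] = placements.get(host, 0) + 1
print(placements)   # {'host-a': 2, 'host-b': 2}
```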
External References
+++++++++++++++++++
* [1] https://wiki.openstack.org/wiki/TelcoWorkingGroup/UseCases#Virtual_IMS_Core
* [2] https://en.wikipedia.org/wiki/IP_Multimedia_Subsystem
* [3] http://www.projectclearwater.org
* [4] http://www.projectclearwater.org/technical/clearwater-architecture/
* [5] https://review.openstack.org/#/c/247654/
* [6] https://blueprints.launchpad.net/nova/+spec/generic-resource-pools
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
* NFV - Network Functions Virtualisation, see http://www.etsi.org/technologies-clusters/technologies/nfv
* IMS - IP Multimedia Subsystem
* SIP - Session Initiation Protocol
* P/I/S-CSCF - Proxy/Interrogating/Serving Call Session Control Function
* BGCF - Breakout Gateway Control Function
* HSS - Home Subscriber Server
* WebRTC - Web Real-Time Communication
Database cleanup of deleted object
==================================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
Problem Definition
++++++++++++++++++
Each operator of an OpenStack cloud needs the ability to clean up the OpenStack
database of objects which have been deleted. Currently a new record is created in
the OpenStack database when an object (project, user, VM, network, volume, swift
object, etc.) is created. When an object is deleted its record in the database
remains but is marked as deleted. As an OpenStack cloud stays in operation over
time, the number of records in the database fills with deleted object records and
soon the database becomes too large. This can result in the database consuming
significant resources on the controller node(s) and impacting responsiveness of
the database, even bringing down controller(s).
While a record for deletion of an object is needed for cloud governance it is
not a requirement for the database to retain this record. Thus, some tool is
needed for OpenStack operators to cleanup the database of records from deleted
objects.
* As part of a proof of concept (PoC) of an OpenStack cloud, an operator needs
a tool to purge the database of deleted records. This is needed to be able to
rerun the same test tool, which creates objects on every run, and to measure
the actual growth of the database under operational conditions rather than
OpenStack implementation artifacts.
* As an operator of an OpenStack cloud I need an ability to remove records of
deleted objects from the database after storing those records for audit purposes.
* As a cloud deployment engineer, I need to be able to re-run the same tests
(with same objects) in a repeatable manner so that I can have a high certainty
in the outcome of my proof of concept and cloud functionality.
* As a cloud deployment engineer, I need only the appropriate records in my
database so that I can complete my upgrade in the allocated down time.
Opportunity/Justification
+++++++++++++++++++++++++
DB hygiene is required for handling OpenStack performance, operational and
upgrade issues. It ensures that historical records of deleted items do not
impact operational performance and do not have to be carried through
upgrades.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I need an ability to remove records of
deleted objects from the database after storing those records for audit purposes.
* As `Adrian the infrastructure architect`_, I need to be able to re-run the same tests
(with same objects) in a repeatable manner so that I can have a high certainty
in the outcome of my proof of concept and cloud functionality.
* As Rey, I need only the appropriate records in my
database so that I can complete my upgrade in the allocated down time.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Adrian the infrastructure architect: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/infrastructure-arch.html
.. _Rey the cloud operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html
Usage Scenarios Examples
++++++++++++++++++++++++
WIP
Related User Stories
++++++++++++++++++++
Nova specs:
* https://review.openstack.org/#/c/184645/
* https://review.openstack.org/#/c/184637/
* https://review.openstack.org/#/c/137669/
Cinder blueprint:
* https://blueprints.launchpad.net/cinder/+spec/db-cleanup
Requirements
++++++++++++
* Operator should be able to specify which policy to apply for deleted objects
* Operator should be able to specify which policy to apply for different tenants
and sub-tenants.
* At least two policies must be supported: Policy 1 - Archive the records in
other persistent storage for a specific interval duration; Policy 2 - Remove
the records from database permanently.
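The two policies can be sketched against a soft-delete schema of the kind the
problem definition describes. The table layout and function below are
illustrative only, using SQLite in place of a production database:

```python
import sqlite3

def purge_deleted(db, table, archive=True):
    """Policy 1: copy soft-deleted rows into a shadow table, then
    Policy 2: remove them from the live table. Returns rows purged."""
    cur = db.cursor()
    if archive:
        # Create an empty shadow table with the same columns, if missing.
        cur.execute("CREATE TABLE IF NOT EXISTS shadow_%s AS "
                    "SELECT * FROM %s WHERE 0" % (table, table))
        cur.execute("INSERT INTO shadow_%s SELECT * FROM %s WHERE deleted = 1"
                    % (table, table))
    cur.execute("DELETE FROM %s WHERE deleted = 1" % table)
    db.commit()
    return cur.rowcount

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (id INTEGER, deleted INTEGER)")
db.executemany("INSERT INTO instances VALUES (?, ?)",
               [(1, 0), (2, 1), (3, 1)])
purged = purge_deleted(db, "instances")
print(purged)   # 2
```

Per-tenant policies would hang off the same mechanism: select which rows to
archive or drop by tenant ID before applying either policy.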
External References
+++++++++++++++++++
None.
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.
Virtual IMS Core
================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
*Problem Definition*
++++++++++++++++++++
This use case is about deploying a virtual IMS core as an NFV function in
OpenStack. It replaces the version previously uploaded to the TelcoWG
repository [1].
An IMS core [2] is a key element of Telco infrastructure, handling VoIP device
registration and call routing. Specifically, it provides SIP-based call
control for voice and video as well as SIP based messaging apps.
An IMS core is mainly a compute application with modest demands on
storage and network - it provides the control plane, not the media plane
(packets typically travel point-to-point between the clients) so does not
require high packet throughput rates and is reasonably resilient to jitter and
latency.
As a core Telco service, the IMS core must be deployable as an HA service
capable of meeting strict Service Level Agreements (SLA) with users. Here
HA refers to the availability of the service for completing new call
attempts, not for continuity of existing calls. As a control plane rather
than media plane service the user experience of an IMS core failure is
typically that audio continues uninterrupted but any actions requiring
signalling (e.g. conferencing in a 3rd party) fail. However, it is not
unusual for clients to send periodic SIP "keep-alive" pings during a
call, and if the IMS core is not able to handle them the client may tear
down the call.
An IMS core must be highly scalable, and as an NFV function it will be
elastically scaled by an NFV orchestrator running on top of OpenStack.
The requirements that such an orchestrator places on OpenStack are not
addressed in this use case.
Opportunity/Justification
+++++++++++++++++++++++++
Although this user story is specifically about deploying the Project
Clearwater virtual IMS core, it is more generally representative of the
issues involved in deploying in OpenStack any scalable Telco-grade control
plane Virtual Network Function (VNF) deployed as a series of load-balanced
stateless N+k pools.
Requirements Specification
--------------------------
Use Cases
+++++++++
* As a communication service provider, I want to deploy a highly available,
high scale, high performance virtual IMS core on OpenStack to provide my core
Voice-over-IP service.
Usage Scenario Examples
+++++++++++++++++++++++
Project Clearwater [3] is an open-source implementation of an IMS core
designed to run in the cloud and be massively scalable. It provides
P/I/S-CSCF functions together with a BGCF and an HSS cache, and includes a
WebRTC gateway providing interworking between WebRTC & SIP clients.
Related User Stories
++++++++++++++++++++
None.
*Requirements*
++++++++++++++
The problem statement above leads to the following requirements.
* Compute application
OpenStack already provides everything needed; in particular, there are no
requirements for an accelerated data plane, nor for core pinning nor NUMA.
* HA
Project Clearwater itself implements HA at the application level, consisting
of a series of load-balanced N+k pools with no single points of failure [4].
To meet typical SLAs, it is necessary that the failure of any given host
cannot take down more than k VMs in each N+k pool. More precisely, given
that those pools are dynamically scaled, it is a requirement that at no time
is there more than a certain proportion of any pool instantiated on the
same host.
That by itself is insufficient for offering an SLA, though: to be deployable
in a single OpenStack cloud (even spread across availability zones or
regions), the underlying cloud platform must be at least as reliable as the
SLA demands. Those requirements will be addressed in a separate use case.
* Elastic scaling
An NFV orchestrator must be able to rapidly launch or terminate new
instances in response to applied load and service responsiveness. This is
basic OpenStack nova function.
* Placement zones
In the IMS architecture there is a separation between access and core
networks, with the P-CSCF component (Bono - see [4]) bridging the gap
between the two. Although Project Clearwater does not yet support this,
it would in future be desirable to support Bono being deployed in a
DMZ-like placement zone, separate from the rest of the service in the main
MZ.
*External References*
+++++++++++++++++++++
* [1] https://review.openstack.org/#/c/179142/
* [2] https://en.wikipedia.org/wiki/IP_Multimedia_Subsystem
* [3] http://www.projectclearwater.org
* [4] http://www.projectclearwater.org/technical/clearwater-architecture/
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
* NFV - Network Functions Virtualisation, see http://www.etsi.org/technologies-clusters/technologies/nfv
* IMS - IP Multimedia Subsystem
* SIP - Session Initiation Protocol
* P/I/S-CSCF - Proxy/Interrogating/Serving Call Session Control Function
* BGCF - Breakout Gateway Control Function
* HSS - Home Subscriber Server
* WebRTC - Web Real-Time Communication
================================
Feature Classification Framework
================================
Cross Project Spec - Ready for Submission
User Story Tracker - None
Problem description
-------------------
Problem Definition
++++++++++++++++++
Operators and Users lack centralized visibility into the status of the
completeness, stability and documentation of features. They also lack knowledge
of the effects of configuration decisions on these features and their
classification. This prevents Operators from making optimal decisions when
configuring their clouds and leaves their users unsure of the maturity and
stability of features they might choose to use in their applications and
workloads.
Opportunity/Justification
+++++++++++++++++++++++++
The configuration and use of OpenStack services can be simplified by:
* providing a centralized framework for classifying features, and
* documenting their completeness, maturity and documentation against specific
configuration sets
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Rey the Cloud Operator`_, I want to understand what features are complete,
well-documented and stable so that I can ensure a good experience for my
user community in recommending features for use.
* As Rey, I want to understand the effects my configuration choices
might have on specific feature classifications so that I can make optimal
decisions when defining my cloud's configuration.
* As `Quinn the Application Developer`_, I want to understand which features are stable and mature so that
I choose appropriate features for use in my applications.
* As Quinn, I want to be able to reference a common set of feature
classification definitions that span all OpenStack projects so that I don't
have to understand differing definitions across multiple projects.
* As an OpenStack Developer, I want to understand which features lack
stability, completeness and documentation so that I can direct my attention
to improving those deficient features.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
.. _Quinn the Application Developer: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/app-developer.html#app-developer
Usage Scenario Examples
+++++++++++++++++++++++
**User Feature Selection**
* User begins developing an application which utilizes OpenStack services
* User reviews Feature Classification Definitions to familiarize themselves
with classification terms
* User reviews Feature Classification Framework for available services to
determine which features are complete and mature enough to be utilized by
their application
* User selects those features deemed appropriate for use
**Operator Configuration Selection**
* Operator collects requirements for an IaaS platform from user community
* Operator references Feature Classification Definitions to determine minimum
classification required to support a given feature in their cloud
* Operator references Feature Classification Framework to determine appropriate
configuration choices for the services they've selected to deploy
**OpenStack Developer Improvement Selection**
* OpenStack Developer reviews Feature Classification Definitions to determine
deficient classifications they'd like to target (docs, stability,
completeness).
* OpenStack Developer reviews Feature Classification Framework for their target
project and identifies deficient features to work on.
* After OpenStack Developer's work is complete they have materially changed the
classification for their chosen feature and that update is available to other
developers and users immediately after the change is merged.
**Provider Validates Available Feature Set**
* OpenStack Provider is interested in validating their service has a specific
feature set available.
* OpenStack Provider develops a third party CI to submit their configuration to
the Feature Classification Framework.
* OpenStack Provider references the Feature Classification Framework as
validation to their users that the desired feature set is available and
stable, complete and well documented.
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
* Third Party CI Reporting submitted by Configuration
* Initial Mapping of Tempest UUIDs to Features
* Documented "Feature" Definition Across All Projects
* Documented Classification Criteria Across All Projects
Example Matrix
++++++++++++++
Below is an example matrix to illustrate what a Feature Classification Matrix
might look like.
============================= ===== ===== ===== ===
Feature Classification Matrix Configurations
----------------------------- ---------------------
Features                      Conf1 Conf2 Conf3 ...
============================= ===== ===== ===== ===
Feature 1 (S,M,D)             Y     N     ?     .
Feature 2 (S,M)               Y     Y     Y     .
============================= ===== ===== ===== ===

S=Stable, M=Mature, D=Documented
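A machine-readable form of such a matrix might look like the following
sketch; the feature names, configuration names, and flags are invented for
illustration:

```python
# Each feature carries classification flags and per-configuration support:
# True = works, False = does not work, None = untested.
MATRIX = {
    "live-migration": {
        "classes": {"stable", "mature", "documented"},
        "support": {"conf1": True, "conf2": False, "conf3": None},
    },
    "resize": {
        "classes": {"stable", "mature"},
        "support": {"conf1": True, "conf2": True, "conf3": True},
    },
}

def usable_features(config, required_classes=frozenset({"stable"})):
    """Features marked as working under `config` with the required classes."""
    return sorted(
        name for name, entry in MATRIX.items()
        if entry["support"].get(config) and required_classes <= entry["classes"]
    )

print(usable_features("conf2"))   # ['resize']
```

Third-party CI results would feed the `support` entries, turning `None`
cells into `True` or `False` as configurations are validated.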
External References
+++++++++++++++++++
* `Nova - Feature Classification <http://docs.openstack.org/developer/nova/feature_classification.html>`_
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
None.
Fleet Management
================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
Problem Definition
++++++++++++++++++
While there are many open-source life-cycle management and deployment tools
for initial deployment of OpenStack (Day 1), there is a need for tools which
integrate with these projects to help cloud operators manage their environment
in the days after it is set up (Day 2 and beyond).
Today, this gap is filled through proprietary management tools provided by
OpenStack distributors or third parties, or through bespoke code written by
the end-user operators themselves.
The use of these proprietary or bespoke tools and extensions creates either a
dependency and lock-in (if from a vendor) or a continuing maintenance task for
the end-user.
Opportunity/Justification
+++++++++++++++++++++++++
In user survey and market data, the complexity and uncertainty of day-two
operations are highlighted as a barrier to OpenStack adoption. Providing an
in-community option for Fleet Management would significantly lower the real
and perceived burden on operations teams responsible for keeping an OpenStack
cloud running.
While there are available Operations management systems, most take a
traditional approach. An OpenStack fleet management system should be built on
the IaaS and cloud principles upon which OpenStack is based.
Many companies have already spoken publicly about their ongoing efforts in
writing and maintaining tools to manage their private OpenStack deployments.
Some have even started open-sourcing parts of their toolset to gather support
from community developers.
Rackspace has developed its own tools for use in the Rackspace Public Cloud,
and is actively considering open-sourcing parts of it, based on requests from
its Private Cloud customers. The new Watcher project within OpenStack is also
an attempt to address some aspects of this problem.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* FLT001 - As `Rey the Cloud Operator`_, I need to track and utilize the status
  of physical resources in my OpenStack cloud - controller, network, compute,
  storage, and utility/logging nodes - and factors like power, temperature,
  CPU, memory, disk space, disk I/O, and bandwidth (in/out) on each physical
  resource, so that I may monitor those metrics and maintain the health of my
  cloud
* FLT002 - As Rey, I need to monitor the health of physical
resources in my OpenStack cloud so that I can take automatic policy-driven
remedial action and deliver on my SLAs
* FLT003 - As Rey, I need to keep track of unused resource
capacity so that I can free up resources and use them more efficiently
* FLT004 - As Rey, I need to keep track of system alerts and
messages, so that I can take remedial action and maintain the health of my
cloud
* FLT005 - As Rey, I need to check my deployed physical resources
against a set of policies and rules, so that I can meet security,
availability and other requirements
* FLT006 - As Rey, I only want to have a human make an operational
  decision when it adds value or when automation is not capable of making it
* FLT007 - As Rey, I need to utilize an automatic remediation
system which identifies inconsistencies, determines if physical resources are
in the appropriate state and takes remedial action if they are not
* FLT008 - As Rey, I need to define and adjust automated remedial
actions on physical resources so that they are least disruptive to my end-users
* FLT009 - As Rey, I need to execute automated remedial actions as
a result of my investigations into alerts
* FLT010 - As Rey, I need to apply patches and updates to my
physical resources, and maintain audit logs, so that I can keep my
environment updated and secure, and roll back to prior validated
configurations when something goes wrong with a patch or update
* FLT011 - As Rey, I need to be able to retrieve the current state
of my hardware resources and verify if there are any inconsistencies with
respect to the right operating system, OpenStack services or other services
installed.
* FLT012 - As Rey, I should be able to leverage the deployment and
life-cycle management tooling of my choice as the executor of my automated
actions.
* FLT013 - As Rey, I should be able to choose which components of
the fleet management tooling (inventory, auditing, remediation, human
interaction) I utilize
* FLT014 - As Rey, I should be able to set a repeatability
threshold for each alert so the resolving system does not take action on
auditor alerts which happen repeatedly.
* FLT015 - As Rey, I need to be able to replace failed HW (server,
  switch, storage node); to ensure that the replacement hardware is treated
  the same as the hardware it replaces, I should be able to clone the metadata
  and appropriate settings.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Use Case Prioritization
+++++++++++++++++++++++
**Phase 1** - Modularity and Inventory Management
#. FLT013
#. FLT002
#. FLT001
#. FLT004
#. FLT003
**Phase 2** - Auditing and Automated Resolution
#. FLT012
#. FLT006
#. FLT011
#. FLT005
#. FLT007
#. FLT008
#. FLT009
#. FLT010
#. FLT015
#. FLT014
Usage Scenario Examples
+++++++++++++++++++++++
**General Lifecycle**
In this process cloud operators are faced with three tasks: detection, trigger
and resolution. The lifecycle moves through the following phases.
#. Manual Detection, Manual Trigger, Manual Resolution
#. Automatic Detection, Manual Trigger, Manual Resolution
#. Automatic Detection, Manual Trigger, Automatic Resolution
#. Automatic Detection, Automatic Trigger, Automatic Resolution
Here is a real world description of this process:
* User reports an outage
* Cloud operator performs manual discovery and manual resolution
* During RCA cloud operator identifies method to automatically detect outage
* Cloud operator implements automatic detection in auditor system
* After repeated detection cloud operator describes method for automated
remediation via code or run-book
* Cloud operator implements automatic remediation in resolver system
* Cloud operator implements automatic trigger from auditor to resolver system
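The escalation behaviour implied by this lifecycle, combined with FLT014's
repeatability threshold, can be sketched as follows. This is a toy model with
hypothetical class and method names, not any existing resolver system's API.

```python
# Toy resolver: auto-remediate known alerts, but escalate to the cloud
# operator once the same alert keeps recurring (FLT014), instead of
# looping through the same automated remediation indefinitely.

from collections import Counter

class Resolver:
    def __init__(self, repeat_threshold=3):
        self.repeat_threshold = repeat_threshold
        self.seen = Counter()      # alert -> occurrence count
        self.remediated = []
        self.escalated = []

    def handle(self, alert):
        """Remediate automatically, or escalate a repeatedly firing alert."""
        self.seen[alert] += 1
        if self.seen[alert] > self.repeat_threshold:
            self.escalated.append(alert)   # hand off to a human operator
            return "escalate"
        self.remediated.append(alert)      # run the known remediation
        return "remediate"

resolver = Resolver(repeat_threshold=2)
results = [resolver.handle("compute-7:disk-full") for _ in range(4)]
print(results)  # ['remediate', 'remediate', 'escalate', 'escalate']
```

In a real deployment the "remediate" branch would invoke the operator's
chosen life-cycle tooling (FLT012) rather than appending to a list.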
**Automated Remediation**
* Monitoring triggers a warning on physical resources
* Auditing system determines physical resource in question is not in
appropriate state for its resource type
* Where applicable, resolving system removes physical resources from production
pool
* Resolving system returns physical resources to appropriate state
* Resolving system returns physical resources to production pool
**Operator Notification and Manual Remediation**
* Monitoring triggers a warning on physical resources
* Auditing system determines physical resource in question is in appropriate
state
* Resolving system removes physical resources from production pool
* Resolving system informs cloud operator of need for investigation, removes
resources from resolving system management
* Cloud operator investigates and determines cause of issue
* Cloud operator returns resource to resolving system management
* Resolving system returns physical resources to production pool
**Operator Notification and Manual HW Replacement**
* Monitoring triggers a warning on physical resources
* Auditing system determines physical resource in question is in appropriate
state
* Resolving system moves failed physical resources into maintenance mode
* Resolving system informs cloud operator of the need for investigation and
  replacement, away from automated system management
* Cloud operator investigates and determines cause of issue
* Cloud operator replaces the failed HW, maintains the HW UUID in the
  system management system, and configures the replacement HW identically
  to the HW it replaces
* Cloud operator returns resource to resolving system management
* Resolving system returns the physical resource from maintenance mode to an
  operational state
* Resolving system returns physical resources to production pool
**Cloud Operator Definition or Adjustment of Automated Remediation**
* Cloud operator identifies pattern causing requirement for manual remediation
* Cloud operator easily programs known resolution into resolving system
* Cloud operator no longer has to handle manual remediation for that identified
pattern
Related User Stories
++++++++++++++++++++
TBD
Requirements
++++++++++++
TBD
External References
+++++++++++++++++++
* `<https://wiki.openstack.org/wiki/Watcher>`_
* `<http://github.com/paypal/cloudminion>`_
* `<https://wiki.openstack.org/wiki/Osops>`_
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
* **Virtual Resource** - Resources managed by OpenStack projects (vCPU,
IP addresses, container bays, networks, ports, vms, images, etc)
* **Physical Resource** - Resources managed by deployment and operations tools
  (hosts, firewalls, load balancers, switches, etc)
* **Automated Remediation** - Actions taken on physical and virtual resources
including upgrading openstack services, live migrating virtual machines and
patching hosts. These actions are triggered automatically as opposed to
manual remediation where an operator is involved in each occurrence.
* **Cloud Operator** - Cloud-wide operator responsible for maintaining
availability of infrastructure services. This aligns with the "`Cloud Ops <https://wiki.openstack.org/wiki/OpenStack_Personas_2015_Cloud_Ops>`_"
persona as defined by the OpenStack UX team.
* **RCA** - Root Cause Analysis
======================================
High Availability for Virtual Machines
======================================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
Problem Definition
++++++++++++++++++
Enterprise customers are moving their application workloads into OpenStack
clouds, for example to consolidate virtual estates, and benefit from increased
manageability and other economies of scale which OpenStack can bring.
However, it's typically impractical to re-architect all applications into a
purely cloud-native model at once. Therefore some applications, or parts
thereof, are deployed on non-disposable VMs in a pet model. This requires high
availability of such VMs. Even though VM volumes can be stored on a shared
storage system, such as NFS or Ceph, to improve the availability, VM state on
each hypervisor is not easily replicated to other hypervisors. Therefore, the
system must be able to recover the VM from failure events, preferably in an
automated and cost-effective manner.
Even for applications architected in a cloud-native "cattle" model which can
tolerate failures of individual VMs, at scale it is too impractical and costly
to have to manually recover every failure. Ideally this auto-recovery would be
implemented in the application or PaaS layer, to maximise integration with the
rest of the application. However, even if a new feature implemented at the
OpenStack layer primarily targeted auto-recovery of pets, it could also serve
as a cheap alternative for auto-recovery of cattle.
Opportunity/Justification
+++++++++++++++++++++++++
Many enterprise customers require highly available VMs in order to satisfy their
workload SLAs. For example, this is a critical requirement for NTT customers.
Requirements Specification
--------------------------
Use Cases
+++++++++
As a cloud operator, I would like to provide my users with highly available
VMs to meet high SLA requirements. There are several types of failure
events that can occur in OpenStack clouds. We need to make sure such events
can be detected and recovered by the system. Possible failure events include:
* VM crashes.
For example, with the KVM hypervisor, the ``qemu-kvm`` process could crash.
* VM hangs.
For example, an issue with a VM's block storage (either its
ephemeral disk or an associated Cinder volume) could cause the VM to
hang, and the QEMU layer to emit a ``BLOCK_IO_ERROR`` which would
bubble up through ``libvirt`` and could be detected and handled by
an automated recovery process.
* ``nova-compute`` service crashes or becomes unresponsive.
* Compute host crashes or hangs.
* Hypervisor fails, e.g. libvirtd process dies or becomes unresponsive.
* Network component fails.
There are many ways a network component could fail, e.g. NIC
configuration error, NIC driver failure, NIC hardware failure, cable
failure, switch failure and so on. Any production environment aiming
for high availability already requires considerable redundancy at
the network level, especially voting nodes within a cluster which
needs its quorum protecting against network partitions. Whilst this
redundancy will keep most network hardware failures invisible to
OpenStack, the remainder still need defending against. However, in
order to fulfill this user story we don't need to be able to
pinpoint the cause of a network failure; it's enough to recognise
which network connection failed, and then react accordingly.
* Availability Zone failure
* Data Center / Region failure
Failure of a whole region or data center is obviously much more severe,
requiring recovery of not just compute nodes but also OpenStack services in
the control plane. It needs to be covered by a Disaster Recovery plan,
which will vary greatly for each cloud depending on its architecture,
supported workloads, required SLAs, and organizational structure. As such,
a general solution to Disaster Recovery is a problem of considerable
complexity, therefore it makes sense to keep it out of scope for this user
story, which should instead be viewed as a necessary and manageable step on
the long road to that solution.
As a cloud operator, I need to reserve a certain number of hypervisors so that
they can be used for failover hosts in case of a host failure event. This is
required for planning in order to meet an expected SLA. The number of failover
hosts depends on the expectation of VM availability (SLA), the size of the host
pool (failover segment), the possibility of host failures and the MTTR of host
failure, all of which are managed by the cloud operator.
The size of the host pool (failover segment) is a pre-defined boundary within
which hosts can find a healthy host to fail over to. These boundaries can be
defined as "hosts on the same shared storage", "host aggregates", etc.
As a cloud operator, I need to perform host maintenance. I need to temporarily
and safely disable the HA mechanism for the affected hosts in order to perform
the maintenance task. Disabling HA mechanism for a host means that all alerts
from that host shall be neglected and no recovery action shall be taken.
For recovery, the actions are not limited to fencing; nova server stop and
start, or a process restart on the host, may also be part of the recovery
action.
As a cloud operator, I need to respond to customer issues and perform
troubleshooting. I need to know the history of what, when, where and how the
HA mechanism is performed. This information is used to better understand the
state of the system.
N.B. This user story concerns high availability, not 100% availability.
Therefore some service interruption is usually expected when failures occur.
The goal of the user story is to reduce that interruption via automated recovery.
Usage Scenario Examples
+++++++++++++++++++++++
* Recovery from VM failure
Monitor the VM externally (i.e. as a black box, without requiring
any knowledge of or invasive changes to the internals of the
VM). Detect VM failure and notify system to recover the VM on the same
hypervisor, or if that fails, on another hypervisor.
Note that failures of the VM which are undetectable from outside it
are out of scope of this user story, since they would require invasive
monitoring inside the VM, and there is no general solution to this which
would work across all guest operating systems and workloads.
* Recovery from ``nova-compute`` failure
Monitor the provisioning process (nova-compute service). Detect
process failure and notify system to restart the service.
If it fails to restart the provisioning process, it should prevent scheduling
any new VM instance onto the hypervisor/host that the process is running on.
The operator can evacuate all VMs on this host to another healthy host and
shut down this host if it fails to restart the process. Prior to evacuation,
the host must be fenced to prevent two instances writing to the same shared
storage, which would lead to data corruption.
* Recovery from hypervisor host failure
Monitor the hypervisor host. When a failure is detected, resurrect all VMs
from the failed host onto new hosts, enabling an application workload to
resume if the VM state is stored in a volume, even though the in-memory state
is lost. If shared storage is used for instance volumes, these volumes survive
outside the failed hypervisor host; however, this is not required. If shared
storage is not available, the instance VMs will be automatically rebuilt from
their original image, as per standard nova evacuate behaviour.
The design of the infrastructure, and its boundary of each subsystem such as
shared storage, may restrict the deployment of VM instances and the
candidates of failover hosts. To use the nova evacuate API to restart VM
instances, the original hypervisor host and the target hypervisor host need to
connect to the same shared storage. Therefore, a cloud operator defines
segments of hypervisor hosts and assigns the failover hosts to each segment.
These segments can be defined based on the shared storage boundaries or any
other limitations critical for selecting the failover host.
* Recovery from network failure
Typically the cloud infrastructure uses multiple networks, e.g.
- an administrative network used for internal traffic such as the message bus,
database connections, and Pacemaker cluster communication
- various neutron networks
- storage networks
- remote control of physical hardware via IPMI / iLO / DRAC or similar
Failures on these networks should not necessarily be handled in the same
way. For example:
- If a compute host loses connection to the storage network, its VMs cannot
continue to function correctly, so automatic fencing and resurrection is
probably the only reasonable response.
- If it loses connection to the admin network, its VMs should still continue
to function correctly, so the cloud operator might prefer to receive
alerts via email/SMS instead of any fencing and automated resurrection
which would be needlessly disruptive.
- If the compute host loses connection to the project (tenant) network, then
it may be possible to fix this with minimal downtime by automatically
migrating the VMs to another compute host.
The desired response will vary from cloud to cloud, therefore should be
configurable.
* Capacity Reservation
In order to ensure the uptime of VM instances, the operator needs to ensure a
certain amount of host capacity is reserved to cater for a failure event. If
there is not enough host capacity and a host failure event happens, the VMs
on the failed host cannot be evacuated to another host. It is assumed that
there is an equivalent host within the fault boundaries. If not, more
complicated logic (e.g. SR-IOV, DMTC, QoS requirements) will be required in
order to reserve the capacity.
The host capacity of the overall system is typically fragmented into segments
due to the scalability limits of the underlying components, and each segment
has a limited capacity. To increase resource efficiency, high utilization of
host capacity is preferred. However, as resources are consumed on demand, each
segment tends to reach nearly full capacity if the system doesn't provide a
way to reserve a portion of host capacity. Therefore, a function to reserve
host capacity for failover events is important in order to achieve high
availability of VMs.
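The reservation sizing this implies can be illustrated with a
back-of-the-envelope calculation. This is not any project's actual algorithm;
the function name, inputs, and safety factor are assumptions for the sketch.

```python
# Illustrative sizing of reserved failover hosts per segment, from the
# operator-managed inputs described above: segment size, the expected
# host failure rate, and the MTTR of a host failure.

import math

def reserved_hosts(segment_size, annual_failure_rate, mttr_hours,
                   safety_factor=2.0):
    """Hosts to hold in reserve: the expected number of failures in
    progress at any instant, padded by a safety factor, rounded up,
    and never less than one."""
    hours_per_year = 24 * 365
    concurrent = (segment_size * annual_failure_rate * mttr_hours
                  / hours_per_year)
    return max(1, math.ceil(concurrent * safety_factor))

# A 32-host segment, 5% annual failure rate per host, 48h repair time:
print(reserved_hosts(32, 0.05, 48))  # 1 reserved host suffices
```

A production system would also account for correlated failures and the
flavor mix of the VMs to be evacuated.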
* Host Maintenance
A host has to be temporarily and safely removed from the overall system for
maintenance such as hardware upgrades and firmware updates. Live migration of
its VMs should be triggered after putting the node into maintenance mode,
before the maintenance work begins. During maintenance, the monitoring
function on the host should be disabled and monitoring alerts for the host
should be ignored. There should be no triggering of any recovery action for
VM instances on the host while it is running. The host should be excluded
from the reserved hosts as well.
* Event History
A history of past events such as process failures, VM failures and host
failures is useful information to determine the required maintenance work on
a host. An easy mechanism to track past events can save operator time during
system diagnosis. These APIs can also be used to generate the health or SLA
report of the VM availability status.
Related User Stories
++++++++++++++++++++
* `Quotas, Usage Plans, and Capacity Management <http://specs.openstack.org/openstack/openstack-user-stories/user-stories/draft/capacity_management.html>`_
The concept of capacity reservation is shared with this story. The difference
is that that story provides reservation for users, whereas this VM-HA story
provides reservation for specific contexts of resource inquiry, such as an
instance evacuation rather than an instance creation.
Requirements
++++++++++++
* Flexible configuration of which VMs require HA
Ideally it should be possible to configure which VMs require HA at
several different levels of granularity, e.g. per VM, per flavor,
per project, per availability zone, per host aggregate, per region,
per cell. A policy configuring a requirement or non-requirement for
HA at a finer level of granularity should be able to override
configuration set at a coarser level. For example, an availability
zone could be configured to require HA for all VMs inside it, but
VMs booted within the availability zone with a flavor configured as
not requiring HA would override the configuration at the
availability zone level.
However, it does not make sense to support configuration per compute
host, since then VMs would inherit the HA feature
non-deterministically, depending on whether ``nova-scheduler``
happened to boot them on an HA compute host or a non-HA compute
host.
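The layered override described above can be sketched as a simple resolution
function. The level names and ordering here are assumptions for illustration,
not an existing nova or masakari configuration scheme.

```python
# Resolve whether a VM requires HA from layered policies: finer-grained
# levels (flavor, VM) override coarser ones (region, availability zone).
# Per-compute-host configuration is deliberately absent, as the text
# explains it would make HA inheritance non-deterministic.

# Coarse -> fine; a later level that expresses an opinion wins.
LEVELS = ["region", "availability_zone", "host_aggregate",
          "project", "flavor", "vm"]

def ha_required(policies):
    """policies maps level name -> True / False / None (no opinion)."""
    decision = False  # default when no level expresses an opinion
    for level in LEVELS:
        setting = policies.get(level)
        if setting is not None:
            decision = setting
    return decision

# The AZ requires HA for everything, but this flavor opts out:
print(ha_required({"availability_zone": True, "flavor": False}))  # False
```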
* An ability to non-intrusively monitor VMs for failure
* An ability to monitor provisioning processes on the compute host for failure
Provisioning processes include ``nova-compute``, associated backend
hypervisor processes such as ``libvirtd``, and any other dependent
services, e.g. ``neutron-openvswitch-agent`` if Open vSwitch is in use.
* An ability to monitor hypervisor host failure
* An ability to automatically restart VMs due to VM failure
The restart should first be attempted on the same compute host, and if that
fails, it should be attempted elsewhere.
* An ability to restart provisioning process
* An ability to automatically resurrect VMs from a failed hypervisor host
and restart them on another available host
The host must be fenced (typically via a STONITH mechanism) prior to the
resurrection process, to ensure that there are never multiple instances of
the same VM accidentally running concurrently and conflicting with each
other. The conflict could cause data corruption, e.g. if both instances are
writing to the same non-clustered filesystem backed by a virtual disk on
shared storage, but it could also cause service-level failures even without
shared storage. For example, a VM on a failing host could still be
unexpectedly communicating on a project network even when the host is
unreachable via the cluster network, and this could conflict with
another instance of the same VM resurrected on another compute host.
* An ability to disable the ``nova-compute`` service of a failed host so
that ``nova-scheduler`` will not attempt to provision new VMs to that
host before ``nova`` notices.
* An ability to make sure the target host for VM evacuation is aligned with the
underlying system boundaries and limitations
* An ability to reserve hypervisor host capacity and update the capacity in the
event of a host failure
* An ability for operator to coordinate with host maintenance tasks
* An ability to check the history of failure and recovery actions
External References
+++++++++++++++++++
* `Automatic Evacuation (Etherpad) <https://etherpad.openstack.org/p/automatic-evacuation>`_
* `Instance Auto-Evacuation Cross Project Spec (In Review) <https://review.openstack.org/#/c/257809>`_
* `Instance HA Discussion (Etherpad) <https://etherpad.openstack.org/p/newton-instance-ha>`_
* `High Availability for Pets and Hypervisors (Video) <https://youtu.be/lddtWUP_IKQ>`_
* `Masakari (GitHub) <https://github.com/ntt-sic/masakari>`_
* `Masakari API Design <https://github.com/ntt-sic/masakari/wiki/Masakari-API-Design>`_
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
* **MTTR** - Mean Time To Repair
* `Availability <https://en.wikipedia.org/wiki/Availability>`_ -
ratio of the expected value of the uptime of a system
to the aggregate of the expected values of up and down time.
Not to be confused with
`reliability <https://en.wikipedia.org/wiki/Reliability_engineering>`_.
* `High Availability <https://en.wikipedia.org/wiki/High_availability>`_ -
a characteristic of a system which aims to ensure an agreed level of
operational performance for a higher than normal period. Not to be
confused with 100% availability, which is sometimes described as
`fault tolerance <https://en.wikipedia.org/wiki/Fault_tolerance>`_.
* `Pets and cattle
<http://www.theregister.co.uk/2013/03/18/servers_pets_or_cattle_cern/>`_ -
a metaphor commonly used in the OpenStack community to describe the
difference between two service architecture models: cloud-native,
stateless, disposable instances with built-in resilience in the
application layer (cattle), vs. legacy, stateful instances with no
built-in resilience (pets).
===========================
High Scale Media Telco Apps
===========================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
This use case is specifically about deploying the Perimeta Session Border
Controller (SBC) Virtual Network Function (VNF) from Metaswitch Networks in
OpenStack.
Perimeta, like other SBCs, sits on the edge of a service provider's network and
polices SIP and RTP (i.e. VoIP) control and media traffic passing over both
* the access network between end-users and the core network
* the trunk network between the core and another service provider
.. code-block:: text
Access + SP A core + Trunk + SP B core
network | network | network | network
| | |
| | |
+-------+ +-+--+ +---------+ +-+--+ +-+--+ +---------+
|User | |SBC | |Network | |SBC | |SBC | |Network |
|device |--| |---|function |--| |------------| |---|function |
+-------+ +-+--+ +---------+ +-+--+ +-+--+ +---------+
| | |
+ + +
See the Glossary for a description of these terms.
In order to implement its security and admission control functions (e.g. DDoS
protection), Perimeta must perform line-rate processing of received packets.
For RTP streams, this equates to several million VoIP packets (each ~64-220
bytes depending on codec) per second per core. Perimeta must be able to
guarantee this performance and offer SLAs.
Perimeta must be fully HA, with no single points of failure, and service
continuity over both software and hardware failures (i.e. all SIP sessions and
RTP sessions must continue with minimal interruption over software or hardware
failures).
Perimeta must be elastically scalable, enabling an NFV orchestrator to add and
remove instances in response to applied load.
To apply different policies to traffic from different customers, Perimeta must
be able to distinguish and separate traffic from different customers via VLANs
or similar mechanism.
Perimeta must separate networks carrying live customer traffic from networks
carrying management or other internal data.
Perimeta signaling instances must be able to support large numbers of
concurrent TCP connections (hundreds of thousands) to cater for large numbers
of clients using TCP.
Perimeta must be able to coexist with VMs which do not have these requirements
on the same host, so long as it can provide sufficient dedicated resources.
For example, just because Perimeta may not require security group function
does not mean this can be disabled at host scope, and just because Perimeta
uses SR-IOV or DPDK does not mean that all VMs on that host must do so.
Opportunity/Justification
+++++++++++++++++++++++++
Although this user story is specifically about Perimeta, it is more generally
representative of the issues involved in deploying in OpenStack any VNF
utilising a fast data plane or high scale SIP. The use case focuses on those
elements rather than more generic issues like orchestration and high
availability (HA).
Requirements Specification
--------------------------
Use Cases
+++++++++
* As a communication service provider, I want to deploy a highly available,
high scale, high performance Session Border Controller on OpenStack to police
VoIP traffic at the edge of my network.
Usage Scenario Examples
+++++++++++++++++++++++
The Perimeta Session Border controller from Metaswitch Networks is a
Telco-grade implementation of a Session Border Controller designed to run
either on generic PC hardware or virtualized, running on OpenStack and other
clouds, providing high availability, high scale and high performance.
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
The problem statement above leads to the following requirements.
* Achieving packets per second target - networking implications
A standard OpenStack/Open vSwitch platform allows VMs to drive NICs to full
bandwidth if using the large packet sizes typical of Web applications. What
makes VoIP different is the small packet size, which means an order of
magnitude more packets and permits only a few hundred CPU instructions per
packet -
nowhere near enough to drive a packet through the standard OpenStack
networking stack from VM to NIC. Instead, this requires technologies such
as SR-IOV (https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov
- completed in 2014.2, though with some technical debt remaining) or a DPDK
or similar poll mode based vSwitch in the host. Note that SR-IOV in
particular imposes some limitations (e.g. prevents live migration) so may not
be a desirable option for some SPs.
Ideally the network would support and respect QoS rules on traffic priority
and bandwidth limits.
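The "few hundred CPU instructions per packet" budget follows from simple
arithmetic. The clock speed and packet rate below are illustrative numbers,
not measurements of any particular deployment.

```python
# Per-packet CPU cycle budget at line rate: clock cycles available
# divided by packets to process each second.

def cycles_per_packet(clock_hz, packets_per_second):
    return clock_hz / packets_per_second

# A 2.6 GHz core handling 3 million small RTP packets per second:
budget = cycles_per_packet(2.6e9, 3e6)
print(round(budget))  # ~867 cycles -- far too few to traverse the full
                      # kernel networking stack, hence SR-IOV or a DPDK
                      # poll mode vSwitch.
```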
* Security - networking implications
Security groups must be disabled for network technologies where they are
not bypassed completely.
The network should protect against ARP poisoning attacks from other VMs.
* High scale TCP - networking implications
For ports with security group function disabled, it is desirable that host
connection tracking function is disabled to avoid performance and occupancy
hits for large numbers of TCP connections and the need to tune host
parameters unnecessarily.
* Achieving packets per second target - compute implications
* To achieve line rate all the working data for processing RTP streams
(active flows etc.) must be kept in L3 cache - main memory look-ups are too
slow. That requires pinning guest vCPUs to host pCPUs.
* To optimise the data flow rate it is desirable to bind to a NIC on the host
CPU's bus.
* To offer performance SLAs rather than simply "best efforts" we need to
control the placement of cores, minimise translation lookaside buffer (TLB)
misses and get accurate info about core topology (threads vs. hyperthreads
etc).
* HA
Perimeta must be deployable to provide a 5 9's level of availability. If
deployed in a single cloud instance, that instance must therefore itself be
more than 5 9's available. As that is hard to achieve with today's state of
the art, Perimeta is designed to be able to span multiple independent cloud
instances, so that the failure of any one cloud has a minor impact. The
requirements that creates are still being discussed and will be addressed in
a future use case.
When deploying Perimeta within a single cloud instance, Perimeta uses an
active/standby architecture with an internal heartbeat mechanism allowing the
standby to take over within seconds of failure of the active, including
taking over its IP address. Supporting these application-level HA mechanisms
requires:
* support for anti-affinity rules to prevent the active and standby being
instantiated on the same host
* support for application-controlled virtual IPs via gratuitous ARP based
scheme (for IPv4) and NDP Neighbour Advertisements (for IPv6); in both
cases the standby sends messages saying it now owns the virtual IP address.
The former is supported through standard anti-affinity nova scheduler rules,
and the latter through the neutron allowed-address-pairs extension.
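A command sketch of both mechanisms (server names, flavor, and UUIDs are placeholders):

```shell
# Anti-affinity group: the scheduler keeps active and standby on separate hosts.
openstack server group create --policy anti-affinity perimeta-ha
openstack server create --flavor perimeta.nfv --image perimeta \
    --hint group=<server-group-uuid> perimeta-active

# Allow the shared virtual IP on each instance's port so gratuitous ARP /
# NDP takeover by the standby is not dropped by anti-spoofing rules.
openstack port set --allowed-address ip-address=192.0.2.100 <port-uuid>
```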
If using SR-IOV, Perimeta does not need multiple SR-IOV ports, as
application level redundancy copes with the failure of a single NIC. However,
it can take advantage of local link redundancy using multiple SR-IOV vNICs.
For this to be of any benefit requires the SR-IOV VFs forming a redundant
pair to be allocated on separate PFs.
Additionally, it is clearly desirable that the underlying cloud instance is
as available as possible e.g. no single points of failure (SPOFs) in the
underlying network or storage infrastructure.
* Elastic scaling
An NFV orchestrator must be able to rapidly launch or terminate new Perimeta
instances in response to applied load and service responsiveness. This is
basic OpenStack nova function.
* Scalable mechanism to present multiple networks to a VM
There must be a scalable mechanism to present multiple networks to Perimeta,
on the order of hundreds or thousands, far exceeding the number of vNICs
that can be attached. Various mechanisms are possible; a common one, and the
one that Perimeta supports, is for different customer networks to be
presented over VLANs. This creates a guest requirement for VLAN trunking
support.
There are multiple possible ways of mapping networks to these VLANs within
OpenStack, for example, trunking external VLAN networks directly to the VMs
with minimal OpenStack knowledge or configuration (already supported in Kilo)
or configuring the mapping between OpenStack networks and VLANs as covered in
VLAN aware VMs: https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
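Later Neutron releases expose the VLAN-aware-VMs model through the trunk API; a sketch of the mapping (port and trunk names are placeholders):

```shell
# The parent port carries untagged traffic; each customer network is then
# delivered to the guest as a tagged subport on the same vNIC.
openstack network trunk create --parent-port <parent-port> perimeta-trunk
openstack network trunk set perimeta-trunk \
    --subport port=<customer-port>,segmentation-type=vlan,segmentation-id=101
```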
*External References*
+++++++++++++++++++++
None.
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
None.
Glossary
--------
**NFV**
Network Functions Virtualization, the vision of deploying telecoms functions
as virtual applications running on commercial off the shelf hardware.
**VNF**
Virtual Network Function - a telecoms or other network function running as
a virtual application.
**SIP**
Session Initiation Protocol (RFC 3261) - a common application-layer control
protocol for creating, modifying and destroying sessions between two or more
participants.
**RTP**
Real-time Transport Protocol (RFC 3550) - an end-to-end network transport
protocol for transmitting real-time data like audio and video.
**VoIP**
Voice over Internet Protocol - delivering voice and multimedia sessions over
IP networks, commonly through the use of SIP + RTP.
**SBC**
Session Border Controller, a telecoms function which polices SIP and RTP
flows, providing security, quality of service, admission control and interop
services.
**DDoS**
Distributed Denial of Service - a form of packet flood attack.
**SLA**
Service Level Agreement - contractual commitment to reach certain performance
and availability targets.
**SR-IOV**
Single Root I/O Virtualisation - a technique for presenting a single physical
PCIe device (such as a NIC) as multiple virtual devices, directly presented
to VMs.
**DPDK**
Data Plane Development Kit - a set of libraries and drivers for fast packet
processing.
@@ -1,158 +0,0 @@
OpenStack Extreme Testing
==========================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
*Problem Definition*
++++++++++++++++++++
In order to provide competitive service to customers, OpenStack operators
are upgrading components, integrating new hardware, scaling up, and making
configuration changes frequently. However, not all of those variations are
covered by current OpenStack test systems. Most OpenStack cloud service
providers conduct tests by themselves before introducing new changes to
production. Those tests include integration testing, component interface
testing, operational acceptance testing, destructive testing, concurrent
testing, performance testing, etc. Currently the OpenStack ecosystem has
unit, functional, and integration testing; most of the tests listed above
are missing or only partially implemented in the ecosystem.
Opportunity/Justification
+++++++++++++++++++++++++
These extended tests can significantly improve the overall quality of
OpenStack and dramatically reduce the time needed to introduce a new release
or new changes to a production environment. Tests would be run before each
stable release by the QA team, or even more collaboratively through
third-party CI systems, spreading the cost of pre-release testing and
increasing the number of issues reported and fixed before release.
However, testing upstream code with all possible combinations of hardware and
configurations is not practical. One possible solution is for the QA team to
run these extended tests on a few pre-selected reference architectures, with
other architectures added as third-party CIs.
After release the tests can be used by each distributor in their stabilization
processes, and finally by each operator as they stabilize their configuration
and each deployment. Currently operators run these extended tests by
themselves rather than collaborating and taking advantage of each other's
work.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* Destructive testing
As `Rey the Cloud Operator`_, I would like all OpenStack projects to be
tested for destructive scenarios on an OpenStack cloud with
`High Availability <http://docs.openstack.org/ha-guide/>`_ configurations,
such as controller node high availability and Networking, Storage, and
Compute service high availability.
So that as we deploy OpenStack into production we have fewer situations in
which OpenStack functions themselves fail (bugs fixed beforehand) and
for others we avoid or can plan to mitigate with our specific configurations.
.. todo:: Add the details of reference architecture for Destructive testing
* Concurrent testing
As Rey, I would like the following OpenStack projects to be tested for
concurrency before each stable release, so that as we deploy OpenStack into
production environments we are confident that a real-world situation of
simultaneous function calls does not fail.
OpenStack projects for extended testing:
* Nova
* Cinder
* Glance
* Keystone
* Neutron
* Swift
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Usage Scenario Examples
+++++++++++++++++++++++
**Destructive testing**
Destructive testing simulates when part of the underlying OpenStack
infrastructure (HW or SW) or a component of OpenStack itself fails
or needs to be restarted and verifies that the system operates
properly even in such conditions:
* Shut down a control node where API services are running and verify that API
requests are processed as expected
* Restart network switches and verify that services can recover
automatically
* Restart some OpenStack services and verify that each service recovers
within the expected downtime
* Generate DB/RabbitMQ downtime and verify that there is no request
loss and no non-recoverable errors in the system
* Shut off a hardware blade
.. todo:: Add more details to each test case
(ref: Destructive testing reference architecture)
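The "verify that service can recover in expected downtime" checks above reduce to a poll-until-healthy loop; a minimal generic sketch (the health check itself, e.g. an API probe, is left as a callable):

```python
import time

def wait_until_recovered(check, timeout=60.0, interval=1.0):
    """Poll ``check`` until it returns True or ``timeout`` seconds elapse.

    Returns True if the service recovered within the allowed downtime,
    False otherwise.  ``check`` would typically probe an API endpoint.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```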
**Concurrent testing**
Concurrent testing issues requests to a functioning OpenStack cloud more
than one at a time. This can be the same functional request for two
different users, or different functional requests accessing the same
resource. The expected result is that the functions complete in the same
manner as they did when not issued simultaneously.
OpenStack Rally can be used to conduct these concurrent tests.
* Tenants added at the same moment
* Networks requested at the same moment
* In a constrained storage environment a release of storage and request
for that storage happen at the same time.
* Simultaneously shelve and migrate instance and then unshelve the instance
* Simultaneously create multiple snapshots from an instance
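A Rally task for one such scenario might look like this sketch (the scenario name is from Rally's standard Nova plugins; the image and flavor names are assumptions):

```json
{
  "NovaServers.boot_and_delete_server": [
    {
      "args": {
        "flavor": {"name": "m1.tiny"},
        "image": {"name": "cirros"}
      },
      "runner": {"type": "constant", "times": 20, "concurrency": 5},
      "context": {"users": {"tenants": 2, "users_per_tenant": 2}}
    }
  ]
}
```

``"concurrency": 5`` issues five boot/delete cycles simultaneously, exercising the same code paths as simultaneous real-world requests.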
Related User Stories
++++++++++++++++++++
None.
*Requirements*
++++++++++++++
None.
*External References*
+++++++++++++++++++++
* `Destructive testing (os-faults library and Stepler framework) <https://etherpad.openstack.org/p/ocata-qa-destructive-testing>`_
* `OS Faults <https://github.com/openstack/os-faults>`_
* `HA Failure Test <https://github.com/avdhoot07/HA-Failure-TEST>`_
* `RBAC policy testing <https://etherpad.openstack.org/p/ocata-qa-policy-testing>`_
* `Cloud99 <https://github.com/cisco-oss-eng/Cloud99>`_
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
None.
Glossary
--------
None.
@@ -1,220 +0,0 @@
.. This work is licensed under a Creative Commons Attribution 3.0 Unported License.
.. http://creativecommons.org/licenses/by/3.0/legalcode
Configuration for RefStack
==========================
Cross Project Spec - None
User Story Tracker - None
Problem description
-------------------
Problem Definition
+++++++++++++++++++
Customer decision makers choosing an OpenStack solution to acquire often
need the configuration information that was used for certification, in order
to decide which solution ensures interoperability with other OpenStack
environments and/or portability of applications across OpenStack solutions.
OpenStack operators would like to set up a configuration for an OpenStack
deployment that provides interoperability support for their users. Thus, they
need information from RefStack about the configuration used for DefCore
certification of the vendor solution.
Opportunity/Justification
+++++++++++++++++++++++++
The goal of `Openstack interoperability branding <https://www.openstack.org/brand/interop>`_
is to ensure interoperability between OpenStack clouds to support portability
of applications between them. Unfortunately, this portability is heavily
dependent on the full OpenStack solution configuration, which consists of the
OpenStack configuration as well as the underlying hardware environment
configuration.
The proposal is to provide the information necessary for operators and
decision makers to decide which configuration(s) and which vendor solution
will satisfy their interoperability requirements. The configuration
information requested is similar to what many benchmarking groups, like TPC
and SPEC, provide with their results.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* Private Cloud
* As a Cloud solution decision maker I want to choose a private OpenStack
solution that meets my application interoperability requirements. Thus, I
need to ensure that my OpenStack solution configuration ensures
interoperability.
* As a Cloud solution decision maker I want to choose a private OpenStack
solution that meets my application interoperability requirements on the
hardware vendor of my choice. Thus, I need to ensure that the solution's
interoperability results are applicable to my hardware choice.
* As a Cloud solution decision maker I want to choose a private OpenStack
solution for a PoC that meets my applications' interoperability requirements
on hardware that is available in my lab now. Thus, I need to ensure that the
solution's interoperability results are applicable to my hardware choice.
* As a Cloud solution decision maker I want to choose an OpenStack solution
that supports multiple hypervisors for my applications and meets
interoperability requirements for all of them.
* As `Rey the cloud operator`_ I want to choose the hypervisor for my
OpenStack solution configuration that provides application interoperability.
Thus, as Rey I want to see which hypervisor was used by the vendor for
RefStack result submissions.
* As Rey I want to choose the container management for my OpenStack
solution configuration that provides application interoperability. Thus, as
Rey I want to see which container management was used by the vendor
for RefStack result submissions.
* As a Cloud solution decision maker I want to choose an OpenStack solution
that supports interoperability for bare metal applications.
* As Rey I want to configure my OpenStack solution so that it ensures
bare metal application interoperability. Thus, as Rey I want to see what
OpenStack configuration and what hardware and hardware configuration were
used by the vendor for RefStack result submissions.
* As Rey I would like to replicate vendor interoperability
results. This requires the exact configuration of the complete OpenStack
solution, including hardware choices and their configuration, the mapping of
OpenStack components to hardware, and the OpenStack configuration for each
OpenStack project.
* Public Cloud
* As a Cloud solution decision maker I want to choose a public OpenStack
solution that meets my applications' interoperability requirements. Thus, I
need to ensure that the public OpenStack solution provider deploys my
applications on the same OpenStack configuration that was used for DefCore
Logo certification.
* As a Cloud solution decision maker I want to choose an OpenStack solution
that supports containers for my applications and meets interoperability
requirements for them.
* As Rey I want to choose the hypervisor for my OpenStack solution
configuration that provides application interoperability. Thus, as Rey
I want to see which hypervisor was used by the vendor for RefStack
result submissions.
* As Rey I want to choose the container management for my OpenStack
solution configuration that provides application interoperability. Thus, as
Rey I want to see which container management was used by the vendor
for RefStack result submissions.
* As a Cloud solution decision maker I want to choose an OpenStack solution
that supports interoperability for bare metal applications.
* As Rey I would like to replicate vendor interoperability
results. This requires the exact configuration of the complete OpenStack
solution, including hardware choices and their configuration, the mapping of
OpenStack components to hardware, and the OpenStack configuration for each
OpenStack project as chosen by the vendor. While these choices are not
visible to a user in a public cloud, the vendor can publish an identifier
for the exact environment used for its RefStack results, which the user can
then reference. That covers hardware choices, OpenStack project
configuration, the environment in which it is running, Availability Zone,
Host Aggregate, and High Availability.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Rey the cloud operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html
Usage Scenarios Examples
++++++++++++++++++++++++
* I want to choose an OpenStack solution
* I have a list of VM applications that are required to be portable
* I have a list of docker container based applications that are required to
be portable
* I need to provide environment for my dev/QA team to develop portable
applications
* I have a short list of preferred hardware partners
* I review OpenStack vendors that have the foundation interoperability logo
to see which ones passed certification on KVM.
* I review OpenStack vendors that have the foundation interoperability logo
to see which ones passed certification on Docker containers.
* I review hardware partners in the OpenStack interoperability list and
choose ones that meet my container and hypervisor requirements.
* I install a chosen vendor's OpenStack product as admin in my organization.
* I review the vendor's interoperability submission results to configure my
deployment so it will pass interoperability testing.
* I run RefStack interoperability tests on it as base validation.
* I run RefStack interoperability tests on an OpenStack public cloud that has
logo interoperability certification, say Rackspace.
* I compare the results between themselves and against the RefStack results
on record.
* If results do not match expectations I send email to interop@openstack.org,
a ticket gets generated, and the issue is escalated to the proper level for
resolution.
* If results are successful, I deploy and run an application on my private
cloud.
* I deploy the same application at OpenStack public cloud that has logo
interoperability certification, say Rackspace.
* I compare results of two runs to ensure that they are the same.
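The base-validation runs above would typically use refstack-client; an invocation sketch (the tempest.conf path and the guideline test-list URL are illustrative):

```shell
# Run the interoperability test list against the cloud described by
# tempest.conf, keeping subunit results for comparison and upload.
refstack-client test -c ~/tempest.conf -v \
    --test-list "https://refstack.openstack.org/api/v1/guidelines/2016.08/tests"
```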
Related User Stories
++++++++++++++++++++
* `Require Vendors to submit configurations (from DefCore midcycle) <https://review.openstack.org/#/c/207209>`_
Requirements
++++++++++++
* Identification of configuration items and values that are important in
parameterizing the interoperability of each specific implementation of an
OpenStack cloud.
* A tool to extract OpenStack configuration files, anonymize them for
sensitive information, like passwords, and combine them into a reporting
entity.
* Enhance the RefStack client to use the above tool to collect and report
pertinent configuration information as defined by the DefCore submission
guidelines.
* A definition for a unique OpenStack public cloud product+configuration that
is submitted to RefStack with configuration information that is queryable
and discoverable by users.
External References
+++++++++++++++++++
None.
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
* **TPC** - `Transaction Processing Performance Council <http://www.tpc.org>`_
* **SPEC** - `Standard Performance Evaluation Corporation <http://www.spec.org>`_
@@ -1,281 +0,0 @@
Rolling Updates and Upgrades
=============================
Cross Project Spec - `Under Review <https://review.openstack.org/290977>`_
User Story Tracker - `Rolling Upgrades Tracker <https://github.com/openstack/openstack-user-stories/tree/master/tracker/rolling-upgrades.json>`_
Problem description
-------------------
Problem Definition
++++++++++++++++++
OpenStack operators often shy away from upgrading or updating OpenStack due to
concerns about the intrusiveness of upgrades. This prohibits operators from
realizing the complete value of their OpenStack cloud, specifically their
access to a constantly improving platform and interoperability with an
expanding OpenStack ecosystem.
The use cases below cover deployments based directly on the OpenStack upstream
code base. While some of the features may be utilized by distribution providers
to improve their support for non-disruptive updates and upgrades, they are not
specifically covered in this document.
Opportunity/Justification
+++++++++++++++++++++++++
This is a large reason why enterprises fail to gain the full value of their
OpenStack cloud. **Upgrades and updates have never been easy and in many
environments require extended downtime of both the control and dataplane.**
This is an inherently un-cloudy characteristic of the OpenStack platform.
Fixing upgrades and updates would clear up many concerns which limit OpenStack
adoption today.
Requirements Specification
--------------------------
Use Cases
+++++++++
This section utilizes the `OpenStack UX Personas`_.
* As `Quinn the Application Developer`_, I want to experience a stable, regularly updated and
upgraded OpenStack platform in order to utilize new features, bug fixes and
security enhancements, so that my cloud development experience is
consistently world-class.
* As `Rey the Cloud Operator`_, I want to provide my users a reliable and available
OpenStack platform so that they do not experience any data plane downtime or
extended control plane downtime
* As Rey, I want to have confidence in my ability to perform an
OpenStack cloud update so that I can perform them on a monthly basis
* As Rey, I want to be able to roll back the most recent cloud
upgrade or update I initiate in the event of issues so that I can be
confident that even in the case of errors I will still avoid data plane or
control plane downtime
* As Rey, I want to be able to define characteristics of a rolling
reboot of my data and control plane hosts so that my users are not impacted
by a rolling upgrade or update
* As Rey, I want to be able to run pre-upgrade tests to ensure my
cloud is capable of upgrading or updating to a specified version so that I
can be confident in the success of my upgrade or update
* As Rey, I want a way to validate whether an upgrade completed
successfully, and get clear indication for any issues and how to resolve them
with specific actions (such as repair, fix and retry, rollback).
* As Rey, I want to know beforehand the upgrade plan including
timing, dependencies, and which services would be impacted.
.. _OpenStack UX Personas: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas.html
.. _Quinn the Application Developer: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/app-developer.html#app-developer
.. _Rey the Cloud Operator: http://docs.openstack.org/contributor-guide/ux-ui-guidelines/ux-personas/cloud-ops.html#cloud-ops
Usage Scenarios Examples
++++++++++++++++++++++++
1. Successful upgrade
a. Cloud Operator schedules OpenStack upgrade to latest release
b. Cloud Operator can be assured that API will perform as expected from a
review of the appropriate service release notes
c. Cloud Operator performs upgrade following simple documentation
d. Cloud Operator notifies users of successful upgrade and new feature and
enhancement availability
e. Cloud Operator schedules next update for 1 month's time (or as needed)
to take advantage of backports, bug fixes and security updates
2. Unsuccessful Update/Upgrade
a. Cloud Operator schedules OpenStack upgrade or update to latest 6 month
release
b. While performing upgrade or update Cloud Operator notices an unexpected
error
c. Cloud Operator returns to a previously known, error-free state
3. Immediate Update
a. Cloud Operator is informed that a security vulnerability has been found
in an OpenStack service and a patch is available for the current release
b. Cloud Operator schedules an update to correct the vulnerability
c. After the update completes successfully, the Cloud Operator's cloud is no
longer vulnerable
4. Rolling Upgrade on Dataplane
a. Cloud Operator schedules an OpenStack upgrade or update for a security
vulnerability which requires reboots of the entire fleet of data-plane
hosts
b. Cloud Operator initiates the upgrade or update and performs the reboots
of the dataplane hosts in an automated, configurable process
c. Cloud Users are unaffected by the reboots
Related User Stories
++++++++++++++++++++
None.
Requirements
++++++++++++
None.
Gaps
++++
Upgrades today require downtime in the data plane, network connectivity and
often control plane.
The current gaps preventing rolling upgrades span a number of fronts which can
best be illustrated via a process for performing a rolling upgrade.
1. **Maintenance Mode** - Preventing the scheduling of additional instances
on a host
2. **Live Migration** - Improvements to live migrating existing resources
from hosts
3. **Upgrade Orchestration - Deploy** - Orchestrating deployment of upgraded
or new versions of a service
4. **Multi-version Interoperability** - Enabling communication between
different versions of the same OpenStack service
5. **Online Schema Migration** - Enabling database schema migrations without
requiring service downtime
6. **Graceful Shutdown** - Ensuring services can be shut down without
interrupting requests in process
7. **Upgrade Orchestration - Remove** - Orchestrating potential removal of
older versions of a service and cleanup
8. **Upgrade Orchestration - Tooling** - Ease-of-use tools for performing
upgrades across control and data plane hosts
9. **Upgrade Gating** - Gating projects on successful rolling upgrades
10. **Project Tagging** - Informing operators which projects can successfully
perform rolling upgrades
For operators, a successful cloud upgrade or update involves all OpenStack
services deployed in a cloud. For that reason a number of these fronts require
enhancements to all projects likely deployed by operators. We'll review these
items first:
**Multi-version Interoperability**
During rolling upgrades it is critical that RPC communications can handle
multiple service versions running concurrently. One common pattern for
achieving this is versioned objects; a versioned-objects library
exists in Oslo. Each individual project must consider whether or not versioned
objects are the right tool for the multi-version interoperability job. The
following is the status of versioned objects for common OpenStack projects:
* Nova - Implemented
* Neutron - In Progress
* Glance - Not Applicable
* Cinder - In Progress, Not Required
* Swift - Not Applicable
* Keystone - Not Applicable
* Horizon - Not Applicable
* Heat - Implemented
* Ceilometer - Alternatives Proposed
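The core idea of versioned objects can be illustrated with a simplified sketch (this is not the oslo.versionedobjects API, just the backlevel pattern it implements; the field names are invented):

```python
class Instance:
    """Toy versioned object: version 1.1 added 'availability_zone'."""

    VERSION = "1.1"

    def __init__(self, name, availability_zone="nova"):
        self.name = name
        self.availability_zone = availability_zone

    def obj_to_primitive(self, target_version="1.1"):
        """Serialize for a peer, dropping fields it is too old to know."""
        primitive = {"name": self.name,
                     "availability_zone": self.availability_zone}
        if target_version == "1.0":
            # Backlevel the payload for services still running the 1.0 schema.
            primitive.pop("availability_zone")
        return primitive
```

During a rolling upgrade, new services send 1.0-shaped payloads to not-yet-upgraded peers until every node runs the new code.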
**Online Schema Migration**
Online schema migration, like multi-version interoperability, is solved in a
variety of fashions. Some projects propose standard schema expansion and
contraction to happen over an entire development cycle rather than online at
the time of upgrade. The following is the status of online schema migration for
common OpenStack projects:
* Nova - Policy Implemented
* Neutron - Implemented
* Glance - Unknown
* Cinder - Policy Implemented
* Swift - Unknown
* Keystone - Unknown
* Horizon - Unknown
* Heat - In Progress
* Ceilometer - Unknown
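The expand/contract approach means upgrades only make additive schema changes while old and new code coexist; a self-contained sketch using SQLite (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")

# Expand phase: an additive change with a default, safe while the service
# is running.  Old code that has never heard of the column keeps working.
conn.execute(
    "ALTER TABLE instances ADD COLUMN availability_zone TEXT DEFAULT 'nova'")

# An old-style insert (no availability_zone) still succeeds and picks up the
# default; the contract phase (dropping obsolete columns) is deferred to a
# later release, once no old code remains.
conn.execute("INSERT INTO instances (name) VALUES ('vm-1')")
row = conn.execute(
    "SELECT name, availability_zone FROM instances").fetchone()
```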
**Maintenance Mode**
Maintenance mode is only useful in those services where entire hosts are used
to create virtual resources. The following is the status of maintenance mode
for applicable OpenStack projects:
* Nova - Implemented
* Cinder - Implemented
* Neutron - Implemented
* Ceilometer - Unknown
* Swift - Implemented
**Live Migration**
Like maintenance mode, live migration is only applicable to those services
where hosts are providing resources. The following is the status of live
migration for applicable OpenStack projects:
* Nova - Implemented (needs some improvements)
* Cinder - Available (depends on backend)
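For Nova these two gaps combine into the usual host-drain procedure; a command sketch (host names are placeholders, and the exact live-migration syntax varies by client version):

```shell
# Maintenance mode: stop scheduling new instances onto the host.
openstack compute service set --disable --disable-reason "rolling upgrade" \
    compute-01 nova-compute

# Live-migrate remaining instances off the host (novaclient syntax).
nova live-migration <server-uuid> compute-02

# After the upgrade/reboot, return the host to the scheduling pool.
openstack compute service set --enable compute-01 nova-compute
```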
**Graceful Shutdown**
Graceful shutdown is applicable to all common OpenStack services and should
result in services being able to be shutdown only after existing requests have
been processed. The following is the status of graceful shutdown across common
OpenStack projects:
* Nova - Implemented
* Neutron - Implemented
* Glance - Unknown
* Cinder - Implemented
* Swift - Unknown
* Keystone - Unknown
* Horizon - Unknown
* Heat - Unknown
* Ceilometer - Unknown
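The graceful-shutdown contract can be sketched generically: on the stop signal a service stops accepting new work but drains what is already in flight (this illustrates the pattern only, not any project's actual shutdown code):

```python
import threading

class GracefulWorker:
    """Reject new requests once stopping, but let in-flight ones finish."""

    def __init__(self):
        self._stopping = threading.Event()
        self._inflight = 0
        self._lock = threading.Lock()

    def accept(self):
        """Called at the start of each request; False means shedding load."""
        if self._stopping.is_set():
            return False
        with self._lock:
            self._inflight += 1
        return True

    def finish(self):
        """Called when a request completes."""
        with self._lock:
            self._inflight -= 1

    def shutdown_requested(self):
        # In a real service this would be wired to SIGTERM.
        self._stopping.set()

    def drained(self):
        """True once every accepted request has finished."""
        with self._lock:
            return self._inflight == 0
```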
Other fronts require work in specific orchestration projects or OpenStack
infra.
**Upgrade Orchestration**
Within OpenStack many of the cloud deployment mechanisms have made concerted
effort towards providing upgrade orchestration. Depending on the reference
architecture each deployment mechanism will determine the appropriate order and
methodology for performing a rolling upgrade. The status of each deployment
methods approach to rolling upgrades follows:
* Triple O - Unknown
* Fuel - Task Based Deployment
* OpenStack Puppet - Unknown
* OpenStack Ansible - Upgrade scripts
* OpenStack Chef - Unknown
* Kolla - In Progress
**Upgrade Gating**
OpenStack infra has not begun deploying upgrade tests into the general gate.
There is an available multi-node upgrade test framework called Grenade. Some
projects have begun including upgrade tests in their gates.
* Nova - Gated by multi-node Grenade test
* Neutron - Gated by multi-node grenade
* Glance - None
* Cinder - None
* Swift - Unknown
* Keystone - None
* Heat - None
* Ceilometer - None
**Project Tagging**
There are project meta data tags to signify that a given OpenStack project is
capable of performing a rolling upgrade.
* Status - Implemented
External References
+++++++++++++++++++
* `Dan Smith's Upgrade Blog Series <http://www.danplanet.com/blog/tag/nova-upgrade-details/>`_
* `Rolling Upgrades Project Meta Data Tag <https://github.com/openstack/governance/blob/master/reference/tags/assert_supports-rolling-upgrade.rst>`_
* `Grenade - OpenStack Upgrade Test Harness <https://wiki.openstack.org/wiki/Grenade>`_
Rejected User Stories / Usage Scenarios
---------------------------------------
None.
Glossary
--------
* **Control Plane** - Hosts or infrastructure which operate OpenStack
services (e.g. nova-api)
* **Data Plane** - Infrastructure instances created by cloud users on an
OpenStack cloud. (Examples: VMs, Storage Volumes, Networks, Databases, etc.)
* **Upgrade** - Installing an entirely different OpenStack major software
release, with new versions available twice a year. Upgrades can include
contract-breaking API changes.
* **Update** - Installing new OpenStack software, typically from a stable
branch, to gain access to bug fixes, security patches etc. These can happen
as frequently as needed. Updates are backward compatible with the current
major software version.
* **Rollback** - Performing an upgrade or update and, whether as the result
of errors, inconsistencies or lack of appropriate preparation, subsequently
returning to the pre-upgrade or pre-update version. It is understood that
any actions or data created after the upgrade or update would likely be
lost as the result of a rollback.
@@ -1,78 +0,0 @@
Security / Policy Enforcement for Enterprise IT
===============================================
Cross Project Spec - None
User Story Tracker - None
Problem Description
-------------------
*Problem Definition*
++++++++++++++++++++
Many enterprises have stringent security requirements, and the security
policy must be enforced by IT security. Such a security policy must be
enforced and applied to all compute resources hosted in the enterprise
environment.
Opportunity/Justification
+++++++++++++++++++++++++
TBD.
Requirements Specification
--------------------------
Use Cases
+++++++++
* As an Enterprise IT security policy maker, I need to ensure that all compute
resources adhere to the security policy defined by the IT security
department, so that the cloud resources are compliant with enterprise rules
and regulations.
* As an Enterprise IT security administrator, I have to create multiple
security policies for different corporate departments or divisions. All
cloud resources provisioned for a particular department or division must
have the relevant security policy applied. Such a policy (e.g. firewall
rules) cannot be removed by cloud users. A cloud user may add additional
rules but cannot remove any rules defined by the IT security administrator.
Usage Scenarios Examples
++++++++++++++++++++++++
The Enterprise IT department needs to enforce a corporate-wide or
division-wide firewall policy and rules. This firewall (or security group)
must be applied to all compute resources of a project/tenant within that
division. This policy is defined by the security administrator and must not
be removable by cloud users.
For example, the security administrator creates a security group with a set
of predefined rules. This security group must be automatically applied to
every VM whenever the VM is launched by a cloud user, and cannot be removed.
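With today's APIs the closest approximation is an admin-created security group attached at boot; a command sketch (group name, rule values, image and flavor are illustrative, and note that nothing here prevents users detaching the group afterwards, which is exactly the gap this story describes):

```shell
# Corporate baseline policy expressed as a security group.
openstack security group create corp-baseline \
    --description "IT security baseline policy"
openstack security group rule create --protocol tcp --dst-port 22 \
    --remote-ip 10.0.0.0/8 corp-baseline

# Launch instances with the baseline group attached.
openstack server create --flavor m1.small --image cirros \
    --security-group corp-baseline app-vm-1
```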
Related User Stories
++++++++++++++++++++
None.
*Requirements*
++++++++++++++
In order to support this user story, we need:
* A method for the security administrator to create a firewall or security
policy and to enforce that policy on different project tenants.
* A mechanism to automatically attach the firewall or security policy to
each network/VM created by cloud users within the project tenant.
* The rules defined in such a firewall/security policy can only be modified by
the security administrator and must not be removed or modified by cloud
users. This might require "role-based access control" for specific types of
resources and actions.
*External References*
+++++++++++++++++++++
TBD.
*Rejected User Stories / Usage Scenarios*
-----------------------------------------
None.
Glossary
--------
TBD.

View File

@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'openstack-user-stories'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

View File

@ -1 +0,0 @@
../../development-proposals/

Binary file not shown.


View File

@ -1,44 +0,0 @@
Contents:
===========================
Development Proposal Status
===========================
.. include:: ../../README.rst
==================================
Development Proposal Specification
==================================
.. toctree::
:glob:
:maxdepth: 1
development-proposals/proposed/*
================================
Other Proposals In Draft/Pending
================================
.. toctree::
:glob:
:maxdepth: 1
development-proposals/parking/*
========
Workflow
========
.. toctree::
:glob:
:maxdepth: 1
workflow/*
==================
Indices and tables
==================
* :ref:`search`

View File

@ -1,48 +0,0 @@
Product WG: Taxonomy Overview
=============================
The Product Working Group will help the community document requirements, in the form of user stories, for key barriers to adoption based on the specific needs of special interest groups (SIGs) working as part of the user committee organization. While we will also accept user story submissions from individual members, we highly encourage you to join a Working Group that shares your interest and submit as part of the group. The main reason is that the group may already have plans that align with your needs and, if the user story resonates with the team, it can gain greater visibility through the working group members.
Here is a list of all of the working groups that exist today; visit their wiki pages for more information: `https://wiki.openstack.org/wiki/Category:Working_Groups <https://wiki.openstack.org/wiki/Category:Working_Groups>`_
The purpose of this document is to share the taxonomy/hierarchy used by the Product Working Group for generating, categorizing, and tracking user stories.
Agile Terms
-----------
The Product Working Group will follow agile terminology/methodology in a loose sense. It is therefore important to cover some basic agile terms and process information before describing our manifestation of an agile-like process. Please also keep in mind that most individuals and organizations tend to have *slightly* different interpretations of the concepts being discussed in this section, therefore do not consider this a definitive guide on themes, epics, user stories, features, etc.
**Theme**: An area of focus (grouping of user stories/epics). It does not contain significant detail and is associated with a single product. Normally either theme or epic is the top-level artifact when describing requirements. A theme can span multiple sprints.
**Epic**: An epic is generally a larger, broader user story. It may be the top-level requirements artifact or can be one of many epics under a theme. An epic will generally span multiple sprints; the user stories generated from the decomposed epic are the items that belong within the product backlog or sprint.
**User Story**: A user story can only belong to one epic at a time and cannot span sprints. The user story generally captures what the user wants using a format that captures an actor/role, objective, and benefit/reasoning. This item is then broken into more-detailed tasks or features that describe the specific work that must be done to deliver the user story. Most product backlogs prioritize work using user stories as the operational unit of work.
**Feature**: Feature is used by some as an alternative to user story and as an alternative to task by others. For our terminology overview, we will consider feature and task as interchangeable terms. Please see task for additional details.
**Scenario**: A scenario usually expands upon the user story to describe examples of how the user story might be interpreted or how the requirement being described is experienced by the user. A good scenario can also act as a way to validate that the goal of the user story was successfully achieved.
**Task**: A task is a lower-level requirements item that captures a sub-unit, or step, that is necessary to complete a user story. Tasks are usually generated by the development team working on the user story.
.. image:: ../images/agile_overview.jpg
Good resources for additional information related to these terms:
`http://www.mountaingoatsoftware.com/blog/stories-epics-and-themes <http://www.mountaingoatsoftware.com/blog/stories-epics-and-themes>`_
`http://www.romanpichler.com/blog/agile-scenarios-and-storyboards <http://www.romanpichler.com/blog/agile-scenarios-and-storyboards>`_
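As a quick illustration of the containment rules described above (Theme holds Epics, an Epic holds User Stories, a User Story holds Tasks, and a story belongs to exactly one epic at a time), the hierarchy can be modeled in a few lines. All class and field names here are our own sketch, not Product WG terminology:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    # A sub-unit of work needed to complete a user story.
    description: str

@dataclass
class UserStory:
    # Captures role, objective, and benefit, per the definition above.
    role: str
    objective: str
    benefit: str
    tasks: List[Task] = field(default_factory=list)

    def as_sentence(self) -> str:
        return (f"As a {self.role}, I want {self.objective} "
                f"so that {self.benefit}.")

@dataclass
class Epic:
    title: str
    stories: List[UserStory] = field(default_factory=list)

@dataclass
class Theme:
    name: str
    epics: List[Epic] = field(default_factory=list)

# Toy example using the "availability" theme mentioned below.
theme = Theme("availability")
epic = Epic("service processes")
story = UserStory("cloud operator",
                  "keystone to run highly available",
                  "users can always authenticate")
epic.stories.append(story)
theme.epics.append(epic)
print(story.as_sentence())
```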
Product WG Mapping
------------------
The Product WG will leverage agile terms to communicate and this section provides additional information on how these terms will be mapped inside the OpenStack community.
**Theme**: Themes will be very high-level categories that are agreed upon by the Product WG and OpenStack Foundation. Generally, the themes will be areas such as resiliency, availability, performance, scalability, UX, etc.
**Epic**: Epics will be used to build topics within each theme that could be used to aggregate user stories. For example, in the "availability" theme, we could have an epic for "service processes" and all of the user stories related to making services such as cinder, nova, keystone, etc. "highly available" would fall under this epic.
**User Story**: A user story can only belong to one epic at a time and *can* span sprints. The user story generally captures what the user wants using a format that captures a persona/role, objective, and benefit/reasoning. This item is then broken into more-detailed tasks or features that describe the specific work that must be done to deliver the user story. The Product WG tracker will track user stories.
**Feature**: Feature is used by some as an alternative to user story and as an alternative to task by others. For our terminology overview, we will consider feature and task as interchangeable terms. Please see task for additional details.
**Scenario**: A scenario usually expands upon the user story to describe examples of how the user story might be interpreted or how the requirement being described is experienced by the user. A good scenario can also act as a way to validate that the goal of the user story was successfully achieved. The Product WG user story template requires an entry for usage scenarios to foster discussion of the user story.
**Task**: A task is a lower-level requirements item that captures a sub-unit, or step, that is necessary to complete a user story. Tasks are usually generated by the development team working on the user story. In our case, blueprints/specs that are related to a user story will be considered tasks.

View File

@ -1,95 +0,0 @@
Workflow
========
Where Feature/Improvements Come From
------------------------------------
Feature or improvement ideas typically originate in a few ways:
* User feedback collected by the product working group.
* IRC or Mailing list discussion.
* A patch within a project that's recognized as benefiting other projects.
How to Propose Feature/Improvements in OpenStack
------------------------------------------------
Blueprints
^^^^^^^^^^
To formally propose a feature or improvement to an OpenStack project, you need
to create a Blueprint. Blueprints allow the community to track initiatives and
potentially target them to a milestone in the release being developed. Some of
the information tracked includes who is implementing it, current progress, and
`more <https://wiki.openstack.org/wiki/Blueprints#Blueprints_reference>`_.
`Read entire flow <https://wiki.openstack.org/wiki/Blueprints#Blueprints_only_lifecycle>`_
Project Specifications
^^^^^^^^^^^^^^^^^^^^^^
Some projects go a step further than blueprints and ask for a set of
information up front to determine whether a certain initiative is a good idea.
This set of information can be technical, such as:
* Application Programming Interface (API) impact
* Database impact
* Upgrade impact
* User impact
Specification information is not standard across the different OpenStack
projects, but you can see whether a project uses specifications by going to
https://github.com/openstack/<project>-specs and finding its template file to
see what questions you need to answer.
Keep in mind that not all ideas need a specification, so find out from the
project members whether a certain idea warrants a full spec or just a
blueprint.
`Read entire flow <https://wiki.openstack.org/wiki/Blueprints#Spec_.2B_Blueprints_lifecycle>`_
Cross-Project Specifications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If an idea spans more than one project, it should be introduced in the
`OpenStack Specs repo <https://github.com/openstack/openstack-specs>`_ instead
of a project-specific specification.
Each project that is involved with the specification should have a blueprint
registered, and the blueprint URL should be included in the OpenStack
Specification.
Product Working Group Liaisons Role
-----------------------------------
Introducing A Feature/Improvement
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Depending on resource availability and other priorities, introducing a
feature/improvement will likely require a release or two of notice to project
teams before any work can begin. Therefore, planning and discussion should
happen as soon as possible. A liaison will be assigned to oversee an idea
cross-project with the following responsibilities:
1. Create or have someone create the technical OpenStack specification.
2. CC the cross-project spec liaisons of the affected projects on the
specification. If needed, email the cross-project spec liaisons for attention.
3. If any specification needs additional attention, you can add an item to the
`cross-project meeting agenda
<https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda>`_,
and have it discussed in real time.
4. Specify in the cross-project spec the `topic branch
<http://git-scm.com/book/en/v2/Git-Branching-Branching-Workflows#Topic-Branches>`_
for work to be carried out on. This will allow all development work in
`gerrit <https://review.openstack.org>`_ to be found easily with the topic
filter across the different projects.
5. Once enough consensus is met by the cross-project spec liaisons of the
necessary projects, the specification will be passed to the Technical
Committee for approval.
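Step 4's topic filter can be illustrated in a few lines. The topic name below is a hypothetical example (echoing the "unified limits" effort mentioned elsewhere in this repo), and the ``git review -t`` invocation, shown only as a comment, assumes the git-review tool is installed and configured:

```python
# Hypothetical topic-branch name agreed on in the cross-project spec.
TOPIC = "cross-project/unified-limits"

# Contributors submit each change under that topic, e.g.:
#     git review -t cross-project/unified-limits
# All related work across the projects is then one gerrit query away:
query_url = f"https://review.openstack.org/#/q/topic:{TOPIC}"
print(query_url)
```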
Tracking Feature/Improvement
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1. The Product Working Group Liaison should identify with each project involved
who will actually implement the feature/improvement.
2. Alignment with each project implementing the feature/improvement in the
same release is a nice-to-have. It is fine for some projects to start early,
but the feature should be considered incomplete until all necessary projects
have finished implementing it.
3. The Product Working Group liaison should continue to communicate with the
implementers on each project to track progress. It is up to the liaison to
identify when work is stalling and to work with the project team to find
someone to carry on the work. If the implementer is unresponsive, a meeting
with the project team should be called to find a new implementer.

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-log-messages: 128: If the error message is not useful - then we need a bug on the error message ##action-item ##LCOO

View File

@ -1,28 +0,0 @@
https://etherpad.openstack.org/p/BOS-API-WG-BOF: 38: * ##ACTION ITEM: asettle to add some API contrib stuff to contributor-guide w/ Anne Gentle
https://etherpad.openstack.org/p/BOS-forum-Shared-Commercial-Goals-Public-Clouds: 25: * Identify maturity of projects for public clouds - ##action collect into a spread sheet
https://etherpad.openstack.org/p/BOS-forum-Shared-Commercial-Goals-Public-Clouds: 61: * Find better ways of raising awareness of new features / projects / bugs / whatever that are potentially applicable to other Public Cloud operators ##action
https://etherpad.openstack.org/p/BOS-forum-large-deployment-team: 41: * ##action mdorman to follow up ML post to clarify LDT position on this
https://etherpad.openstack.org/p/BOS-forum-log-messages: 132: ##Action Item Need developer to revise middleware and all clients to pass a single user selected reqID through the entire project chain so all messages for a user initiated requirest will have that reqID (global)
https://etherpad.openstack.org/p/BOS-forum-log-messages: 134: ##Action Item: Youtube on how to navigate Launchpad to specific project and file bug against project for operators
https://etherpad.openstack.org/p/BOS-forum-ops-catch-up: 35: Edgar Magana will work with William Vera to build the local team and make the event possible! ##action
https://etherpad.openstack.org/p/BOS-forum-ops-catch-up: 37: 1.1 Digital Culture Center - August 9th/10th <-- SELECTED AS THE VENUE AND DATE!!!! ##action
https://etherpad.openstack.org/p/BOS-forum-ops-catch-up: 76: - check with Erin or Claire on which option is the most cost efficient ##action
https://etherpad.openstack.org/p/BOS-forum-ops-catch-up: 77: - get with designer ##action
https://etherpad.openstack.org/p/BOS-forum-uc-governance-and-support-of-wgs: 50: * ##action ##uc get documents into gerrit
https://etherpad.openstack.org/p/BOS_Forum_Oslo.Messaging_driver_recommendations: 137: * ##action: Document the status of drivers
https://etherpad.openstack.org/p/pike-forum-cloud-applications: 10: * ##action everyone review Keystone policy roadmap specs
https://etherpad.openstack.org/p/pike-forum-cloud-applications: 20: * ##action mordred, zaneb to come up with preliminary description of this (not spec level of detail) to make sure the API key idea is leading in the right direction

View File

@ -1,84 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 95: * Recommended configurations to setup basic common security practices ##actionitem ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 142: Documented OpenStack security gaps ##actionitem ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 143: Guidelines for securing openstack ##actionitem ##compliance
https://etherpad.openstack.org/p/BOS-forum-LCOOGetToKnow: 49: * Define governance model in LCOO and it is allined with UC ##actionitem#lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 127: * set up and publish slack channel ##actionitem ##lcoo --> https://lcoo.slack.com >> publish this on wiki
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 128: * derive short term win ##actionitem ##lcoo
https://etherpad.openstack.org/p/BOS-forum-contributing-multi-project-requirements: 76: * Collaborate with other WG and bring requirements together, identify commonalities, with further gap analysis ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 44: * Looking for reference architecture on how to configure and design the underlying cloud to support to upper workload ##actionitem ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 45: * Link the workload reference architecture to architecture design guide ##actionitem ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 46: * Continue to work/colloborate with Docs team on guidance ##actionitem ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 48: * Defining who we are??? ##ewg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 49: * define EWG scope ##ewg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 48: * new to openstack - how do you get involved ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 51: * target (welcome) newer large operators ##LCOO ##pwg ##onboarding ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 54: * encourage more sigs ##rbac ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 58: * Post-forum Analysis +1+1 ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 60: * template for response from moderators to send as summary ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 61: * Mine hashtag data ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 65: * Notify moderators to give feedback on Forum sessions ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 66: * How do we identify organization to help with this wg ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 123: What could the communiity do to help in place n+2 upgrades?.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 124: * Better release notes for all projects would help to identfiy what has changed.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 125: * Better description of what happens with mixed versions, i.e agents behind one or two versions.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 126: * TODO: call out projects that are doing upgrade impacts but not documenting them in the release notes.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 192: 1. Monthly IRC meeting of the WG chairs introducing the happenngs of the last period ##actionitem ##nfv
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 201: 1. Wiki or similar to keep track of the related WGs and their work scope ##actionitem ##nfv
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 202: 1. Regular face2face meeting on the Summits and PTG-s ##actionitem ##nfv
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 64: * Set base expectations on the group openness & output ##actionitem ##requirements
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 65: * have discussions in shared place (general mailing list?) ##actionitem ##uc
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 77: * finding ways to steer member companies contributing developers to focus on the identified goals ##wg ##uc ##actionitem
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 24: * review what was initially implemented ##uc ##actionitem
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 25: * review what was has been proposed since initial implementation ##uc ##actionitem
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 26: * what has not been added as a criteria that needs to be ##uc ##actionitem
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 27: * writing it in uc vs tc terms ##uc ##actionitem
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 28: * make a link of which repos fall under uc governance ##uc ##actionitem
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 45: * extra-AUC? Not need for new repo - Add a txt file ##uc ##actionitem +1
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 46: * repo for self-nomination +1 ##uc ##actionitem ##emagana (I will take it)+1
https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal: 56: * lbragstad to go to the Product Working group about getting people to work on this ##actionitem
https://etherpad.openstack.org/p/BOS-forum-using-vm-and-baremetal: 58: * mordred to take on massaging the spec ##actionitem
https://etherpad.openstack.org/p/TrackChair: 24: * Ask track chairs to share best practices to share with incoming group ##trackchair ##actionitem
https://etherpad.openstack.org/p/TrackChair: 30: * Clarifying Alternates and Lightning Talks ##trackchair ##actionitem
https://etherpad.openstack.org/p/TrackChair: 34: * dicussion on track-chair mail-ilst ##trackchair ##actionitem

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-developer-openstack-org: 11: 1. Help take notes by adding ##<actions> and "+1" any good ideas with which you agree.

View File

@ -1,10 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 79: * Better release notes for all projects would help to identfiy what has changed.##all-projects
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 123: What could the communiity do to help in place n+2 upgrades?.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 124: * Better release notes for all projects would help to identfiy what has changed.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 125: * Better description of what happens with mixed versions, i.e agents behind one or two versions.##all-projects#actionitem
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 126: * TODO: call out projects that are doing upgrade impacts but not documenting them in the release notes.##all-projects#actionitem

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 98: * Folks tend to look to Nova versus the guidelines ##apiwg

View File

@ -1,30 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 113: * Why only Large/telco WGs? ##best-practice
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 141: * What is the best way to present needed OpenStack features to the community for discussion? ##best-practice
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 19: * Combined newsletter? ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 20: * An open session simliar to this one, but focused on the WG leads to spend a few minutes describing their group to the attendees might help ##best-practice ##wgchairs - let's always think about scale, i.e. why not describe the groups centrally and consistently communicate to point folks in addition
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 22: * Superuser TV/articles ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 30: * weekly newsletter sent out to openstack-dev ML ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 71: * made a brochure ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 72: * made cards ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 73: * made a presentation ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 74: * submit a Summit BoF session to induct new members ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 75: * Ask Flanders <flanders@openstack.org> ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 76: * Write a SuperUser article ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 77: * have co-chairs in different countries, e.g. Scientific-WG has 3x co-chairs from Australia, UK and USA. ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 78: * have a WG social/party, e.g. Scientific-has a social each summit where it invites vendors to sponsor the drinks for the evening. ##best-practice ##wgchairs
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 79: * Involve the community by letting people know of the work we are doing (Meetups, conferences, small outreach) ##best-practice ##wgchairs

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-developer-openstack-org: 18: ##Board <-- goes to the high level board of directors as suggestion.

View File

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 33: * ##call for public cloud operators to read proposal for keystone specs on 'unified limits' see BOS-forum-quotas
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 47: * ##call for public cloud deployers to support the new maintainer fo rthe VPN as a Service feature.

View File

@ -1,14 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 75: * Call-To-Action: fill out the ethercalc - https://ethercalc.openstack.org/BOS-forum-Compliance-Security-Certification ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 88: * Understanding the 12 PCI DSS Requirements - https://www.pcisecuritystandards.org/pdfs/pci_dss_saq_navigating_dss.pdf ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 95: * Recommended configurations to setup basic common security practices ##actionitem ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 127: * openSCAP is the testing tool: https://www.open-scap.org ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 128: * existing tests for Ubuntu: https://github.com/GovReady/ubuntu-scap ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 142: Documented OpenStack security gaps ##actionitem ##compliance
https://etherpad.openstack.org/p/BOS-forum-Compliance-Security-Certification: 143: Guidelines for securing openstack ##actionitem ##compliance

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 62: * ##concern ##usersurvey ways of getting qualified data (curated list [focus group] vs the world)

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 35: * Project deletions? ##crossproject ##painpoint

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/cloud-native-forum: 15: ##Deployment-team ##LCOO

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 101: * OpenStack Shared|Special Interest Groups ##vote ##dialogue

View File

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 103: * Go beyond the summit ##discoverability
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 106: * collaborative document ##discoverability

View File

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 44: * Write a document describing how to integrate keystone with a set of applications folks would like to see working this way ##keystone ##documentation
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 85: * Best Practices? ##documentation ##upgrades

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 46: * repo for self-nomination +1 ##uc ##actionitem ##emagana (I will take it)+1

View File

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 68: * ETSI NFV (Network Functions Virtualisation) ##ETSINFV

View File

@ -1,34 +0,0 @@
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+: 37: * Can ##pwg or ##ewg help with this?
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 40: * good cadence on ebook ##feedback ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 44: * Looking for reference architecture on how to configure and design the underlying cloud to support to upper workload ##actionitem ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 45: * Link the workload reference architecture to architecture design guide ##actionitem ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 46: * Continue to work/colloborate with Docs team on guidance ##actionitem ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 47: * Any discussion on using general tools such as terraform other than heat ##feedback ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 48: * Defining who we are??? ##ewg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 49: * define EWG scope ##ewg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 58: * Align with Summit cycle seems reasonable ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 61: * e.g. High Throughput Computing (CERN, Tim Bell) +1 ##ewg ##goal
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 62: * Scientific WG could already have a lot of the data required/needed to complete in parallel with other goal(s) ##ewg ##goal
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 64: * Scientific WG could already have a lot of the data required/needed to complete in parallel with other goal(s) ##ewg ##goal
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 65: * e.g. Containers/Kubernetes (IBM Tong Li, Interop Challenge Heat template, Cockroach DB is sample app). Megan's input: Describe how to use individual OpenStack services with Kubernetes (this may be better as a separate series of articles or briefs). Such as Keystone, Cinder, Kuryr. ##ewg ##goal ##interop
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 70: * How container intersect with OpenStack ##ewg ##wishlist
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 76: * Regional meeting? biweekly meeting? ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 77: * need to work on scope/purpose prior to outreach ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 82: * Providing assistance to financial wg on execution ##ewg

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-evolving-the-community-generated-roadmap: 55: ##featuretracker - Follow up on developing a community resource for requesting and rating feature requests.

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 40: * good cadence on ebook ##feedback ##ewg
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 47: * Any discussion on using general tools such as terraform other than heat ##feedback ##ewg

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 85: * Massively Distributed Clouds WG ##FEMDC

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-log-messages: 136: ##Future: Severity for logs

@ -1,24 +0,0 @@
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+: 31: * ##gap Performance gate
https://etherpad.openstack.org/p/BOS-ETSI-NFV-Specs-Reqs-vs-OpenStack-Reality: 32: * ##gap Resource reservation
https://etherpad.openstack.org/p/BOS-ETSI-NFV-Specs-Reqs-vs-OpenStack-Reality: 44: * ##gap Resource change events
https://etherpad.openstack.org/p/BOS-ETSI-NFV-Specs-Reqs-vs-OpenStack-Reality: 47: * ##gap Interpretation of alarms
https://etherpad.openstack.org/p/BOS-ETSI-NFV-Specs-Reqs-vs-OpenStack-Reality: 48: * ##gap Capacity management fragments
https://etherpad.openstack.org/p/BOS-forum-Shared-Commercial-Goals-Public-Clouds: 55: * Where are patches of interest to public cloud operators aggregated? Drop a link in -publiccloud IRC maybe? ##gap
https://etherpad.openstack.org/p/BOS-forum-i18n-translation-feedback-from-users: 19: ##I18n ##translation ##user-invoice ##gap
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 58: * Are there any container projects or something for Tempest? ##newfeature, ##gap ?
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 71: * Changes in devstack many times break env ##painpoint, ##uservoice, ##gap
https://etherpad.openstack.org/p/TrackChair: 3: * Maybe get some feedback from the submitter to make sure the proposed change makes sense ##gap ##trackchair
https://etherpad.openstack.org/p/TrackChair: 8: * Tags might help ##trackchair ##gap
https://etherpad.openstack.org/p/TrackChair: 12: * Way to collect feedback from the audience about usefulness of previous tracks ##trackchair ##gap

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 66: * Expected changes on the UC charter for WGs ##uc ##gaps

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 45: * grafana support (both pointed at gnocchi for users to pull their own data, and pointed at prometheus and just restricted to cloud admins?) ##keystone ##grafana ##gnocchi

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-developer-openstack-org: 41: *** GopherCloud --> Joe Topjian, used by K8S, DockerSwarm, etc ##sdk ##go

@ -1,8 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 61: * e.g. High Throughput Computing (CERN, Tim Bell) +1 ##ewg ##goal
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 62: * Scientific WG could already have a lot of the data required/needed to complete in parallel with other goal(s) ##ewg ##goal
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 64: * Scientific WG could already have a lot of the data required/needed to complete in parallel with other goal(s) ##ewg ##goal
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 65: * e.g. Containers/Kubernetes (IBM Tong Li, Interop Challenge Heat template, Cockroach DB is sample app). Megan's input: Describe how to use individual OpenStack services with Kubernetes (this may be better as a separate series of articles or briefs). Such as Keystone, Cinder, Kuryr. ##ewg ##goal ##interop

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 45: * grafana support (both pointed at gnocchi for users to pull their own data, and pointed at prometheus and just restricted to cloud admins?) ##keystone ##grafana ##gnocchi

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 57: * Role/responsibilities for upcoming User Survey and timeline ##heidijoy ##usersurvey
https://etherpad.openstack.org/p/BOS-forum-user-committee-session: 58: * has been documented ##heidijoy

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-i18n-translation-feedback-from-users: 19: ##I18n ##translation ##user-invoice ##gap

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+: 52: * ##infra team to store data and graph it
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 64: * ##infra It provides a forum to perform CI with adjacent communities and frameworks for validation of use cases

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-enterprise-wg-working-session: 65: * e.g. Containers/Kubernetes (IBM Tong Li, Interop Challenge Heat template, Cockroach DB is sample app). Megan's input: Describe how to use individual OpenStack services with Kubernetes (this may be better as a separate series of articles or briefs). Such as Keystone, Cinder, Kuryr. ##ewg ##goal ##interop

@ -1,16 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-101: 32: * You can use ##<tag> to highlight anything that needs to be shared with a team not present in the session (e.g. ##uservoice, ##ironic) and a group of people will share it in a mailing list summary after the event
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 43: * - Port selection on multi physical switches. ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 51: * no concept of locality ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 52: * I would like to see this happen for per rack (leaf) etc conductors ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 55: * https://specs.openstack.org/openstack/ironic-specs/priorities/pike-priorities.html#deploy-steps ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 71: * Sync power state against 50,000 nodes takes a while. I'm all the way back on Juno though, so there are probably some changes that've happened that I've missed. ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 73: * Running DHCP/HTTP/etc for instance provisioning out of the Ironic Conductor machines provides some large security holes. This means that datacenter nodes have logical access to one of the most crucial pieces of the OpenStack control plane. We've moved DHCP, TFTP, HTTP for provisioning to another set of nodes. However the conductor expects to write this to local disk. A driver based interface with an API to write these files remotely is necessary. ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 74: * Need a reference architecture and best practices on how to deploy Ironic in 100/200/1000/5000 nodes ##ironic ##painpoint

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 35: * TODO: sit down and map out similarities and differences between namespaces and project/domains ##keystone ##k8s

@ -1,8 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 35: * TODO: sit down and map out similarities and differences between namespaces and project/domains ##keystone ##k8s
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 44: * Write a document describing how to integrate keystone with a set of applications folks would like to see working this way ##keystone ##documentation
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 45: * grafana support (both pointed at gnochi for users to pull their own data, and pointed at prometheus and just restricted to cloud admins?) ##keystone ##grafana ##gnocchi
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 46: * keystoneclients in other languages +1 ##keystone ##multilanguage

@ -1,32 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-LCOOGetToKnow: 49: * Define governance model in LCOO and it is aligned with UC ##actionitem ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 61: * Containerized control plane ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 66: * Yes - we definitely will include Kolla and OpenStack-Helm and others in the Gap Analysis. We have started 2 different Development Proposals (above) and from an LCOO perspective there should not be an assumption that we have chosen the solution. What we hoped to do next with the Containerized Control Plane user stories was to have an introduction from the Kolla knowledgeable SMEs and also from the OpenStack-Helm SMEs to help us get started with Gap Analysis. ##lcoo ##
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 76: * Extreme (Destructive) Testing: ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 116: * Older releases in production - Skip level upgrading +1+1 ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 118: * Telemetry / SNAP ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 120: * Logging ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 127: * set up and publish slack channel ##actionitem ##lcoo --> https://lcoo.slack.com >> publish this on wiki
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 128: * derive short term win ##actionitem ##lcoo
https://etherpad.openstack.org/p/BOS-forum-log-messages: 128: If the error message is not useful - then we need a bug on the error message ##action-item ##LCOO
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 51: * target (welcome) newer large operators ##LCOO ##pwg ##onboarding ##actionitem
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 95: * Large Contributing OpenStack Operators (LCOO) ##LCOO
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 123: Jamey McCabe <jamemcc@gmail.com> ##LCOO
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 124: Sundar krishnamoorthy <sundar.krishnamoorthy@intel.com>, ##LCOO
https://etherpad.openstack.org/p/BOS-forum-wg-chairs-collaboration-and-WG-overviews: 126: Andy Ukasick <andrew.ukasick@att.com>, ##LCOO, Product WG
https://etherpad.openstack.org/p/cloud-native-forum: 15: ##Deployment-team ##LCOO

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-consumable-keystone: 46: * keystoneclients in other languages +1 ##keystone ##multilanguage

@ -1,16 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 24: * Self-service signup - +1+1+1 ##painpoint ##newfeature
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 29: * Domain level quota management: domain admin can create users ##newfeature
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 42: * Volume multi-attach ##newfeature
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 58: * Are there any container projects or something for Tempest? ##newfeature, ##gap ?
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 95: * Check status of services ##newfeature, ##uservoice
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 31: * Many very large operators, running highly available, complex OpenStack environments, are running on old versions of OpenStack (Icehouse, Juno, Kilo...). The reason is that upgrading is highly painful and stabilizing platforms has often been the greater priority. How can they catch up and realize the benefits found in current releases? ##uservoice ##newfeature
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 69: * ##newfeature Skip-level upgrading. How many versions? Is there a path to where the number of versions does not matter?
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 70: * ##newfeature Live upgrades would help ease the pain in doing upgrades. Need to clarify the definition of "live". what do you accept and what you do not. (e.g. API down time, service down time, migration of workload, etc)

@ -1,8 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 115: * There are other orchestrators out there we should also look at. ##painpoint ##nfv
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 192: 1. Monthly IRC meeting of the WG chairs introducing the happenings of the last period ##actionitem ##nfv
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 201: 1. Wiki or similar to keep track of the related WGs and their work scope ##actionitem ##nfv
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 202: 1. Regular face2face meeting on the Summits and PTG-s ##actionitem ##nfv

@ -1,6 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-operating-vm-and-baremetal: 338: ##nova johnthetubaguy to reach out about the "winners" of the vote, and extending that to folks who were not here
https://etherpad.openstack.org/p/BOS-forum-operating-vm-and-baremetal: 339: ##nova sub groups to email the operator list to update on what they are doing currently and the best way to get involved
https://etherpad.openstack.org/p/BOS-forum-operating-vm-and-baremetal: 340: ##nova get project reps for each thing

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 51: * target (welcome) newer large operators ##LCOO ##pwg ##onboarding ##actionitem

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 58: * OPNFV (open platform for NFV) ##OPNFV

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-ops-tags-wg-session: 11: * ##opstags - Feedback from session on best practices and areas for improvement

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 42: * connecting with smaller operators ##pwg ##outreach

@ -1,32 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 24: * Self-service signup - +1+1+1 ##painpoint ##newfeature
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 35: * Project deletions? ##crossproject ##painpoint
https://etherpad.openstack.org/p/BOS-forum-Shared-Commercial-Goals-Public-Clouds: 24: * ##painpoint - maintaining patches that aren't reviewed
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 43: * - Port selection on multi physical switches. ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 51: * no concept of locality ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 52: * I would like to see this happen for per rack (leaf) etc conductors ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 55: * https://specs.openstack.org/openstack/ironic-specs/priorities/pike-priorities.html#deploy-steps ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 71: * Sync power state against 50,000 nodes takes a while. I'm all the way back on Juno though, so there are probably some changes that've happened that I've missed. ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 73: * Running DHCP/HTTP/etc for instance provisioning out of the Ironic Conductor machines provides some large security holes. This means that datacenter nodes have logical access to one of the most crucial pieces of the OpenStack control plane. We've moved DHCP, TFTP, HTTP for provisioning to another set of nodes. However the conductor expects to write this to local disk. A driver based interface with an API to write these files remotely is necessary. ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-ironic-feedback: 74: * Need a reference architecture and best practices on how to deploy Ironic in 100/200/1000/5000 nodes ##ironic ##painpoint
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 41: * PWG Composition and Participation - how to make PWG more impactful ##painpoint
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 48: * Is there any framework or way to automate the configuration options for Tempest? ##painpoint ##uservoice
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 50: * How to handle resource changes which is like removals of the existing flavors/networks, etc? ##painpoint
https://etherpad.openstack.org/p/BOS-forum-qa-tools-plugins: 71: * Changes in devstack many times break env ##painpoint, ##uservoice, ##gap
https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading: 77: * ##painpoint ##uservoice Never change a running system: if a release does not provide additional wanted functionalities, why take the risk of upgrading to this release?
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 115: * There are other orchestrators out there we should also look at. ##painpoint ##nfv

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 52: * PWG connecting with devs and users, gathering ##painpoints and helping to define priorities

@ -1,6 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 9: ##publiccloud
https://etherpad.openstack.org/p/BOS-forum-Features-Missing-For-Public-Clouds: 23: * ##PublicCloud has created google spreadsheet listing issues for public clouds, requesting feedback,
https://etherpad.openstack.org/p/BOS-forum-Shared-Commercial-Goals-Public-Clouds: 16: ##publiccloud

@ -1,30 +0,0 @@
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+: 37: * Can ##pwg or ##ewg help with this?
https://etherpad.openstack.org/p/BOS-forum-advanced-instance-scheduling: 96: * describe the pre-emptive use cases also as a ##pwg development proposal?
https://etherpad.openstack.org/p/BOS-forum-contributing-multi-project-requirements: 76: * Collaborate with other WG and bring requirements together, identify commonalities, with further gap analysis ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 42: * connecting with smaller operators ##pwg ##outreach
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 48: * new to openstack - how do you get involved ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 51: * target (welcome) newer large operators ##LCOO ##pwg ##onboarding ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 54: * encourage more sigs ##rbac ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 55: * Making the community aware that "these are the things" important for users ##pwg
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 58: * Post-forum Analysis +1+1 ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 60: * template for response from moderators to send as summary ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 61: * Mine hashtag data ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 65: * Notify moderators to give feedback on Forum sessions ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 66: * How do we identify organization to help with this wg ##pwg ##actionitem
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 81: * Product Working Group (PWG) ##pwg
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 131: * write a Development Proposals (previously known as “OpenStack User Story”) in PWG ##pwg

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+: 50: * ##QA - performance on single node to extra performance under load for various API calls

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-product-wg-working-session: 54: * encourage more sigs ##rbac ##pwg ##actionitem

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 64: * Set base expectations on the group openness & output ##actionitem ##requirements

@ -1,12 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 61: * Containerized control plane ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 76: * Extreme (Destructive) Testing: ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 116: * Older releases in production - Skip level upgrading +1+1 ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 118: * Telemetry / SNAP ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-LCOORoadmap: 120: * Logging ##roadmap ##lcoo
https://etherpad.openstack.org/p/BOS-forum-evolving-the-community-generated-roadmap: 16: * ##roadmap - Feedback from sessions on how to improve the community generated roadmap process

@ -1,4 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-developer-openstack-org: 41: *** GopherCloud --> Joe Topjian, used by K8S, DockerSwarm, etc ##sdk ##go
https://etherpad.openstack.org/p/BOS-forum-developer-openstack-org: 62: *** Spinnaker <-- Emily (see below) ##sdk ##spinnaker

@ -1,12 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 63: * Need a clear scope and goal ##uc ##wg ##sig
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 70: * One or two levels ? ##sig
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 71: * sigs as a higher concept? tom ##sig
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 72: * working groups with a defined goal/dissolution vs. sigs/teams with an ongoing charter (example, from uc charter 'teams' vs 'working groups': https://governance.openstack.org/uc/reference/charter.html#structure-functional-teams ) ##wg ##sig
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 73: * make it clear that having a group complete its task and dissolve to move on to other tasks is acceptable/positive ##wg ##sig
https://etherpad.openstack.org/p/BOS-forum-unanswered-requirements: 74: * We need to make sure those features get maintained by the dev after finishing the WG. Would a higher level group be the one to look after those long term? ##wg ##sig

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-developer-openstack-org: 62: *** Spinnaker <-- Emily (see below) ##sdk ##spinnaker

@ -1,16 +0,0 @@
https://etherpad.openstack.org/p/TrackChair: 16: * Consider 3-5 minute video from submissions ##trackchair ##suggestion (make it optional) --> already in existing submission tool, maybe rephrase the sentence and clarify the purpose
https://etherpad.openstack.org/p/TrackChair: 19: * Consider leads for each Track Chair group ##trackchair ##suggestion
https://etherpad.openstack.org/p/TrackChair: 22: * Define a process/guidelines/best practices for a successful track chair ##trackchair ##suggestion
https://etherpad.openstack.org/p/TrackChair: 32: * One list versus several ##trackchair ##suggestion
https://etherpad.openstack.org/p/TrackChair: 38: * Could we add another queue (would no limit on personal selections help?) ##trackchair ##suggestion
https://etherpad.openstack.org/p/TrackChair: 41: * Show total # of track-chair that have selected a presentation (like 5 out of 5 have selected this talk) and rank them ##trackchair ##suggestion
https://etherpad.openstack.org/p/TrackChair: 45: * Do we have a group to assist people with their presentations? ##trackchair ##suggestion
https://etherpad.openstack.org/p/TrackChair: 46: * Suggestion to require a track chair at the tracks they select ##trackchair ##suggestion

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-telecom-nfv-collaboration: 112: * Tacker ##tacker

@ -1,2 +0,0 @@
https://etherpad.openstack.org/p/BOS-forum-101: 32: * You can use ##<tag> to highlight anything that needs to be shared with a team not present in the session (e.g. ##uservoice, ##ironic) and a group of people will share it in a mailing list summary after the event

Some files were not shown because too many files have changed in this diff.