Retire the searchlight-specs project

As announced on the openstack-discuss ML [1], the Searchlight project
is retiring in the Wallaby cycle.

This commit retires the searchlight-specs repository as per the process
defined in the project-team guide [2]. If anyone would like to maintain it
again, please revert this commit and propose re-adding it to governance.

The community wishes to express its thanks and appreciation to all of
those who have contributed to the searchlight-specs project over the years.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/764519
Needed-By: https://review.opendev.org/c/openstack/governance/+/764530

[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018637.html
[2] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository

Change-Id: Ibbe1d916be8047f2fe48be7b6b79ff8f9196662b
Ghanshyam Mann 2020-11-27 21:50:55 -06:00
parent c29704a061
commit 93dde42fec
40 changed files with 8 additions and 3031 deletions

.gitignore

@@ -1,15 +0,0 @@
AUTHORS
ChangeLog
build
.tox
.venv
*.egg*
*.swp
*.swo
*.pyc
.testrepository
.idea/*
*/.DS_Store
.DS_Store
.DS_Store?
*~

@@ -1,4 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_TEST_TIMEOUT=60 ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,6 +0,0 @@
- project:
templates:
- openstack-specs-jobs
- openstack-python3-ussuri-jobs
- publish-openstack-docs-pti
- check-requirements

@@ -1,3 +0,0 @@
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://creativecommons.org/licenses/by/3.0/legalcode

@@ -1,56 +1,10 @@
========================
Team and repository tags
========================

This project is no longer maintained.

.. image:: https://governance.openstack.org/tc/badges/searchlight-specs.svg
    :target: https://governance.openstack.org/tc/reference/tags/index.html

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

.. Change things from this point on

====================================
OpenStack Searchlight Specifications
====================================

Please read the Searchlight process documentation on feature requests and bug reports:
https://docs.openstack.org/searchlight/latest/contributor/feature-requests-bugs.html

This git repository is used to hold approved design specifications for additions
to the Searchlight project. Reviews of the specs are done in gerrit, using a
similar workflow to how we review and merge changes to the code itself.

The layout of this repository is::

  specs/<release>/

You can find an example spec in `doc/source/specs/template.rst`. A
skeleton that contains all the sections required for a spec
file is located in `doc/source/specs/skeleton.rst` and can
be copied, then filled in with the details of a new blueprint for
convenience.

Specifications are proposed for a given release by adding them to the
`specs/<release>` directory and posting it for review. The implementation
status of a blueprint for a given release can be found by looking at the
blueprint in launchpad. Not all approved blueprints will get fully implemented.

Specifications have to be re-proposed for every release. The review may be
quick, but even if something was previously approved, it should be re-reviewed
to make sure it still makes sense as written.

Spec reviews were completed entirely through Storyboard::

  https://storyboard.openstack.org/#!/project_group/93

For more information about working with gerrit, see::

  https://docs.openstack.org/infra/manual/developers.html#development-workflow

To validate that the specification is syntactically correct (i.e. get more
confidence in the Zuul result), please execute the following command::

  $ tox

After running ``tox``, the documentation will be available for viewing in HTML
format in the ``doc/build/`` directory. Please do not check in the generated
HTML files as a part of your commit.

For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

@@ -1,11 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
sphinx>=2.0.0,!=2.1.0 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0
sphinxcontrib-svg2pdfconverter>=0.1.0 # BSD
sphinxcontrib-actdiag>=0.8.5 # BSD
sphinxcontrib-blockdiag>=1.5.5 # BSD
sphinxcontrib-nwdiag>=0.9.5 # BSD
sphinxcontrib-seqdiag>=0.8.5 # BSD

@@ -1,244 +0,0 @@
#
# Tempest documentation build configuration file, created by
# sphinx-quickstart on Tue May 21 17:43:32 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import datetime
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.todo',
'sphinx.ext.viewcode',
'sphinxcontrib.blockdiag',
'sphinxcontrib.actdiag',
'sphinxcontrib.seqdiag',
'sphinxcontrib.nwdiag',
'openstackdocstheme',
'sphinxcontrib.rsvgconverter',
]
todo_include_todos = True
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Searchlight Specs'
copyright = u'%s, OpenStack Searchlight Team' % datetime.date.today().year
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = [
'_build',
'**/example.rst',
'**/template.rst',
'**/skeleton.rst',
'**/README.rst',
]
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = False
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['searchlight-specs.']
# -- Options for man page output ----------------------------------------------
man_pages = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_domain_indices = False
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Searchlight-Specsdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# openany: Skip blank pages in generated PDFs
'extraclassoptions': 'openany,oneside',
'makeindex': '',
'printindex': '',
'preamble': r'\setcounter{tocdepth}{3}',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'doc-searchlight-specs.tex', u'Searchlight Specs',
u'OpenStack Searchlight Team', 'manual', True),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_domain_indices = False
# Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664
latex_use_xindy = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Searchlight-specs', u'Searchlight Design Specs',
u'OpenStack Searchlight Team', 'Searchlight-specs', 'Design specifications for the Searchlight project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/searchlight-specs'
openstackdocs_pdf_link = True
openstackdocs_auto_name = False
openstackdocs_bug_project = 'searchlight'
openstackdocs_bug_tag = 'doc'

@@ -1,34 +0,0 @@
.. searchlight-specs documentation master file

==================================
Searchlight Project Specifications
==================================

This page contains the approved Searchlight specifications.

The specification template can be found at:
https://github.com/openstack/searchlight-specs/blob/master/specs/template.rst

If you want to contribute to Searchlight, please find
the information at:
https://docs.openstack.org/searchlight/

Specifications
--------------

.. toctree::
   :glob:
   :maxdepth: 2

   specs/train/index
   specs/pike/index
   specs/ocata/index
   specs/newton/index
   specs/mitaka/index

Indices and tables
------------------

* :ref:`search`

@@ -1 +0,0 @@
../../specs

(Five deleted binary files are not shown; four of them are images of 38, 46, 47, and 46 KiB.)

@@ -1,12 +0,0 @@
[metadata]
name = searchlight-specs
summary = OpenStack Searchlight Project Development Specs
description-file =
README.rst
author = OpenStack
author-email = openstack-discuss@lists.openstack.org
home-page = https://specs.openstack.org/openstack/searchlight-specs/
classifier =
Intended Audience :: Developers
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux

@@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@@ -1,3 +0,0 @@
If X is the current release, this contains any spec that did not
complete in X-2 (or older), and was not moved forward. Ideally
this directory would be empty.

@@ -1,2 +0,0 @@
If X is the current release, this contains any spec that did not
complete in X-1. Ideally this directory would be empty.

@@ -1,263 +0,0 @@
..
(c) Copyright 2016 Hewlett-Packard Development Company, L.P.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
================================================
ElasticSearch Deletion Journaling
================================================
https://blueprints.launchpad.net/searchlight/+spec/es-deletion-journaling
This feature enables tracking of object deletion in ElasticSearch to allow
for better coherency and asynchronous operations.
Problem Description
===================
As a service providing a snapshot of the OpenStack eco-system, we expect the
following trait:
* The snapshot is up to date and coherent, independent of the order of the
updates received from the other various OpenStack services.
Anything less and we will have failed our users with a deluge of undependable
data.
Background
----------
When created, an ElasticSearch document is stored in an index. When deleted,
we barbarically engage in Damnatio Memoriae and the ElasticSearch document is
permanently removed from the index. In most cases this is not an issue and the
document is not missed. But computer science is fraught with corner cases. One
such corner case is introduced with the new Zero Downtime Re-indexing
functionality [1]. Now Searchlight is required to remember the existence of
deleted documents.
With the addition of the Zero Downtime Re-indexing, Searchlight will be
performing CRUD operations on documents simultaneously from both re-syncing
(which uses API queries) and notifications sent from the services. All within
a distributed environment. These notifications will result in an asynchronous
sequence of ElasticSearch operations. Searchlight needs to make sure that the
state of the eco-system is always correctly reflected in ElasticSearch,
independent of the non-determinate order of notifications that are being
thrown by the services. If a delete notification is received before
the corresponding create notification, this sequence of events still needs to
result in the correct ElasticSearch state.
In light of this harsh reality, we need to make the Searchlight document CRUD
commands order independent. This implies a way to track the state of deleted
documents. See below for more concrete examples of the issues we are resolving
with this blueprint.
As a side note, keeping track of deleted documents will allow Searchlight to
easily provide a "delta" functionality if so desired in the future.
The previous incarnation of this spec detailed setting a deletion flag and
using ElasticSearch's TTL functionality to delay deleting documents. It turns
out that ElasticSearch has this functionality built in when versioning is in
use (for an explanation, see the bottom of [3]).
Examples
--------
Some more concrete examples may help here. We will use Nova as the resource type.
As a reminder, here is how the plug-in commands map to ES operations.
* A "Create document" command in the Nova plug-in turns into an ES "index"
operation with a new payload from Nova.
* An "update document" command in the Nova plug-in turns into an ES "index"
operation with a new payload from Nova that already contains the
modifications. One thing to note here is that we do neither an ES "update"
operation nor a read/modify/write using multiple ES operations. From
ElasticSearch's frame of reference an update command is the same as a create
command.
* A "delete document" command turns into an ES "delete" operation.
Example #1 (Fusillade)
----------------------
Consider the following Nova notifications "Create Obj1", "Modify Obj1", "Modify Obj1",
"Modify Obj1" and "Delete Obj1". Due to the distributed and asynchronous nature of
the eco-system, the order in which the notifications are sent by the listener may not be the
same order the operations are received by ElasticSearch. In some cases, the last
ElasticSearch modify operation will arrive after the ElasticSearch delete operation.
ElasticSearch will see the following operations (in this order): ::
PUT /searchlight/obj1 # Create Obj1
PUT /searchlight/obj1 # Modify Obj1
PUT /searchlight/obj1 # Modify Obj1
DELETE /searchlight/obj1 # Delete Obj1
PUT /searchlight/obj1 # Modify Obj1
After all of the operations are executed by ElasticSearch, the net result will
be an index of the Nova object "Obj1". When queried by an inquisitive user,
Searchlight will embarrassingly return the phantom document as if it corporeally
exists. Folie a deux! Not good for anyone involved.
Example #2 (Nostradamus)
------------------------
We will also need to handle the simplistic case of Nova creating a document
followed by Nova deleting the document. This case could be rather common
in the Zero Downtime Re-indexing work [1]. This sequence results in the
Nova notifications "Create Obj2" and "Delete Obj2". If the ElasticSearch
create operation arrives after the ElasticSearch delete operation,
ElasticSearch will see the following operations (in this order) ::
DELETE /searchlight/obj2 # Delete Obj2
PUT /searchlight/obj2 # Create Obj2
After both operations are executed by ElasticSearch, the net result will be
an index of the Nova object "Obj2". This naughty behavior is incorrect
and also needs to be avoided.
This example also illuminates a subtlety with out-of-order deletion notifications.
There may be times when ElasticSearch is being asked to delete a (currently)
non-existent document. This omen of a future event needs to be interpreted and
thus handled correctly.
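The failure mode in both examples can be reproduced with a naive model that applies operations purely in arrival order. This is an illustration only (the function and variable names are ours): it shows what happens when nothing remembers deletions.

```python
# Naive model: ElasticSearch operations applied strictly in arrival order,
# with no versioning. This reproduces the phantom-document failure above.

def apply_naive(operations):
    index = {}
    for op, doc_id, payload in operations:
        if op == "PUT":
            index[doc_id] = payload       # create and modify are both "index"
        elif op == "DELETE":
            index.pop(doc_id, None)       # deletion forgets the document entirely
    return index

# Example #2 arrival order: the delete overtakes the create.
arrived = [
    ("DELETE", "obj2", None),
    ("PUT", "obj2", {"status": "ACTIVE"}),
]
# The net result wrongly resurrects Obj2 as a phantom document.
assert apply_naive(arrived) == {"obj2": {"status": "ACTIVE"}}
```

Because the delete leaves no trace, the late-arriving create succeeds and the phantom document becomes searchable.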
Proposed Change
===============
There is an index setting named 'index.gc_deletes' that defaults to 60 seconds.
When a document is deleted *with a specified version* it is not immediately
deleted from the index. Instead, it's marked as ready for garbage collection
(in the document sense, not memory sense). If an update is posted with a later
version, the document is resurrected. If a document is posted with an earlier
version, it raises a ConflictError, as would be the case with an 'alive'
document. The deleted document is *not* visible in searches.
Since this is essentially identical to the implementation described below, and
appears to be fully supported in ElasticSearch 2.x, there is no point
implementing it ourselves. We may decide to recommend a higher `gc_deletes`
value, but the only change necessary is to pass a version as part of
delete operations (see [4]).
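The versioned-delete semantics can be modelled in a few lines of Python. This is a sketch of the behaviour only, not ElasticSearch's implementation (the class and names are ours): operations are applied in arrival order, but the highest external version wins, and a delete leaves a tombstone that blocks stale updates, mirroring `version_type=external` with `index.gc_deletes`.

```python
# Minimal model of external versioning with delete tombstones. Real ES
# enforces this server-side and purges tombstones after index.gc_deletes.

class VersionedIndex:
    def __init__(self):
        self.docs = {}  # doc_id -> (version, payload); payload None == tombstone

    def index(self, doc_id, version, payload):
        current = self.docs.get(doc_id)
        if current is None or version > current[0]:
            self.docs[doc_id] = (version, payload)   # newer version wins
        # else: stale operation, rejected (ES raises a conflict error)

    def delete(self, doc_id, version):
        current = self.docs.get(doc_id)
        if current is None or version > current[0]:
            self.docs[doc_id] = (version, None)      # tombstone remembers deletion

    def search(self, doc_id):
        current = self.docs.get(doc_id)
        return current[1] if current and current[1] is not None else None

# Example #1 arrival order: the final modify (v3) arrives after the delete (v4).
es = VersionedIndex()
es.index("obj1", 1, {"status": "ACTIVE"})
es.index("obj1", 2, {"status": "RESIZING"})
es.delete("obj1", 4)
es.index("obj1", 3, {"status": "SHUTOFF"})  # stale: loses to tombstone v4
assert es.search("obj1") is None            # no phantom document
```

The same mechanism handles Example #2: a delete arriving before its create leaves a tombstone with the higher version, so the late create is rejected.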
Alternatives
------------
An alternative is expressed in the 'previous design' that follows.
Previous Design
---------------
Ecce proponente! With this blueprint, the basic idea is to keep the state of
a deleted document around until no longer needed. At a high level, we will
need to make three major modifications to Searchlight.
* We will need to modify the ElasticSearch index mappings.
* We will need to modify the delete functionality to take advantage of the
new mapping fields.
* We will need to modify the query functionality to be aware of the new
mapping fields.
ElasticSearch Index Mapping Modification
----------------------------------------
Two modifications are needed for the mapping defined for each index.
The first modification is to enable the TTL field. We need to define the
mapping for a particular index like this: ::
{
"mappings": {
"resource_index": {
"_ttl": { "enabled": true }
}
}
}
By not specifying a default TTL value, a document will not expire until the
TTL is explicitly set. Exactly what we need.
The second modification is to add a new metadata field to the mapping.
The metadata field would be named "deleted" and would always be defined.
When the document is created/modified the field would be set to "False".
When the document is deleted the field would be set to "True". There is
some concern that we need more than a boolean for this field. A version
or timestamp may be more appropriate. This is a detail for the design and
can be fleshed out at that time if needed.
Searchlight Delete Functionality Modification
---------------------------------------------
When a document is deleted, we will need to set both the TTL field and the
metadata field. This is considered a modification to the original document.
If the document does not already exist, we will need to create the document
and set the "deleted" and "TTL" fields. This will prevent an out-of-order
create/update operation from succeeding.
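The delete path of this previous design can be sketched as follows. All helper and field names are illustrative, not Searchlight's actual code; a plain dict stands in for the ElasticSearch index, and TTL expiry itself is left to ES's purge process.

```python
# Sketch of the previous (TTL + "deleted" flag) design; names are hypothetical.

DELETED_TTL = "60s"  # how long a tombstone endures before ES purges it

def delete_document(index_store, doc_id):
    doc = index_store.get(doc_id)
    if doc is None:
        # Out-of-order delete: create a tombstone so a late create is blocked.
        doc = {}
    doc["deleted"] = True
    doc["_ttl"] = DELETED_TTL
    index_store[doc_id] = doc

def create_or_update_document(index_store, doc_id, payload):
    existing = index_store.get(doc_id)
    if existing is not None and existing.get("deleted"):
        return  # document already deleted; ignore the stale create/update
    payload = dict(payload)
    payload["deleted"] = False
    index_store[doc_id] = payload

def query(index_store):
    # Queries exclude tombstones and strip the metadata fields.
    return {doc_id: {k: v for k, v in doc.items() if k not in ("deleted", "_ttl")}
            for doc_id, doc in index_store.items() if not doc.get("deleted")}

store = {}
delete_document(store, "obj2")                                  # delete first
create_or_update_document(store, "obj2", {"status": "ACTIVE"})  # late create
assert query(store) == {}                                       # tombstone wins
```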
Searchlight Query Functionality Modification
--------------------------------------------
When a document is queried, we will need to modify the query to exclude
any documents whose metadata indicates the document has been deleted. We will
also need to filter out the metadata field.
Searchlight Create/Modify Functionality Modification
----------------------------------------------------
When a document is created, the mapping will need to add the new "deleted"
field and enable TTL functionality. The "deleted" field will need to be set
appropriately. If the "deleted" field is set to true we will not modify
the document. These modifications depend on the version functionality being
in place [2].
Configuration Changes
----------------------
We need to define the TTL value to determine how long a deleted document
endures. This default value can be overridden by a configuration value.
Setting a TTL value is not enough to delete a document. In tandem we need
ElasticSearch to run its purge process. This purge process will poll all
documents and delete those with expired TTL values. The default is to run the
purge process every 60 seconds. This default value can be overridden by a
configuration value.
Deleted Field Options
---------------------
For historical completeness, here are the different options that were considered
for the "deleted" metadata field.
(1) The metadata field would be named "deleted" and would be defined only when a
document has been deleted. When a document is created/modified this field is
not defined. To detect if a document is deleted we will search for the
existence of this field. This simplifies the create/modify code, but
complicates the query code.
(2) The metadata field would be named "deleted" and would always be defined.
When the document is created/modified the field would be set to "False".
When the document is deleted the field would be set to "True". This adds a
little bit of work to the create/modify but simplifies the query command.
(3) The metadata field would be named "state" and would always be defined. The
value of "state" would be the current state of the document: "Created",
"Modified" or "Deleted". More work is needed in this option to distinguish
between "Modified" and "Create", since they are treated the same way in
the plug-ins. This will allow for "delta" functionality to be added to
Searchlight in the future. This work is the same as option (2).
References
==========
[1] The Zero Downtime Re-indexing work is described here:
https://blueprints.launchpad.net/searchlight/+spec/zero-downtime-reindexing
[2] External versions added to ElasticSearch documents is described here:
https://review.opendev.org/#/c/255751/
[3] ElasticSearch *document* garbage collection is discussed here:
https://www.elastic.co/blog/elasticsearch-versioning-support
[4] Bug report for handling out-of-order notifications:
https://bugs.launchpad.net/searchlight/+bug/1522271

@@ -1,9 +0,0 @@
=================================
Searchlight Mitaka Specifications
=================================

.. toctree::
   :glob:
   :maxdepth: 1

   *

@@ -1,67 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Per resource policy control
===========================
https://blueprints.launchpad.net/searchlight/+spec/per-resource-type-policy-control
Problem Description
===================
Current policy control allows us to restrict who can query, list plugins or
retrieve facets using the oslo policy engine [1]. Openstack is moving towards
supporting more fine-grained controls, and for Searchlight a step in that
direction is allowing control over individual resource types. For instance,
it might be the case in a given cloud that non-administrative users should
not be permitted to search a particular resource type. Longer term this allows
us to move towards a model where RBAC is defined by policy control; rather than
the hard-coded project-based RBAC we use for each plugin, we might replace or
augment it with the typical `is_admin_or_owner` policy rule employed by
projects.
Proposed Change
===============
The proposed change will allow `policy.json` to include rules for individual
plugins. A rule `resource:<resource type>:allow` will control overall access to
a plugin. In addition, `allow` can be replaced with other actions to allow more
precise control. For instance, rules might be::
"default": "",
"resource:OS::Glance::Image:allow": "@",
"resource:OS::Glance::Image:facets": "is_admin:True",
"resource:OS::Nova::Server:query": "!"
A future extension may extend this to support RBAC rule specification through
policy. For instance, the following rules might translate into the existing
RBAC query we have today::
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"resource:OS::Nova::Server:allow": "admin_or_owner",
If a resource is *not* allowed via policy, it will be removed from the list
of types to be searched; if this results in no allowed types, the search will
return an empty result set.
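The type-filtering behaviour described above might look roughly like this in Python. This is a simplified stand-in: rule evaluation is reduced to a small lookup covering only the rule forms shown above, whereas a real implementation would delegate to the oslo.policy engine.

```python
# Simplified sketch of per-resource-type policy filtering; a real deployment
# would evaluate these rules with oslo.policy rather than this lookup table.

def rule_allows(rules, resource_type, action, is_admin):
    rule = rules.get("resource:%s:%s" % (resource_type, action),
                     rules.get("default", ""))
    if rule in ("@", ""):
        return True          # "@" / empty rule: always allowed
    if rule == "!":
        return False         # "!": never allowed
    if rule == "is_admin:True":
        return is_admin
    return False             # rule forms beyond this sketch: deny

def allowed_search_types(rules, requested_types, is_admin):
    # Types not allowed by policy are removed from the search; if nothing
    # remains, the search returns an empty result set.
    return [t for t in requested_types
            if rule_allows(rules, t, "allow", is_admin)
            and rule_allows(rules, t, "query", is_admin)]

rules = {
    "default": "",
    "resource:OS::Glance::Image:allow": "@",
    "resource:OS::Nova::Server:query": "!",
}
types = ["OS::Glance::Image", "OS::Nova::Server"]
assert allowed_search_types(rules, types, is_admin=False) == ["OS::Glance::Image"]
```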
Alternatives
------------
Disabling plugins entirely in setup.cfg is one possibility that can be done
with the current codebase.
Disabling indexing for non-administrators would require a few changes.
The ideal long-term solution (which is one that this proposal drives towards)
is to consume the service ``policy.json`` files as does horizon. Ultimately
the hard-coded RBAC rules might be expressed as policy rules in many cases,
allowing greater configuration flexibility (for instance, restricting access
to a resource to the user that created it and not the project/tenant). This
will make it easier to keep searchlight deployments in sync with the rules
deployed with each service.
References
==========
[1] Oslo policy documentation:
http://docs.openstack.org/developer/oslo.policy/api.html

@@ -1,225 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================
Searching admin-only fields
===========================
https://blueprints.launchpad.net/searchlight/+spec/index-level-role-separation
Our aim is to allow all fields to be searchable and available in facets but
only for users where that is appropriate; as such, we introduced the idea of
filtering search results based on whether or not a user has the admin role.
The flaw that we discovered towards the end of Liberty is described in
https://bugs.launchpad.net/searchlight/+bug/1504399, but very briefly, merely
removing fields from the result is not sufficient. It is possible to 'fish'
for values for known fields by running searches against them and examining
whether results come back; an attacker might use range or wildcard queries
to reduce the time it takes to locate values that return results.
Problem Description
===================
We wish to allow plugins to define fields (whether in code or in configuration)
that cannot be seen by non-administrative users, be that in search results,
visible in facets or by searching for values for a field. Administrators should
be subject to none of these restrictions.
Prior to the fix in bug #1504399 Searchlight fulfilled the first two of these
criteria. Unfortunately the fix (which was under a very tight time restriction)
prevented even administrators from searching fields. The problem, therefore,
is to ensure these conditions.
Proposed Change
===============
Role-based filtering
--------------------
This solution involves indexing twice and adding a field to all resources that
can be used to filter a search based on a user's role. For instance, taking a
heavily cut-down Nova server definition::
{
"_id": "aaaaabbbb-1111-4444-2222-eeee",
"_type": "OS::Nova::Server",
"_source": {
"status": "ACTIVE",
"OS-EXT-ATTR-SOMETHING": "admin only data"
}
}
This is turned into two documents, identical except that:
* the admin-only document has an additional field `'user-role': 'admin'`
* the user document has an additional field `'user-role': 'user'`
* the user document does not contain the `OS-EXT-ATTR-SOMETHING` field
* the ids for each document have a role added (`111111-4444-2222-eeee:ADMIN`)
Indexing
~~~~~~~~
Indexing operations are unchanged except that two operations (or one bulk
operation) are needed. Admin-only fields would be stripped from the serialized
source document for the non-admin copy::
{
"_id": "aaaaabbbb-1111-4444-2222-eeee_ADMIN",
"_type": "OS::Nova::Server",
"_source": {
"status": "ACTIVE",
"OS-EXT-ATTR-SOMETHING": "admin only data",
"_searchlight-user-role": "admin"
}
},
{
"_id": "aaaaabbbb-1111-4444-2222-eeee_USER",
"_type": "OS::Nova::Server",
"_source": {
"status": "ACTIVE",
"_searchlight-user-role": "user"
}
}
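The split described above can be sketched in Python. This is a hypothetical illustration, not the actual Searchlight implementation; the `ADMIN_ONLY_FIELDS` set stands in for whatever list a plugin would declare in code or configuration:

```python
# Hypothetical sketch of the document split described above; the
# ADMIN_ONLY_FIELDS set is an assumed stand-in for a plugin's declaration.
ADMIN_ONLY_FIELDS = {"OS-EXT-ATTR-SOMETHING"}

def split_document(doc_id, source, admin_only=ADMIN_ONLY_FIELDS):
    """Return (admin_doc, user_doc) ready for a bulk index request."""
    admin_source = dict(source)
    admin_source["_searchlight-user-role"] = "admin"

    # The user copy drops admin-only fields entirely, so their values
    # cannot be 'fished' for with range or wildcard queries.
    user_source = {k: v for k, v in source.items() if k not in admin_only}
    user_source["_searchlight-user-role"] = "user"

    return (
        {"_id": doc_id + "_ADMIN", "_type": "OS::Nova::Server",
         "_source": admin_source},
        {"_id": doc_id + "_USER", "_type": "OS::Nova::Server",
         "_source": user_source},
    )

admin_doc, user_doc = split_document(
    "aaaaabbbb-1111-4444-2222-eeee",
    {"status": "ACTIVE", "OS-EXT-ATTR-SOMETHING": "admin only data"})
```

Both documents would then go into a single bulk request, keeping indexing a one-round-trip operation.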
This solution allows resources that don't need an admin/user separation to
index a single document with both roles::
{
"_id": "abcdefa-1222",
"_type": "OS::Designate::Zone",
"_source": {
"_searchlight-user-role": ["admin", "user"]
}
}
Searches
~~~~~~~~
The server can apply a non-analyzed (term) filter on `_searchlight-user-role`
based on the request context::
{
"query": {... },
"filter": {"term": {"_searchlight-user-role": "admin"}}
}
Filters are cached and very fast. An alternative, once we switch to using
aliases (see the `zero downtime spec <https://review.opendev.org/#/c/245222/>`_
proposal), is applying the filter on the alias::
{
"index": "searchlight-<timestamp>",
"alias": "searchlight-admin",
"filter": {"term": {"_searchlight-user-role": "admin"}}
}
The search API would query against `searchlight-admin` or `searchlight-user`
as appropriate. There is some precedent for this; it's a common way to make
data appear to be segmented based on a field ('index per user' - Reference_)
without the overhead of multiple lucene indices.
.. _Reference: https://www.elastic.co/guide/en/elasticsearch/guide/current/faking-it.html
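Applying the filter from the request context can be sketched as follows; the function name and `is_admin` flag are assumptions for illustration, mirroring the filter body shown above:

```python
# Hypothetical sketch: wrap a caller's query with the role term filter,
# mirroring the {"query": ..., "filter": ...} body shown above.
def role_filtered_body(query, is_admin):
    role = "admin" if is_admin else "user"
    return {
        "query": query,
        "filter": {"term": {"_searchlight-user-role": role}},
    }

body = role_filtered_body({"match": {"name": "web"}}, is_admin=False)
```

Because the caller's query is wrapped unchanged, the restriction holds no matter what query form the user submits.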
Second choice - Separate indexes
--------------------------------
.. note:: This began as my frontrunner, but the added maintenance headache
has pushed me towards filter-based separation.
Another solution is to maintain separate indices for admin and
non-admin users. While this seems offensive from a duplication point of view,
it's very common in non-relational-databases to store information based on
the kinds of queries you want to run. There will be an impact on indexing
speed and data storage, though I believe the volume and throughput of data
we store makes this impact insignificant. The major downside is the increased
maintenance overhead (at a minimum, two indices would be required for
those plugins that need the separation).
Technically, introducing a pair of indices isn't terribly complicated; all
write operations become two, and searches determine which index they're using
before running. As far as a user sees, there will be no impact (except that
admins will once again be able to run searches against admin-only fields).
Indexing
~~~~~~~~
Information in the -user index can be restricted with a `dynamic_templates`
mapping (which can tell Elasticsearch not to store or index matching fields
with `index:no` and `include_in_all:no`). Along with result filtering (or
`_source` filtering or removing these fields from the indexed document)
this achieves all three requirements.
Some plugins do not have admin-only fields, and those plugins could run
against the same index. I believe, though, that it would be necessary to
use a separate shared index in that case, because otherwise a query could
potentially run against (say) `OS::Nova::Server` in both indices. For example,
the structure below assumes `OS::Something::Else` doesn't need two indices,
and all data is in the user index::
searchlight-admin:
OS::Nova::Server
searchlight-user:
OS::Nova::Server
OS::Something::Else
An admin query against both types would have to run against both indices,
running the risk of duplicate results for `OS::Nova::Server` resources.
This might need more discussion, but safer would be to either mandate storing
information twice for all types, or::
searchlight-admin:
OS::Nova::Server
searchlight-user:
OS::Nova::Server
searchlight-all:
OS::Something::Else
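Index selection under this layout can be sketched as a small lookup; the set of separated types and the index names are assumptions for illustration:

```python
# Hypothetical sketch of index selection under the separate-index layout
# above: types needing an admin/user split get role-specific indices,
# everything else shares a common index.
SEPARATED_TYPES = {"OS::Nova::Server"}

def index_for(resource_type, is_admin):
    if resource_type in SEPARATED_TYPES:
        return "searchlight-admin" if is_admin else "searchlight-user"
    return "searchlight-all"
```

A multi-type query would collect the distinct index names for the requested types and search them together, avoiding the duplicate-result problem described above.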
Searches
~~~~~~~~
Little would change as far as a user is concerned. The search code would
have some extra conditionals in it to determine which index to use. This
would be complicated if an index contains both admin- and non-admin- data.
Alternatives
------------
There are two other alternatives I'm aware of.
1. `Elasticsearch Shield <https://www.elastic.co/products/shield>`_. Shield
adds a number of features to Elasticsearch, all aimed at security and
authentication. One of those features (supported only by Elasticsearch 2.0)
is `field level access control <https://www.elastic.co/guide/en/shield/current/setting-up-field-and-document-level-security.html>`_.
This requires an inclusive list of fields to be given in configuration on
a per-index basis, and also requires Shield's authentication to be enabled
(there are various plugins available). It disables the `_all` field for
users who are subject to field level restrictions.
Most importantly, Shield is a commercial, closed-source product that runs
on the server, and so is able to do things we are not (since it has
access to the parsed query).
2. Modify or reject incoming queries. We already strip certain fields from
search results for non-admin users, and in theory we could restrict
searches in the same way (or raise Not Authorized exceptions). While
naively this seems straightforward, in reality it becomes complex quite
quickly. Imagine the following queries against Nova for a protected field
called `hypervisor_id`::
{"query": {"term": {"hypervisor_id": "abcd1"}}}
{"query": {"query_string": {"query": "hypervisor_id:abcd1"}}}
{"query": {"multi_match": {"query": "abcd1", "fields": ["hypervisor_id"]}}}
{"query": {"query_string": {"query": "abcd1"}}}
Constructing filters to catch those queries isn't impossible, but becomes
increasingly complex; we would essentially need to parse the query, and
we'd need to do so for each plugin type.
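To illustrate why, a naive recursive walk (a hypothetical sketch, not proposed code) flags the first three query forms above because they name the protected field explicitly, but it has no way to catch the fourth, which matches the value through the `_all` field without ever mentioning `hypervisor_id`:

```python
# Hypothetical sketch: recursively scan a query body for any explicit
# reference to a protected field (as a dict key, inside a fields list,
# or embedded in a query_string value).
def mentions_field(node, field):
    if isinstance(node, dict):
        return any(key == field or mentions_field(value, field)
                   for key, value in node.items())
    if isinstance(node, list):
        return any(mentions_field(item, field) for item in node)
    if isinstance(node, str):
        return field in node
    return False

caught = mentions_field(
    {"query": {"query_string": {"query": "hypervisor_id:abcd1"}}},
    "hypervisor_id")
missed = mentions_field(
    {"query": {"query_string": {"query": "abcd1"}}},  # free-text form
    "hypervisor_id")
```

The free-text form slips through (`missed` is False), which is exactly why query inspection alone cannot enforce field protection.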
References
==========
* https://bugs.launchpad.net/searchlight/+bug/1504399
* https://review.opendev.org/#/c/233225/ (patch for above)
* `Shield <https://www.elastic.co/guide/en/shield/current/index.html>`_
* https://www.elastic.co/guide/en/elasticsearch/guide/current/faking-it.html

..
(c) Copyright 2015-2016 Hewlett-Packard Development Company, L.P.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
================================================
Zero Downtime Re-indexing
================================================
https://blueprints.launchpad.net/searchlight/+spec/zero-downtime-reindexing
This feature enables seamless zero downtime re-indexing of resource data from
an API user's point of view.
Problem Description
===================
As a user of the searchlight API, we expect the following traits:
* The index is up to date and coherent with the source data
* The index is available
* That we are not affected by updates and upgrades to the searchlight service
As a deployer, we expect the following:
* That we can roll out service upgrades and update the index with new data
* That we can bring the index back into coherency without downtime
* That we can tune the service deployment according to performance needs
* That we can have easy deployment of new / patched plugins
* That we can change data mappings and re-index the data
Background
----------
ElasticSearch documents are stored and indexed into an "index" (imagine that).
The index is a logical namespace which points to primary and replica shards
where the document is replicated. A shard is a single Apache Lucene instance.
The shards are distributed amongst nodes in a cluster. API users only interact
with the index and are not exposed to the internals, which ElasticSearch
manages based on configuration inputs from the administrator.
Certain actions can only be done at index creation time, such as changing
shard counts, changing the way data is indexed, etc. In addition to changing
the data, re-populating an index that has lost coherency with source service
data is much easier to do from scratch rather than determining what differences
there are in the data. Due to this the data and indexes should be designed so
that it is possible to re-index at any time without disruption to API users.
The re-indexing happens while the services are in use, still indexing new
documents in ElasticSearch.
In Searchlight 0.1.0, we allowed for each plugin to specify the index where
the data should be stored via configuration in the searchlight-api.conf file.
By default, all plugins store their data in the "searchlight" index. This was
simply chosen as a starting point, because the amount of total data indexed
for resource instance data is believed to be quite small in comparison to
typical log based indexing for small deployments, but this may differ
dramatically based on the resource type being indexed and the size of the
deployment.
To reiterate, all resource types in Searchlight 0.1.0 (either the plug-in or
the searches) have the ElasticSearch index hard-coded into them. This
hard-coded functionality prevents Searchlight from doing smart things
internally with ElasticSearch. Exposing indexes directly to the users is
generally not recommended by the user community or by ElasticSearch. Instead,
they recommend using aliases. API users can use an alias in exactly the same
way as an index, but it can be changed to point to different index(es)
transparently to the user. This allows for seamless migration between
indexes, allowing for all of the above use cases to be fulfilled.
The concept of aliases is described in depth in the ElasticSearch guides [1].
Proposed Change
===============
With this blueprint, we will divorce the plug-ins and searches from knowing
about physical ElasticSearch indexes. Instead we will introduce the concept
of a "Resource Type Group". A Resource Type Group is a collection of Resource
Types that are treated as a single unit within ElasticSearch. All users of
Searchlight will deal with Resource Type Groups, instead of low-level
ElasticSearch details like an index or alias. A Resource Type Group will
correspond to ElasticSearch aliases that are created and controlled by
Searchlight.
The plug-in configuration in the searchlight-api.conf file will no longer
specify the index name. Instead the plug-in will specify the Resource Type
Group it chooses to be a member of. It is important for a plug-in to know
which Resource Type Group it belongs to. When some operations are undertaken by one
member of a Resource Type Group, it will need to be done to all members in
the group. There will be more details on this later.
Now that the users are removed from the internals of ElasticSearch, we
can handle zero downtime re-indexing. The basic idea is to create new
indexes on demand, populate them, but use ElasticSearch aliases inside of
Searchlight in a way that makes the actual indexes being used transparent
to both API users and Searchlight listener processes.
We will not directly expose the alias to API users. We will use resource
type mapping to transparently direct API requests to the correct alias.
When implementing this blueprint, we may choose to still expose an "index"
through the plug-in API. Exposing an "index" may allow other open-source
ElasticSearch libraries (which are index-based) to still work. Currently
we are not using any of these libraries, but we may not want to exclude
their usage in the future.
Searchlight will internally manage two aliases per Resource Type Group.
Note: Having these two aliases is the key change enabling zero downtime
indexing.
* API alias
* Listener/Sync alias
The names of the aliases will be derived from the Resource Type
Group name in the configuration file. Exactly how this is handled will
be left to the implementation. For example, we can append "-listener" and
"-search" to the Resource Type Group name for the two aliases.
The API alias will point to a single "live" index and only be switched once
the index is completely ready to serve data to the API users. Completely
ready means that the new index is a superset of the old index. This allows
for transparently switching the incoming requests to the new index without
disruption to the API end user.
The listener alias will point to 1...* indexes at a time. The listener
simply knows that it must perform CRUD operations on the provided alias. The
fact that it might be updating more than one index at a time is
transparent to the listener. The benefit to this is that the listeners do
not have to provide any additional management API as ElasticSearch handles
this for us automatically.
The algorithm for searchlight-manage index sync will be changed to the following:
* Create a new index in ElasticSearch. Any mapping changes to the index are done
now, before the index is used.
* Add the new index to the listener(s) alias. At this point, the listeners alias is
pointing to multiple indexes. The new index is now "live" and receiving data. Any
data received by the listener(s) will be sent to both indexes.
* There is an issue with indexing an alias with multiple indexes [2]. The
issue is that this case is not allowed! In this case we will catch the
exception and write to both indexes individually in this step. For more
details, refer to the "Implementation Notes" subsection below.
* Bulk dump of data from each Resource Type associated with the old index to the
new index in ElasticSearch.
* The same issue with multiple indexes mentioned above applies here also.
* Atomically switch the aliases for the API alias to point to the new index.
* We will use the actions command with remove/add commands in the same actions API call.
ElasticSearch treats this as an atomic operation. [2]::
{ "actions" : [ { "remove" : { ...} }, { "add" : " {...} } ] }
* Remove the old index from the listener(s) alias.
* Delete the old index from ElasticSearch. We do not want the index to hang around
forever. We can figure out when the index is no longer being used and then delete
it (asynchronous task, a type of internal reference count, etc). If this turns out
to be too unwieldy we can revisit this action.
Notes:
* This algorithm assumes that we can handle out of order events. See below for more details.
* During the re-syncing process, the listener(s) will be adding any new documents to both indexes.
* The listeners will always keep the ElasticSearch index associated with the API alias up to date.
* The listeners will keep the old index up to date after the API alias has switched over to minimize any race conditions.
A critical aspect to all of this is that the batch indexer and all
notification handlers MUST only update documents if they have the most
recent data. This is being handled by a separate bug [3]. In addition,
Searchlight listeners and index must start setting the TTL field in deleted
documents instead of deleting them right away. This functionality is covered
in the ES deletion journal blueprint [4].
We are operating on a Resource Type Group as a whole. We need to make sure
that the entire Resource Type Group is re-indexed instead of just a single
Resource Type within the group. For example, consider the case where a
Resource Type Group consists of Glance and Nova. When Searchlight gets a
command to re-index Glance, Searchlight needs to also re-index Nova. Otherwise
the new index will not have the previous Nova objects in it. If Nova did not
re-index, the new index will not be a superset of the old index. When the
alias switches to this new index it will be incomplete.
The CLI must support manual searchlight-manage commands as well as automated
switchover. For example:
* Delete the specified or current index / alias for a specific resource type group.
* Create a new index for the specified resource type group.
* Switch API and listener aliases automatically when complete (default - yes).
* Delete old index automatically when complete (default - yes).
* Provide a status command so that progress can be seen.
* List all aliases and indexes by resource type with their status
* Can be used from a GUI or a separate CLI concurrently to monitor progress.
This change affects:
* The plugins API which lists plugins
* The API
* The Listener
* The bulk indexer
* The CLI
Illustrated Example
-------------------
To further illuminate the blueprint we will turn to a series of images and save
ourselves thousands of words. The images show the state of Searchlight during
a sequence of operations.
For this example we have three resource types: Glance, Nova and Swift. There are
two Resource Type Groups. The first group, RTG1, contains Glance and Nova. The
two aliases associated with RTG1 are "RTG1-sync" for the plug-in listeners and
"RTG1-query" for the plug-in searches. The second group, RTG2, contains Swift.
The two aliases associated with RTG2 are "RTG2-sync" for the plug-in listener
and "RTG2-query" for the plug-in search.
Figure 1: The initial State
.. image:: ../../images/ZeroFig1.png
First Searchlight will create the ElasticSearch index "Index1" for use by RTG1.
The ElasticSearch aliases "RTG1-sync" and "RTG1-query" are created and will both be
associated with the index "index1". Next Searchlight will create the
ElasticSearch index "Index2" for use by RTG2. The ElasticSearch aliases
"RTG2-sync" and "RTG2-query" are created and will both be associated with the index
"Index2".
Glance has now created two documents "Glance ObjA" and "Glance ObjB". Nova has
created two documents "Nova ObjC" and "Nova ObjD". These four new documents for
the first Resource Type Group are now indexed. They will be indexed against
alias "RTG1-sync" and end up in index "Index1".
Swift has now created two new documents "Swift ObjE" and "Swift ObjF". These two
new documents for the second Resource Type Group are now indexed. They will be
indexed against alias "RTG2-sync" and end up in index "Index2".
Figure 1 shows the current state of Searchlight.
A Glance search will be made against "RTG1-query". Going to "Index1" it will return
"Glance ObjA", "Glance ObjB", "Nova ObjC" and "Nova ObjD". A Swift search will
be made against "RTG2-query". Going to "index2" it will return "Swift ObjE" and
"Swift ObjF".
Figure 2: Explicit Glance Re-sync
.. image:: ../../images/ZeroFig2.png
All of the changes from Image 1 are highlighted in red.
Searchlight receives a re-index command for Glance. After the re-sync, Glance
creates two new documents "Glance ObjG" and "Glance ObjH". Nova creates one new
document "Nova ObjI". Swift creates two new documents "Swift ObjJ" and "Swift
ObjK".
Searchlight will create a new ElasticSearch index "Index3". Since Glance is
re-syncing, the new index is associated with RTG1. Searchlight now associates
both "Index1" and "Index3" to the alias "RTG1-sync". Since the new index "Index3"
is not a superset of the index "Index1" yet, we do not change the RTG1 search
alias "RTG1-query". It remains unchanged for now.
As the Glance re-sync occurs, the previous Glance documents "Glance ObjA" and
"Glance ObjB" get indexed into "Index3". The new documents for RTG1 ("Glance
ObjG", "Glance ObjH" and "Nova ObjI") are indexed against the alias "RTG1-sync".
These documents end up in both "Index1" and "Index3".
The new documents for RTG2 ("Swift ObjJ" and "Swift ObjK") are indexed against
the alias "RTG2-sync". These documents end up in "Index2".
Figure 2 shows the current state of Searchlight.
A Glance search will be made against "RTG1-query". Going to "Index1" it will
return "Glance ObjA", "Glance ObjB", "Nova ObjC", "Nova ObjD", "Glance ObjG",
"Glance ObjH" and "Nova ObjI". A Swift search will be made against "RTG2-query".
Going to "index2" it will return "Swift ObjE", "Swift ObjF", "Swift ObjJ" and
"Swift ObjK".
This diagram shows the subtle point that all resource types within a Resource
Type Group need to be re-synced together. If we did not re-sync Nova but updated
the RTG1 search alias "RTG1-query" to be associated with the new index "Index3", the
Searchlight state is incorrect. A Glance search will now be made against
"Index3" and it will return "Glance ObjA", "Glance ObjB", "Glance ObjG",
"Glance ObjH" and "Nova ObjI". This is incorrect as it does not include the
earlier Nova documents: "Nova ObjC" and "Nova ObjD". This incomplete state is
the reason that all resources in a Resource Type Group need to be re-synced
before the Resource Type Group re-sync can be considered complete.
Figure 3: Implicit Nova Re-Sync
.. image:: ../../images/ZeroFig3.png
All of the changes from Image 2 are highlighted in red.
Searchlight starts an implicit Nova re-sync, since Nova is a member of RTG1.
All of the aliases are still set up correctly, so they do not need to change.
After the re-sync, Glance creates one new document "Glance ObjL". Nova creates
one new document "Nova ObjM". Swift creates one new documents "Swift ObjN".
As the Nova re-sync occurs, the previous Nova documents "Nova ObjC" and "Nova
ObjD" get indexed into "Index3". The new documents for RTG1 ("Glance ObjL" and
"Nova ObjM") are indexed against the alias "RTG1-sync". These documents end up in
both "Index1" and "Index3".
The new document for RTG2 ("Swift ObjN") is indexed against the alias "RTG2-sync".
This document ends up in "Index2".
Searchlight has not yet acknowledged the Nova re-sync as being completed.
Therefore "RTG1-query" has not been updated yet.
Figure 3 shows the current state of Searchlight.
A Glance search will be made against "RTG1-query". Going to "Index1" it will
return "Glance ObjA", "Glance ObjB", "Nova ObjC", "Nova ObjD", "Glance ObjG",
"Glance ObjH", "Nova ObjI", "Glance ObjL" and "Nova ObjM". A Swift search will
be made against "RTG2-query". Going to "index2" it will return "Swift ObjE",
"Swift ObjF", "Swift ObjJ", "Swift ObjK" and "Swift ObjN".
Figure 4: RTG1 Re-Sync Complete
.. image:: ../../images/ZeroFig4.png
All of the changes from Image 3 are highlighted in red.
All resource types within RTG1 have finished re-syncing. Searchlight will now
update the RTG1 search alias "RTG1-query". The alias "RTG1-query" will now be
associated with index "Index3". After updating the RTG1 search alias,
Searchlight will update the RTG1 plug-in listener alias "RTG1-sync". The alias
"RTG1-sync" will now be associated with the index "Index3".
The alias updates need to happen in this order to handle the corner case of a
new RTG1 document being indexed while the aliases are being modified. If we
modified the RTG1 plug-in listener alias first a new document would be indexed
to index "Index3" only. But a search will still go to index "Index1", thus
missing the newly indexed document.
Figure 4 shows the current state of Searchlight.
A Glance search will be made against "RTG1-query". Going to "Index3" it will
return "Glance ObjA", "Glance ObjB", "Nova ObjC", "Nova ObjD", "Glance ObjG",
"Glance ObjH", "Nova ObjI", "Glance ObjL" and "Nova ObjM". A Swift search will
be made against "RTG2-query". Going to "index2" it will return "Swift ObjE",
"Swift ObjF", "Swift ObjJ", "Swift ObjK" and "Swift ObjN".
The internal Searchlight state is correct, coherent and ready to continue.
Sometime in the future we will be able to delete Index1 completely.
Implementation Notes
--------------------
Implementation Note #1: Multiple Indexes
----------------------------------------
Upon careful review of the ES alias documentation [2], there is this warning
lurking in the shadows: "It is an error to index to an alias which points to
more than one index." Yikes. So the simple solution of adding additional
indexes to an alias and having the re-indexing just work will not work:
ElasticSearch will throw an "ElasticsearchIllegalArgument" exception and
return a 400 (Bad Request).
The plug-ins will need to be aware of this exception and react to it.
Through experimentation, ElasticSearch will return this error: ::
{"error":"ElasticsearchIllegalArgumentException[Alias [test-alias] has more than one indices associated with it [[test-2, test-1]], can't execute a single index op]","status":400}
From this error message, we have the actual indexes. After extracting
the names of the indexes, the plug-ins will be able to complete the
task. The plug-in will now index iterating on each real index, instead
of using the alias. This case applies only to the case where there are
multiple indexes in an alias (i.e. the re-syncing case). When not
re-syncing, the plug-in will not receive this exception.
We need to be careful when parsing the error message. This is a potential
hazardous area if the error message ever changes. The catching of the
exception and parsing of the message should be as flexible as possible.
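A hedged sketch of that extraction follows; since the message format is not a stable API, the pattern is deliberately loose and falls back to an empty result rather than failing:

```python
import re

# The example error body quoted above.
ERROR = ('{"error":"ElasticsearchIllegalArgumentException[Alias '
         '[test-alias] has more than one indices associated with it '
         "[[test-2, test-1]], can't execute a single index op]\","
         '"status":400}')

def indices_from_error(message):
    """Pull the real index names out of the alias error, if present."""
    match = re.search(r'\[\[([^\]]+)\]\]', message)
    if not match:
        return []  # message format changed; caller must handle this
    return [name.strip() for name in match.group(1).split(',')]

indices = indices_from_error(ERROR)
```

With the names in hand, the plug-in can fall back to indexing each real index individually, as described above.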
Implementation Note #2: Incompatible Changes
--------------------------------------------
A corner case in the rationale for triggering a re-index needs to be
addressed. Sometimes an incompatible change between indexes has occurred.
For example, a new plug-in has been added, or the documents from the
service have changed in an incompatible way (different ElasticSearch mapping).
In any of these cases we need to be able to handle the changes and
roll them out seamlessly.
Some possible options to handle these cases would include:
* Disable re-indexing into the old index.
* Run two listeners, one understanding the old index and the other
understanding the new index.
Alternatives
------------
Alternate #1
------------
An alternate usage scenario would look like the following:
Queries to v1/search/plugins would change so that the index listed for each type would
actually be the alias (the API user won't know this).
The searchlight-manage index sync CLI will change to support the following capabilities:
* Re-index the current index without migrating the alias (no change from 0.1.0).
* Delete the specified or current index for a specific type.
* Create a new index for specified resource types.
* Specified name or autogenerated name using a post-fix numbering pattern.
* Contact and stop all listeners from processing incoming notifications for specified types.
* Switch alias automatically when complete (default - no ?).
* Delete old index automatically when complete (default - no?).
* Contact and start all listeners to process incoming notification for specified types.
* Switch alias on demand to new index(es).
All of the above must account for 1 ... * indexes for a single alias.
All listener processes must now support a management API for them to stop
notification processing for specified resource types. Without this ability,
there will remain a race condition for populating a new index. For example,
if it takes N seconds to populate all Nova server instances, there will be a
delay in time from when the original request for data to Nova was sent and
when any updates to the data happened. Therefore, notification should be
disabled while a new index is being populated and then turned back on.
Alternate #2
------------
This alternate explores a way to avoid the "multiple indexes in a single
alias while indexing" exception as described in the "Implementation Notes"
subsection.
The idea is that instead of having two indexes in the Sync alias and one index
in the search alias, we invert the index usage in the aliases. Now we consider
adding multiple indexes to the search alias while leaving a single index in
the sync alias.
When we start a re-sync, we create a new index. We update the sync alias to point
to this new index, replacing the old index. Since there is only a single index
in the sync alias, we will not get the ElasticsearchIllegalArgument exception.
We also add the new index to search alias.
At this point, the sync alias contains just the new index while the search
alias contains both the old and new index. When a search occurs it will
find old documents as well as any new documents.
The main issue with this alternative is that the search will find a lot of
duplicates while the re-sync is occurring. All of the documents in the old index
will eventually be added to the new index. In order to be usable, we would
need to figure out a way to filter out these duplicates. The initial
investigation into filtering ideas led to solutions that were deemed too
fragile and defect prone. Hence the inclusion of this idea at the
bottom of the alternate proposals.
Future Enhancements
-------------------
Optimizations:
* Use the ElasticSearch index sync functionality instead of having each Resource Type
do a manual re-index. ElasticSearch does not have a native re-sync command, but it
can be accomplished using "scan and scroll" with the ElasticSearch Bulk API. [5]
This optimization needs to be carefully considered. It would only be performed
when we are absolutely sure that the old ElasticSearch index is coherent and complete.
References
==========
[1] The concept of aliases is described in depth here:
https://www.elastic.co/guide/en/elasticsearch/guide/current/index-aliases.html
[2] How ES treats an alias is described here:
https://www.elastic.co/guide/en/elasticsearch/reference/1.7/indices-aliases.html
[3] All searchlight index updates should ensure latest before updating any document
https://bugs.launchpad.net/searchlight/+bug/1522271
[4] ES deletion journal blueprint:
https://blueprints.launchpad.net/searchlight/+spec/es-deletion-journal
[5] ES scan and scroll is discussed here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-scroll.html
ES Bulk API is discussed here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html

..
(c) Copyright 2016, Huawei Technology.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==============================
Add nova server groups plugin
==============================
https://blueprints.launchpad.net/searchlight/+spec/nova-server-groups-plugin
This Blueprint adds a plugin for Nova server groups (OS::Nova::Server_groups).
Problem Description
===================
Currently, Nova has no filter support for the os-server-groups API;
listing server groups returns all of them. As server groups are a very
widely used feature in commercial deployments, this will be problematic,
especially for large-scale public cloud deployments. For example, in the
Deutsche Telekom OTC Public Cloud each tenant has 10 server groups by
default, so as the number of tenants grows it becomes a bottleneck to
list and search for particular server groups. It would also be very
user-friendly to let users search for server groups by ``name``,
``policy`` or ``members``, which Nova does not yet provide.
Proposed Change
===============
Phase I:
Add a Nova server groups plugin to collect server groups data and
provide the ability to search server groups using ``name``, ``policy``,
``members``, ``id`` and ``metadata``.
Phase II:
Add new notification handler for server groups notifications once the
notification for server groups in nova has been added.
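A Phase I search might translate into an Elasticsearch body along these lines; the field names follow the proposal above, but the final mapping is an assumption:

```python
# Hypothetical sketch: build a bool query over the proposed server-group
# fields; only the criteria the caller supplies become clauses.
def server_group_query(name=None, policy=None, member=None):
    must = []
    if name:
        must.append({"term": {"name": name}})
    if policy:
        must.append({"term": {"policy": policy}})
    if member:
        must.append({"term": {"members": member}})
    return {"query": {"bool": {"must": must}}}

body = server_group_query(name="db-group", policy="anti-affinity")
```

Unsupplied criteria simply produce no clause, so the same helper covers listing and filtered search.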
Alternatives
------------
Do not add this plugin; we would then lack search support for a widely
used Nova feature.
References
==========

..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================
Cross-Region Search
===================
SearchLight currently is targeted at being deployed alongside nova, glance,
cinder etc. as part of an Openstack control plane. There has been a lot of
interest in allowing SearchLight to index and search resources across
Openstack regions.
Problem Description
===================
A typical production Openstack deployment can provide scaling_ and resilience
in several ways, some of which apply to all services and some specific to
services that support a feature.
**Availability zones** provide some ability to distribute resources (VMs,
networks) across multiple machines that might, for instance, be in separate
racks with separate power supplies. AZs are represented in resource data
and are already part of the indexing that SearchLight does.
**Regions** can provide separation across geographical locations (different
data centers, for instance). The only requirement to run multiple regions
within a single Openstack deployment is that Keystone is configured to share
data across the regions (with master-master database replication, for
instance). All other services are isolated from one another, but Keystone is
able to provide the URLs for, say, the ``nova-api`` deployments in each
region. A keystone token typically is not scoped to a particular region;
Horizon currently treats a region change as a heavyweight operation because it
means refreshing the dashboards and panels that should be visible in the
selected region.
**Multiple clouds** provide total isolation such that each cloud is totally
separate with no knowledge of the other. Horizon also supports this model
(confusingly also referring to each cloud as a 'Region') though changing
'region' requires logging into the new region.
**Nova cells** are a feature specific to Nova that allows horizontal scaling
within a single region. Cells_ spread the compute API over several databases
and message queues in a way that's mostly transparent to users. Cells will
be considered beyond the scope of this document since they address performance
rather than resilience, and thus aren't directly related to this feature
(though may be the basis of another).
For deployments using multiple regions, the ability to search aggregated data
can provide value. A Nova deployment in a fictional Region-A is
unaware of resources in a fictional Region-B, so a user must make requests to
each region to get information. This makes Horizon somewhat cumbersome
(changing region triggers reloading pages to change the context,
although authentication status is preserved).
.. _scaling: http://docs.openstack.org/openstack-ops/content/scaling.html
.. _Cells: http://docs.openstack.org/liberty/config-reference/content/section_compute-cells.html
These are the potential deployment options for multi-region clouds. The options
that follow are presented in order of effort and complexity. Relative
performance is noted.
1. Deploy Searchlight in the same fashion as Keystone. API endpoints can exist
in both regions. Data will be duplicated between regions (by some external
process - Elasticsearch explicitly does not support or recommend splitting
clusters across geographical locations); Searchlight indexing will write to
its local cluster and queries will be run against a local cluster. All
region-aware resources will have a region id attached to them.
**Best performance**, **most difficult maintenance**,
**zero client complexity**.
2. Searchlight will run in each location, but data will not be duplicated
across locations (similar to how nova and glance work). To allow searching
across regions, Elasticsearch is configured with a tribe_ node that acts as
a federated search node; indexing operations are always performed locally.
A search against an alias on the tribe node will act as a search against
that alias in all clusters to which the tribe node is joined.
**Worst performance** (as bad as the slowest node, though single region
searches can be optimized), **easier maintenance**,
**zero client complexity**.
3. Run Searchlight in both regions separately; either have clients make
queries against both regions explicitly or have Searchlight's API echo
requests to other regions. Likely this would enforce segregating results by
region (which might be a good outcome).
**Variable performance** (can receive results as they are available),
**easiest maintenance**, **complexity pushed to client**.
.. _tribe: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-tribe.html
It should be noted that options 1 and 2 provide a similar
functional experience; searches will appear to run against a single API
endpoint returning data in multiple regions (sorting and paging
appropriately). Option 3 treats each region separately; paging and sorting
would apply to each region's results. There is a decision to be made even at
that level what is actually desirable; should the UI segregate results by
region or is a merged view more desirable?
Proposed Change
===============
After discussion at the Newton summit in Austin (where the alternative below
was presented) it was agreed that a unified view is potentially very useful,
but requires significant awareness of the security implications and a lot more
testing. The general feeling given the workload for Newton was that this
functionality can be adequately supported from the client.
As such, we will document the method by which Searchlight can be run using
tribe nodes, and the networking implications it brings. We will also add
``region_name`` to all mappings.
This may be expanded upon in subsequent releases.
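Making a mapping region-aware could be as simple as attaching one extra field to each plugin's document mapping. A minimal sketch, assuming an Elasticsearch 2.x-style mapping layout (the helper name is illustrative, not Searchlight's actual API):

```python
def add_region_name(mapping):
    """Return a copy of a document mapping with a non-analyzed
    region_name field, so every indexed document carries its region."""
    updated = dict(mapping)
    props = dict(updated.get("properties", {}))
    # not_analyzed keeps region names as exact terms for filtering.
    props["region_name"] = {"type": "string", "index": "not_analyzed"}
    updated["properties"] = props
    return updated
```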
Alternatives
------------
To enable use of tribe nodes to support searches in multiple regions:
#. Set up a stock devstack with Searchlight
#. Deploy a second devstack configured for `multi-region`_ (I set it up on
a second local VM)
#. Ensure that ``searchlight.conf`` included the correct ``region-name`` in
the auth credential sections
#. Ensure that the Elasticsearch cluster names were different
(``cluster.name: region-one``)
#. Check that ``searchlight-manage`` indexes correctly in each region
#. Set up a tribe_ node (again with a different cluster name) on a different
port on the first devstack VM. I used manual host discovery
#. Configure a separate searchlight-api running off the tribe node. Searches
against the alias return results across both clusters
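The tribe node from the last two steps might be configured roughly like this in its ``elasticsearch.yml`` (a sketch: cluster names follow the steps above, while the port and discovery hosts are assumptions about the local setup):

```yaml
# Tribe node joins both regional clusters; a search against an alias on
# this node fans out to every joined cluster.
cluster.name: tribe-node
http.port: 9201
tribe:
  region-one:
    cluster.name: region-one
    discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]
  region-two:
    cluster.name: region-two
    discovery.zen.ping.unicast.hosts: ["192.168.0.2:9300"]
```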
This technique will suffer from the same problem that resulted in us disabling
`multi-index support`_; if the index mappings are different, errors will
result. The solution (as I suspect there) is to ensure identical mappings
across indices even if no data is indexed into a given index.
The work required here in addition to setting up Elasticsearch would be to
make ``searchlight-api`` use the tribe node (potentially one in every region)
rather than the cluster the listener uses (or perhaps both depending on
search context). This change is relatively minor. We would also need to make
all resources region aware (which is a sensible change) and make sure
Searchlight itself is aware of its own region (also a sensible change).
.. _`multi-index support`: https://blueprints.launchpad.net/searchlight/+spec/reenable-multiple-indices
.. _`multi-region`: http://docs.openstack.org/developer/devstack/configuration.html#multi-region-setup
.. _tribe: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-tribe.html
References
==========
* Openstack scaling http://docs.openstack.org/openstack-ops/content/scaling.html
* Elasticsearch 'tribe' nodes: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-tribe.html
@ -1,187 +0,0 @@
..
(c) Copyright 2016 Hewlett-Packard Enterprise Development Company, L.P.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
=============================
Index Performance Enhancement
=============================
https://blueprints.launchpad.net/searchlight/+spec/index-performance-enhancement
This feature will improve the performance of indexing resource types within Searchlight.
Problem Description
===================
If the above link is too troublesome to follow, please indulge us while we
plagiarize from the blueprint.
When indexing (first time or re-indexing) we will index all resource group types
sequentially. We loop through all plugins, indexing each one in turn. The result
is that the time it takes to re-index is equal to the sum of the time for all
plugins. This may take longer than it should. In some cases a lot longer.
The time it takes to complete the full index is::

       n
    O( ∑ T(p) )
      p=0

where n is the number of plugins and T(p) is the time it takes for plugin p to
index.
We should change the algorithm to index in parallel, rather than in serial. As we
are looping through each plugin to re-index, we should spin each indexing task
into its own thread. This way the time it takes to index is the time it takes
the longest plugin to re-index.
With this enhancement, the time it takes to complete the index is::

        n
    O( MAX( T(p) ) )
       p=0
To provide context for the design, we will review the current design for
re-indexing. A re-indexing starts when the admin runs the command:
``searchlight-manage index sync``
Under the covers, ``searchlight-manage`` is doing the following:
* Determine which resource groups need to be re-indexed.
* Determine which resource types within each resource group need to be
re-indexed.
* For each resource type that *does* need to be re-indexed,
``searchlight-manage`` will call the plugin associated with that resource type.
The plugin will make API calls to that service and re-index the information.
* For each resource type that *does not* need to be re-indexed,
``searchlight-manage`` will call ElasticSearch directly and re-index from the
old index into the new index.
* Once all re-indexing is complete, the ES aliases are adjusted and
``searchlight-manage`` returns to the user.
This implies the following:
* The admin must wait for all of the re-indexing to complete before
``searchlight-manage`` finishes.
* When ``searchlight-manage`` finishes, the admin knows the exact state of the
re-index. Whether it completed successfully or if there was an error.
Proposed Change
===============
As described in the blueprint, we would like to reduce the time to complete the
re-index. Based on discussions on the blueprint and this spec, we will be
implementing only the first enhancement in the blueprint. We will be using Python
threads to accomplish this task. We need to understand the design issues
associated with implementing a multi-thread approach.
1. **Are the indexing plugins thread-safe?**
If there are a lot of inter-dependencies within the plugins, it may not pay off
to try to multi-thread the plugins. Reviewing the code and functionality of the
plugins, they appear to be separate enough that they are good candidates to be
moved into their own threads. The plugins are isolated from each other and do not
depend on any internal structures to handle the actual indexing.
**Design Proposal:** The individual plugins can be successfully threaded.
2. **At what level should we create the indexing threads?**
The obvious candidates are the resource type (e.g. OS::Nova::Server) or the
resource type group (e.g. the index "searchlight"). The main reason that we are
considering this enhancement is due to the large amount of time for a particular
resource type, but not for a particular resource type group.
Internal to ``searchlight-manage``, this distinction fades rather quickly. We use
the resource type groups to only determine which resource types need to be
re-indexed. We also have an existing enhancement within ``searchlight-manage``
where we re-index through the plugin API only the resource types that were
explicitly demanded by the user. All other resource types are re-indexed directly
within ElasticSearch. We need to keep this enhancement.
Keeping the current design intact means we will want to thread on the fine
resource type level and not at the gross resource type group level. Based on the
parent/child relationship that exists between some of the resource types, this is
the "fine" level we will be considering.
Since we are already using bulk commands for Elasticsearch re-indexing, we will
place all of the Elasticsearch re-indexing into a single thread. Considering
that this will be I/O bound on Elasticsearch's side, there does not appear
to be any advantage of doing an Elasticsearch re-indexing for each resource type
in a separate thread.
**Design Proposal:** Whenever the indexing code currently calls the plugin API,
it will create a worker in the thread pool.
**Design Proposal:** All of the calls to ElasticSearch to re-index an existing
index, will be placed in a single worker in the thread pool.
3. **Mapping of plugins to threads**
There may be a large number of plugins used with Searchlight. If each plugin
has its own thread, we may be using a lot of threads. Instead of having a single
thread map to a single plugin, we will use a thread pool. This will keep the
number of threads to a manageable level while still allowing for an appropriate
level of asynchronous re-indexing. The size of the thread pool can be changed
through a configuration option.
**Design Proposal:** Use a thread pool.
4. **When will we know to switch the ElasticSearch aliases?**
In the serial model of re-indexing, it is trivial to know when to switch the
ElasticSearch alias to use the new index. It's when the last index finishes!
Switching over to a model of asynchronous threads running in parallel potentially
complicates the alias update.
The indexing code will wait for all the threads to complete. When all threads
have completed, the indexing code can continue with updating the aliases.
**Design Proposal:** The alias switching code will be run after all of the
threads have completed.
5. **How do we clean up from a failed thread?**
The indexing code will need to have the threads communicate if a catastrophic
failure occurred. After all workers have been placed into the thread pool, the
main program will wait for all of the threads to finish. If any thread fails,
it will raise an exception. The exception will be caught and the normal
clean-up call will commence. All threads that are still waiting to run will be
cancelled.
**Design Proposal:** Catch exceptions thrown by a failing thread.
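The design proposals above can be sketched with ``concurrent.futures`` (a minimal sketch; ``parallel_sync`` and the task callables are illustrative, not Searchlight's actual code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def parallel_sync(plugin_tasks, es_reindex_tasks, workers=4):
    """Run each plugin's API-based indexing in its own pool worker; all
    direct Elasticsearch-to-Elasticsearch re-index calls share one worker."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(task) for task in plugin_tasks]
        # Bulk ES re-indexing is I/O bound on Elasticsearch, so it gets
        # exactly one worker rather than one per resource type.
        futures.append(pool.submit(lambda: [t() for t in es_reindex_tasks]))
        try:
            for future in as_completed(futures):
                future.result()  # re-raises any exception from a worker
        except Exception:
            # A failed task aborts the sync; cancel work not yet started.
            for f in futures:
                f.cancel()
            raise
    # Only after every thread has completed would the aliases be switched.
```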
For those following along with the code (searchlight/cmd/manage.py::sync), here
is a rough guide to the changes. We will reference the sections as mentioned in
the large comment blocks:
* First pass: No changes.
* Second pass: No changes.
* Step #1: No changes.
* Step #2: No changes.
* Step #3: No changes.
* Step #4: Use threads. Track thread usage.
* Step #5: No changes.
* Step #6: No changes.
Alternatives
------------
We can always choose to not perform any enhancements. Or we can go back to the
first draft of this spec.
References
==========
@ -1,9 +0,0 @@
=================================
Searchlight Newton Specifications
=================================
.. toctree::
:glob:
:maxdepth: 1
*
@ -1,198 +0,0 @@
..
(c) Copyright 2015 Intel Corp.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
================================================
Notification Forwarding (to systems like Zaqar)
================================================
https://blueprints.launchpad.net/searchlight/+spec/notification-forwarding
This feature adds the ability to forward the indexing notifications consumed
by Searchlight to external providers such as Zaqar [1] via a simple driver
interface. This would include the data enrichment that Searchlight provides on
top of the simple OpenStack notifications.
Problem Description
===================
There are a number of use cases by projects within OpenStack and outside of
OpenStack that need to know when OpenStack resources have been changed.
OpenStack provides a notification bus for receiving notifications published by
services, however there are a few limitations:
* The OpenStack message bus is not typically exposed directly to external
  consumers, for security reasons.
* The notifications often contain only a subset of the data about a particular
  resource, which is incomplete and not rich enough
* The notification data does not look the same as the API results
* The notification bus does not support all the message subscription semantics
or mechanisms needed, like per-subscriber filtering or web sockets/hooks
Searchlight handles a number of the above problems and will continue to evolve
with OpenStack so that as notifications change / are enriched, it will evolve
with them. It does this by indexing the data from the OpenStack Services from
a variety of locations, including listening to notifications. It enriches the
data provided by the base notifications in a number of ways, which allows the
index to be used directly instead of the APIs [2]. The latter has the
additional advantage of reducing API call pressure on the various services.
Searchlight's enhanced searching capabilities [3] and performance [4] over
typical API responses are compelling; consequently it has been integrated
into horizon[5][6][7] and elsewhere.
Current Searchlight users will still need to use polling mechanisms to get
updated information from Searchlight. Many Searchlight consumers will need an
asynchronous way to stay informed of OpenStack Resource changes, such as
Availability and Status, instantly. They will also need a way that ensures
that the status updates they receive are coherent with the status of the
resources indexed into Searchlight.
1. A Horizon pain point today, despite the availability of Searchlight and
its many benefits stemming from richer notifications and elastic search,
is that Horizon still needs to periodically poll Searchlight for updates.
(Where a notification is not handled by Searchlight, direct calls to
OpenStack service APIs are necessary. However, more and more OpenStack
projects are being integrated with Searchlight.) A notification
service that instead provides updates as they occur would be attractive.
Consider for instance a VM launch request. In the ideal scenario the
Horizon UI updates automatically to display the various stages in a
virtual machine launch as the transitions occur. Today
this is achieved in Horizon behind the scenes by frequently polling
Searchlight and/or other OpenStack API services.
2. Telco vendors, with their demanding service up time requirements and low
tolerance for service delays, also seek resource change notifications.
Typically they will have a management layer/application that seeks visibility
into resource updates. For example, CRUD of a user, VM, glance image,
flavor or other. For instance, the arrival of a new user may need to trigger
a workflow such as welcome messages, proffering services, and more.
Searchlight today detects such events and indexes them but is unable to
push them to the Telco management layer instantly, in near real time.
3. For third party applications, often there is interest in monitoring tools
that want insight into resources and their availability. These might span
flavors, instances, storage, images, and their status. Typically these
third party monitor systems will not have access to the OpenStack message bus
for security reasons. Consider for example, a content provider, who would like
to advertise to their customer base when a new movie is uploaded into Swift.
Yet another example may be triggering an upgrade action when a security patch
gets uploaded into Glance.
Proposed Change
===============
We propose that Searchlight add the ability to forward notifications.
Further, a forwarding infrastructure that supports a pipeline of forwarding
entities would provide maximum flexibility. For example, consider a paste
pipeline, with the order and entities as specified in the paste.ini file.
Note, in a sense Searchlight is the first element in the notification pipeline,
the entity that takes as input the primary notifications from the OpenStack
message bus and enriches them in addition to indexing them.
A notification consumer might be Zaqar, the OpenStack messaging as a service
project or even simpler message forwarding mechanisms such as WebSockets,
or even a simple logger.
The notification consumer would need to be fast so as not to bog down
Searchlight's notification push subsystem. Further, rules of engagement
need to be agreed upon. For instance, if Searchlight is unable to contact
a registered notification consumer, what should it do? See bug [8].
Log the error and forget it? Should it re-try some pre-configured number
of times after some pre-configured wait interval between tries? Should
it hold the notifications till it can successfully send them? This latter
solution may be overly demanding on Searchlight especially when there are
multiple registered consumers.
Our solution strategy is to define a consumer-plugin for Searchlight for
each consumer. If Zaqar is the consumer, then a Zaqar plugin in Searchlight.
The intricacies of error handling on connection fail can be left to the Zaqar
plugin, any filtering of notifications to be actually transmitted from
Searchlight to Zaqar can further be handled there. Leaving the error handling
to the plugin makes sense because some consumers may care about lost
notifications while others may not. For example, a mail program displays
messages as they arrive, but occasionally the VPN goes down, or there is no
wireless connectivity or other problem. The mail reader then may on reconnect
just issue a mail-synch. It is in this vein that we leave notification
handling to the plugin and its associated consumer and its end-user API.
Essentially re-try behavior, re-synch behavior are all left to the
plugin-consumer pair.
The main notification/message flow would look like the below:
OS Internal Service -> Searchlight -> Zaqar-plugin -> Zaqar -> External app
Each Searchlight plugin may have a notification forwarder configured.
After Searchlight has received a notification, performed any data enrichment,
and indexed into ElasticSearch, it would send the enriched data to the
configured notification forwarder via its plugin.
The notification forwarder would support filtering out forwarded fields:
* Unsearchable fields (already configurable by Searchlight)
* Admin only fields
* By regex (similar to Glance property protections)
Searchlight may ease plugin development by refactoring the above functionality
into a common utility. The respective plugins may want to leave filtering as
configurable parameters or hard code them. This is entirely up to the
plugin-consumer pair.
This way, when a resource gets updated by notification, the updated resource
will be sent out to a messaging system like Zaqar. The Zaqar message body will
include the complete resource data from Searchlight.
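A minimal forwarder interface along these lines might look like the following sketch (the class and method names, and the regex filter patterns, are assumptions, not Searchlight's actual API; retry and re-sync behaviour stay with each plugin, as argued above):

```python
import re

class NotificationForwarder(object):
    """Base class for per-consumer forwarder plugins. Error handling on
    connection failure is left entirely to each subclass."""

    # Fields never forwarded; regex patterns in the spirit of Glance
    # property protections (illustrative patterns only).
    blocked_patterns = [re.compile(r"^admin_.*"), re.compile(r".*_internal$")]

    def filter_fields(self, document):
        """Strip admin-only / unsearchable fields before forwarding."""
        return {k: v for k, v in document.items()
                if not any(p.match(k) for p in self.blocked_patterns)}

    def forward(self, enriched_document):
        raise NotImplementedError

class LogForwarder(NotificationForwarder):
    """Simplest possible consumer: just record what would be sent."""
    def __init__(self):
        self.sent = []

    def forward(self, enriched_document):
        self.sent.append(self.filter_fields(enriched_document))
```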
Alternatives
------------
An alternative is introducing a brand new service, split out from ceilometer,
to listen to notifications, but that would have the following shortcomings.
It would either not have, or would have to rebuild, Searchlight's ability to
enrich data and to know about sensitive data.
Generically ensuring cache coherency between data obtained from notifications
and from periodic re-syncs would still be an issue.
Ideally no consumer should be obliged to deal with a flood of notifications
resulting from another consumer initiating some action.
Consider for instance the Horizon table view, such as the instance table
which lists all instances. If Horizon was just consuming the
notification data to display new instances as they become available,
it could call the Nova API to supplement any currently displayed list
of instances. However, the results the user sees from searching/
Searchlight (see Horizon blueprints referenced) are different. Likewise the
results will vary should the search criteria be changed. By keeping
Searchlight as the first hop in a notification pipeline, we ensure the user
has a consistent view of all notifications, barring any re-synchs the user
initiates.
References
==========
[1] https://launchpad.net/zaqar
[2] https://www.youtube.com/watch?v=0jYXsK4j26s&feature=youtu.be&t=2053
[3] https://www.youtube.com/watch?v=0jYXsK4j26s&feature=youtu.be&t=167
[4] https://blueprints.launchpad.net/horizon/+spec/searchlight-search-panel
[5] https://blueprints.launchpad.net/horizon/+spec/searchlight-images-integration
[6] https://blueprints.launchpad.net/horizon/+spec/searchlight-instances-integration
[7] https://www.youtube.com/watch?v=0jYXsK4j26s&feature=youtu.be&t=1771
[8] https://bugs.launchpad.net/searchlight/+bug/1524998
@ -1,109 +0,0 @@
..
(c) Copyright 2016 Intel Corp.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
================================================
Pipeline Architecture
================================================
https://blueprints.launchpad.net/searchlight/+spec/pipeline-architecture
This feature enables a flexible pipeline architecture to allow Searchlight to
configure multiple publishers to consume enhanced data.
Problem Description
===================
Currently when a notification comes to Searchlight, it gets processed in
a notification handler. The handler enriches notifications in a number of ways
and indexes enriched data into Elasticsearch. This is a simple and
straightforward approach because Elasticsearch is the only storage backend in
Searchlight now. As we are going to introduce the notification forwarder [1], this
process becomes inflexible. A pipeline architecture is needed to
provide extra flexibility for users to define where enriched data is published
to. It also allows Searchlight to support other non-elasticsearch backends
in the future.
Proposed Change
===============
We propose that Searchlight change to a pipeline architecture to provide
flexibility for forwarding notifications.
The current main message flow in Searchlight looks like the below:

Source(Notifications) -> Enrichment & Index to Elasticsearch.

Currently notification handlers wait for notifications, transform them and
index data into Elasticsearch. It's tightly coupled and not modular. A
consistent workflow is needed.
With the pipeline in place, the main message flow would look like:
Source(Notifications) -> Data Transformer(Enrichment) -> Publishers(Elasticsearch, Zaqar).
To achieve this, some refactoring of Searchlight is needed. Notification
handlers will focus solely on capturing supported notification events.
After that, notifications are forwarded to data transformers.
There are mainly two kinds of data transformation. One is to normalize a
notification into an OpenStack resource payload. The payload is in an
API-compatible format without additional publisher metadata. The
normalization is always done either by calling OpenStack API services or by
updating existing Elasticsearch data. These resource transformers are
plugin-dependent. For example, a nova create-instance notification could be
normalized into a server info document. Besides resource data enrichment,
there might be some publisher metadata to attach, such as a user role field,
parent-child relationships, or the version in Elasticsearch. These
transformations should be separated from resource data enrichment.
Publishers should implement a method that accepts the enriched data, the
notification information, and an action indicating the resource CRUD
operation. For example, if a nova server has been updated, publishers in the
pipeline will receive the full server info, the server update notification
and an 'update' action. It is entirely up to the publisher to decide how to
deal with that information.
We see Elasticsearch indexing as a special case of publisher. It could be the
default publisher because, for some plugins, a resource update needs to fetch
old documents from Elasticsearch; partial update doesn't work without
Elasticsearch, though in the future we may solve this issue. The order of
publishers in the pipeline doesn't matter. A publisher can choose how to deal
with errors, either requesting a requeue or just ignoring the exception. The
requeue operation is especially useful for the Elasticsearch publisher,
because data integrity is important for the search functions of Searchlight.
A requeue should not affect the other configured publishers, thus a filter is
needed to make sure publishers won't deliver the same message twice.
Currently Searchlight gets its data in two ways: one is incremental updates
via notifications, the other is full indexing into ElasticSearch via API
calls. Incremental updates are delivered to all the publishers configured in
the pipeline. For reindexing, it is up to each publisher to decide whether it
wants the reindexed data or not.
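The publisher contract described above might be sketched as follows (a sketch only: the names, the boolean requeue signal, and the ``seen`` filter are illustrative assumptions, not a settled design):

```python
class Publisher(object):
    def publish(self, enriched, notification, action):
        """Receive enriched data, the raw notification and a CRUD action
        ('create', 'update' or 'delete'). Return True to request a requeue."""
        raise NotImplementedError

class ElasticsearchPublisher(Publisher):
    """Default publisher: a toy in-memory stand-in for the ES index."""
    def __init__(self):
        self.index = {}

    def publish(self, enriched, notification, action):
        if action == "delete":
            self.index.pop(enriched["id"], None)
        else:
            self.index[enriched["id"]] = enriched
        return False  # no requeue needed

def run_pipeline(publishers, enriched, notification, action, seen=None):
    """Deliver to every publisher; 'seen' filters out publishers that
    already handled this message, so a requeue does not deliver twice."""
    seen = seen or set()
    requeue = False
    for pub in publishers:
        if id(pub) in seen:
            continue
        if pub.publish(enriched, notification, action):
            requeue = True
        else:
            seen.add(id(pub))
    return requeue, seen
```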
There are two alternative pipeline designs. In one, the pipeline consists
only of publishers. Notifications are normalized by a resource transformer,
then passed to the configured publishers. Publishers could attach specific
metadata themselves. Users can only control which publishers Searchlight data
is heading for. The other alternative is to make both transformers and
publishers configurable. By combining different transformers and publishers,
one can produce different pipelines from the same notification.
Alternatives
------------
References
==========
[1] https://blueprints.launchpad.net/searchlight/+spec/notification-forwarding
@ -1,74 +0,0 @@
..
(c) Copyright 2016, Huawei Technology.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==========================================
Support Nova microversions for Nova plugin
==========================================
https://blueprints.launchpad.net/searchlight/+spec/support-microversion-for-nova
This feature adds support for Nova APIs with microversions. This would
allow Searchlight to provide the fields that are added in new microversions
of the Nova APIs.
Problem Description
===================
Nova has deprecated the v2.0 API and is starting to remove the code from the
tree. The Nova v2.1 API is designed with the microversion mechanism; that is,
when new data fields are added to a particular resource, backward
compatibility is ensured by adding new microversions to the related APIs [1]_.
For example, in [2]_, a new field ``description`` is added for servers. It
lets users attach a simple string describing their servers, and it would be
very useful if we could also provide this field.
The changes made for each microversion can be found in [3]_.
Proposed Change
===============
Currently, when we initialize the nova client, it is hard-coded to use
``version=2``. This is bad in two ways:
1. The v2.0 nova API is deprecated and the code will be removed in
Newton [4]_.
2. Microversions cannot be supported while the version is hard-coded.
In this BP, a new configuration option ``compute_api_version`` will be
added to the configuration file. When we initialize the nova client, this
option will be used as the API version. Its default value will be 2.1 in
the design of this BP and can be raised in the future according to changes
in the Nova API.
The supported data fields will also be updated according to the provided
microversion.
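Gating the indexed fields on the configured microversion could look roughly
like the sketch below. The field/version pairs are examples drawn from the
Nova API history (``description`` from [2]_ was added in 2.19), not an
exhaustive list, and the helper names are illustrative:

```python
def _parse(version):
    """Parse '2.19' -> (2, 19) so microversions compare numerically."""
    major, minor = version.split('.')
    return int(major), int(minor)

# Example map: optional field -> microversion that introduced it.
FIELD_MIN_VERSION = {
    'locked': '2.9',
    'description': '2.19',  # user-settable server description
}

def extra_indexed_fields(compute_api_version):
    """Return the extra fields indexable at the configured microversion."""
    configured = _parse(compute_api_version)
    return {field for field, version in FIELD_MIN_VERSION.items()
            if _parse(version) <= configured}
```

With the default ``compute_api_version = 2.1`` no extra fields are indexed;
raising the option exposes the newer fields.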
Alternatives
------------
Hard-code the version to 2.1, since 2.0 will soon no longer be usable.
However, newly added data fields could then not be supported.
References
==========
.. [1] http://docs.openstack.org/developer/nova/api_microversions.html
.. [2] https://blueprints.launchpad.net/nova/+spec/user-settable-server-description
.. [3] https://opendev.org/openstack/nova/src/branch/master/nova/api/openstack/compute/rest_api_version_history.rst
.. [4] https://blueprints.launchpad.net/nova/+spec/remove-legacy-v2-api-code
@ -1,9 +0,0 @@
================================
Searchlight Ocata Specifications
================================
.. toctree::
:glob:
:maxdepth: 1
*
@ -1,56 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
==============
Ironic plugin
==============
https://blueprints.launchpad.net/searchlight/+spec/ironic-plugin
This spec proposes adding an ironic plugin for Searchlight. Ironic is the
OpenStack bare metal service. The plugin should support these bare metal
resources: nodes (OS::Ironic::Node), ports (OS::Ironic::Port) and chassis
(OS::Ironic::Chassis).
Problem Description
===================
Notifications about bare metal node state changes (power, provisioning) and
about the creation, update and deletion of resources have been proposed for
ironic ([1]_, [2]_). Because information about a node in the database can
change quickly during deployment, specification [2]_ provides ways to limit
the flow of notifications.
Using the Searchlight API with the ironic plugin can reduce the load that
periodic polling tasks place on the ironic API.
Proposed Change
===============
1. The Searchlight listener should be changed, because ironic can also emit
notifications with the ERROR message priority. A new handler for the ERROR
priority will be added.
2. A plugin with indexers and notification handlers for ironic nodes, ports
and chassis should be implemented.
3. A custom Searchlight configuration should be used with ironic, because
ironic uses its own hard-coded ``ironic_versioned_notifications`` topic
([3]_).
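Item 1 can be sketched as below. The endpoint follows the oslo.messaging
listener-endpoint method signature, but the class and processing logic are
hypothetical, not Searchlight's actual listener:

```python
class IronicNotificationEndpoint:
    """Listener endpoint that handles INFO and (new) ERROR priorities."""

    def __init__(self):
        self.handled = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        self._process(event_type, payload)

    def error(self, ctxt, publisher_id, event_type, payload, metadata):
        # Proposed addition: ironic emits some notifications (e.g. failed
        # power/provision state changes) at ERROR priority; index them
        # the same way as INFO ones so the index stays current.
        self._process(event_type, payload)

    def _process(self, event_type, payload):
        self.handled.append((event_type, payload))
```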
Alternatives
------------
None
References
==========
.. [1] http://specs.openstack.org/openstack/ironic-specs/specs/approved/notifications.html
.. [2] https://review.opendev.org/#/c/347242
.. [3] http://docs.openstack.org/developer/ironic/dev/notifications.html
@ -1,58 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============
Default filters
===============
https://blueprints.launchpad.net/searchlight/+spec/overridable-default-filters
This spec is proposed to support cases in plugins where a filter should be
applied for most searches but should be overridable by a user.
Problem Description
===================
The two cases identified thus far for supporting default (but overridable)
filters are glance's community images and nova's deleted server instances. In
both cases there are query clauses that should be applied to searches by
default, but which a user should be able to explicitly disable.
Proposed Change
===============
In addition to RBAC filters which are applied to all queries on a per-plugin
basis, this change will allow plugins to specify additional filters that will
be applied by default alongside the RBAC filters. For these defaults, however,
the query builder will examine the incoming query for any instances of the
fields that would be filtered on, excluding any filters that the user has
explicitly specified.
This solution will have some flaws. query_string clauses are by their nature
difficult to analyze; potentially we can look for instances of ``key:`` after
splitting on ``&``. For structured queries, looking for instances of the
filter key in the structured query should be good enough.
In addition, it will be difficult/impossible to know whether a filter should
be overridden only for specific types (e.g. if ``deleted`` is a default filter
for the Nova servers plugin, it will be removed for any query including
``deleted`` as a term even if it wasn't intended to apply to Nova servers).
These limitations are somewhat unavoidable given the flexibility of the
Elasticsearch DSL. The cases for removing these filters are specific enough
that edge cases aren't important.
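For structured queries, the override behaviour can be sketched as follows
(clause shapes are simplified to ``{field: value}`` terms for illustration;
real queries use the Elasticsearch DSL):

```python
def effective_filters(user_filters, default_filters):
    """Apply plugin defaults unless the user already filters on the field.

    A default clause is dropped when any of its fields appears in a
    filter the user explicitly specified.
    """
    user_fields = {field for clause in user_filters for field in clause}
    kept_defaults = [clause for clause in default_filters
                     if not any(field in user_fields for field in clause)]
    return list(user_filters) + kept_defaults
```

So a default of ``{'deleted': False}`` applies to ordinary searches, but a
user who explicitly filters on ``deleted`` overrides it.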
Alternatives
------------
Doing nothing is an option, but it restricts the use of Searchlight for
Nova's cells and the ability to match Glance's API with respect to community
images.
An alternative implementation would be a specific 'disable default filters'
option at the query top level. This would be safer, more performant and more
predictable, but would require knowledge of the defaults (e.g. that a search
for ``_type:OS::Nova::Server AND deleted`` won't return anything unless the
additional override parameter is given).
References
==========
@ -1,9 +0,0 @@
===============================
Searchlight Pike Specifications
===============================
.. toctree::
:glob:
:maxdepth: 1
*
@ -1,51 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===================
Nova service plugin
===================
https://blueprints.launchpad.net/searchlight/+spec/nova-service-plugin
This spec proposes a nova service plugin (OS::Nova::Service; note that, as
with the hypervisor plugin, OS::Nova::Service does not exist as a heat
resource type, and it is an admin-only resource type). Versioned
notifications are supported for services in nova [0], so this would be a
nice additional plugin for Searchlight.
Problem Description
===================
A service [1] takes a manager and enables RPC by listening to queues based
on a topic. It also periodically runs tasks on the manager and reports its
state to the database services table. So in a cloud with a large number of
compute nodes there will be a pretty large number of services (typically one
service per compute node, and four services per controller node). Listing or
searching services (you can use the command `nova service-list` to get the
whole list, or filter the result by host or binary) may be slow through the
native nova API.
In addition, the versioned notifications for hypervisors [2] refer to the
service id. In a future implementation of notification handling in the
hypervisor plugin [3] we may want to fetch the service details by service id
on a hypervisor create or update action.
Proposed Change
===============
1. Support index services through nova API.
2. Support versioned notifications.
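Once services are indexed, a lookup equivalent to filtering
`nova service-list` by host and binary could be served by a Searchlight
search such as the following (the resource type name is the one given in
this spec; the exact query DSL shape is illustrative):

```json
{
  "type": "OS::Nova::Service",
  "query": {
    "bool": {
      "filter": [
        {"term": {"host": "compute-01"}},
        {"term": {"binary": "nova-compute"}}
      ]
    }
  }
}
```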
Alternatives
------------
None
References
==========
[0] https://github.com/openstack/nova/blob/master/nova/objects/service.py#L309-L315
[1] http://docs.openstack.org/developer/nova/services.html#the-nova-service-module
[2] https://review.opendev.org/#/c/315312/11/nova/notifications/objects/compute_node.py
[3] https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/notification_handler.py#L107
@ -1,423 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Example Spec - The title of your blueprint
==========================================
Include the URL of your StoryBoard story:
https://storyboard.openstack.org/#!/project/openstack/searchlight-specs
Introduction paragraph -- why are we doing anything? A single paragraph of
prose that operators can understand.
Some notes about using this template:
* Your spec should be in ReSTructured text, like this template.
* Please wrap text at 80 columns.
* Please do not delete any of the sections in this template. If you have
nothing to say for a whole section, just write: None
* For help with syntax, see http://sphinx-doc.org/rest.html
* To test out your formatting, build the docs using tox, or see:
http://rst.ninjs.org
* If you would like to provide a diagram with your spec, text representations
are preferred. http://asciiflow.com/ is a very nice tool to assist with
making ascii diagrams. blockdiag is another tool. These are described below.
If you require an image (screenshot) for your BP, attaching that to the BP
and checking it in is also accepted. However, text representations are preferred.
* Diagram examples
asciiflow::
+----------+ +-----------+ +----------+
| A | | B | | C |
| +-----+ +--------+ |
+----------+ +-----------+ +----------+
blockdiag
.. blockdiag::
blockdiag sample {
a -> b -> c;
}
actdiag
.. actdiag::
actdiag {
write -> convert -> image
lane user {
label = "User"
write [label = "Writing reST"];
image [label = "Get diagram IMAGE"];
}
lane actdiag {
convert [label = "Convert reST to Image"];
}
}
nwdiag
.. nwdiag::
nwdiag {
network dmz {
address = "210.x.x.x/24"
web01 [address = "210.x.x.1"];
web02 [address = "210.x.x.2"];
}
network internal {
address = "172.x.x.x/24";
web01 [address = "172.x.x.1"];
web02 [address = "172.x.x.2"];
db01;
db02;
}
}
seqdiag
.. seqdiag::
seqdiag {
browser -> webserver [label = "GET /index.html"];
browser <-- webserver;
browser -> webserver [label = "POST /blog/comment"];
webserver -> database [label = "INSERT comment"];
webserver <-- database;
browser <-- webserver;
}
Problem description
===================
A detailed description of the problem:
* For a new feature this might be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Alternatives
------------
What other ways could we do this thing? Why aren't we using those? This doesn't
have to be a full literature review, but it should demonstrate that thought has
been put into why the proposed solution is an appropriate one.
Data model impact
-----------------
Changes which require modifications to the data model often have a wider impact
on the system. The community often has strong opinions on how the data model
should be evolved, from both a functional and performance perspective. It is
therefore important to capture and gain agreement as early as possible on any
proposed changes to the data model.
Questions which need to be addressed by this section include:
* What new data objects and/or database schema changes is this going to require?
* What database migrations will accompany this change?
* How will the initial set of new data objects be generated, for example if you
need to take into account existing instances, or modify other existing data
describe how that will work.
REST API impact
---------------
For each API resource to be implemented using Tacker's attribute map
facility (see the tacker.api.v2.attributes), describe the resource
collection and specify the name, type, and other essential details of
each new or modified attribute. A table similar to the following may
be used:
+----------+-------+---------+---------+------------+--------------+
|Attribute |Type |Access |Default |Validation/ |Description |
|Name | | |Value |Conversion | |
+==========+=======+=========+=========+============+==============+
|id |string |RO, all |generated|N/A |identity |
| |(UUID) | | | | |
+----------+-------+---------+---------+------------+--------------+
|name |string |RW, all |'' |string |human-readable|
| | | | | |name |
+----------+-------+---------+---------+------------+--------------+
|color |string |RW, admin|'red' |'red', |color |
| | | | |'yellow', or|indicating |
| | | | |'green' |state |
+----------+-------+---------+---------+------------+--------------+
Here is the other example of the table using csv-table
.. csv-table:: CSVTable
:header: Attribute Name,Type,Access,Default Value,Validation Conversion,Description
id,string (UUID),"RO, all",generated,N/A,identity
name,string,"RW, all","''",string,human-readable name
color,string,"RW, admin",red,"'red', 'yellow' or 'green'",color indicating state
Each API method which is either added or changed that does not use
Tacker's attribute map facility should have the following:
* Specification for the method
* A description of what the method does suitable for use in
user documentation
* Method type (POST/PUT/GET/DELETE)
* Normal http response code(s)
* Expected error http response code(s)
* A description for each possible error code should be included
describing semantic errors which can cause it such as
inconsistent parameters supplied to the method, or when an
instance is not in an appropriate state for the request to
succeed. Errors caused by syntactic problems covered by the JSON
schema definition do not need to be included.
* URL for the resource
* Parameters which can be passed via the url
* JSON schema definition for the body data if allowed
* JSON schema definition for the response data if any
* Example use case including typical API samples for both data supplied
by the caller and the response
* Discuss any API policy changes, and discuss what things a deployer needs to
think about when defining their API policy. This is in reference to the
policy.json file.
Note that the schema should be defined as restrictively as
possible. Parameters which are required should be marked as such and
only under exceptional circumstances should additional parameters
which are not defined in the schema be permitted (e.g.
``additionalProperties`` should be False).
Reuse of existing predefined parameter types such as regexps for
passwords and user defined names is highly encouraged.
Security impact
---------------
Describe any potential security impact on the system. Some of the items to
consider include:
* Does this change touch sensitive data such as tokens, keys, or user data?
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
* Does this change involve cryptography or hashing?
* Does this change require the use of sudo or any elevated privileges?
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
For more detailed guidance, please see the OpenStack Security Guide as
a reference (https://docs.openstack.org/security-guide/index.html). These
guidelines are a work in progress and are designed to help you identify
security best practices. For further information, feel free to reach out
to the OpenStack Security Group at openstack-security@lists.openstack.org.
Notifications impact
--------------------
Please specify any changes to notifications. Be that an extra notification,
changes to an existing notification, or removing a notification.
Other end user impact
---------------------
Aside from the API, are there other ways a user will interact with this feature?
* Does this change have an impact on python-tackerclient? What does the user
interface there look like?
Performance Impact
------------------
Describe any potential performance impact on the system, for example
how often will new code be called, and is there a major change to the calling
pattern of existing code.
Examples of things to consider here include:
* A periodic task might look like a small addition but if it calls conductor or
another service the load is multiplied by the number of nodes in the system.
* A small change in a utility function or a commonly used decorator can have a
large impact on performance.
* Calls which result in database queries (whether direct or via conductor) can
have a profound impact on performance when called in critical sections of the
code.
* Will the change include any locking, and if so what considerations are there on
holding the lock?
Other deployer impact
---------------------
Discuss things that will affect how you deploy and configure OpenStack
that have not already been mentioned, such as:
* What config options are being added? Should they be more generic than
proposed (for example a flag that other hypervisor drivers might want to
implement as well)? Are the default values ones which will work well in
real deployments?
* Is this a change that takes immediate effect after it's merged, or is it
something that has to be explicitly enabled?
* If this change is a new binary, how would it be deployed?
* Please state anything that those doing continuous deployment, or those
upgrading from the previous release, need to be aware of. Also describe
any plans to deprecate configuration values or features. For example, if we
change the directory name that instances are stored in, how do we handle
instance directories created before the change landed? Do we move them? Do
we have a special case in the code? Do we assume that the operator will
recreate all the instances in their cloud?
Developer impact
----------------
Discuss things that will affect other developers working on OpenStack,
such as:
* If the blueprint proposes a change to the API, discussion of how other
plugins would implement the feature is required.
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Other contributors:
<launchpad-id or None>
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Apart from the main functionality, the developer needs to deliver the following
work items as part of any major blueprint:
* Unit Tests
* Functional Tests
* Feature documentation in doc/source/devref/feature
NOTE: This feature documentation should be treated like code. Preferably
this should be included in the primary patchset itself and not as an
afterthought. This is also mandatory for a blueprint to be marked complete.
Dependencies
============
* Include specific references to specs and/or blueprints in tacker, or in other
projects, that this one either depends on or is related to.
* If this requires functionality of another project that is not currently used
by Tacker (such as the glance v2 API when we previously only required v1),
document that fact.
* Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of library?
Testing
=======
Please discuss how the change will be tested. We especially want to know what
tempest tests will be added. It is assumed that unit test coverage will be
added so that doesn't need to be mentioned explicitly, but discussion of why
you think unit tests are sufficient and we don't need to add more tempest
tests would need to be included.
Is this untestable in gate given current limitations (specific hardware /
software configurations available)? If so, are there mitigation plans (3rd
party testing, gate enhancements, etc).
Documentation Impact
====================
What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable. Examples of what you could include are:
* Links to mailing list or IRC discussions
* Links to notes from a summit session
* Links to relevant research, if appropriate
* Related specifications as appropriate (e.g. link any vendor documentation)
* Anything else you feel it is worthwhile to refer to
@ -1,9 +0,0 @@
================================
Searchlight Train Specifications
================================
.. toctree::
:glob:
:maxdepth: 1
*
@ -1,110 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================
Tacker Plugin for Searchlight
=============================
https://storyboard.openstack.org/#!/story/2004968
This spec is proposed to support indexing Tacker resource information into
Elasticsearch.
Problem Description
===================
Tacker is software that uses OpenStack components to provide NFV
orchestration. While it leverages the OpenStack infrastructure to realize its
elements (e.g., virtual machines as VNFs), Tacker keeps its own copy of the
resource definitions in a separate database, which can only be accessed
through the Tacker APIs. It would therefore be beneficial to index Tacker
resource information and events into Searchlight to provide a universal
search interface for users.
Proposed Change
===============
The Tacker plugin will support indexing Tacker resources via the Tacker API.
The plugin will use python-tackerclient to communicate with the Tacker
server and query its resource information, then index that information into
the Elasticsearch database. The Tacker plugin also lets the Searchlight
listener pick up any change to those resources and update the corresponding
data in Elasticsearch.
The following figure describes the overall architecture of the proposed
plugin:
::
+------------------------------------------------+
| |
| Tacker |
| |
+---------^--------------+-----------------------+
| |
| +-----------v------------+
| Oslo Messaging |
| +-----------^------------+
| |
+---------|--------------|-----------------------+
| +-------|--------------v---------------------+ |
| | +----v---------------------------------+ | |
| | | Tacker Client | | |
| | +--------------------------------------+ | |
| | Tacker Plugin | |
| +----------------------+---------------------+ |
| | |
| +----------------------v---------------------+ |
| | ElasticSearch | |
| +--------------------------------------------+ |
| Searchlight |
+------------------------------------------------+
The following Tacker resource information will be indexed:
* Network Services (NS)
* Virtual Infrastructure Managers (VIM)
* Virtual Network Functions (VNF)
* Virtual Network Function Forwarding Graphs (VNFFG)
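The initial indexing pass over these resource types could be sketched as
below. The list-method names follow python-tackerclient conventions and the
resource type strings are assumptions (the spec does not fix them); the
client is injected so the logic stays testable:

```python
def index_tacker_resources(client, index_doc):
    """Fetch each supported Tacker resource list and index its documents."""
    listers = [
        # (assumed resource type name, fetch function)
        ('OS::Tacker::NS', lambda: client.list_nss()['nss']),
        ('OS::Tacker::VIM', lambda: client.list_vims()['vims']),
        ('OS::Tacker::VNF', lambda: client.list_vnfs()['vnfs']),
        ('OS::Tacker::VNFFG', lambda: client.list_vnffgs()['vnffgs']),
    ]
    for resource_type, lister in listers:
        for resource in lister():
            index_doc(resource_type, resource)
```

Incremental updates would then be driven by the listener rather than by
re-running this full pass.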
Alternatives
------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Trinh Nguyen <dangtrinhnt@gmail.com>
Work Items
----------
1. Create a Tacker plugin for Searchlight to index resource information.
2. Add unit & functional tests.
3. Add user guides.
References
==========
* https://docs.openstack.org/tacker/latest/
* https://docs.openstack.org/python-tackerclient/latest/
* https://docs.openstack.org/oslo.messaging/latest/
* https://www.elastic.co
@ -1,12 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0
sphinxcontrib-actdiag>=0.8.5 # BSD
sphinxcontrib-blockdiag>=1.5.5 # BSD
sphinxcontrib-nwdiag>=0.9.5 # BSD
sphinxcontrib-seqdiag>=0.8.5 # BSD
flake8
doc8>=0.6.0 # Apache-2.0
Pillow>=2.4.0
sphinx>=2.0.0,!=2.1.0 # BSD
tox.ini
@ -1,43 +0,0 @@
[tox]
minversion = 3.1.1
envlist = pep8,docs
skipsdist = True
ignore_basepython_conflict = True
[testenv]
basepython = python3
usedevelop = True
setenv = VIRTUAL_ENV={envdir}
deps =
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/doc/requirements.txt
[testenv:venv]
commands = {posargs}
[testenv:docs]
deps = -r{toxinidir}/doc/requirements.txt
whitelist_externals = rm
commands =
rm -fr doc/build
sphinx-build -W --keep-going -b html -d doc/build/doctrees doc/source doc/build/html
[testenv:pdf-docs]
deps = -r{toxinidir}/doc/requirements.txt
envdir = {toxworkdir}/docs
whitelist_externals =
make
commands =
sphinx-build -W --keep-going -b latex doc/source doc/build/pdf
make -C doc/build/pdf
[testenv:pep8]
commands =
flake8
doc8 --ignore D001 specs/ doc/source README.rst
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build