Open specs for Stein release
Tidy Rocky specs; open specs for Stein. Change-Id: Icfb0a50b45a8e6c1a071d465fe086a447e7ee296
parent: 9beaeab888  commit: ba223808d3
@ -14,6 +14,7 @@ on for the upcoming release. This is the output of those discussions:
   :glob:
   :maxdepth: 1

   priorities/stein-priorities
   priorities/rocky-priorities
   priorities/queens-priorities
   priorities/pike-priorities
@ -29,6 +30,7 @@ Here you can find the specs, and spec template, for each release:
   :glob:
   :maxdepth: 1

   specs/stein/index
   specs/rocky/index
   specs/queens/index
   specs/pike/index
doc/source/specs/rocky/implemented (symbolic link, 1 line)
@ -0,0 +1 @@
../../../../specs/rocky/implemented
doc/source/specs/stein/approved (symbolic link, 1 line)
@ -0,0 +1 @@
../../../../specs/stein/approved
doc/source/specs/stein/backlog (symbolic link, 1 line)
@ -0,0 +1 @@
../../../../specs/stein/backlog
doc/source/specs/stein/implemented (symbolic link, 1 line)
@ -0,0 +1 @@
../../../../specs/stein/implemented
doc/source/specs/stein/index.rst (new file, 34 lines)
@ -0,0 +1,34 @@
===========================
Charm Stein Specifications
===========================

Template:

.. toctree::
   :maxdepth: 1

   Specification Template (Stein release) <template>

Stein implemented specs:

.. toctree::
   :glob:
   :maxdepth: 1

   implemented/*

Stein approved (but not implemented) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   approved/*

Stein backlog (carried over from previous cycle) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   backlog/*
doc/source/specs/stein/redirects (symbolic link, 1 line)
@ -0,0 +1 @@
../../../../specs/stein/redirects
@ -1,5 +1,5 @@
|
||||
..
|
||||
Copyright 2016, Canonical UK
|
||||
Copyright 2017 Canonical LTD
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
@ -12,82 +12,104 @@
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
===========================
|
||||
Keystone Federation Support
|
||||
===========================
|
||||
===================
|
||||
Keystone Federation
|
||||
===================
|
||||
|
||||
Keystone Federation is a maturing feature and charm support for it is
|
||||
frequently requested.
|
||||
Keystone can be configured to integrate with a number of different identity
|
||||
providers in a number of different configurations. This spec attempts to
|
||||
discuss how to implement pluggable backends that utilise Keystone
|
||||
federations.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
Single identity across a cloud with multiple, geographically disparate
regions is complex for operators; Keystone federation provides the
ability for multiple clouds to federate identity trust between
regions, supporting single identity either via keystone or via a 3rd
party identity provider. The use of Keystone Federation will help us
build bigger, more manageable clouds across geographies.
|
||||
When deploying the OpenStack charms to an enterprise customer they are likely
to want to integrate OpenStack authentication with an existing identity
provider like AD, LDAP, Kerberos etc. There are two main ways that Keystone
appears to achieve this integration: backends and federation. Although this
spec is concerned with federation it is useful to go over backends in an
attempt to distinguish the two.
|
||||
|
||||
The following are not covered here:
|
||||
|
||||
- Integration with Kerberos
|
||||
- Horizon SSO
|
||||
- Federated LDAP via SSSD and mod_lookup_identity
|
||||
- Keystone acting as an IdP in a federated environment.
|
||||
|
||||
Backends
|
||||
--------
|
||||
|
||||
When keystone uses a backend it is Keystone itself which knows how to manage
that backend, how to talk to it and how to deal with operations on it. This
limits the number of backends that keystone can support as each new backend
needs new logic in Keystone itself. This approach also has negative security
implications. Keystone may need an account with the backend (an LDAP username
and password) to perform lookups; these account details will be in clear text
in the keystone.conf. In addition, all users' passwords will flow through
keystone.
|
||||
|
||||
The keystone project highlights SQL and LDAP (inc AD) as their supported
|
||||
backends. The status of support for these is as follows:
|
||||
|
||||
- SQL: Currently supported by the keystone charm.
|
||||
- LDAP: Currently supported by the keystone and keystone-ldap subordinate.
|
||||
|
||||
These backends are supported with the Keystone v2 API if they are used
exclusively. To support multiple backends the Keystone v3 API needs to be used
and each backend is associated with a particular Keystone domain. This allows
for service users to be in SQL but users to be in LDAP, for example.
|
||||
|
||||
Adding a new backend is achieved by writing a keystone subordinate charm and
|
||||
relating it to keystone via the keystone-domain-backend interface.
|
||||
|
||||
Enabling a backend tends to be achieved via the keystone.conf
|
||||
|
||||
Federation
|
||||
----------
|
||||
|
||||
With federation Keystone trusts a remote identity provider. Keystone
communicates with that provider using a protocol like SAML or OpenID Connect.
Keystone relies on a local Apache to manage communication and Apache passes
back to keystone environment variables like REMOTE_USER. Keystone is abstracted
from the implementation details of talking to the identity provider and never
sees the user's password. When using federation, LDAP may still be the ultimate
backend but it is fronted by something providing SAML/OpenID connectivity like
AD Federation Services or Shibboleth.
|
||||
|
||||
Each Identity provider must be associated with a different domain within
|
||||
keystone. The keystone v3 API is needed to support federation.
|
||||
|
||||
Compatible Identity Providers
(see https://docs.openstack.org/ocata/config-reference/identity/federated-identity.html#supporting-keystone-as-a-sp):
|
||||
|
||||
- OpenID
|
||||
- SAML
|
||||
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
The design of this solution is predicated on the use of the Keystone
|
||||
federation features introduced in OpenStack Kilo; these allow Keystone
|
||||
to delegate authentication of users to a different identity provider
|
||||
(IDP) which might be keystone, but could also be a solution implementing
|
||||
one of the methods used for expressing assertions of identity (saml2 or
|
||||
OpenID).
|
||||
Both Keystone backends and federated backends may need to add config to the
|
||||
keystone.conf and/or the Apache WSGI vhost. As such, it makes sense for both
|
||||
types to share the existing interface particularly as the existing interface
|
||||
is called keystone-domain-backend which does not differentiate between the
|
||||
two.
|
||||
|
||||
If the IDP is keystone, then this is the current straw-man
design:
|
||||
|
||||
- A single ‘global’ keystone is set-up as the IDP for the cloud; this
|
||||
service provides authentication for users and the global service
|
||||
catalog for all cloud regions.
|
||||
- Region level keystone instances delegate authentication to the global
|
||||
keystone, but also maintain a region level service catalog of endpoints
|
||||
for local use
|
||||
|
||||
An end-user accesses the cloud via the entry point of the global keystone;
at this point the end-user will be redirected to the region level services
based on which region they wish to manage resources within.
|
||||
|
||||
In terms of charm design, the existing registration approach for services
|
||||
in keystone is still maintained, but each keystone deployment will also
|
||||
register its service catalog entries into the global keystone catalog.
|
||||
|
||||
The keystone charm will need updating to enable a) operation under apache
(standalone to be removed in Mitaka) and b) enablement of required federation
components. There will also be an impact on the openstack-dashboard charm to
enable use of this feature.
|
||||
|
||||
There is also a wider charm impact in that we need to re-base onto the
|
||||
keystone v3 api across the board to support this type of feature.
|
||||
|
||||
The packages to support identity federation will also need to be selected
|
||||
and undergo MIR into Ubuntu main this cycle; various options exist:
|
||||
|
||||
- SAML: Keystone supports the following implementations:
  Shibboleth - see Setup Shibboleth.
  Mellon - see Setup Mellon.
- OpenID Connect: see Setup OpenID Connect.
|
||||
|
||||
The Keystone Federation feature should support:
|
||||
|
||||
- Federation between two keystone services in the same model
|
||||
- Federation between two keystone services in different models
|
||||
- Federation between keystone and an identity provider not managed by
|
||||
Juju
|
||||
This spec covers changes to add support for federation, using either SAML or
OpenID, to the keystone charm. This will involve extending the
keystone-domain-backend interface to support passing configuration snippets to
the Apache vhosts and creating subordinate charms which implement OpenID and
SAML.
|
||||
|
||||
Alternatives
|
||||
------------
|
||||
|
||||
Identities can be kept in sync between keystone instances using database
replication. The keystone charm also supports using LDAP as a backend;
keystone charms in different models could share the same LDAP backend if
their service users are stored locally.
|
||||
- Add support for federation via OpenID and SAML directly to the keystone
|
||||
charm.
|
||||
- Create a new interface for federation via OpenID and SAML
|
||||
|
||||
Implementation
|
||||
==============
|
||||
@ -96,87 +118,71 @@ Assignee(s)
|
||||
-----------
|
||||
|
||||
Primary assignee:
|
||||
gnuoy
|
||||
None
|
||||
|
||||
|
||||
Gerrit Topic
|
||||
------------
|
||||
|
||||
Use Gerrit topic "keystone-federation" for all patches related to this
|
||||
spec:
|
||||
Use Gerrit topic "keystone_federation" for all patches related to this spec.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git-review -t keystone_federation
|
||||
git-review -t <keystone_federation>
|
||||
|
||||
Work Items
|
||||
----------
|
||||
|
||||
Keystone Investigative Work
|
||||
+++++++++++++++++++++++++++
|
||||
**Create deployment scripts to create test env for OpenID or SAML integration**
|
||||
|
||||
- Deploy multiple juju environments and define one IDP keystone and the
|
||||
rest as SPs. Configuration will be manually applied to units
|
||||
- Test OpenID integration with keystone. Configuration will be manually
|
||||
applied to units
|
||||
**Extend keystone-domain-backend**
|
||||
|
||||
Keystone v3 endpoint enablement
|
||||
+++++++++++++++++++++++++++++++
|
||||
The keystone-domain-backend interface will need to provide the following
(a sketch of the data a subordinate might publish follows this list):

- Define inter-charm protocol for agreeing keystone api version
- Enable Keystone v3 in keystone charm
- Enable Keystone v3 in client charms
- Update OpenStack charm testing configuration scripts to talk keystone
  v3
- Create Mojo spec for v3 deploy
- Modules for Apache to enable
- Configuration for principal to insert into Apache keystone wsgi vhosts
- Subordinate triggered restart of Apache
- Auth method(s) to be added to keystone's [auth] methods list
- Configuration for principal to insert into keystone.conf
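
A minimal sketch, assuming the existing charm-helpers hookenv API, of the kind
of data a federation subordinate could publish over an extended
keystone-domain-backend relation; all key names and example values here are
illustrative assumptions, not a finalised protocol:

.. code-block:: python

   # Sketch only: key names and example values are assumptions.
   from charmhelpers.core.hookenv import relation_ids, relation_set


   def publish_federation_config():
       settings = {
           'domain-name': 'federated_users',
           # method(s) the principal appends to keystone's [auth] methods list
           'auth-methods': 'openid,mapped',
           # Apache modules the principal must enable
           'apache-modules': 'auth_openidc',
           # snippet inserted into the keystone wsgi vhost
           'apache-vhost-snippet': '<Location /v3/auth/OS-FEDERATION>...</Location>',
           # snippet inserted into keystone.conf
           'keystone-conf-snippet': '[openid]\nremote_id_attribute = HTTP_OIDC_ISS',
           'restart-services': 'apache2',
       }
       for rid in relation_ids('domain-backend'):
           relation_set(relation_id=rid, relation_settings=settings)

The principal would watch for changes to these keys, render the snippets and
restart Apache and keystone as required.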
|
||||
|
||||
**Configure Keystone to consume new interface**
|
||||
|
||||
Keystone to keystone Federation enablement
|
||||
++++++++++++++++++++++++++++++++++++++++++
|
||||
The keystone charm will need to be updated to respond to events outlined in the
interface description above.
|
||||
|
||||
- Switch keystone to use apache for all use cases on deployments >= Liberty
|
||||
- Enable keystone to keystone SP/IDP relation using a config option in the
|
||||
charm to define the IDP endpoint (in lieu of cross environment relations)
|
||||
- Mojo spec to deploy two regions and test federated access
|
||||
**New keystone-openid and keystone-saml subordinates**
|
||||
|
||||
Keystone to OpenID 3rd party enablement
|
||||
+++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
- Backport libapache2-mod-auth-openidc to trusty cloud archive
|
||||
- Expose OpenID configuration options to keystone charm, and update keystone
|
||||
apache accordingly.
|
||||
- Create bundle for deploying a single region using UbuntuONE for
|
||||
authentication.
|
||||
- Mojo spec for multi-region UbuntuONE backed deployment
|
||||
|
||||
Keystone to SAML 3rd party enablement
|
||||
+++++++++++++++++++++++++++++++++++++
|
||||
|
||||
- Expose SAML configuration options to keystone charm, and update keystone
|
||||
apache accordingly.
|
||||
- Create bundle for deploying a single region using SAML for authentication.
|
||||
- Mojo spec for multi-region SAML backed deployment
|
||||
The new subordinates will need to expose all the configuration options needed
for connecting to the identity provider. They will then need to use the
interface to pass any required config for Apache or Keystone up to the
keystone principal.
|
||||
|
||||
Repositories
|
||||
------------
|
||||
|
||||
No new repositories
|
||||
New projects for the interface and new subordinates will be needed.
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
The Keystone charm README will be updated with instructions for enabling
federation. A blog post is optional but would be a useful addition.
|
||||
This will require documentation in the READMEs of both the subordinates and
|
||||
the keystone charm. A blog walking through the deployment and integration
|
||||
would be very useful.
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
Security review may be required.
|
||||
Although a Keystone back-end will determine who has access to the entire
OpenStack deployment, this specific charm will only change Keystone and Apache
parameters; avoiding default values and leaving the configuration to the user
should be enough.
|
||||
|
||||
Testing
|
||||
-------
|
||||
|
||||
Code changes will be covered by unit tests; functional testing will be done
|
||||
using a combination of Amulet, Bundle tester and Mojo specification.
|
||||
The code must be covered by unit tests. Ideally Amulet tests would be extended
to cover this new functionality but deploying a functional OpenID server for
keystone to use may not be practical. It must be covered by a Mojo spec though.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
@ -33,8 +33,8 @@ symmetric encryption keys, or fernet keys.
|
||||
.. _Fernet: https://github.com/fernet/spec
|
||||
.. _MessagePacked: http://msgpack.org/
|
||||
|
||||
Task Description
|
||||
================
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
Keystone has had support for Fernet tokens since Kilo, so all services support
|
||||
fernet token authorization. In the upcoming Rocky release the sql token driver
|
||||
@ -192,7 +192,7 @@ immediate; indeed, it could be just before the next key rotation in the worst
|
||||
case, although, this is extremely unlikely to be the case.
|
||||
|
||||
Alternatives
|
||||
============
|
||||
------------
|
||||
|
||||
In the OpenStack Rocky release, *fernet* is the only token provider available.
Therefore, there is no alternative.
|
specs/stein/approved/cells.rst (new file, 343 lines)
@ -0,0 +1,343 @@
|
||||
..
|
||||
Copyright 2018 Canonical UK Ltd
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
http://creativecommons.org/licenses/by/3.0/legalcode
|
||||
|
||||
..
|
||||
This template should be in ReSTructured text. Please do not delete
|
||||
any of the sections in this template. If you have nothing to say
|
||||
for a whole section, just write: "None". For help with syntax, see
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
========
|
||||
Cells V2
|
||||
========
|
||||
|
||||
Nova cells v2 has been introduced over the Ocata and Pike cycles. In fact, all
Pike deployments are now deployments using nova cells v2, usually with a single
compute cell.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
Nova cells v2 allows for a group of compute nodes to have their own dedicated
|
||||
database, message queue and conductor while still being administered through
|
||||
a central API service. This has the following benefits:
|
||||
|
||||
Reduced pressure on Rabbit and MySQL in large deployments
|
||||
---------------------------------------------------------
|
||||
|
||||
In even moderately sized clouds the database and message broker can quickly
become a bottleneck. Cells can be used to alleviate that pressure by having a
database and message queue per cell of compute nodes. It is worth noting that
the charms already support having traffic for neutron etc. in a separate rabbit
instance.
|
||||
|
||||
Create multiple failure domains
|
||||
-------------------------------
|
||||
|
||||
Grouping compute cells with their local services allows the creation of
|
||||
discrete failure domains (from a nova POV at least).
|
||||
|
||||
Remote Compute cells (Edge computing)
|
||||
-------------------------------------
|
||||
|
||||
In some deployments a group of compute nodes may be far removed (from a
networking point of view) from the central services. In this case it may be
useful to have the compute nodes act as a largely independent group.
|
||||
|
||||
Different SLAs per cell
|
||||
-----------------------
|
||||
|
||||
Different groups of compute nodes can have different levels of performance,
HA, etc. A development cell could have no local HA for the database, message
queue and conductor, while the production cell could have significantly
higher-specification servers running clustered services.
|
||||
|
||||
(These use cases were paraphrased from `*4 <https://www.openstack.org/videos/sydney-2017/adding-cellsv2-to-your-existing-nova-deployment>`_.)
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
To facilitate a cells v2 deployment a few relatively simple interfaces and
|
||||
relations need to be added. From a nova perspective the topology looks like `this <https://docs.openstack.org/nova/latest/_images/graphviz-d1099235724e647ca447c7bd6bf703c607ddf68f.png>`_.
|
||||
This spec proposes mapping that to this `charm topology <https://docs.google.com/drawings/d/1v5f8ow0aCGrKRIpg3uXsv2zolWsz3mGVGzLnbgUQpKQ/>`_.
|
||||
|
||||
Superconductor access to Child cells
|
||||
------------------------------------
|
||||
|
||||
The superconductor needs to be able to query the databases of the compute cells
|
||||
and to send and receive messages on the compute_cells message bus. The
|
||||
cleanest way to model this would be to have a direct Juju relation between the
|
||||
superconductor and the compute cells database and message bus. To facilitate
|
||||
this the following relations will be added to the nova-cloud-controller charm:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
   requires:
     shared-db-cell:
       interface: mysql-shared
     amqp-cell:
       interface: rabbitmq
|
||||
|
||||
|
||||
Superconductor configuring child cells
|
||||
--------------------------------------
|
||||
|
||||
With the above change the superconductor has access to the child db and mq but
|
||||
does not know which compute cell name to associate with them. To solve this the
|
||||
nova-cloud-controller charm will have the following new relations:
|
||||
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
   provides:
     nova-cell-api:
       interface: cell
   requires:
     nova-cell:
       interface: cell
|
||||
|
||||
The new cell relation will be used to pass the cell name, db service name and
|
||||
message queue service name.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
   {
       'amqp-service': 'rabbitmq-server-cell1',
       'db-service': 'mysql-cell1',
       'cell-name': 'cell1',
   }
|
||||
|
||||
Given this information the superconductor can examine the service names that
|
||||
are attached to its shared-db-cell and amqp-cell relations and construct
|
||||
urls for them. The superconductor is then able to create the cell mapping in
|
||||
the api database by running:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
nova-manage cell_v2 create_cell \
|
||||
--name <cell_name> \
|
||||
--transport-url <transport_url> \
|
||||
--database_connection <database_connection>
|
||||
|
||||
The superconductor needs five relations to be in place and their corresponding
contexts to be complete before the cell can be mapped. Given that the
nova-cloud-controller is a non-reactive charm, special care will be needed to
ensure that the cell mapping happens irrespective of the order in which those
relations are completed; a rough sketch of such a guard follows.
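
A minimal sketch of the "register only when everything is complete" guard,
assuming the relation data has already been collected into plain dicts; the
key names and URL formats are illustrative assumptions, and a real
implementation would also need to check ``nova-manage cell_v2 list_cells``
first to stay idempotent:

.. code-block:: python

   # Sketch only: context key names and URL formats are assumptions.
   import subprocess


   def register_cell(cell_name, amqp_ctxt, db_ctxt):
       """Create the cell mapping once both contexts are complete."""
       required = ('rabbitmq_host', 'rabbitmq_user', 'rabbitmq_password',
                   'database_host', 'database_user', 'database_password')
       ctxt = dict(amqp_ctxt, **db_ctxt)
       if not all(ctxt.get(key) for key in required):
           return False  # incomplete; a later hook execution will retry
       transport_url = ('rabbit://{rabbitmq_user}:{rabbitmq_password}@'
                        '{rabbitmq_host}:5672'.format(**ctxt))
       db_connection = ('mysql+pymysql://{database_user}:{database_password}@'
                        '{database_host}/nova_{cell}'.format(cell=cell_name, **ctxt))
       subprocess.check_call([
           'nova-manage', 'cell_v2', 'create_cell',
           '--name', cell_name,
           '--transport-url', transport_url,
           '--database_connection', db_connection])
       return True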
|
||||
|
||||
Compute conductor no longer registering with keystone
|
||||
-----------------------------------------------------
|
||||
|
||||
The compute conductor does not need to register an endpoint with keystone nor
does it need service credentials. As such the identity-service relation should
not be used for compute cells. A guard should be put in place in the
nova-cloud-controller charm to prevent a compute cell's nova-cloud-controller
from registering an incorrect endpoint in keystone.
|
||||
|
||||
Compute conductor cell name config option
|
||||
-----------------------------------------
|
||||
|
||||
The compute conductor needs to know its own cell name so that it can pass this
|
||||
information up to the superconductor. To allow this a new configuration option
|
||||
will be added to the nova-compute charm:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
   options:
     cell-name:
       type: string
       default:
       description: |
         Name of the compute cell this controller is associated with. If this is
         left unset or set to api then it is assumed that this controller will
         be the top level api and cell0 controller.
|
||||
|
||||
Leaving the cell name unset assumes the current behaviour of associating the
|
||||
nova-cloud-controller with the api service, cell0 and cell1.
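
A minimal sketch, assuming charm-helpers' ``config()`` accessor, of how the
charm might branch on this option (the helper name is hypothetical):

.. code-block:: python

   # Sketch only: helper name is illustrative.
   from charmhelpers.core.hookenv import config


   def is_cell_conductor():
       """True when this nova-cloud-controller serves a compute cell only."""
       cell_name = config('cell-name')
       return bool(cell_name) and cell_name != 'api'

Keystone registration, api database rendering and cell0 setup would then be
skipped whenever this returns True.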
|
||||
|
||||
nova-compute service credentials
|
||||
--------------------------------
|
||||
|
||||
The nova-compute charm needs service credentials for calls to the Nova
Placement API and the Neutron API service. It currently gets these credentials
via its cloud-compute relation, which is ugly at best. However, given that the
compute cell's nova-cloud-controller will no longer have a relation with
keystone, it will not have any credentials to pass on to nova-compute. This is
overcome by adding a cloud-credentials relation to the nova-compute charm.
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
   requires:
     cloud-credentials:
       interface: keystone-credentials
|
||||
|
||||
nova-compute will request a username based on its service name so that users
|
||||
for different cells can be distinguished from one another.
|
||||
|
||||
Bespoke vhosts and db names
|
||||
---------------------------
|
||||
|
||||
The ability to specify a nova db name and a rabbitmq vhost name should either
be removed from the nova-cloud-controller charm or the new cell interface needs
to support passing those up to the superconductor so that the superconductor
can request access to the correct resources from the compute cell's database
and message queue.
|
||||
|
||||
Disabling unused services
|
||||
-------------------------
|
||||
|
||||
The compute cell's nova-cloud-controller only needs to run the conductor
service and possibly the console services. Unused services should be disabled
by the charm, for example as sketched below.
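
A sketch, assuming charm-helpers' ``service_pause`` helper; the exact list of
services to disable would need confirming per release:

.. code-block:: python

   # Sketch only: the service list is an assumption.
   from charmhelpers.core.host import service_pause

   UNUSED_IN_CELL = ['nova-api-os-compute', 'nova-scheduler']


   def disable_unused_services():
       for svc in UNUSED_IN_CELL:
           # Stops the service and prevents it starting on boot.
           service_pause(svc)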
|
||||
|
||||
New cell conductor charm?
|
||||
-------------------------
|
||||
|
||||
The nova-cloud-controller in a compute cell only runs a small subset of the
nova services and does not require a lot of the complexity that is baked
into the current nova-cloud-controller charm. This begs the question of whether
a new cut-down reactive charm that just runs the conductor would make sense.
Most of the changes outlined above actually impact the superconductor rather
than the compute conductor. However, looking at this the other way around, the
changes needed to allow the nova-cloud-controller charm to act as a child
conductor are actually quite small and so probably do not warrant the creation
of a new charm. It is probably worth noting some historical context here too:
every time the decision has been made to create a charm which can operate in
multiple modes, that decision has been reversed at some cost at a later date
(ceph being a prime example).
|
||||
|
||||
Taking all that into consideration a new charm will not be written and the
|
||||
existing nova-cloud-controller charm will be extended to add support for
|
||||
running as a compute conductor.
|
||||
|
||||
Message Queues
|
||||
--------------
|
||||
|
||||
There is flexibility around which message queue the non-nova services use. A
|
||||
dedicated rabbit instance could be created for them or they could reuse the
|
||||
rabbit instance the nova api service is using.
|
||||
|
||||
Telemetry etc
|
||||
--------------
|
||||
|
||||
This spec does not touch on integration with telemetry. However, this does
|
||||
require further investigation to ensure that message data can be collected.
|
||||
|
||||
Juju service names
|
||||
------------------
|
||||
|
||||
It will be useful, but not required, to embed the cell name in the service name
of each component that is cell specific. For example, deploying services for
cellN may look like this:
|
||||
|
||||
|
||||
.. code-block:: bash

   juju deploy nova-compute nova-compute-cellN
   juju deploy nova-cloud-controller nova-cloud-controller-cellN
   juju deploy mysql mysql-cellN
   juju deploy rabbitmq-server rabbitmq-server-cellN
|
||||
|
||||
|
||||
Alternatives
|
||||
------------
|
||||
|
||||
* Do nothing and do not support additional nova v2 cells.
|
||||
* Resurrect support for the deprecated and bug ridden cells v1
|
||||
|
||||
Implementation
|
||||
==============
|
||||
|
||||
Assignee(s)
|
||||
-----------
|
||||
|
||||
Primary assignee:
|
||||
Unknown
|
||||
|
||||
Gerrit Topic
|
||||
------------
|
||||
|
||||
Use Gerrit topic "cellsv2" for all patches related to this spec.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git-review -t cellsv2
|
||||
|
||||
Existing Work
|
||||
-------------
|
||||
|
||||
As part of writing the spec prototype charms and a bundle were created
|
||||
for reference: `Bundle <https://gist.github.com/gnuoy/9ede4e9d426ea56951c664569e7ad957>`_
|
||||
and `charm diffs <https://gist.github.com/gnuoy/aff86d0ad616a890ba731a3cb7deef51>`_
|
||||
|
||||
Work Items
|
||||
----------
|
||||
|
||||
* Remove support for cells v1 from nova-compute and nova-cloud-controller
|
||||
charms
|
||||
* Add identity-context relation to nova-compute and ensure the supplied
|
||||
credentials are used when rendering placement and keystone sections in
|
||||
nova.conf
|
||||
* Add shared-db-cell relation to nova-cloud-controller assuming 'nova'
|
||||
database name when requesting access.
|
||||
* Add amqp-cell relations to nova-cloud-controller assuming 'openstack' vhost
|
||||
name when requesting access.
|
||||
* Add code for registering a cell to nova-cloud-controller. This will use the
|
||||
AMQ and SharedDB contexts from the shared-db-cell and amqp-cell relation
|
||||
to create the cell mapping.
|
||||
* Update nova.conf templates in nova-cloud-controller to only render api db
|
||||
url if the nova-cloud-controller is a superconductor.
|
||||
* Update db initialisation code to only run the relevant cell migration if not
|
||||
a superconductor.
|
||||
* Add nova-cell and nova-cell-api relations and ensure that the shared-db, amqp,
shared-db-cell, amqp-cell and nova-api-cell relations all attempt to register
compute cells.
|
||||
* Write bundles to use cells topology
|
||||
* Check integration with other services (designate and telemetry in particular)
|
||||
|
||||
Repositories
|
||||
------------
|
||||
|
||||
No new repositories needed.
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
* READMEs of nova-cloud-controller and nova-compute will need updating to
|
||||
explain new relations and config options.
|
||||
* Blog with deployment walkthrough and explanation.
|
||||
* Update Openstack Charm documentation to explain how to do a multi-cell
|
||||
deployment
|
||||
* Add bundle to charm store.
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
No new security risks that I am aware of
|
||||
|
||||
Testing
|
||||
-------
|
||||
|
||||
* A multi-cell topology is probably beyond the scope of amulet tests
|
||||
* Bundles added to openstack-charm-testing
|
||||
* Mojo specs
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
None that I can think of
|
||||
|
||||
Credits
|
||||
-------
|
||||
|
||||
Much of the benefit of cells etc was lifted from \*4
|
||||
|
||||
\*1 https://docs.openstack.org/nova/pike/cli/nova-manage.html
|
||||
\*2 https://docs.openstack.org/nova/latest/user/cellsv2-layout.html
|
||||
\*3 https://bugs.launchpad.net/nova/+bug/1742421
|
||||
\*4 https://www.openstack.org/videos/sydney-2017/adding-cellsv2-to-your-existing-nova-deployment
|
specs/stein/backlog/ceph-storage-action.rst (new file, 124 lines)
@ -0,0 +1,124 @@
|
||||
..
|
||||
Copyright 2016, Canonical UK
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
http://creativecommons.org/licenses/by/3.0/legalcode
|
||||
|
||||
..
|
||||
This template should be in ReSTructured text. Please do not delete
|
||||
any of the sections in this template. If you have nothing to say
|
||||
for a whole section, just write: "None". For help with syntax, see
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
===============================
|
||||
Ceph Storage Action
|
||||
===============================
|
||||
|
||||
We should allow the user to specify the class of storage that a given
|
||||
storage device should be added to. This action would allow adding a list
|
||||
of osd-devices into specified Ceph buckets.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
All osd-devices are currently added to the same default bucket making it
|
||||
impossible to use multiple types of storage effectively. For example, users
|
||||
may wish to bind SSD/NVMe devices into a fast/cache bucket, 15k spindles into
|
||||
a default bucket, and 5k low power spindles into a slow bucket for later
|
||||
use in pool configuration.
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
Add an action that includes additional metadata into the osd-devices list
|
||||
allowing for specification of bucket types for the listed devices. These OSDs
|
||||
would be added to the specified bucket when the OSD is created.
|
||||
|
||||
Alternatives
|
||||
------------
|
||||
|
||||
- User could specify a pre-built usage profile and we could map device types
  onto that profile by detecting the device type and deciding, based on the
  configured profile, what bucket the device should go into.

  This solution is being discarded because of the difficulty of designing
  and implementing usage profiles to match any use case automagically. It will
  also require a lot of work to correctly identify the device type and decide
  where to place it in the desired profile.
|
||||
|
||||
In addition, it would require that we only support a single "profile" within
|
||||
a deployed ceph-osd cluster.
|
||||
|
||||
- Charm could define additional storage attach points in addition to
|
||||
osd-devices that would allow the user to specify what bucket they
|
||||
should add devices to.
|
||||
|
||||
The reason for discarding this solution is that it was determined to be too
|
||||
limiting because of only supporting a fixed number of bindings, in addition
|
||||
to making changes harder because of backwards compatibility requirements.
|
||||
|
||||
Implementation
|
||||
==============
|
||||
|
||||
Assignee(s)
|
||||
-----------
|
||||
|
||||
Primary assignee:
|
||||
Chris MacNaughton <chris.macnaughton@canonical.com>
|
||||
|
||||
Contact:
|
||||
Chris Holcombe <chris.holcombe@canonical.com>
|
||||
|
||||
Gerrit Topic
|
||||
------------
|
||||
|
||||
Use Gerrit topic "storage-action" for all patches related to this spec.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git-review -t storage-action
|
||||
|
||||
Work Items
|
||||
----------
|
||||
|
||||
1. Create a bucket with the required name. We do this so that we can create
   pools in the specified bucket. This would be handled within the ceph-mon
   charm.
2. Create an action that returns a list of unused storage devices along with
   their device types.
3. Create an action that takes `osd-devices` and `storage-type`, where
   `storage-type` is an enum of the buckets that we create. Rather than being a
   user-specified string, this would be an enumerated list in the ceph charms
   provided through the shared charms_ceph library (a rough sketch of this
   action follows the list).
4. Add the ability for other charms to request the bucket that should back
   their created pools when creating pools through the broker.
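
A rough sketch of item 3, assuming charm-helpers' action helpers and the
standard ``ceph osd crush`` CLI; the action parameter names, the enum values
and the use of OSD ids rather than device paths are all illustrative
assumptions:

.. code-block:: python

   # Sketch only: parameter names and bucket names are assumptions.
   import subprocess

   from charmhelpers.core.hookenv import action_fail, action_get

   VALID_STORAGE_TYPES = ('fast', 'default', 'slow')


   def add_osds_to_bucket():
       bucket = action_get('storage-type')
       osd_ids = action_get('osd-ids').split()
       if bucket not in VALID_STORAGE_TYPES:
           action_fail('unknown storage-type: {}'.format(bucket))
           return
       # Ensure the CRUSH bucket exists (tolerate it already existing).
       subprocess.call(['ceph', 'osd', 'crush', 'add-bucket', bucket, 'root'])
       for osd in osd_ids:
           # Re-parent each OSD under the requested bucket.
           subprocess.check_call(
               ['ceph', 'osd', 'crush', 'move', osd, 'root={}'.format(bucket)])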
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
This will require additional documentation be added around how to use the
|
||||
action correctly and what to expect from it. This documentation will be added
|
||||
to the charm README
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
This should have no security impact.
|
||||
|
||||
Testing
|
||||
-------
|
||||
|
||||
There will need to be new unit tests and functional tests to ensure that
|
||||
the necessary buckets are created and that the disks are added to them.
|
||||
|
||||
Repositories
|
||||
------------
|
||||
|
||||
None
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
None
|
specs/stein/backlog/charm-openstack-ovn.rst (new file, 135 lines)
@ -0,0 +1,135 @@
|
||||
..
|
||||
Copyright 2018 Aakash KT
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
http://creativecommons.org/licenses/by/3.0/legalcode
|
||||
|
||||
..
|
||||
This template should be in ReSTructured text. Please do not delete
|
||||
any of the sections in this template. If you have nothing to say
|
||||
for a whole section, just write: "None". For help with syntax, see
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
===============================
|
||||
OpenStack with OVN
|
||||
===============================
|
||||
|
||||
OpenStack can be deployed with a number of SDN solutions (e.g. ODL). OVN
provides virtual networking for Open vSwitch (OVS). OVN has a lot of desirable
features and is designed to integrate with OpenStack, among other platforms.

Since there is already a networking-ovn project under OpenStack, it is the
obvious next step to implement a Juju charm that provides this service.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
Currently, Juju charms have support for deploying OpenStack, either with its
default SDN solution (Neutron), or with others such as ODL. This project
will expand the deployment scenarios under Juju for OpenStack by including OVN
in the list of available SDN solutions.
|
||||
|
||||
This will also benefit OPNFV's JOID installer in providing another scenario in
|
||||
its deployment.
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
Charms for neutron-api-ovn, ovn-controller and neutron-ovn will need to
be implemented. These will be written using the new reactive framework
of Juju.
|
||||
|
||||
Charm : neutron-ovn
|
||||
-------------------
|
||||
|
||||
This charm will be deployed alongside nova-compute deployments. This will be
|
||||
a subordinate charm to nova-compute, that installs and runs openvswitch and
|
||||
the ovn-controller.
|
||||
|
||||
Charm : ovn-controller
|
||||
----------------------
|
||||
|
||||
This charm will deploy ovn itself. It will start the OVN services
|
||||
(ovsdb-server, ovn-northd). Since there can only be a single instance of
|
||||
ovsdb-server and ovn-northd in a deployment, we can also implement passive
|
||||
HA, but this can be included in further revisions of this charm.
|
||||
|
||||
Charm : neutron-api-ovn
|
||||
-----------------------
|
||||
|
||||
This charm will provide the API-only integration of neutron with OVN. This
charm will need to be subordinate to the existing neutron-api charm. The main
task of this charm is to set up the "neutron.conf" and "ml2_conf.ini" config
files with the right parameters for OVN, as sketched below. The principal
charm, neutron-api, handles the install and restart of neutron-server.
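
A minimal sketch of the OVN-specific settings this subordinate would need
rendered on the principal; the option names come from the networking-ovn
documentation, while the helper name, the ovn-central address handling and how
the data is carried over the relation are assumptions:

.. code-block:: python

   # Sketch only: how these settings reach neutron-api is left open here.
   def ovn_config(ovn_central_addr):
       return {
           'ml2_conf.ini': {
               'ml2': {'mechanism_drivers': 'ovn',
                       'tenant_network_types': 'geneve'},
               'ovn': {'ovn_nb_connection': 'tcp:{}:6641'.format(ovn_central_addr),
                       'ovn_sb_connection': 'tcp:{}:6642'.format(ovn_central_addr)},
           },
           'neutron.conf': {
               'DEFAULT': {'core_plugin': 'ml2',
                           'service_plugins':
                               'networking_ovn.l3.l3_ovn.OVNL3RouterPlugin'},
           },
       }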
|
||||
|
||||
For more information, refer to: https://docs.openstack.org/networking-ovn/latest/install/manual.html
|
||||
|
||||
Alternatives
|
||||
------------
|
||||
|
||||
N/A
|
||||
|
||||
Implementation
|
||||
==============
|
||||
|
||||
Assignee(s)
|
||||
-----------
|
||||
Aakash KT
|
||||
|
||||
Gerrit Topic
|
||||
------------
|
||||
|
||||
Use Gerrit topic "charm-os-ovn" for all patches related to this spec.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git-review -t charm-os-ovn
|
||||
|
||||
Work Items
|
||||
----------
|
||||
|
||||
* Implement neutron-api-ovn charm
|
||||
* Implement ovn-controller charm
|
||||
* Implement neutron-ovn charm
|
||||
* Integration testing
|
||||
* Create a bundle to deploy OpenStack OVN
|
||||
* Create documentation for above three charms
|
||||
|
||||
Repositories
|
||||
------------
|
||||
|
||||
Yes, three new repositories will need to be created:

* charm-neutron-api-ovn
* charm-ovn-controller
* charm-neutron-ovn
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
This will require creation of new documentation for the covered scenario of
OpenStack + OVN in Juju.
A README file for the bundle needs to be written.
Add documentation in the charm deployment guide to detail how to deploy OVN
with OpenStack.
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
Communications to and from these charms should be made secure. For example,
communication between ovn-central and ovn-edges should be secured using
self-signed certs.
|
||||
|
||||
Testing
|
||||
-------
|
||||
|
||||
For testing at the Juju level, we can use the new "juju-matrix" tool.
For testing functionality at the OpenStack level, Mojo should be used. This
will help validate the deployment.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
This charm will support OpenStack Queens as its baseline.
|
specs/stein/backlog/charm-panko.rst (new file, 153 lines)
@ -0,0 +1,153 @@
|
||||
..
|
||||
Copyright 2017, Canonical UK
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
http://creativecommons.org/licenses/by/3.0/legalcode
|
||||
|
||||
..
|
||||
This template should be in ReSTructured text. Please do not delete
|
||||
any of the sections in this template. If you have nothing to say
|
||||
for a whole section, just write: "None". For help with syntax, see
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
===============================
|
||||
Panko Charm
|
||||
===============================
|
||||
|
||||
Ceilometer used to provide an event API to query and store events from
|
||||
different OpenStack services. However, this functionality was deprecated
|
||||
in Newton and removed in Ocata. Event storage and querying functionality
|
||||
is now provided by a service called Panko. Use-cases of historical
|
||||
event data storage include audit logging, debugging and billing.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
Panko is an event storage service that provides an ability to store and
query event data generated by Ceilometer and, potentially, other sources.
Panko includes support for several storage options (sqlalchemy-compatible
databases, mongodb, elasticsearch) which differ in their level of maturity.
|
||||
|
||||
At its core Panko is a regular API service with a database backend.
|
||||
|
||||
Events are published to Panko via the Direct publisher in Ocata, while in
Pike the Direct publisher was deprecated and will be removed. For that
reason the Panko publisher was added.
|
||||
|
||||
* Direct publisher `deprecation <https://docs.openstack.org/releasenotes/panko/unreleased.html#deprecation-notes>`__ (ceilometer/publisher/direct.py) was done under this `commit <https://git.io/vd98b>`__.
|
||||
|
||||
Another mechanism that was deprecated in Pike is dispatchers, which were
used to send data specified by publishers, as were the
{event,meter}_dispatchers options in ceilometer.conf.
|
||||
|
||||
* Panko dispatcher `deprecation <https://docs.openstack.org/releasenotes/panko/unreleased.html#deprecation-notes>`__.
|
||||
* `Notes <https://docs.openstack.org/releasenotes/ceilometer/ocata.html#deprecation-notes>`__ on unneeded duplication of publishers and dispatchers.
|
||||
* A `discussion <http://lists.openstack.org/pipermail/openstack-dev/2017-April/115576.html>`__ on dispatchers vs publishers.
|
||||
|
||||
This is instead done directly by publishers in Pike, and the Panko publisher is
present in Panko's repository itself, not the ceilometer repository.
|
||||
|
||||
Panko first appeared in Ocata Ubuntu Cloud Archive.
|
||||
|
||||
Ceilometer is able to query Panko's presence via Keystone catalog but
|
||||
does not define a publisher for sending event data to Panko by default.
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
The new charm should include the following features:
|
||||
|
||||
- Support SQLAlchemy-compatible databases as storage backends;
|
||||
- HA support;
|
||||
- TLS support;
|
||||
- integration with Ceilometer charm.
|
||||
|
||||
Alternatives
|
||||
------------
|
||||
|
||||
None for historical event data within OpenStack.
|
||||
|
||||
Implementation
|
||||
==============
|
||||
|
||||
Assignee(s)
|
||||
-----------
|
||||
|
||||
Primary assignee:
|
||||
dmitriis
|
||||
|
||||
Gerrit Topic
|
||||
------------
|
||||
|
||||
Use Gerrit topic "panko-charm" for all patches related to this spec.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git-review -t panko-charm
|
||||
|
||||
Work Items
|
||||
----------
|
||||
|
||||
Reactive Interfaces
|
||||
+++++++++++++++++++
|
||||
|
||||
- interface: panko
|
||||
|
||||
Provide Panko charm
|
||||
+++++++++++++++++++++
|
||||
|
||||
- Create a charm layer based on openstack-api layer;
|
||||
- Add support for upgrading Panko (schema changes);
|
||||
- Add support for deploying Panko in a highly available configuration;
|
||||
- Add support for the Panko charm to display workload status;
- Add support for TLS endpoints;
|
||||
- Charm should have unit and functional tests.
|
||||
|
||||
Update Ceilometer Charm
|
||||
+++++++++++++++++++++++++++++++++
|
||||
|
||||
- Support for deployment with Panko (by specifying publishers correctly
|
||||
in event_pipeline.yaml for both Ocata and Pike+).
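
A minimal sketch, assuming charm-helpers' release comparison utilities, of how
the Ceilometer charm might pick the event publisher; the publisher URLs are
assumptions to be confirmed against each release:

.. code-block:: python

   # Sketch only: publisher URLs are assumptions.
   from charmhelpers.contrib.openstack.utils import (
       CompareOpenStackReleases,
       os_release,
   )


   def panko_event_publisher():
       release = CompareOpenStackReleases(os_release('ceilometer-common'))
       if release >= 'pike':
           # Pike+: native panko publisher shipped with Panko.
           return 'panko://'
       # Ocata: fall back to the (now deprecated) direct publisher.
       return 'direct://?dispatcher=panko'

The returned value would be templated into the ``publishers`` list of
``event_pipeline.yaml``.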
|
||||
|
||||
Mojo specification deploying and testing Panko
|
||||
++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
- Update HA Mojo spec for deploying Panko in an HA configuration.
|
||||
|
||||
Update telemetry bundle
|
||||
+++++++++++++++++++++++
|
||||
|
||||
- Update telemetry bundle to deploy Panko
|
||||
|
||||
Repositories
|
||||
------------
|
||||
|
||||
A new git repository will be required for the Panko charm:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git://git.openstack.org/openstack/charm-panko
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
The Panko charm should contain a README with instructions on deploying the
|
||||
charm. A blog post is optional but would be a useful addition.
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
No additional security concerns.
|
||||
|
||||
Testing
|
||||
-------
|
||||
|
||||
Code changes will be covered by unit tests; functional testing will be done
|
||||
using a combination of Amulet, Bundle tester and Mojo specification.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
- No dependencies outside of this specification.
|
specs/stein/backlog/controlled-service-restarts.rst (new file, 165 lines)
@ -0,0 +1,165 @@
|
||||
..
|
||||
Copyright 2017 Canonical LTD
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
http://creativecommons.org/licenses/by/3.0/legalcode
|
||||
|
||||
..
|
||||
This template should be in ReSTructured text. Please do not delete
|
||||
any of the sections in this template. If you have nothing to say
|
||||
for a whole section, just write: "None". For help with syntax, see
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
=================================
|
||||
Service Restart Control In Charms
|
||||
=================================
|
||||
|
||||
OpenStack charms continuously respond to hook events from their peer and
related applications, which frequently result in configuration
changes and subsequent service restarts. This is all fine until these
applications are deployed at large scale, where having these services restart
simultaneously can cause (a) service outages and (b) excessive load on
external applications e.g. databases or rabbitmq servers. In order to
mitigate these effects we would like to introduce the ability for charms
to apply controllable patterns to how they restart their services.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
An example scenario where this sort of behaviour becomes a problem is where
we have a large number, say 1000, of nova-compute units all connected to the
same rabbitmq server. If we make a config change, e.g. enable debug logging,
on that application this will result in a restart of all nova-* services on
every compute host in tandem, which will in turn generate a large spike of
load on the rabbit server as well as making all compute operations block
until these services are back up. This could also clearly have other
knock-on effects such as impacting other applications that depend on
rabbitmq.
|
||||
|
||||
There are a number of ways that we could approach solving this problem but
|
||||
for this proposal we choose simplicity by attempting to use all information
|
||||
already available to an application unit combined with some user config to
|
||||
allow units to decide how best to perform these actions.
|
||||
|
||||
Every unit of an application already has access to some information that
|
||||
describes itself with respect to its environment e.g. every unit has a unique
|
||||
id and some applications have a peer relation that gives them information
|
||||
about their neighbours. Using this information coupled with some extra
|
||||
config options on the charm to vary timing we could provide the operator
|
||||
the ability to control service restarts across units using nothing more
|
||||
than basic mathematics and no juju api calls.
|
||||
|
||||
For example, let's say an application unit knows it has id 215 and the user
has provided two options via config: a modulo value of 2 and an offset of
10. We could then do the following:
|
||||
|
||||
.. code:: python
|
||||
|
||||
time.sleep((215 % 2) * 10)
|
||||
|
||||
which, when applied to all units, would result in 50% of the cluster
|
||||
restarting its services 10 seconds after the rest. This should hopefully
|
||||
alleviate some of the pressure resulting from cluster-wide synchronous
|
||||
restarts, ensuring that part of the cluster is always responsive and
|
||||
making restarts happen quicker.
|
||||
|
||||
As mentioned above we will require two new config options to any charm for
|
||||
which this logic is supported:
|
||||
|
||||
* service-restart-offset (default to 10)
|
||||
* service-restart-modulo (default to 1 so that default behaviour is same as
|
||||
before)
|
||||
|
||||
The restart delay logic will be skipped for any charms not implementing these
options.
|
||||
|
||||
Over time some units may be deleted from and added back to the cluster
resulting in non-contiguous unit ids. While for applications deployed at
large scale this is unlikely to be significantly impactful, since subsequent
adds and deletes will cancel each other out, it could nevertheless be a
problem so we will check for the existence of a peer relation on the
application we are running and, if one exists, use the info in that relation
to normalise unit ids prior to calculating delays.
|
||||
|
||||
Lastly, we must consider how to behave when the charm is being used to upgrade
OpenStack services, whether directly using config ("big bang") or using actions
defined on a charm. For the case where all services are upgraded at once we
will leave it to the operator to set/unset the offset parameters. For the case
where actions are being used, and likely only a subset of units are being
upgraded at once, we will ignore the control settings, i.e. delays will not
be used.
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
To implement this change we will extend the restart_on_change() decorator
implemented across the openstack charms so that when services are
stopped/started or restarted they will include a time.sleep(delay), where delay
is calculated from the unit id combined with two new config options:
service-restart-offset and service-restart-modulo. This calculation will be
done in a new function that will be implemented in contrib.openstack, the
output of which will be passed into the restart_on_change() decorator. A
sketch of the calculation follows.
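
A minimal sketch of the proposed helper, assuming charm-helpers' hookenv
accessors; the function name and final location in contrib.openstack are still
to be decided:

.. code-block:: python

   # Sketch only: name and location are illustrative.
   from charmhelpers.core.hookenv import config, local_unit


   def restart_delay():
       """Seconds this unit should wait before restarting its services."""
       unit_id = int(local_unit().split('/')[1])
       modulo = config('service-restart-modulo') or 1
       offset = config('service-restart-offset') or 0
       return (unit_id % modulo) * offset

The restart path guarded by restart_on_change() would then call
``time.sleep(restart_delay())`` before stopping or restarting services.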
|
||||
|
||||
Since a decorator is used we do not need to worry about multiple restarts of
the same service. We do, however, need to consider how to apply offsets when
stop/start and restarts are performed manually, as is the case in the
action-managed upgrades handler.
|
||||
|
||||
Alternatives
|
||||
------------
|
||||
|
||||
None
|
||||
|
||||
Implementation
|
||||
==============
|
||||
|
||||
Assignee(s)
|
||||
-----------
|
||||
|
||||
Primary assignee:
|
||||
hopem
|
||||
|
||||
Gerrit Topic
|
||||
------------
|
||||
|
||||
Use Gerrit topic "controlled-service-restarts" for all patches related to
|
||||
this spec.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git-review -t controlled-service-restarts
|
||||
|
||||
Work Items
|
||||
----------
|
||||
|
||||
* implement changes to charmhelpers
|
||||
* sync into openstack charms and add new config opts
|
||||
|
||||
Repositories
|
||||
------------
|
||||
|
||||
None
|
||||
|
||||
Documentation
|
||||
-------------
|
||||
|
||||
These new settings will be properly documented in the charm config.yaml as
|
||||
well as in the charm deployment guide.
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
None
|
||||
|
||||
Testing
|
||||
-------
|
||||
|
||||
Unit tests will be provided in charm-helpers and functional tests will be
|
||||
updated to include config that enables this feature. Scale testing to prove
|
||||
effectiveness and determine optimal defaults will also be required.
|
||||
|
||||
Dependencies
|
||||
============
|
||||
|
||||
None
|
specs/stein/backlog/openstack-load-balancer.rst (new file, 263 lines)
@ -0,0 +1,263 @@
|
||||
..
|
||||
Copyright 2016, Canonical UK
|
||||
|
||||
This work is licensed under a Creative Commons Attribution 3.0
|
||||
Unported License.
|
||||
http://creativecommons.org/licenses/by/3.0/legalcode
|
||||
|
||||
..
|
||||
This template should be in ReSTructured text. Please do not delete
|
||||
any of the sections in this template. If you have nothing to say
|
||||
for a whole section, just write: "None". For help with syntax, see
|
||||
http://sphinx-doc.org/rest.html To test out your formatting, see
|
||||
http://www.tele3.cz/jbar/rest/rest.html
|
||||
|
||||
================================
|
||||
OpenStack Endpoint Load Balancer
|
||||
================================
|
||||
|
||||
To enable OpenStack services for a single cloud to be installed in a highly
available configuration without requiring that each unit of a service is in
the same broadcast domain.
|
||||
|
||||
Problem Description
|
||||
===================
|
||||
|
||||
1. As a cloud administrator I would like to simplify my deployment so that I
|
||||
don't have to manage a corosync and pacemaker per OpenStack API service.
|
||||
|
||||
2. As a cloud architect I am designing a new cloud where all services will be
|
||||
in a single broadcast domain. I see no need to use the new central
|
||||
loadbalancer and would like to continue to have each service manage its
|
||||
own VIP.
|
||||
|
||||
3. As a cloud architect I would like to spread my control plane across N racks
|
||||
for redundancy. Each rack is in its own broadcast domain. I do not want the
|
||||
users of the cloud to require knowledge of this topology. I want the
|
||||
endpoints registered in Keystone to work regardless of a rack level failure.
|
||||
I am using network spaces to segregate traffic in my cloud and the OpenStack
|
||||
loadbalancer has access to all spaces so I only require one set of
|
||||
loadbalancers for the deployment.
|
||||
|
||||
4. As a cloud architect I would like to spread my control plane across N racks
|
||||
for redundancy. Each rack is in its own broadcast domain. I do not want the
|
||||
users of the cloud to require knowledge of this topology. I want the
|
||||
endpoints registered in Keystone to work regardless of a rack level failure.
|
||||
I am using network spaces to segregate traffic in my cloud. I want the
|
||||
segregation to extend to the load balancers and so will be requiring a set
|
||||
of load balancers per network space.
|
||||
|
||||
5. As a cloud architect I am designing a new internal cloud and have no
|
||||
interest in IPv6, I wish to deploy a pure IPv4 solution.
|
||||
|
||||
6. As a cloud architect I am designing a new cloud. I appreciate that it has
been 18 years since the IETF brought us IPv6 and feel it may be time to
enable IPv6 within the cloud. I am happy to have some IPv4 where needed
and am looking to deploy dual-stack IPv4 and IPv6.
|
||||
|
||||
7. As a cloud architect I am designing a new cloud. I appreciate that it has
|
||||
been 18 years since the IETF brought us IPv6 and wish to never see an IPv4
|
||||
address again. I am looking to deploy a pure IPv6 cloud.
|
||||
|
||||
8. As a cloud architect I wish to use DNS HA in conjunction with the OpenStack
|
||||
loadbalancer so that loadbalancer units can be spread across different
|
||||
subnets within each network space.
|
||||
|
||||
9. As a cloud administrator I would like to have the OpenStack load balancers
|
||||
look after HA and so will be deploying in an Active/Passive deployment.
|
||||
I will need to use a VIP for the loadbalancer in this configuration.
|
||||
|
||||
10. As a cloud architect I have an existing hardware loadbalancers I wish to
|
||||
use. I do not want to have to update it with the location of each API
|
||||
service backend. Instead I would like to have the OpenStack load balancers
|
||||
in an Active/Active configuration and have the hardware loadbalancers
|
||||
manager traffic between haproxy instance in the OpenStack loadbalancer
|
||||
service. I do not need to use a VIP for the loadbalancer in this
|
||||
configuration. My hardware loadbalancers utilise vip(s) which will need
|
||||
to be registered as the endpoints for services in Keystone.
|
||||
|
||||
11. As a cloud administrator haproxy statistics are fascinating to me and I
|
||||
want the statistics from all haproxy instances to be aggregated.
|
||||
|
||||
12. As a cloud administrator I would like haproxy to be able to perform health
|
||||
checks on the backends which assert the health of a service more
|
||||
conclusively than simple open port checking.
|
||||
|
||||
13. As a cloud administrator I want to be able to configure max connections
|
||||
and timeouts as my cloud evolves.
|
||||
|
||||
14. As a charm author of a service which is behind the OpenStack load balancer
|
||||
I would like the ability to tell the loadbalancer to drain connection to a
|
||||
specific unit and take it out of service. This will allow the unit to go
|
||||
into maintenance mode.
|
||||
|
||||
Proposed Change
|
||||
===============
|
||||
|
||||
New interface: openstack-api-endpoints
--------------------------------------

This interface allows a backend charm hosting API endpoints to inform
the OpenStack loadbalancer which services it is hosting, and the IP
address and port on the backend unit to which frontend API requests
should be sent. It also allows the backend charm to inform the
loadbalancer which frontend port should be used for each service.

Example - neutron-api (single API endpoint per unit):

.. code-block:: yaml

    endpoints:
      - service-type: network
        frontend-port: 9696
        backend-port: 9689
        backend-ip: 10.10.10.1
        check-type: http

Example - nova-cloud-controller (multiple API endpoints per unit):

.. code-block:: yaml

    endpoints:
      - service-type: nova
        frontend-port: 8774
        backend-port: 8764
        backend-ip: 10.10.10.2
        check-type: http
      - service-type: nova-placement
        frontend-port: 8778
        backend-port: 8768
        backend-ip: 10.10.10.2
        check-type: http
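The following is a minimal sketch, assuming a charm-helpers based backend
charm, of how the endpoint data above might be published on the relation;
the publish_endpoints helper, the 'endpoints' relation key and the use of
JSON serialisation are illustrative assumptions rather than the final
interface definition:

.. code-block:: python

    # Illustrative sketch only: publish this unit's API endpoints on the
    # *-backend relation. Key names and serialisation are assumptions.
    import json

    from charmhelpers.core import hookenv


    def publish_endpoints(relation_id=None):
        """Advertise this unit's API endpoints to the OpenStack loadbalancer."""
        endpoints = [{
            'service-type': 'network',
            'frontend-port': 9696,
            'backend-port': 9689,
            'backend-ip': hookenv.unit_private_ip(),
            'check-type': 'http',
        }]
        hookenv.relation_set(
            relation_id=relation_id,
            relation_settings={'endpoints': json.dumps(endpoints)},
        )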
A single instance of the OpenStack Loadbalancer application will only service
a single type of OpenStack API endpoint (public, admin or internal). The
charm will use the network space binding of the frontend interface to determine
which IP or VIP (if deployed in an HA configuration) should be used by the
backend API service for registration into the cloud endpoint catalog.

Having processed the requests from all backend units, the loadbalancer now
needs to tell the backend application the external IP being used to listen for
connections for each endpoint service type:

.. code-block:: yaml

    endpoints:
      - service-type: nova
        frontend-ip: 98.34.12.1
        frontend-port: 8774
      - service-type: nova-placement
        frontend-ip: 98.34.12.1
        frontend-port: 8778

The backend service now updates the endpoints in the Keystone registry to point
at the IPs passed back by the loadbalancer.

This interface is provided by each backend API charm and consumed via
the backend interface on the OpenStack loadbalancer charm. Each backend
charm would provide three instances of this interface type:

.. code-block:: yaml

    provides:
      public-backend:
        interface: openstack-api-endpoints
      admin-backend:
        interface: openstack-api-endpoints
      internal-backend:
        interface: openstack-api-endpoints

Taking this approach means that the backend charm can continue to be the
entry point/loadbalancer for some endpoint types, and push the loadbalancing
for other entry points out to the OpenStack Loadbalancer charm (or multiple
instances).
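To illustrate the consuming side, the sketch below shows one way the
loadbalancer charm could aggregate the endpoint data received from all backend
units into per-service haproxy frontend/backend stanzas. The helper names and
the exact haproxy layout are assumptions for illustration, not the final charm
code:

.. code-block:: python

    # Illustrative sketch only: group endpoints by service-type and render a
    # minimal haproxy configuration fragment from them.
    from collections import defaultdict


    def aggregate_endpoints(unit_endpoints):
        """Group endpoint dicts received from all backend units by service."""
        services = defaultdict(lambda: {'frontend-port': None, 'backends': []})
        for ep in unit_endpoints:
            svc = services[ep['service-type']]
            svc['frontend-port'] = ep['frontend-port']
            svc['backends'].append((ep['backend-ip'], ep['backend-port']))
        return services


    def render_haproxy_stanzas(services, bind_ip='0.0.0.0'):
        """Render frontend/backend stanzas for each aggregated service."""
        stanzas = []
        for name, svc in sorted(services.items()):
            stanzas.append('frontend {}\n    bind {}:{}\n    default_backend {}'
                           .format(name, bind_ip, svc['frontend-port'], name))
            servers = '\n'.join(
                '    server {}-{} {}:{} check'.format(name, i, ip, port)
                for i, (ip, port) in enumerate(svc['backends']))
            stanzas.append('backend {}\n{}'.format(name, servers))
        return '\n\n'.join(stanzas)

The same aggregated view would also give the loadbalancer the per-service
frontend-ip/frontend-port data it needs to hand back to each backend
application, as described above.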
Updates to keystone endpoint calculation code
---------------------------------------------

Currently the following competing options are used to calculate which endpoint
should be registered in Keystone (see the sketch below for how they might be
combined):

* os-\*-network set: use the old resolve_address method
* dns-ha set: use DNS HA
* os-\*-hostname set: use the configured hostname
* Juju network space binding via extra-bindings
* prefer-ipv6 configuration option
* presence of {public,internal,admin}-backend relations to OpenStack
  loadbalancers
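A possible combined precedence is sketched below. The ordering shown
(loadbalancer relation first, then DNS HA, explicit hostnames, os-\*-network
and finally the network space binding), the helper name and the relation/config
keys are all assumptions to be settled during implementation; prefer-ipv6
handling is omitted for brevity:

.. code-block:: python

    # Illustrative precedence sketch only; not existing charm code.
    def resolve_endpoint_address(endpoint_type, config, relation_data):
        """Pick the address to register in Keystone for one endpoint type.

        endpoint_type: 'public', 'admin' or 'internal'
        config:        charm configuration (os-*-hostname, os-*-network, ...)
        relation_data: pre-gathered relation data, including any frontend-ip
                       advertised by a related OpenStack loadbalancer
        """
        # 1. A related OpenStack loadbalancer wins: register its frontend IP/VIP.
        lb = relation_data.get('{}-backend'.format(endpoint_type), {})
        if lb.get('frontend-ip'):
            return lb['frontend-ip']
        # 2. DNS HA: register the hostname managed via the dns-ha mechanism.
        hostname = config.get('os-{}-hostname'.format(endpoint_type))
        if config.get('dns-ha') and hostname:
            return hostname
        # 3. Explicit hostname override.
        if hostname:
            return hostname
        # 4. Legacy os-*-network configuration -> old resolve_address behaviour.
        if config.get('os-{}-network'.format(endpoint_type)):
            return relation_data.get('resolved-network-address')
        # 5. Fall back to the Juju network space binding (extra-bindings).
        return relation_data.get('space-binding-address')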
OpenStack Loadbalancer charm
----------------------------

New charm - OpenStack Loadbalancer - with corresponding tests & QA CI/setup.

Alternatives
------------

1. Extend the existing HAProxy charm.
2. Use DNS HA.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  unknown

Gerrit Topic
------------

Use Gerrit topic "osbalancer" for all patches related to this spec.

.. code-block:: bash

    git-review -t osbalancer

Work Items
----------

Provide OpenStack Loadbalancer Charm
++++++++++++++++++++++++++++++++++++

- Write draft interface for LB <-> Backend
- Write unit tests for Keystone endpoint registration code
- Write Keystone endpoint registration code


Mojo specification deploying and testing the OpenStack Loadbalancer
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

- Write Mojo spec for deploying the LB in an HA configuration

Repositories
------------

A new git repository will be required for the OpenStack Loadbalancer charm:

.. code-block:: bash

    git://git.openstack.org/openstack/charm-openstack-loadbalancer

Documentation
-------------

The OpenStack Loadbalancer charm should contain a README with instructions on
deploying the charm. A blog post is optional but would be a useful addition.

Security
--------

No additional security concerns.

Testing
-------

Code changes will be covered by unit tests; functional testing will be done
using a combination of Amulet, Bundle tester and Mojo specifications.

Dependencies
============

None
171
specs/stein/backlog/service-discovery.rst
Normal file
@ -0,0 +1,171 @@
..
  Copyright 2017 Canonical UK Ltd

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.
  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html

=================
Service Discovery
=================

Many optional services may now be deployed as part of an OpenStack Cloud,
with each service having different optional features that may or may
not be enabled as part of a deployment.

Charms need a way to discover this information so that services can be
correctly configured for the options chosen by the charm user.

Problem Description
===================

Charms need to be able to determine what other services are deployed
within an OpenStack Cloud so that features can be enabled/disabled as
appropriate.

Examples include:

- notifications for ceilometer (we really don't want notifications enabled
  when ceilometer is not deployed).

- misc panels within the OpenStack dashboard (fwaas, lbaas, l3ha, dvr
  etc. for neutron).

- notifications for designate (disable when designate is not deployed).

Services and features of services are determined by the API endpoint
charms that register them into the service catalog via the keystone charm.

Proposed Change
===============

The keystone charm will expose a new provides interface 'cloud-services'
which is a rich-ish description of the services deployed with registered
endpoints.

The identity-service relations would also advertise the same data as the
cloud-services relations so that charms already related to keystone don't
have to double relate (identity-service is a superset of cloud-services).

By default, a registered endpoint of type 'A' will result in service type
'A' being listed as part of the deployed cloud on this interface.

Services may also enrich this data by providing 'features' (optional)
alongside their endpoint registration - these will be exposed on the
cloud-services and identity-service relations.

Data will look something like (populated with real examples - key and
proposed values):

.. code-block:: yaml

    services: ['object-store', 'network', 'volumev2', 'compute',
               'metering', 'image', 'orchestration', 'dns']

Example - advertise features supported by the networking service, allowing
features to be enabled automatically in the dashboard:

.. code-block:: yaml

    network: ['dvr', 'fwaas', 'lbaasv2', 'l3ha']

Example - allow ceilometer to know that the deployed object-store is radosgw
rather than swift:

.. code-block:: yaml

    object-store: ['radosgw']

Values will be parseable as a JSON/YAML formatted list.
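For illustration, a consuming charm might interpret this data along the
following lines; the key names mirror the examples above but, like the
deployed_services and service_features helpers, remain assumptions until the
interface is finalised:

.. code-block:: python

    # Illustrative sketch only: parse cloud-services/identity-service data to
    # decide whether optional features should be enabled in a consuming charm.
    import json


    def deployed_services(relation_data):
        """Return the list of service types advertised by keystone."""
        return json.loads(relation_data.get('services', '[]'))


    def service_features(relation_data, service_type):
        """Return the feature tags advertised for a given service type."""
        return json.loads(relation_data.get(service_type, '[]'))


    # Example: only enable ceilometer notifications if 'metering' is deployed,
    # and detect whether the object-store is radosgw rather than swift.
    relation_data = {
        'services': '["object-store", "network", "metering"]',
        'network': '["dvr", "lbaasv2"]',
        'object-store': '["radosgw"]',
    }
    enable_notifications = 'metering' in deployed_services(relation_data)
    radosgw_backed = 'radosgw' in service_features(relation_data, 'object-store')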
By using the basic primitive of tags, we get a lot of flexibility, with
type/feature being easily expressible.

The interface will be eventually consistent in clustered deployments -
all keystone units will present the same data.

Alternatives
------------

Each charm could query the keystone service catalog; however this is very
much a point-in-time check, and the service catalog may change after the
query has been made. In addition the keystone service catalog does not
have details on what optional features each service type may have enabled,
and keystone services will be restarted during deployment as clusters
get built out etc.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  james-page

Gerrit Topic
------------

Use Gerrit topic "service-discovery" for all patches related to this spec.

.. code-block:: bash

    git-review -t service-discovery

Work Items
----------

Core (keystone charm):

- Add cloud-services relation to keystone charm
- Add service and feature discovery handler to keystone charm
- Update keystone interface to advertise services and features in
  keystone charm.
- Create cloud-services reactive interface

Enablement:

- Update ceilometer charm for radosgw discovery.
- Update openstack-dashboard charm to automatically enable panels
  for deployed services and features.
- Update neutron-api charm for designate discovery.
- Update cinder charm for ceilometer discovery.
- Update glance charm for ceilometer discovery.
- Update neutron-api charm for ceilometer discovery.
- Update radosgw charm to advertise 'radosgw' feature.
- Update neutron-api charm to advertise networking features.

Repositories
------------

No new git repositories required.

Documentation
-------------

This change is internal for use across the OpenStack charms; no documentation
updates are required for end-users.

Security
--------

No security implications for this change.

Testing
-------

Implementation will include unit tests for all new code written; Amulet
functional tests will be updated to ensure that the feature is implemented
correctly across the charm set.

Dependencies
============

No external dependencies
151
specs/stein/backlog/swift-extended-cluster-operations.rst
Normal file
@ -0,0 +1,151 @@
..
  Copyright 2017 Canonical UK Ltd

  This work is licensed under a Creative Commons Attribution 3.0
  Unported License.
  http://creativecommons.org/licenses/by/3.0/legalcode

..
  This template should be in ReSTructured text. Please do not delete
  any of the sections in this template. If you have nothing to say
  for a whole section, just write: "None". For help with syntax, see
  http://sphinx-doc.org/rest.html To test out your formatting, see
  http://www.tele3.cz/jbar/rest/rest.html

=================================
Extended Swift Cluster Operations
=================================

The Swift charms currently support a subset of the operations required to
maintain a Swift cluster over time. This spec proposes expanding on what we
already have in order to support more crucial operations such as reconfiguring
the rings post-deployment.

Problem Description
===================

To deploy a Swift object store using the OpenStack charms you are required to
deploy both the swift-proxy and swift-storage charms. The swift-proxy charm
performs two key roles - running the API endpoint service and managing the
rings - while the swift-storage charm is responsible for running the swift
object store services (account, container, object).

As they stand, these charms currently support a base set of what is required to
effectively maintain a Swift cluster over time:

* deploy a swift cluster with configurable min-part-hours, replica count,
  partition-power and block storage devices.

* once deployed, the only changes that can be made are the addition of
  block devices and modification of min-part-hours. Changes to
  partition-power or replicas are ignored by the charm once the rings
  have already been initialised.

This forces operators to manually apply changes like adjusting the
partition-power to accommodate additional storage added to the cluster. This
poses great risk since manually editing the rings/builders and syncing them
across the cluster could easily conflict with the swift-proxy charm's native
support for doing this, resulting in a broken cluster.
Proposed Change
===============

The proposal here is to extend the charm support for ring management in order
to be able to support making changes to partition-power, replicas and possibly
others, and have the charm safely and automatically apply these changes and
distribute them across the cluster.

Currently we check whether the rings are already initialised and, if they are,
we ignore the partition power and replica count configured in the charm, i.e.
changes are not applied. To make it possible to apply such changes we will need
to remove these blocks and implement the steps documented in [0] and [1]. I
also propose that the charm impose a cluster size limit (number of devices)
above which we refuse to make changes until the operator has paused the
swift-proxy units, i.e. placed them into "maintenance mode", which will shut
down the API services and block any restarts until the units are resumed. The
user will also have the option to set disable-ring-balance=true if they want to
check that their changes have been applied successfully (to the builder files)
prior to having the rings rebuilt and synced across the cluster.
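As a rough sketch of the flow described above - assuming a helper in the
swift-proxy charm's ring manager, a hypothetical device-count limit and the
swift-ring-builder procedures referenced in [0] and [1] - the guarded
application of ring changes could look something like this (the relinker and
ring-sync steps on the storage nodes are elided):

.. code-block:: python

    # Illustrative sketch only; names, limits and config keys are assumptions.
    import subprocess

    MAX_AUTO_CHANGE_DEVICES = 100  # assumed safety limit for unattended changes


    def apply_ring_changes(builder, new_replicas=None,
                           increase_part_power=False, num_devices=0,
                           proxies_paused=False, disable_ring_balance=False):
        """Apply replica/partition-power changes to a ring builder file."""
        if num_devices > MAX_AUTO_CHANGE_DEVICES and not proxies_paused:
            raise RuntimeError('cluster too large - pause swift-proxy units '
                               'before applying ring changes')

        if new_replicas is not None:
            # See [1]: replica count changes take effect at the next rebalance.
            subprocess.check_call(['swift-ring-builder', builder,
                                   'set_replicas', str(new_replicas)])

        if increase_part_power:
            # See [0]: prepare, relink objects on the storage nodes, then
            # increase and finally finish the partition power increase.
            subprocess.check_call(['swift-ring-builder', builder,
                                   'prepare_increase_partition_power'])
            # ... distribute rings and run swift-object-relinker here ...
            subprocess.check_call(['swift-ring-builder', builder,
                                   'increase_partition_power'])

        if not disable_ring_balance:
            subprocess.check_call(['swift-ring-builder', builder, 'rebalance'])
            # ... sync the rebuilt rings across the cluster ...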
For the swift-storage charms, where currently one can only add devices but not
remove them, the proposal is to support removing devices. This will entail
messaging the swift-proxy on the storage relation with an updated list of
devices and a new setting 'purge-missing-devices' to instruct the swift-proxy
to remove devices from the ring that are no longer configured. We will also
need to ensure that the device cache located on the swift-storage unit from
which we are removing a device is also updated to no longer include the
device, since not doing so would block the device from being re-added in the
future. As an extension to this we should also extend the
swift-storage-relation-broken handling to support removing devices associated
with that unit/host from the rings and syncing these changes across the
cluster.
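A minimal sketch of the proxy-side handling of the proposed
'purge-missing-devices' behaviour is shown below; the helper name, relation
keys and device search-value format are illustrative assumptions:

.. code-block:: python

    # Illustrative sketch only: remove ring devices that a swift-storage unit
    # no longer advertises, when purge-missing-devices has been requested.
    import json
    import subprocess


    def purge_missing_devices(builder, current_devices, relation_data):
        """Remove devices from the ring that are no longer advertised.

        builder:         path to the ring builder file (e.g. 'object.builder')
        current_devices: set of search-values already in the ring for this unit
        relation_data:   settings received on the swift-storage relation
        """
        if not relation_data.get('purge-missing-devices'):
            return []
        advertised = set(json.loads(relation_data.get('devices', '[]')))
        removed = []
        for dev in current_devices - advertised:
            # 'remove <search-value>' marks the device for removal; the change
            # only takes effect at the next rebalance.
            subprocess.check_call(['swift-ring-builder', builder,
                                   'remove', dev])
            removed.append(dev)
        return removed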
[0] https://docs.openstack.org/swift/latest/ring_partpower.html
[1] https://docs.openstack.org/swift/latest/admin/objectstorage-ringbuilder.html#replica-counts
Alternatives
------------

Juju charm actions are another way to implement operational actions to be
performed on the cluster, but they do not necessarily fit all cases. Since ring
management is at the core of the existing charm (hook) code itself, the
proposal is to extend this code rather than move and rewrite it as an action.
However, there will likely be a need for some actions to be defined as
post-modification checks and cleanups, which would be well suited to an
action and not directly depend on the charm ring manager.

Implementation
==============

Assignee(s)
-----------

Primary assignee:
  hopem

Gerrit Topic
------------

Use Gerrit topic swift-charm-extended-operations for all patches related to
this spec.

.. code-block:: bash

    git-review -t swift-charm-extended-operations

Work Items
----------

* add support for modifying partition-power
* add support for modifying replicas
* add support for removing devices
* add support for removing an entire swift-storage host

Repositories
------------

None

Documentation
-------------

All of the above additions will need to be properly documented in the charm
deployment guide.

Security
--------

None

Testing
-------

Each additional level of support will need very thorough testing against a
real Swift object store deployed with the charms that contains data and is of
a reasonable scale. All code changes will be accompanied by unit tests and
where possible functional tests.

Dependencies
============

None
0
specs/stein/redirects
Normal file