Merge "Correct indent in doc"

This commit is contained in:
Zuul 2017-11-13 02:19:20 +00:00 committed by Gerrit Code Review
commit 0fbd7a427d
6 changed files with 108 additions and 108 deletions


@@ -36,24 +36,24 @@ Major Components
In the context of an OpenStack cloud, the most important components involved in
the authentication and the authorization process are:

- The Senlin client (i.e. the `python-senlinclient` package) which accepts
  user credentials provided through environment variables and/or the command
  line arguments and forwards them to the OpenStack SDK (i.e. the
  `python-openstacksdk` package) when making service requests to Senlin API.
- The OpenStack SDK (`python-openstacksdk`) is used by the Senlin engine to
  interact with other OpenStack services. The Senlin client also uses the
  SDK to talk to the Senlin API. The SDK package translates the user-provided
  credentials into a token by invoking the Keystone service.
- The Keystone middleware (i.e. `keystonemiddleware`) which backs the
  `auth_token` WSGI middleware in the Senlin API pipeline provides a basic
  validation filter. The filter is responsible for validating the token found
  in the HTTP request header and then populating the HTTP request header
  with detailed information for the downstream filters (including the API
  itself) to use.
- The `context` WSGI middleware, which is based on the `oslo.context` package,
  provides a constructor for the `RequestContext` data structure that
  accompanies each request down the WSGI application pipeline so that the
  downstream components don't have to access the HTTP request header.
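
To make the hand-off between these components concrete, the following is a
minimal sketch (not Senlin's actual middleware code) of how a request context
could be built from the headers that the `auth_token` filter injects; the
header names and constructor arguments are assumptions about what
`oslo.context` accepts, and the package also offers
``RequestContext.from_environ()`` for the same purpose::

    # A minimal sketch, assuming oslo.context accepts these keyword arguments.
    from oslo_context import context

    def make_context(environ):
        # keystonemiddleware's auth_token filter validates the incoming token
        # and populates headers such as these for downstream WSGI components.
        return context.RequestContext(
            auth_token=environ.get('HTTP_X_AUTH_TOKEN'),
            user_id=environ.get('HTTP_X_USER_ID'),
            project_id=environ.get('HTTP_X_PROJECT_ID'),
            roles=environ.get('HTTP_X_ROLES', '').split(','))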
Usage Scenarios


@@ -29,11 +29,11 @@ Keystone project (tenant) which is the default project of the user.
A cluster has the following timestamps when instantiated:

- ``init_at``: the timestamp when a cluster object is initialized in the
  Senlin database, but the actual cluster creation has not yet started;
- ``created_at``: the timestamp when the cluster object is created, i.e.
  the ``CLUSTER_CREATE`` action has completed;
- ``updated_at``: the timestamp when the cluster was last updated.
Cluster Statuses
@@ -41,20 +41,20 @@ Cluster Statuses
A cluster can have one of the following statuses during its lifecycle:

- ``INIT``: the cluster object has been initialized, but not created yet;
- ``ACTIVE``: the cluster is created and providing service;
- ``CREATING``: the cluster creation action is still ongoing;
- ``ERROR``: the cluster is still providing services, but there are things
  going wrong that need human intervention;
- ``CRITICAL``: the cluster is not operational, it may or may not be
  providing services as expected. Senlin cannot recover it from its current
  status. The best way to deal with this cluster is to delete it and then
  re-create it if needed.
- ``DELETING``: the cluster deletion is ongoing;
- ``WARNING``: the cluster is operational, but there are some warnings
  detected during past operations. In this case, human involvement is
  suggested but not required.
- ``UPDATING``: the cluster is being updated.
Along with the ``status`` property, Senlin provides a ``status_reason``
property for users to check the cause of the cluster's current status.
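
For illustration only, these two properties could be inspected through the
OpenStack SDK; the cloud name and cluster name below are hypothetical, and the
attribute names are assumed to match the SDK's clustering proxy::

    import openstack

    # Assumes a cloud named 'mycloud' is defined in clouds.yaml.
    conn = openstack.connect(cloud='mycloud')

    cluster = conn.clustering.get_cluster('web-servers')
    print(cluster.status)         # e.g. ACTIVE, WARNING or ERROR
    print(cluster.status_reason)  # human-readable explanation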
@@ -74,24 +74,24 @@ carries a body with valid, sufficient information for the engine to complete
the creation job. The following fields are required in a map named ``cluster``
in the request JSON body:

- ``name``: the name of the cluster to be created;
- ``profile``: the name or ID or short-ID of a profile to be used;
- ``desired_capacity``: the desired number of nodes in the cluster, which is
  also treated as the initial number of nodes to be created.

The following optional fields can be provided in the ``cluster`` map in the
JSON request body (a sample request body combining both sets of fields is
sketched after this list):
- ``min_size``: the minimum number of nodes inside the cluster, default
  value is 0;
- ``max_size``: the maximum number of nodes inside the cluster, default
  value is -1, which means there is no upper limit on the number of nodes;
- ``timeout``: the maximum number of seconds to wait for the cluster to
  become ready, i.e. ``ACTIVE``.
- ``metadata``: a list of key-value pairs to be associated with the cluster.
- ``dependents``: a dict containing dependency information between a nova
  server/heat stack cluster and a container cluster. The container node's ID
  will be stored in the 'dependents' property of its host cluster.
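
As an illustration, a cluster creation request body combining the required and
optional fields above might look like the following; all values are
hypothetical::

    {
        "cluster": {
            "name": "web-servers",
            "profile": "prof-nova-server",
            "desired_capacity": 3,
            "min_size": 1,
            "max_size": 5,
            "timeout": 3600,
            "metadata": {"owner": "dev-team"}
        }
    }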
The ``max_size`` and the ``min_size`` fields, when specified, will be checked
against each other by the Senlin API. The API also checks if the specified


@@ -15,12 +15,12 @@
OSProfiler
==========

OSProfiler provides a tiny but powerful library that is used by
most (soon to be all) OpenStack projects and their Python clients. It
provides the ability to generate one trace per request that passes
through all involved services. This trace can then be extracted and used
to build a tree of calls, which can be quite handy for a variety of
reasons (for example, isolating cross-project performance issues).
More about OSProfiler:
https://docs.openstack.org/osprofiler/latest/
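
Assuming Senlin follows the common OSProfiler integration pattern, profiling
would typically be enabled through a ``[profiler]`` section in the service
configuration file; the option names below are the standard OSProfiler ones,
and the HMAC key is a placeholder that must also be supplied with each traced
request (for example via ``--os-profile SECRET_KEY`` on the client side)::

    [profiler]
    enabled = True
    trace_sqlalchemy = True
    hmac_keys = SECRET_KEY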


@@ -141,10 +141,10 @@ policy file, copy it to `/etc/senlin/policy.yaml` and then update it.
5. Create Senlin Database.
   Create the Senlin database using the :command:`senlin-db-recreate` script
   under the :file:`tools` subdirectory. Before calling the script, you need to
   edit it to customize the password you will use for the ``senlin`` user. You
   need to update this script with the <DB PASSWORD> entered in step 4.

   ::
@@ -153,7 +153,7 @@ policy file, copy it to `/etc/senlin/policy.yaml` and then update it.
6. Start the senlin engine and API services.
   You may need two consoles for the services, i.e. one for each service.

   ::


@@ -35,11 +35,11 @@ Senlin DB version
``senlin-manage db_version``
  Print out the db schema revision.

``senlin-manage db_sync``
  Sync the database up to the most recent version.
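
For example, a typical upgrade sequence might look like the following; the
configuration file path is illustrative::

  senlin-manage --config-file /etc/senlin/senlin.conf db_sync
  senlin-manage --config-file /etc/senlin/senlin.conf db_version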
FILES


@@ -10,91 +10,91 @@ it can be used by both users and developers.
1.1
---

- This is the initial version of the v1 API which supports microversions.
  The v1.1 API is identical to that of v1.0 except for the new support for
  microversion checking.

  A user can specify a header in the API request::

    OpenStack-API-Version: clustering <version>

  where the ``<version>`` is any valid API version supported. If such a
  header is not provided, the API behaves as if a request for version 1.0
  was received.
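
  As an illustration, the version could be requested explicitly when talking
  to the API directly; the endpoint URL is a placeholder::

    curl -s -H "X-Auth-Token: $TOKEN" \
         -H "OpenStack-API-Version: clustering 1.1" \
         http://<senlin-endpoint>/v1/clusters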
1.2
---

- Added ``cluster_collect`` API. This API takes a single parameter ``path``
  and interprets it as a JSON path for extracting node properties. Property
  values from all nodes are aggregated into a list and returned to users.
- Added ``profile_validate`` API. This API is provided to validate the spec
  of a profile without actually creating a profile object.
- Added ``policy_validate`` API. This API validates the spec of a policy
  without creating a policy object.
1.3
---

- Added ``cluster_replace_nodes`` API. This API enables users to replace the
  specified existing nodes with ones that were not members of any clusters.
1.4
---

- Added ``profile_type_ops`` API. This API returns a dictionary containing
  the operations and parameters supported by a specific profile type.
- Added ``node_operation`` API. This API enables users to trigger an
  operation on a node. The operation and its parameters are determined by
  the profile type (see the sketch after this list).
- Added ``cluster_operation`` API. This API enables users to trigger an
  operation on a cluster. The operation and its parameters are determined by
  the profile type.
- Added ``user`` query parameter for listing receivers.
- Added ``destroy_after_deletion`` parameter for deleting cluster members.
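
As a hypothetical illustration of ``node_operation``, a soft reboot of a node
backed by the ``os.nova.server`` profile might be requested with a body like
the one below; the URL path, operation name and parameter are assumptions
based on the profile type, not something defined by this history document::

    POST /v1/nodes/<node-id>/ops

    {
        "reboot": {
            "type": "SOFT"
        }
    }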
1.5
---

- Added ``support_status`` to profile type list.
- Added ``support_status`` to policy type list.
- Added ``support_status`` to profile type show.
- Added ``support_status`` to policy type show.
1.6
---

- Added ``profile_only`` parameter to cluster update request.
- Added ``check`` parameter to node recover request. When this parameter is
  specified, the engine will check if the node is active before performing
  a recover operation.
- Added ``check`` parameter to cluster recover request. When this parameter
  is specified, the engine will check if the nodes are active before
  performing a recover operation.
1.7
---

- Added ``node_adopt`` operation to node.
- Added ``node_adopt_preview`` operation to node.
- Added ``receiver_update`` operation to receiver.
- Added ``service_list`` API.
1.8
---

- Added ``force`` parameter to cluster delete request.
- Added ``force`` parameter to node delete request.