
Fix docs rendering, enforce instructions and template

This patch applies various documentation rendering fixes,
and enforces application of the instructions and of the
file naming template.

In addition, it adds a requirement to submit patches
related to a spec under specified Gerrit topics.

Change-Id: I36199cf78c30f2ee75c2d716b8919ceae2ab7c42
changes/74/643074/6
Roman Gorshunov, 2 years ago
commit c5064ef2eb
10 changed files with 206 additions and 206 deletions

1. .gitignore (+1, -0)
2. specs/approved/airship_multi_linux_distros.rst (+6, -14)
3. specs/approved/data_config_generator.rst (+109, -95)
4. specs/approved/divingbell_ansible_framework.rst (+2, -0)
5. specs/approved/drydock_support_bios_configuration.rst (+2, -6)
6. specs/approved/k8s_external_facing_api.rst (+1, -1)
7. specs/approved/pegleg_secrets.rst (+2, -2)
8. specs/approved/workflow_node-teardown.rst (+41, -52)
9. specs/instructions.rst (+38, -32)
10. specs/template.rst (+4, -4)

.gitignore (+1, -0)

@@ -5,3 +5,4 @@
/AUTHORS
/ChangeLog
.tox
.vscode/

specs/approved/multi-linux-distros.rst → specs/approved/airship_multi_linux_distros.rst


specs/approved/data_config_generator.rst (+109, -95)

@@ -150,34 +150,40 @@ Overall Architecture
- Raw rack information from plugin:
::
vlan_network_data:
oam:
subnet: 12.0.0.64/26
vlan: '1321'
- Rules to define gateway, ip ranges from subnet:
::
rule_ip_alloc_offset:
name: ip_alloc_offset
ip_alloc_offset:
default: 10
gateway: 1
The above rule specifies the IP offset to be considered when defining the IP
addresses for the gateway, reserved, and static IP ranges from the subnet pool.
So the IP range for 12.0.0.64/26 is: 12.0.0.65 ~ 12.0.0.126.
The rule "ip_alloc_offset" then helps to define additional information as follows:
- gateway: 12.0.0.65 (the first offset as defined by the field 'gateway')
- reserved ip ranges: 12.0.0.65 ~ 12.0.0.76 (the range is defined by adding
  "default" to the start of the IP range)
- static ip ranges: 12.0.0.77 ~ 12.0.0.126 (it follows the rule that we need
  to skip the first 10 IP addresses, as defined by "default")
- Intermediary YAML file information generated after applying the above rules
to the raw rack information:
::
network:
vlan_network_data:
@@ -192,13 +198,13 @@ Overall Architecture
static_end: 12.0.0.126 ----+
vlan: '1321'
--

- J2 templates for specifying oam network data: It represents the format in
  which the site manifests will be generated with values obtained from the
  intermediary YAML

::
---
schema: 'drydock/Network/v1'
@@ -230,12 +236,12 @@ Overall Architecture
end: {{ data['network']['vlan_network_data']['oam']['static_end'] }}
...
--

- OAM Network information in site manifests after applying the intermediary
  YAML to J2 templates:

::
---
schema: 'drydock/Network/v1'
@@ -267,7 +273,7 @@ Overall Architecture
end: 12.0.0.126
...
--
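The ip_alloc_offset arithmetic described earlier can be sketched with Python's ``ipaddress`` module. This is an illustrative reading of the rule, not Spyglass's actual implementation; the function name and exact boundary conventions are assumptions inferred from the example values (gateway 12.0.0.65, reserved up to 12.0.0.76, static from 12.0.0.77):

```python
import ipaddress

def allocate_ranges(subnet, gateway_offset=1, default_offset=10):
    """Illustrative sketch: derive gateway, reserved, and static ranges
    from a subnet using ip_alloc_offset-style rules (assumed semantics)."""
    hosts = list(ipaddress.ip_network(subnet).hosts())
    gateway = hosts[gateway_offset - 1]                # first usable address
    reserved = (hosts[0], hosts[default_offset + 1])   # skip "default" addresses
    static = (hosts[default_offset + 2], hosts[-1])    # remainder of the pool
    return gateway, reserved, static

gw, res, st = allocate_ranges("12.0.0.64/26")
print(gw, res[0], res[1], st[0], st[1])
# 12.0.0.65 12.0.0.65 12.0.0.76 12.0.0.77 12.0.0.126
```

Running it for 12.0.0.64/26 reproduces the example ranges listed above.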
Security impact
---------------
@@ -304,106 +310,114 @@ plugins.
A. Excel Based Data Source.
- Gather the following input files:

  1) Excel based site engineering package. This file contains detailed
     specifications covering IPMI, public IPs, private IPs, VLANs, site
     details, etc.
  2) Excel specification to aid parsing of the above Excel file. It contains
     details about the specific rows and columns in various sheets which
     contain the necessary information to build site manifests.
  3) Site specific configuration file containing additional configuration like
     proxy, BGP information, interface names, etc.
  4) Intermediary YAML file. In this case the site engineering package and
     Excel specification are not required.
B. Remote Data Source

- Gather the following input information:
  1) End-point configuration file containing credentials to enable its access.
     Each end-point type shall have its access governed by its respective
     plugin and associated configuration file.
  2) Site specific configuration file containing additional configuration like
     proxy, BGP information, interface names, etc. These will be used if the
     information extracted from the remote site is insufficient.
* Program execution
1. CLI Options:
+-----------------------------+-----------------------------------------------------------+
| -g, --generate_intermediary | Dump intermediary file from passed Excel and              |
|                             | Excel spec.                                               |
+-----------------------------+-----------------------------------------------------------+
| -m, --generate_manifests    | Generate manifests from the generated                     |
|                             | intermediary file.                                        |
+-----------------------------+-----------------------------------------------------------+
| -x, --excel PATH            | Path to engineering Excel file, to be passed              |
|                             | with generate_intermediary. The -s option is              |
|                             | mandatory with this option. Multiple engineering          |
|                             | files can be used. For example: -x file1.xls -x file2.xls |
+-----------------------------+-----------------------------------------------------------+
| -s, --exel_spec PATH        | Path to Excel spec, to be passed with                     |
|                             | generate_intermediary. The -x option is                   |
|                             | mandatory along with this option.                         |
+-----------------------------+-----------------------------------------------------------+
| -i, --intermediary PATH     | Path to intermediary file, to be passed                   |
|                             | with generate_manifests. The -g and -x options            |
|                             | are not required with this option.                        |
+-----------------------------+-----------------------------------------------------------+
| -d, --site_config PATH      | Path to the site specific YAML file [required]            |
+-----------------------------+-----------------------------------------------------------+
| -l, --loglevel INTEGER      | Loglevel NOTSET:0, DEBUG:10, INFO:20,                     |
|                             | WARNING:30, ERROR:40, CRITICAL:50 [default: 20]           |
+-----------------------------+-----------------------------------------------------------+
| -e, --end_point_config      | File containing end-point configurations like user-name,  |
|                             | password, certificates, URL, etc.                         |
+-----------------------------+-----------------------------------------------------------+
| --help                      | Show this message and exit.                               |
+-----------------------------+-----------------------------------------------------------+
2. Example:
1) Using Excel spec as input data source:
Generate Intermediary: ``spyglass -g -x <DesignSpec> -s <excel spec> -d <site-config>``
Generate Manifest & Intermediary: ``spyglass -mg -x <DesignSpec> -s <excel spec> -d <site-config>``
Generate Manifest with Intermediary: ``spyglass -m -i <intermediary>``
2) Using external data source as input:
Generate Manifest and Intermediary: ``spyglass -m -g -e<end_point_config> -d <site-config>``
Generate Manifest: ``spyglass -m -e<end_point_config> -d <site-config>``
.. note::
The end_point_config shall include attributes of the external data source that are
necessary for its access. Each external data source type shall have its own plugin to configure
its corresponding credentials.
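The option combinations above can be sketched with ``argparse`` as follows. This is illustrative only: it mirrors the documented options, but the real Spyglass CLI may be implemented with a different framework, and the helper name is an assumption:

```python
import argparse

def build_parser():
    # Illustrative parser mirroring the documented Spyglass options;
    # not the actual implementation.
    p = argparse.ArgumentParser(prog="spyglass")
    p.add_argument("-g", "--generate_intermediary", action="store_true",
                   help="Dump intermediary file from passed Excel and Excel spec.")
    p.add_argument("-m", "--generate_manifests", action="store_true",
                   help="Generate manifests from the generated intermediary file.")
    p.add_argument("-x", "--excel", action="append", metavar="PATH",
                   help="Path to engineering Excel file (repeatable).")
    p.add_argument("-s", "--exel_spec", metavar="PATH",
                   help="Path to Excel spec.")
    p.add_argument("-i", "--intermediary", metavar="PATH",
                   help="Path to intermediary file.")
    p.add_argument("-d", "--site_config", metavar="PATH",
                   help="Path to the site specific YAML file.")
    return p

args = build_parser().parse_args(
    ["-m", "-g", "-x", "file1.xls", "-s", "spec.yaml", "-d", "site.yaml"])
print(args.generate_manifests, args.excel)  # True ['file1.xls']
```

Note how `-x` is repeatable, matching the "multiple engineering files" behaviour in the option table.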
* Program output:
a) Site Manifests: As an initial release, the program shall output manifest files for
   the "airship-seaworthy" site. For example: baremetal, deployment, networks, pki, etc.
Reference: https://github.com/openstack/airship-treasuremap/tree/master/site/airship-seaworthy
b) Intermediary YAML: Containing aggregated site information generated from data sources that is
used to generate the above site manifests.
Future Work
============
1. Schema based manifest generation instead of Jinja2 templates. It shall
be possible to cleanly transition to this schema based generation keeping a unique
mapping between schema and generated manifests. Currently this is managed by
considering a mapping of j2 templates with schemas and site type.
2. UI editor for intermediary YAML
Alternatives
============
1. Schema based manifest generation instead of Jinja2 templates.
2. Develop the data source plugins as an extension to Pegleg.
Dependencies
============
1. Availability of a repository to store Jinja2 templates.
2. Availability of a repository to store generated manifests.
References
==========
None

specs/approved/divingbell_ansible_framework.rst (+2, -0)

@@ -60,6 +60,7 @@ A separate directory structure needs to be created for adding the playbooks.
Each Divingbell config can be a separate role within the playbook structure.
::
- playbooks/
- roles/
- sysctl/
@@ -83,6 +84,7 @@ With Divingbell DaemonSet running on each host mounted at ``hostPath``,
``hosts`` should be defined as given below within the ``master.yml``.
::
hosts: all
connection: chroot
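Putting the two fragments above together, a ``master.yml`` play could look like the following sketch. The role name and overall layout are assumptions based on the directory structure shown earlier, not text from the spec:

```yaml
# Illustrative only: a play targeting all hosts through the chroot
# connection plugin, invoking one Divingbell config as a role.
- hosts: all
  connection: chroot
  roles:
    - sysctl
```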


specs/approved/drydock_support_bios_configuration.rst (+2, -6)

@@ -193,14 +193,10 @@ Work Items
----------
- Update Hardware profile schema to support new attribute bios_setting
- Update Hardware profile objects
- Update Orchestrator action PrepareNodes to call OOB driver for BIOS
configuration
- Update Redfish OOB driver to support new action ConfigBIOS
- Add unit test cases
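As a sketch of the first two work items, a hardware profile carrying the new attribute might look like the fragment below. The keys under ``bios_setting`` are hypothetical examples; the actual schema and setting names are defined by this spec's implementation, not here:

```yaml
# Hypothetical fragment: bios_setting on a Drydock HardwareProfile.
schema: drydock/HardwareProfile/v1
metadata:
  name: example-profile
data:
  bios_setting:
    BootMode: Uefi
    ProcHyperthreading: Enabled
```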
Assignee(s):
@@ -215,8 +211,8 @@ Other contributors:
Dependencies
============
This spec depends on `Introduce Redfish based OOB Driver for Drydock <https://storyboard.openstack.org/#!/story/2003007>`_
story.
References
==========


specs/approved/k8s_external_facing_api.rst (+1, -1)

@@ -45,7 +45,7 @@ Impacted components
The following Airship components would be impacted by this solution:
#. Promenade - Maintenance of the chart for external facing Kubernetes API
   servers
Proposed change
===============


specs/approved/pegleg-secrets.rst → specs/approved/pegleg_secrets.rst


specs/approved/workflow_node-teardown.rst (+41, -52)

@@ -150,21 +150,26 @@ details:
#. Drain the Kubernetes node.
#. Clear the Kubernetes labels on the node.
#. Remove etcd nodes from their clusters (if impacted).
   - if the node being decommissioned contains etcd nodes, Promenade will
     attempt to gracefully have those nodes leave the etcd cluster.

#. Ensure that etcd cluster(s) are in a stable state.

   - Polls for status every 30 seconds up to the etcd-ready-timeout, or the
     cluster meets the defined minimum functionality for the site.
   - A new document: promenade/EtcdClusters/v1 that will specify details about
     the etcd clusters deployed in the site, including: identifiers,
     credentials, and thresholds for minimum functionality.
   - This process should ignore the node being torn down from any calculation
     of health

#. Shutdown the kubelet.

   - If this is not possible because the node is in a state of disarray such
     that it cannot schedule the daemonset to run, this step may fail, but
     should not hold up the process, as the Drydock dismantling of the node
     will shut the kubelet down.
Responses
~~~~~~~~~
@@ -173,11 +178,9 @@ All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
Indicates that all steps are successful.
- Failure: Code: 404, reason: NotFound
Indicates that the target node is not discoverable by Promenade.
- Failure: Code: 500, reason: DisassociateStepFailure
The details section should detail the successes and failures further. Any
@@ -223,16 +226,13 @@ All responses will be in the form of the Airship Status response.
Indicates that the drain node has successfully concluded, and that no pods
are currently running
- Failure: Status response, code: 400, reason: BadRequest
A request was made with parameters that cannot work - e.g. grace-period is
set to a value larger than the timeout value.
- Failure: Status response, code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Status response, code: 500, reason: DrainNodeError
There was a processing exception raised while trying to drain a node. The
@@ -263,11 +263,9 @@ All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
All labels have been removed from the specified Kubernetes node.
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 500, reason: ClearLabelsError
There was a failure to clear labels that prevented completion. The details
@@ -298,11 +296,9 @@ All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
All etcd nodes have been removed from the specified node.
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 500, reason: RemoveEtcdError
There was a failure to remove etcd from the target node that prevented
@@ -315,7 +311,7 @@ Promenade Check etcd
~~~~~~~~~~~~~~~~~~~~
Retrieves the current interpreted state of etcd.
GET /etcd-cluster-health-statuses?design_ref={the design ref}
Where the design_ref parameter is required for appropriate operation, and is in
the same format as used for the join-scripts API.
@@ -334,42 +330,40 @@ All responses will be in the form of the Airship Status response.
The status of each etcd in the site will be returned in the details section.
Valid values for status are: Healthy, Unhealthy
https://github.com/openstack/airship-in-a-bottle/blob/master/doc/source/api-conventions.rst#status-responses
.. code:: json

    { "...": "... standard status response ...",
      "details": {
        "errorCount": {{n}},
        "messageList": [
          { "message": "Healthy",
            "error": false,
            "kind": "HealthMessage",
            "name": "{{the name of the etcd service}}"
          },
          { "message": "Unhealthy",
            "error": false,
            "kind": "HealthMessage",
            "name": "{{the name of the etcd service}}"
          },
          { "message": "Unable to access Etcd",
            "error": true,
            "kind": "HealthMessage",
            "name": "{{the name of the etcd service}}"
          }
        ]
      }
      ...
    }
- Failure: Code: 400, reason: MissingDesignRef
Returned if the design_ref parameter is not specified
- Failure: Code: 404, reason: NotFound
Returned if the specified etcd could not be located
- Failure: Code: 500, reason: EtcdNotAccessible
Returned if the specified etcd responded with an invalid health response
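A client consuming the health status response shown above might tally unhealthy members as in the following sketch. Field names follow the example payload; the function name is illustrative and the standard status fields are elided:

```python
# Illustrative client-side check of the etcd health status response
# shape shown above (not Promenade's actual code).
def summarize_etcd_health(status):
    """Return (total member count, names of members not reporting Healthy)."""
    messages = status["details"]["messageList"]
    unhealthy = [m["name"] for m in messages
                 if m["message"] != "Healthy" or m["error"]]
    return len(messages), unhealthy

sample = {
    "details": {
        "errorCount": 1,
        "messageList": [
            {"message": "Healthy", "error": False,
             "kind": "HealthMessage", "name": "etcd-a"},
            {"message": "Unable to access Etcd", "error": True,
             "kind": "HealthMessage", "name": "etcd-b"},
        ],
    }
}
total, bad = summarize_etcd_health(sample)
print(total, bad)  # 2 ['etcd-b']
```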
@@ -400,11 +394,9 @@ All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
The kubelet has been successfully shutdown
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 500, reason: ShutdownKubeletError
The specified node's kubelet fails to shutdown. The details section of the
@@ -433,17 +425,14 @@ All responses will be in the form of the Airship Status response.
- Success: Code: 200, reason: Success
The specified node has been removed from the Kubernetes cluster.
- Failure: Code: 404, reason: NotFound
The specified node is not discoverable by Promenade
- Failure: Code: 409, reason: Conflict
The specified node cannot be deleted due to checks that the node is
drained/cordoned and has no labels (other than possibly
`promenade-decomission: enabled`).
- Failure: Code: 500, reason: DeleteNodeError
The specified node cannot be removed from the cluster due to an error from


specs/instructions.rst (+38, -32)

@@ -20,6 +20,12 @@ Instructions
a short explanation.
- New specs for review should be placed in the ``approved`` subfolder, where
they will undergo review and approval in Gerrit_.
- Test if the spec file renders correctly in a web browser by running the
  ``make docs`` command and opening ``doc/build/html/index.html`` in a
  web browser. Ubuntu needs the following packages to be installed::

      apt-get install -y make tox gcc python3-dev
- Specs that have finished implementation should be moved to the
``implemented`` subfolder.
@@ -50,38 +56,38 @@ Use the following guidelines to determine the category to use for a document:
1) For new functionality and features, the best choice for a category is to
match a functional duty of Airship.
site-definition
Parts of the platform that support the definition of a site, including
management of the yaml definitions, document authoring and translation, and
the collation of source documents.
genesis
Used for the steps related to preparation and deployment of the genesis node
of an Airship deployment.
baremetal
Those changes to Airflow that provide for the lifecycle of bare metal
components of the system - provisioning, maintenance, and teardown. This
includes booting, hardware and network configuration, operating system, and
other host-level management
k8s
For functionality that is about interfacing with Kubernetes directly, other
than the initial setup that is done during genesis.
software
Functionality that is related to the deployment or redeployment of workload
onto the Kubernetes cluster.
workflow
Changes to existing workflows to provide new functionality and creation of
new workflows that span multiple other areas (e.g. baremetal, k8s, software),
or those changes that are new arrangements of existing functionality in one
or more of those other areas.
administration
Security, logging, auditing, monitoring, and those things related to site
administrative functions of the Airship platform.
2) For specs that are not feature focused, the component of the system may
be the best choice for a category, e.g. ``shipyard``, ``armada`` etc...


specs/template.rst (+4, -4)

@@ -12,7 +12,7 @@
Blueprints are written using ReSTructured text.
Add *index* directives to help others find your spec by keywords. E.g.::
.. index::
single: template
@@ -27,9 +27,9 @@ Introduction paragraph -- What is this blueprint about?
Links
=====
Include pertinent links to where the work is being tracked (e.g. Storyboard ID
and Gerrit topics), as well as any other foundational information that may lend
clarity to this blueprint
Problem description
===================

