[US394832] Convert to sphinx docs

Change-Id: I6ff0ce53ccdac3083d458b7f366f5c97b3af6bb5
Craig Anderson 2018-03-14 20:19:44 +00:00
parent 82254b99e1
commit b4c7160aa6
3 changed files with 329 additions and 172 deletions

README.md (186 lines changed)

@@ -1,177 +1,19 @@
Divingbell
==========
..
Copyright 2018 AT&T Intellectual Property.
All Rights Reserved.
What is it?
-----------
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
Divingbell is a lightweight solution for:
1. Bare metal configuration management for a few very targeted use cases
2. Bare metal package manager orchestration
http://www.apache.org/licenses/LICENSE-2.0
What problems does it solve?
----------------------------
The needs identified for Divingbell were:
1. To plug gaps in day 1 tools (e.g., drydock) for node configuration
2. To provide a day 2 solution for managing these configurations going forward
3. [Future] To provide a day 2 solution for system level host patching
Design and Implementation
-------------------------
Divingbell daemonsets run as privileged containers which mount the host
filesystem and chroot into that filesystem to enforce configuration and package
state. (The [diving bell](http://bit.ly/2hSXlai) analogue can be thought of as something that descends
into the deeps to facilitate work done down below the surface.)
We use the daemonset construct as a way of getting a copy of each pod on every
node, but the work done by this chart's pods behaves like an event-driven job.
In practice this means that the chart internals run once on pod startup,
followed by an infinite sleep such that the pods always report a "Running"
status that k8s recognizes as the healthy (expected) result for a daemonset.
In order to keep configuration as isolated as possible from other systems that
manage common files like /etc/fstab and /etc/sysctl.conf, Divingbell daemonsets
manage all of their configuration in separate files (e.g. by writing unique
files to /etc/sysctl.d or defining unique Systemd units) to avoid potential
conflicts.
To maximize robustness and utility, the daemonsets in this chart are made to be
idempotent. In addition, they are designed to implicitly restore the original
system state after previously defined states are undefined. (e.g., removing a
previously defined mount from the yaml manifest, with no record of the original
mount in the updated manifest).
Lifecycle management
--------------------
This chart's daemonsets will be spawned by Armada. They run in an event-driven
fashion: the idempotent automation for each daemonset will only re-run when
Armada spawns/respawns the container, or if information relevant to the host
changes in the configmap.
For upgrades, a decision was taken not to use any of the built-in kubernetes
update strategies such as RollingUpdate. Instead, we are putting this on
Armada to handle the orchestration of how to do upgrades (e.g., rack by rack).
Daemonset configs
-----------------
### sysctl ###
Used to manage host level sysctl tunables. Ex:
``` yaml
conf:
  sysctl:
    net/ipv4/ip_forward: 1
    net/ipv6/conf/all/forwarding: 1
```
### mounts ###
Used to manage host level mounts (outside of those in /etc/fstab). Ex:
``` yaml
conf:
  mounts:
    mnt:
      mnt_tgt: /mnt
      device: tmpfs
      type: tmpfs
      options: 'defaults,noatime,nosuid,nodev,noexec,mode=1777,size=1024M'
```
### ethtool ###
Used to manage host level NIC tunables. Ex:
``` yaml
conf:
  ethtool:
    ens3:
      tx-tcp-segmentation: off
      tx-checksum-ip-generic: on
```
### packages ###
Not implemented
### users ###
Not implemented
Node specific configurations
----------------------------
Although we expect these daemonsets to run indiscriminately on all nodes in the
infrastructure, we also expect that different nodes will need to be given a
different set of data depending on the node role/function. This chart supports
establishing value overrides for nodes with specific label value pairs and for
targeting nodes with specific hostnames. The overridden configuration is merged
with the normal config data, with the override data taking precedence.
The chart will then generate one daemonset for each host and label override, in
addition to a default daemonset for which no overrides are applied.
Each daemonset generated will also exclude from its scheduling criteria all
other hosts and labels defined in other overrides for the same daemonset, to
ensure that there is no overlap of daemonsets (i.e., one and only one daemonset
of a given type for each node).
Overrides example with sysctl daemonset:
``` yaml
conf:
  sysctl:
    net.ipv4.ip_forward: 1
    net.ipv6.conf.all.forwarding: 1
    fs.file-max: 9999
  overrides:
    divingbell_sysctl:
      labels:
      - label:
          key: compute_type
          values:
          - "dpdk"
          - "sriov"
        conf:
          sysctl:
            net.ipv4.ip_forward: 0
      - label:
          key: another_label
          values:
          - "another_value"
        conf:
          sysctl:
            net.ipv6.conf.all.forwarding: 0
      hosts:
      - name: superhost
        conf:
          sysctl:
            net.ipv4.ip_forward: 0
            fs.file-max: 12345
      - name: superhost2
        conf:
          sysctl:
            fs.file-max: 23456
```
Caveats:
1. For a given node, at most one override operation applies. If a node meets
override criteria for both a label and a host, then the host overrides take
precedence and are used for that node. The label overrides are not used in this
case. This is especially important to note if you are defining new host
overrides for a node that is already consuming matching label overrides, as
defining a host override would make those label overrides no longer apply.
2. In the event of label conflicts, the last applicable label override defined
takes precedence. In this example, overrides defined for "another_label" would
take precedence and be applied to nodes that contained both of the defined
labels.
Recorded Demo
-------------
A recorded demo of using divingbell can be found [here](https://asciinema.org/a/beJQZpRPdOctowW0Lxkxrhz17).
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
# Divingbell
Documentation can be found [here](https://divingbell.readthedocs.io).

docs/source/conf.py (new file, 126 lines)

@@ -0,0 +1,126 @@
# -*- coding: utf-8 -*-
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import sphinx_rtd_theme
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.todo',
    'sphinx.ext.viewcode',
]
# Add any paths that contain templates here, relative to this directory.
# templates_path = []
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Divingbell'
copyright = u'2018 AT&T Intellectual Property.'
author = u'Divingbell Authors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'0.1.0'
# The full version, including alpha/beta/rc tags.
release = u'0.1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'ucpintdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

docs/source/index.rst (new file, 189 lines)

@@ -0,0 +1,189 @@
..
Copyright 2018 AT&T Intellectual Property.
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Divingbell
==========
What is it?
-----------
Divingbell is a lightweight solution for:
1. Bare metal configuration management for a few very targeted use cases
2. Bare metal package manager orchestration
What problems does it solve?
----------------------------
The needs identified for Divingbell were:
1. To plug gaps in day 1 tools (e.g., Drydock) for node configuration
2. To provide a day 2 solution for managing these configurations going forward
3. [Future] To provide a day 2 solution for system level host patching
Design and Implementation
-------------------------
Divingbell daemonsets run as privileged containers which mount the host
filesystem and chroot into that filesystem to enforce configuration and package
state. (The `diving bell <http://bit.ly/2hSXlai>`_ analogue can be thought of as something that descends
into the deeps to facilitate work done down below the surface.)
We use the daemonset construct as a way of getting a copy of the pod onto every
node, but the work done by this chart's pods behaves like an event-driven job.
In practice this means that the chart internals run once on pod startup,
followed by an infinite sleep such that the pods always report a "Running"
status that k8s recognizes as the healthy (expected) result for a daemonset.
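
As a purely illustrative sketch of this pattern (not the chart's actual
template; the image, names, and command below are placeholders), a daemonset
built this way might look like::

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: divingbell-example            # hypothetical name
  spec:
    selector:
      matchLabels:
        app: divingbell-example
    template:
      metadata:
        labels:
          app: divingbell-example
      spec:
        volumes:
        - name: host-root
          hostPath:
            path: /                     # mount the host filesystem
        containers:
        - name: apply-state
          image: ubuntu:16.04           # placeholder image
          securityContext:
            privileged: true            # required to chroot into the host
          volumeMounts:
          - name: host-root
            mountPath: /mnt/host
          # Enforce the desired state once (placeholder command), then sleep
          # forever so the pod keeps reporting "Running", which k8s treats as
          # healthy for a daemonset.
          command:
          - /bin/sh
          - -c
          - chroot /mnt/host /bin/true && sleep infinity
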
In order to keep configuration as isolated as possible from other systems that
manage common files like /etc/fstab and /etc/sysctl.conf, Divingbell daemonsets
manage all of their configuration in separate files (e.g. by writing unique
files to /etc/sysctl.d or defining unique Systemd units) to avoid potential
conflicts.
To maximize robustness and utility, the daemonsets in this chart are made to be
idempotent. In addition, they are designed to implicitly restore the original
system state when a previously defined state is undefined (e.g., when a
previously defined mount is removed from the YAML manifest, even though the
updated manifest keeps no record of the original mount).
Lifecycle management
--------------------
This chart's daemonsets will be spawned by Armada. They run in an event-driven
fashion: the idempotent automation for each daemonset will only re-run when
Armada spawns/respawns the container, or if information relevant to the host
changes in the configmap.
For upgrades, a decision was taken not to use any of the built-in Kubernetes
update strategies such as RollingUpdate. Instead, we are putting this on
Armada to handle the orchestration of how to do upgrades (e.g., rack by rack).
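
One common way to disable automated rollouts on a daemonset is the OnDelete
update strategy, which leaves pod replacement entirely to whoever deletes the
pods (here, Armada). A minimal sketch, assuming a plain Kubernetes daemonset
spec (the chart may express this differently)::

  spec:
    updateStrategy:
      # Pods are not replaced automatically when the spec changes; an external
      # orchestrator decides when each pod is deleted and recreated.
      type: OnDelete
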
Daemonset configs
-----------------
sysctl
^^^^^^
Used to manage host level sysctl tunables. Ex::

  conf:
    sysctl:
      net/ipv4/ip_forward: 1
      net/ipv6/conf/all/forwarding: 1
mounts
^^^^^^
Used to manage host level mounts (outside of those in /etc/fstab). Ex::

  conf:
    mounts:
      mnt:
        mnt_tgt: /mnt
        device: tmpfs
        type: tmpfs
        options: 'defaults,noatime,nosuid,nodev,noexec,mode=1777,size=1024M'
ethtool
^^^^^^^
Used to manage host level NIC tunables. Ex::

  conf:
    ethtool:
      ens3:
        tx-tcp-segmentation: off
        tx-checksum-ip-generic: on
packages
^^^^^^^^
Not implemented
users
^^^^^
Not implemented
Node specific configurations
----------------------------
Although we expect these daemonsets to run indiscriminately on all nodes in the
infrastructure, we also expect that different nodes will need to be given a
different set of data depending on the node role/function. This chart supports
establishing value overrides for nodes with specific label value pairs and for
targeting nodes with specific hostnames. The overridden configuration is merged
with the normal config data, with the override data taking precedence.
The chart will then generate one daemonset for each host and label override, in
addition to a default daemonset for which no overrides are applied.
Each daemonset generated will also exclude from its scheduling criteria all
other hosts and labels defined in other overrides for the same daemonset, to
ensure that there is no overlap of daemonsets (i.e., one and only one daemonset
of a given type for each node).
Overrides example with sysctl daemonset::

  conf:
    sysctl:
      net.ipv4.ip_forward: 1
      net.ipv6.conf.all.forwarding: 1
      fs.file-max: 9999
    overrides:
      divingbell_sysctl:
        labels:
        - label:
            key: compute_type
            values:
            - "dpdk"
            - "sriov"
          conf:
            sysctl:
              net.ipv4.ip_forward: 0
        - label:
            key: another_label
            values:
            - "another_value"
          conf:
            sysctl:
              net.ipv6.conf.all.forwarding: 0
        hosts:
        - name: superhost
          conf:
            sysctl:
              net.ipv4.ip_forward: 0
              fs.file-max: 12345
        - name: superhost2
          conf:
            sysctl:
              fs.file-max: 23456
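
For instance, the default daemonset generated from the example above would
need to avoid every node claimed by a label or host override. A hypothetical
sketch of how that exclusion might be expressed with node affinity
(illustrative only, not the chart's actual rendered output)::

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # exclude nodes claimed by the label overrides
          - key: compute_type
            operator: NotIn
            values: ["dpdk", "sriov"]
          - key: another_label
            operator: NotIn
            values: ["another_value"]
          # exclude nodes claimed by the host overrides
          - key: kubernetes.io/hostname
            operator: NotIn
            values: ["superhost", "superhost2"]
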
Caveats (a worked example of the resulting merges follows this list):
1. For a given node, at most one override operation applies. If a node meets
override criteria for both a label and a host, then the host overrides take
precedence and are used for that node. The label overrides are not used in this
case. This is especially important to note if you are defining new host
overrides for a node that is already consuming matching label overrides, as
defining a host override would make those label overrides no longer apply.
2. In the event of label conflicts, the last applicable label override defined
takes precedence. In this example, overrides defined for "another_label" would
take precedence and be applied to nodes that contained both of the defined
labels.
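
To make the precedence rules concrete: a node carrying only the label
compute_type=dpdk would receive the base conf merged with the matching label
override::

  net.ipv4.ip_forward: 0             # overridden by the compute_type label override
  net.ipv6.conf.all.forwarding: 1    # inherited from the base conf
  fs.file-max: 9999                  # inherited from the base conf

while the node named superhost would use its host override and ignore any
matching label overrides::

  net.ipv4.ip_forward: 0             # from the host override
  net.ipv6.conf.all.forwarding: 1    # inherited from the base conf
  fs.file-max: 12345                 # from the host override
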
Recorded Demo
-------------
A recorded demo of using Divingbell can be found `here <https://asciinema.org/a/beJQZpRPdOctowW0Lxkxrhz17>`_.