Updating DevRef docs

Removing superfluous documentation and changing the doc structure to
better reflect the contents of the documentation.

Change-Id: I6fa798b9c6fc542ef05c954acae8641f69f5cb2b
Larry Rensing 2017-07-03 15:01:25 -05:00
parent 7bc3d6d5fe
commit f7ce1912f7
12 changed files with 9 additions and 325 deletions


@ -1,38 +0,0 @@
Common Conditionals
-------------------

The OpenStack-Helm charts make the following conditionals available
across all charts. They can be set with Helm at install or upgrade
time, as shown below.
Developer Mode
~~~~~~~~~~~~~~

::

    helm install local/chart --set development.enabled=true
The development mode flag should be used by any chart that needs to
behave differently on a developer's laptop than in a production-like
deployment, or that has resources which would be difficult to spin up
in a small environment. A chart could, for instance, define the
following ``development:`` override to set ``foo`` to ``bar`` in a
development environment, triggered by setting the ``enabled`` flag to
``true``.
::

    development:
      enabled: false
      foo: bar
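A chart template can then branch on this flag; the following is a
minimal sketch of that pattern (the ``foo`` key is purely illustrative
and mirrors the override above):

::

    {{- if .Values.development.enabled }}
    foo: {{ .Values.development.foo }}
    {{- end }}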
Resources
~~~~~~~~~

::

    helm install local/chart --set resources.enabled=true
Resource requests and limits can be turned on and off; by default, they
are off. Setting ``resources.enabled`` to ``true`` deploys Kubernetes
resources with resource requests and limits applied.
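A minimal sketch of the corresponding ``values.yaml`` layout follows
(the component name ``api`` and the figures are illustrative; see the
Resource Limits section below for a real example from the Nova chart):

::

    resources:
      enabled: false
      api:
        requests:
          memory: "128Mi"
          cpu: "100m"
        limits:
          memory: "256Mi"
          cpu: "500m"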


@ -1,18 +0,0 @@
Getting started
===============

Contents:

.. toctree::
   :maxdepth: 2

   values
   overrides
   pod-disruption-budgets
   replicas
   images
   resources
   labels
   endpoints
   upgrades
   conditionals


@ -1,102 +0,0 @@
Labels
------

This project uses ``nodeSelectors`` as well as ``podAntiAffinity`` rules
to ensure resources land in the proper place within Kubernetes. Today,
OpenStack-Helm employs four labels:

- ceph-storage: enabled
- openstack-control-plane: enabled
- openstack-compute-node: enabled
- openvswitch: enabled
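These labels are applied to nodes by the operator, for example with
``kubectl`` (the node name below is illustrative):

::

    kubectl label nodes node-1 openstack-control-plane=enabled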
NOTE: The ``openvswitch`` label applies to both
``openstack-control-plane`` and ``openstack-compute-node`` nodes.
Ideally, the ``openvswitch`` label could be eliminated by selecting
nodes matching either ``openstack-control-plane`` or
``openstack-compute-node``, but Kubernetes ``nodeSelectors`` do not
support that logic. As a result, a third label that spans both sets of
hosts is required, which in this case is ``openvswitch``. The Open
vSwitch service must run on both control plane and compute nodes: it
provides connectivity for the DHCP, L3, and metadata services, which
run as part of the control plane, as well as tenant connectivity, which
runs as part of the compute node infrastructure.
Labels are of course definable and overridable by the chart operators.
Labels are defined in charts by using a ``labels:`` section, which is a
common convention that defines both a selector and a value:
::

    labels:
      node_selector_key: openstack-control-plane
      node_selector_value: enabled
In some cases, such as with the Neutron chart, a chart may need to
define more than one label. In cases such as this, each element should
be articulated under the ``labels:`` section, nesting where appropriate:
::

    labels:
      # ovs is a special case, requiring a special
      # label that can apply to both control hosts
      # and compute hosts, until we get more sophisticated
      # with our daemonset scheduling
      ovs:
        node_selector_key: openvswitch
        node_selector_value: enabled
      agent:
        dhcp:
          node_selector_key: openstack-control-plane
          node_selector_value: enabled
        l3:
          node_selector_key: openstack-control-plane
          node_selector_value: enabled
        metadata:
          node_selector_key: openstack-control-plane
          node_selector_value: enabled
      server:
        node_selector_key: openstack-control-plane
        node_selector_value: enabled
These labels should be leveraged by ``nodeSelector`` definitions in
charts for all resources, including jobs:
::

    ...
    spec:
      nodeSelector:
        {{ .Values.labels.node_selector_key }}: {{ .Values.labels.node_selector_value }}
      containers:
    ...
In some cases, especially with infrastructure components, it is
necessary for the chart developer to provide scheduling instruction to
Kubernetes to help ensure proper resiliency. The most common examples
employed today are podAntiAffinity rules, such as those used in the
``mariadb`` chart. These should be placed on all foundational elements
so that Kubernetes will not only disperse resources for resiliency, but
also allow multi-replica installations to deploy successfully into a
single host environment:
::

    # alanmeadows: this soft requirement allows single
    # host deployments to spawn several mariadb containers
    # but in a larger environment, would attempt to spread
    # them out
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values: ["mariadb"]
            topologyKey: kubernetes.io/hostname
          weight: 10


@ -1,6 +0,0 @@
Chart Overrides
===============

This document covers Helm overrides and the OpenStack-Helm approach to
them. For more information on Helm overrides in general, see the Helm
`Values Documentation <https://github.com/kubernetes/helm/blob/master/docs/charts.md#values-files>`__.


@ -1,31 +0,0 @@
Replicas
--------

All charts must provide replica definitions and leverage those in the
Kubernetes manifests. This allows site operators to tune the replica
counts at install or when upgrading. Each chart should deploy with
multiple replicas by default to ensure that production deployments are
treated as first class citizens, and that services are tested with
multiple replicas more frequently during development and testing.
Developers wishing to deploy minimal environments can enable the
``development`` mode override, which should enforce only one replica per
component.
The convention today in OpenStack-Helm is to define a ``replicas:``
section for the chart, where each component being deployed has its own
tunable value.
For example, the ``glance`` chart provides the following replicas in
``values.yaml``:

::

    replicas:
      api: 2
      registry: 2
An operator can override these on ``install`` or ``upgrade``:

::

    $ helm install local/glance --set replicas.api=3,replicas.registry=3
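Inside the chart, the Deployment (or similar) template then consumes
the value directly; a minimal sketch, mirroring the ``api`` key above:

::

    spec:
      replicas: {{ .Values.replicas.api }}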


@ -1,62 +0,0 @@
Resource Limits
---------------

Resource limits should be defined for all charts within OpenStack-Helm.
The convention is to leverage a ``resources:`` section within
``values.yaml``, using an ``enabled`` setting that defaults to ``false``
but can be turned on by the operator at install or upgrade time. These
definitions specify the requests and limits (memory and CPU) for each
deployed component. For example, from the Nova chart ``values.yaml``:
::

    resources:
      enabled: false
      nova_compute:
        requests:
          memory: "124Mi"
          cpu: "100m"
        limits:
          memory: "1024Mi"
          cpu: "2000m"
      nova_libvirt:
        requests:
          memory: "124Mi"
          cpu: "100m"
        limits:
          memory: "1024Mi"
          cpu: "2000m"
      nova_api_metadata:
        requests:
          memory: "124Mi"
          cpu: "100m"
        limits:
          memory: "1024Mi"
          cpu: "2000m"
      ...
These resource definitions are then applied to the appropriate
component when the ``enabled`` flag is set. For instance, the following
``nova_compute`` daemonset has the requests and limits values applied
from ``.Values.resources.nova_compute``:
::

    {{- if .Values.resources.enabled }}
    resources:
      requests:
        memory: {{ .Values.resources.nova_compute.requests.memory | quote }}
        cpu: {{ .Values.resources.nova_compute.requests.cpu | quote }}
      limits:
        memory: {{ .Values.resources.nova_compute.limits.memory | quote }}
        cpu: {{ .Values.resources.nova_compute.limits.cpu | quote }}
    {{- end }}
When a chart developer doesn't know what resource limits or requests to
apply to a new component, they can deploy the component locally and
examine resource utilization using tools like WeaveScope. The resource
limits may not be perfect on initial submission, but over time and with
community contributions, they can be refined to reflect reality.
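Aside from WeaveScope, a quick way to sample actual usage is
``kubectl top`` (this assumes Heapster or another metrics add-on is
running in the cluster; the namespace below is illustrative):

::

    $ kubectl top pod --namespace=openstack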


@ -1,53 +0,0 @@
Default Values
--------------

Two major philosophies guide the OpenStack-Helm values approach. It is
important that new chart developers understand the ``values.yaml``
approach OpenStack-Helm takes within each of its charts, to ensure that
all charts are both consistent and remain a joy to work with.
The first philosophy to understand is that all charts should be
independently installable and should not require a parent chart. This
means that the values file in each chart should be self-contained. The
project avoids using Helm globals and parent charts as requirements for
capturing and feeding environment-specific overrides into subcharts. An
example of a single site definition YAML that can be source controlled
and used as ``--values`` input to all OpenStack-Helm charts, to maintain
overrides in one testable place, is forthcoming. Currently, Helm does
not support selecting one section of a larger override file's YAML
namespace. Ideally, the project seeks native Helm support for
``helm install local/keystone --values=environment.yaml:keystone``,
where ``environment.yaml`` is the operator's chart-wide environment
definition and ``keystone`` is the section of ``environment.yaml`` that
would be fed to the keystone chart during install as overrides.
Standard YAML anchors can be used to duplicate common elements such as
the ``endpoints`` sections. At the time of writing, operators can use a
temporary approach like
`values.py <https://github.com/att-comdev/openstack-helm/blob/master/helm-toolkit/utils/values/values.py>`__
to chunk up a single override YAML file as input to the various
individual charts. Overrides, just like the templates themselves,
should be source controlled and tested, especially for operators
running charts at scale. This project will continue to examine efforts
such as `helm-value-store <https://github.com/skuid/helm-value-store>`__
and solutions in the vein of
`helmfile <https://github.com/roboll/helmfile>`__. Another compelling
project that seems to address the needs of orchestrating multiple
charts and managing site-specific overrides is
`Landscape <https://github.com/Eneco/landscaper>`__.
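As a sketch of the YAML-anchor approach mentioned above (the key names
are purely illustrative and do not reflect the actual OpenStack-Helm
``endpoints`` layout):

::

    common_endpoints: &common_endpoints
      identity:
        host: keystone-api
        port: 5000

    keystone:
      endpoints: *common_endpoints

    glance:
      endpoints: *common_endpoints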
The second philosophy is that the values files should be consistent
across all charts, including charts in core, infra, and add-ons. This
provides a consistent way for operators to override settings, such as
enabling developer mode, defining resource limitations, and customizing
the actual OpenStack configuration within chart templates, without
having to guess how a particular chart developer has laid out their
``values.yaml``. Various macros in the ``helm-toolkit`` chart also
depend on the ``values.yaml`` of all charts being structured a certain
way.
Finally, where charts reference connectivity information for other
services, sane defaults should be provided. In cases where these
services are provided by OpenStack-Helm itself, the defaults should
assume that the user will use the OpenStack-Helm charts for those
services, but should also allow those values to be overridden if the
operator has the services deployed externally.
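For example, an operator pointing a chart at an externally deployed
identity service might override the relevant endpoint at install time.
The key path below is hypothetical and shown only to illustrate the
mechanism; consult the chart's ``values.yaml`` for the real layout:

::

    $ helm install local/glance --set endpoints.identity.host=keystone.example.com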


@ -1,9 +1,12 @@
-Helm development
-================
+Developer References
+====================
 Contents:
 .. toctree::
    :maxdepth: 2
-   getting-started/index
+   pod-disruption-budgets
+   images
+   endpoints
+   upgrades


@ -1,15 +1,6 @@
-=====================
-Kubernetes Operations
-=====================
-Init-Containers
-===============
-Jobs
-====
 Pod Disruption Budgets
-======================
+----------------------
 OpenStack-Helm leverages PodDisruptionBudgets to enforce quotas
 that ensure that a certain number of replicas of a pod are available
 at any given time. This is particularly important in the case when a Kubernetes
@ -31,4 +22,4 @@ conflict with other values that have been provided if an operator chooses to
 leverage Rolling Updates for deployments. In the case where an
 operator defines a ``maxUnavailable`` and ``maxSurge`` within an update strategy
 that is higher than a ``minAvailable`` within a pod disruption budget,
 a scenario may occur where pods fail to be evicted from a deployment.
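For reference, a minimal PodDisruptionBudget of the kind described
above might look like the following (the name, label, and
``minAvailable`` value are illustrative):

::

    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: example-api
    spec:
      minAvailable: 2
      selector:
        matchLabels:
          app: example-api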