Add initial docs

Change-Id: I26b511c57ba24532688690e44ae183a9151e52a9
Michal Nasiadka 2024-03-14 07:25:01 +01:00 committed by sd109
parent 551230cb22
commit bc3c715d64
10 changed files with 334 additions and 187 deletions


@@ -65,3 +65,4 @@
templates:
- openstack-cover-jobs-magnum
- openstack-python3-jobs-magnum
- publish-openstack-docs-pti


@@ -2,199 +2,24 @@
magnum-capi-helm
===============================
OpenStack Magnum driver using Helm to create k8s clusters
with Cluster API.
The driver uses `capi-helm-charts <https://github.com/stackhpc/capi-helm-charts>`_
to create the k8s resources needed to provision a k8s cluster using
Cluster API, including various useful add-ons like a CNI and a monitoring
stack.
Note, the above Helm charts are intended to be
a way to share a reference method to create K8s
on OpenStack. The charts are not expected or
intended to be specific to Magnum. The hope is
they can also be used by ArgoCD, Flux or Azimuth
to create k8s clusters on OpenStack.
Work on this driver started upstream around October 2021.
After failing to get merged during Bobcat,
we created this downstream repo as a stop-gap to help
those wanting to use this driver now.
https://specs.openstack.org/openstack/magnum-specs/specs/bobcat/clusterapi-driver.html
Installation and Dependencies
=============================
For a kolla-ansible deployment, you can follow `this <https://stackhpc-kayobe-config.readthedocs.io/en/stackhpc-yoga/configuration/magnum-capi.html>`__ guide.
If you install this Python package within your Magnum virtual environment,
it should be picked up by Magnum::
git clone https://github.com/stackhpc/magnum-capi-helm.git
cd magnum-capi-helm
pip install -e .
We currently run the unit tests against the 2023.1 version of Magnum.
The driver requires access to a Cluster API management cluster.
For more information, please see:
https://cluster-api.sigs.k8s.io/user/quick-start
To access the above Cluster API management cluster,
you need to configure where the kubeconfig file
lives::
[capi_helm]
kubeconfig_file = /etc/magnum/kubeconfig
To create a cluster, first you will need an image that
has been built to include kubernetes.
There are community-maintained Packer build pipelines here:
https://image-builder.sigs.k8s.io/capi/capi.html
Or you can grab prebuilt images from our `azimuth image releases <https://github.com/stackhpc/azimuth-images/releases/latest>`__.
Images are available in the `manifest.json` file, and are named in the format `ubuntu-<ubuntu release>-<kube version>-<date and time of build>`.
The above image needs to have the correct os-distro
property set when uploaded to Glance. For example::
curl -fo ubuntu.qcow2 'https://object.arcus.openstack.hpc.cam.ac.uk/azimuth-images/ubuntu-jammy-kube-v1.28.3-231030-1102.qcow2?AWSAccessKeyId=c5bd0fa15bae4e08b305a52aac97c3a6&Expires=1730200795&Signature=gs9Fk7y06cpViQHP04TmHDtmkWE%3D'
openstack image create ubuntu-jammy-kube-v1.28.3 \
--file ubuntu.qcow2 \
--disk-format qcow2 \
--container-format bare \
--public
openstack image set ubuntu-jammy-kube-v1.28.3 --os-distro ubuntu --os-version 22.04
Finally, this means you can now create a template, and then a cluster,
get the kubeconfig to access it, then run Sonobuoy to test it,
doing something like this::
openstack coe cluster template create new_driver \
--coe kubernetes \
--label octavia_provider=ovn \
--image $(openstack image show ubuntu-jammy-kube-v1.28.3 -c id -f value) \
--external-network public \
--master-flavor ds2G20 \
--flavor ds2G20 \
--public \
--master-lb-enabled
openstack coe cluster create devstacktest \
--cluster-template new_driver \
--master-count 1 \
--node-count 2
openstack coe cluster list
mkdir -p ~/clusters/devstacktest
cd ~/clusters/devstacktest
openstack coe cluster config devstacktest
export KUBECONFIG=~/clusters/devstacktest/config
kubectl get nodes
sonobuoy run --mode quick --wait
DevStack Setup
==============
Do you want to try this driver in DevStack?
Please try our setup script in this repo:
`devstack/contrib/new-devstack.sh`
The above DevStack script includes creating a k3s-based
Cluster API management cluster.
Features
========
The driver currently supports create, delete and upgrade operations,
as well as updates to node groups and their sizes.
The CAPI Helm charts are currently being tested
with K8s 1.26, 1.27 and 1.28:
https://github.com/stackhpc/capi-helm-charts/blob/main/.github/workflows/ensure-capi-images.yaml#L9
The driver respects the following cluster and template properties:
* image_id
* keypair
* fixed_network, fixed_subnet (if missing, a new one is created)
* external_network_id
* dns_nameserver
The driver supports the following labels:
* csi_cinder_availability_zone: default is nova, operators can configure the default in magnum.conf
* monitoring_enabled: default is off, change to "true" to enable
* kube_dashboard_enabled: default is on, change to "false" to disable
* octavia_provider: default is "amphora", ovn is also an option
* fixed_subnet_cidr: default is "10.0.0.0/24"
* extra_network_name: default is "", change to the name of an additional network,
  which can be useful if using Manila with the CephFS Native driver.
Currently, all clusters use the Calico CNI. While Cilium is also supported
in the Helm charts, it is not currently regularly tested.
We have found that cluster upgrades with Cluster API don't work well without
using a load balancer, even with a single-node control plane,
so we currently ignore the "master-lb-enabled" flag.
NOTE:
We are working in Cluster API provider OpenStack to add the ability
to store the etcd state on a Cinder volume, separate from the root
disk. This is a big feature gap for clouds where most root disks are
on spinning-disk Ceph, which is not fast enough for etcd to operate
correctly, but where there is not enough SSD-backed Ceph to put all
controller root disks on it:
https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/1668
History
=======
The Helm charts used by this driver started
out in August 2021 to build a template for
creating K8s on OpenStack using Cluster API.
We hope to find an upstream home for these
somewhere within OpenStack, ideally within
Magnum, but for now they are here:
https://github.com/stackhpc/capi-helm-charts
The Helm charts have been in use in production
by Azimuth since early 2022, to create
Kubernetes clusters on OpenStack:
https://github.com/stackhpc/azimuth
The hope is these Helm charts can provide a common,
well-tested base that can be used in many different
ways to run Kubernetes on OpenStack, be that automated
using Helm directly, ArgoCD, Flux, Azimuth,
OpenStack Magnum and more.
Ideally we can eventually apply for Kubernetes
certification for these charts. The current Helm chart
CI makes use of Sonobuoy smoke tests, and the charts have been
manually tested to pass all conformance tests.
There has been an ongoing effort since October 2021 to create a Magnum
driver that makes use of the above helm charts, with a view to replace
the existing Heat-based driver. However, progress was severely delayed
by getting the funding in place to do the work, which was finally
confirmed in August 2023.
You can see the upstream patches starting here:
https://review.opendev.org/c/openstack/magnum/+/815521
In early 2023 we discovered Vexxhost had created
their own out-of-tree Cluster API Magnum driver:
https://github.com/vexxhost/magnum-cluster-api
After subsequent PTG discussions, we agreed to continue this
effort to merge a driver upstream that makes use of Cluster API,
with the above spec eventually getting merged for the Bobcat release.
The hope is that Helm provides a better interface for per-operator
additions to clusters, and should allow Helm to be updated to
support new Kubernetes versions independently from the core
Magnum code.

doc/requirements.txt Normal file

@@ -0,0 +1,8 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
doc8>=0.6.0 # Apache-2.0
openstackdocstheme>=2.2.1 # Apache-2.0
reno>=3.1.0 # Apache-2.0
sphinx>=2.0.0,!=2.1.0 # BSD

doc/source/conf.py Normal file

@@ -0,0 +1,112 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath("../.."))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
"sphinx.ext.autodoc",
"openstackdocstheme",
]
# openstackdocstheme options
openstackdocs_repo_name = "openstack/magnum-capi-helm"
openstackdocs_pdf_link = True
openstackdocs_use_storyboard = False
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = ".rst"
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "magnum-capi-helm"
copyright = "2013-present, OpenStack Foundation"
current_release = "2023.1"
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "native"
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
html_theme = "openstackdocs"
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = "%sdoc" % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
(
"index",
"doc-%s.tex" % project,
"%s Documentation" % project,
"OpenStack Foundation",
"manual",
),
]
# If false, no module index is generated.
latex_domain_indices = False
latex_elements = {
"makeindex": "",
"printindex": "",
"preamble": r"\setcounter{tocdepth}{3}",
"maxlistdepth": 10,
}
# Disable usage of xindy https://bugzilla.redhat.com/show_bug.cgi?id=1643664
latex_use_xindy = False
# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}
# Additional openstackdocstheme options (repo name and PDF link are
# already set above)
openstackdocs_bug_project = "magnum"
openstackdocs_bug_tag = "magnum-capi-helm"
openstackdocs_auto_name = False
openstackdocs_projects = [
"magnum",
]
# Substitutions loader
rst_prolog = """
.. |current_release| replace:: {current_release}
""".format( # noqa: E501
current_release=current_release,
)
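The `rst_prolog` assembly above can be reproduced standalone to see the substitution text it prepends to every ``.rst`` file; a minimal sketch with the value copied from this conf.py:

```python
# Standalone reproduction of the rst_prolog assembly above: Sphinx prepends
# this prolog to every .rst source, so |current_release| expands to "2023.1".
current_release = "2023.1"

rst_prolog = """
.. |current_release| replace:: {current_release}
""".format(current_release=current_release)

print(rst_prolog.strip())  # → .. |current_release| replace:: 2023.1
```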


@@ -0,0 +1,38 @@
===================
Configuration Guide
===================
Features
========
The driver currently supports create, delete and upgrade operations as well
as updates to node groups and their sizes.
The Kubernetes versions against which the CAPI Helm charts are currently being tested
can be found `here <https://github.com/stackhpc/capi-helm-charts/blob/main/.github/workflows/ensure-capi-images.yaml#L9>`_.
The driver respects the following cluster and template properties:
* image_id
* keypair
* fixed_network, fixed_subnet (if missing, a new one is created)
* external_network_id
* dns_nameserver
The driver supports the following labels:
* monitoring_enabled: default is off, change to "true" to enable
* kube_dashboard_enabled: default is on, change to "false" to disable
* octavia_provider: default is "amphora", "ovn" is also an option
* fixed_subnet_cidr: default is "10.0.0.0/24"
* extra_network_name: default is "", change to the name of an additional
  network to attach, which can be useful if using Manila with the CephFS
  Native driver.
**TODO: Add more recently supported labels here.**
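As a sketch of how these labels are consumed, each one maps to a ``--label key=value`` flag on the cluster template; the template, flavor and image names below are placeholders, not part of this guide:

```shell
# Hypothetical template enabling monitoring, the OVN Octavia provider and a
# custom fixed subnet CIDR via the labels documented above.
openstack coe cluster template create labelled-template \
  --coe kubernetes \
  --image ubuntu-jammy-kube-v1.28.3 \
  --external-network public \
  --master-flavor m1.large \
  --flavor m1.large \
  --label monitoring_enabled=true \
  --label octavia_provider=ovn \
  --label fixed_subnet_cidr=192.168.100.0/24
```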
Currently, all clusters use the Calico CNI. While Cilium is also supported
in the Helm charts, it is not currently regularly tested.
We have found that cluster upgrades with Cluster API don't work well without
using a load balancer, even with a single-node control plane, so we currently
ignore the "master-lb-enabled" flag.


@@ -0,0 +1,14 @@
=================
Contributor Guide
=================
DevStack Setup
==============
Do you want to try this driver in DevStack?
Please try the setup script which is included in the repo
`here <https://opendev.org/openstack/magnum-capi-helm/src/branch/master/devstack/contrib/new-devstack.sh>`_.
The above DevStack script will also install k3s on the host and will
install the required components on the k3s cluster for it to act as a
Cluster API management cluster.
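Once the script finishes, you can sanity-check the management cluster; a sketch, assuming the default k3s kubeconfig path and a standard ``clusterctl``-style controller layout:

```shell
# k3s writes its kubeconfig here by default
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
# The Cluster API core and OpenStack provider controllers should be Running
kubectl get pods -A | grep -E 'capi-|capo-'
```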

doc/source/index.rst Normal file

@@ -0,0 +1,66 @@
..
Copyright 2014-2015 OpenStack Foundation
All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
=======================================================
Welcome to the Magnum CAPI Helm Driver's documentation!
=======================================================
Magnum CAPI Helm is an OpenStack Magnum driver which uses Helm to create
Kubernetes (k8s) clusters with Cluster API.
The driver uses a standard set of Helm charts to create the k8s resources
required to provision and manage a k8s cluster using Cluster API,
including various useful add-ons like a CNI and a monitoring stack.
These Helm charts currently live at https://github.com/stackhpc/capi-helm-charts
but will soon be moved to https://opendev.org/openstack/magnum-capi-helm-charts
instead.
The Helm charts are intended to be a way to share a reference method to
create k8s clusters on OpenStack. The charts are not expected or intended to
be specific to Magnum. The hope is they can also be used by ArgoCD, Flux or
Azimuth to create clusters outside of Magnum if desired.
* **Free software:** under the `Apache license <http://www.apache.org/licenses/LICENSE-2.0>`_
* **Source:** https://opendev.org/openstack/magnum-capi-helm
* **Blueprints:** https://blueprints.launchpad.net/magnum
* **Bugs: (use magnum-capi-helm tag)** https://bugs.launchpad.net/magnum
* **Magnum Source:** https://opendev.org/openstack/magnum
* **Magnum REST Client:** https://opendev.org/openstack/python-magnumclient
Installation Guide
------------------
.. toctree::
:maxdepth: 2
Installation Guide <install/index>
Configuration Reference
-----------------------
.. toctree::
:maxdepth: 2
configuration/index
Contributor Guide
-----------------
.. toctree::
:maxdepth: 2
contributor/index


@@ -0,0 +1,78 @@
==================
Installation Guide
==================
For a Kayobe-based deployment, you can follow
`this <https://stackhpc-kayobe-config.readthedocs.io/en/stackhpc-2023.1/configuration/magnum-capi.html>`__ guide.
The relevant sub-sections of the same guide can also be adapted for
Kolla-Ansible-based deployments.
If you install this Python package within your Magnum virtual environment,
it should be picked up by Magnum::
git clone https://github.com/stackhpc/magnum-capi-helm.git
cd magnum-capi-helm
pip install -e .
We currently run the unit tests against the 2023.1 version of Magnum.
The driver requires access to a Cluster API management cluster.
For more information, please see:
https://cluster-api.sigs.k8s.io/user/quick-start
To access the above Cluster API management cluster, you need to add Magnum
configuration to tell the driver where the management cluster's kubeconfig
file lives::
[capi_helm]
kubeconfig_file = /etc/magnum/kubeconfig
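Before restarting Magnum, it can be worth confirming that this kubeconfig actually reaches the management cluster; a quick check, assuming ``kubectl`` is available on the controller host:

```shell
# The management cluster should respond and expose the Cluster API CRDs,
# which live in the cluster.x-k8s.io API group.
KUBECONFIG=/etc/magnum/kubeconfig kubectl cluster-info
KUBECONFIG=/etc/magnum/kubeconfig kubectl api-resources --api-group=cluster.x-k8s.io
```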
Once the driver installation is complete, to create a cluster you
first need an image that has been built to include Kubernetes.
There are community-maintained Packer build pipelines here:
https://image-builder.sigs.k8s.io/capi/capi.html
Alternatively, you can grab pre-built images from StackHPC's
`Azimuth image releases <https://github.com/stackhpc/azimuth-images/releases/latest>`__.
Images are available in the `manifest.json` file and are named in the format
`ubuntu-<ubuntu release>-<kube version>-<date and time of build>`.
Since Magnum distinguishes which driver to use based on the properties
of the images used in the cluster template, the above image needs to
have the correct os-distro property set when uploaded to Glance. For example::
curl -fo ubuntu.qcow2 'https://object.arcus.openstack.hpc.cam.ac.uk/azimuth-images/ubuntu-jammy-kube-v1.28.3-231030-1102.qcow2?AWSAccessKeyId=c5bd0fa15bae4e08b305a52aac97c3a6&Expires=1730200795&Signature=gs9Fk7y06cpViQHP04TmHDtmkWE%3D'
openstack image create ubuntu-jammy-kube-v1.28.3 \
--file ubuntu.qcow2 \
--disk-format qcow2 \
--container-format bare \
--public
openstack image set ubuntu-jammy-kube-v1.28.3 --os-distro ubuntu --os-version 22.04
After uploading a suitable image, you can now create a Magnum cluster template
and then a cluster based on this template::
openstack coe cluster template create new_driver \
--coe kubernetes \
--label octavia_provider=ovn \
--image $(openstack image show ubuntu-jammy-kube-v1.28.3 -c id -f value) \
--external-network public \
--master-flavor ds2G20 \
--flavor ds2G20 \
--public \
--master-lb-enabled
openstack coe cluster create test-cluster \
--cluster-template new_driver \
--master-count 1 \
--node-count 2
openstack coe cluster list
Once the cluster has been created, you can get the cluster's kubeconfig file
and (optionally) run Sonobuoy to test the created cluster::
openstack coe cluster config test-cluster
export KUBECONFIG=~/config
kubectl get nodes
sonobuoy run --mode quick --wait
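When the run completes, the results can be pulled and inspected with the standard Sonobuoy workflow:

```shell
# Download the results tarball and print a pass/fail summary
results=$(sonobuoy retrieve)
sonobuoy results "$results"
# Clean up the sonobuoy namespace afterwards
sonobuoy delete --wait
```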


@@ -59,7 +59,7 @@ copyright = "2022, OpenStack Developers"
openstackdocs_repo_name = "stackhpc/magnum-capi-helm"
openstackdocs_bug_project = "magnum_capi_helm"
openstackdocs_bug_tag = ""
openstackdocs_auto_name = False
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the


@@ -46,7 +46,12 @@ commands =
[testenv:docs]
deps = -r{toxinidir}/doc/requirements.txt
skip_install = true
commands =
rm -rf doc/build
doc8 doc
sphinx-build -W --keep-going -b html doc/source doc/build/html
allowlist_externals = rm
[testenv:releasenotes]
deps = {[testenv:docs]deps}