
Clean up v1 structure

Change-Id: I12feb1db2ef4ffe58be00f0c290b37e7f184efd6
changes/20/686120/1
Dmitry Ukov 2 years ago
parent commit d247bb6057
845 changed files with 2 additions and 58045 deletions
  1. +0 -10  .style.yapf
  2. +2 -169  .zuul.yaml
  3. +0 -2  doc/requirements.txt
  4. +0 -293  doc/source/airskiff.rst
  5. +0 -631  doc/source/airsloop.rst
  6. +0 -770  doc/source/authoring_and_deployment.rst
  7. +0 -160  doc/source/conf.py
  8. +0 -187  doc/source/config_update_guide.rst
  9. +0 -242  doc/source/development_guide.rst
  10. BIN  doc/source/diagrams/airsloop-architecture.png
  11. BIN  doc/source/diagrams/architecture.png
  12. BIN  doc/source/diagrams/component_list.png
  13. BIN  doc/source/diagrams/deploy_site.png
  14. BIN  doc/source/diagrams/genesis.png
  15. +0 -210  doc/source/index.rst
  16. +0 -69  doc/source/seaworthy.rst
  17. +0 -177  doc/source/troubleshooting_guide.rst
  18. +0 -26  global/baremetal/bootactions/airship-target.yaml
  19. +0 -47  global/baremetal/bootactions/apparmor-profiles.yaml
  20. +0 -23  global/baremetal/bootactions/nested-virt.yaml
  21. +0 -34  global/baremetal/bootactions/promjoin.yaml
  22. +0 -31  global/baremetal/bootactions/seccomp-profiles.yaml
  23. +0 -39  global/deployment/deployment-strategy.yaml
  24. +0 -12  global/layering-policy.yaml
  25. +0 -150  global/profiles/genesis.yaml
  26. +0 -19  global/profiles/hardware/generic.yaml
  27. +0 -116  global/profiles/host/cp.yaml
  28. +0 -65  global/profiles/host/dp.yaml
  29. +0 -200  global/profiles/kubernetes-host.yaml
  30. +0 -80  global/profiles/security/apparmor_loader.yaml
  31. +0 -78  global/profiles/security/default_apparmor.yaml
  32. +0 -787  global/profiles/security/seccomp_default.yaml
  33. +0 -12  global/schemas/armada/Chart/v1.yaml
  34. +0 -12  global/schemas/armada/ChartGroup/v1.yaml
  35. +0 -12  global/schemas/armada/Manifest/v1.yaml
  36. +0 -161  global/schemas/drydock/BaremetalNode/v1.yaml
  37. +0 -93  global/schemas/drydock/BootAction/v1.yaml
  38. +0 -49  global/schemas/drydock/HardwareProfile/v1.yaml
  39. +0 -159  global/schemas/drydock/HostProfile/v1.yaml
  40. +0 -70  global/schemas/drydock/Network/v1.yaml
  41. +0 -47  global/schemas/drydock/NetworkLink/v1.yaml
  42. +0 -35  global/schemas/drydock/Rack/v1.yaml
  43. +0 -71  global/schemas/drydock/Region/v1.yaml
  44. +0 -645  global/schemas/pegleg/AccountCatalogue/v1.yaml
  45. +0 -17  global/schemas/pegleg/AppArmorProfile/v1.yaml
  46. +0 -116  global/schemas/pegleg/CommonAddresses/v1.yaml
  47. +0 -15  global/schemas/pegleg/CommonSoftwareConfig/v1.yaml
  48. +0 -169  global/schemas/pegleg/EndpointCatalogue/v1.yaml
  49. +0 -8  global/schemas/pegleg/Script/v1.yaml
  50. +0 -19  global/schemas/pegleg/SeccompProfile/v1.yaml
  51. +0 -29  global/schemas/pegleg/SiteDefinition/v1.yaml
  52. +0 -1214  global/schemas/pegleg/SoftwareVersions/v1.yaml
  53. +0 -16  global/schemas/promenade/Docker/v1.yaml
  54. +0 -50  global/schemas/promenade/EncryptionPolicy/v1.yaml
  55. +0 -165  global/schemas/promenade/Genesis/v1.yaml
  56. +0 -245  global/schemas/promenade/HostSystem/v1.yaml
  57. +0 -31  global/schemas/promenade/Kubelet/v1.yaml
  58. +0 -121  global/schemas/promenade/KubernetesNetwork/v1.yaml
  59. +0 -47  global/schemas/promenade/KubernetesNode/v1.yaml
  60. +0 -43  global/schemas/promenade/PKICatalog/PKICatalog.yaml
  61. +0 -80  global/schemas/shipyard/DeploymentConfiguration/v1.yaml
  62. +0 -73  global/schemas/shipyard/DeploymentStrategy/v1.yaml
  63. +0 -128  global/scripts/configure-ip-rules.yaml
  64. +0 -26  global/scripts/hanging-cgroup-release.yaml
  65. +0 -32  global/scripts/rbd-roomba-scanner.yaml
  66. +0 -14  global/secrets/passphrases/private_docker_key.yaml
  67. +0 -11  global/secrets/publickey/airship_ssh_public_key.yaml
  68. +0 -173  global/software/charts/kubernetes/container-networking/calico.yaml
  69. +0 -15  global/software/charts/kubernetes/container-networking/chart-group.yaml
  70. +0 -136  global/software/charts/kubernetes/container-networking/etcd.yaml
  71. +0 -198  global/software/charts/kubernetes/core/apiserver.yaml
  72. +0 -15  global/software/charts/kubernetes/core/chart-group.yaml
  73. +0 -138  global/software/charts/kubernetes/core/controller-manager.yaml
  74. +0 -95  global/software/charts/kubernetes/core/scheduler.yaml
  75. +0 -13  global/software/charts/kubernetes/dns/chart-group.yaml
  76. +0 -149  global/software/charts/kubernetes/dns/coredns.yaml
  77. +0 -13  global/software/charts/kubernetes/etcd/chart-group.yaml
  78. +0 -137  global/software/charts/kubernetes/etcd/etcd.yaml
  79. +0 -13  global/software/charts/kubernetes/haproxy/chart-group.yaml
  80. +0 -111  global/software/charts/kubernetes/haproxy/haproxy.yaml
  81. +0 -13  global/software/charts/kubernetes/ingress/chart-group.yaml
  82. +0 -88  global/software/charts/kubernetes/ingress/ingress.yaml
  83. +0 -14  global/software/charts/kubernetes/proxy/chart-group.yaml
  84. +0 -94  global/software/charts/kubernetes/proxy/kubernetes-proxy.yaml
  85. +0 -28  global/software/charts/osh-infra/dependencies.yaml
  86. +0 -92  global/software/charts/osh-infra/osh-infra-ceph-config/ceph-config.yaml
  87. +0 -13  global/software/charts/osh-infra/osh-infra-ceph-config/chart-group.yaml
  88. +0 -14  global/software/charts/osh-infra/osh-infra-dashboards/chart-group.yaml
  89. +0 -269  global/software/charts/osh-infra/osh-infra-dashboards/grafana.yaml
  90. +0 -126  global/software/charts/osh-infra/osh-infra-dashboards/kibana.yaml
  91. +0 -13  global/software/charts/osh-infra/osh-infra-ingress-controller/chart-group.yaml
  92. +0 -57  global/software/charts/osh-infra/osh-infra-ingress-controller/ingress.yaml
  93. +0 -16  global/software/charts/osh-infra/osh-infra-logging/chart-group.yaml
  94. +0 -364  global/software/charts/osh-infra/osh-infra-logging/elasticsearch.yaml
  95. +0 -255  global/software/charts/osh-infra/osh-infra-logging/fluentbit.yaml
  96. +0 -375  global/software/charts/osh-infra/osh-infra-logging/fluentd.yaml
  97. +0 -13  global/software/charts/osh-infra/osh-infra-mariadb/chart-group.yaml
  98. +0 -100  global/software/charts/osh-infra/osh-infra-mariadb/mariadb.yaml
  99. +0 -19  global/software/charts/osh-infra/osh-infra-monitoring/chart-group.yaml
  100. +0 -159  global/software/charts/osh-infra/osh-infra-monitoring/nagios.yaml

+0 -10  .style.yapf

@@ -1,10 +0,0 @@
[style]
based_on_style = pep8
spaces_before_comment = 2
column_limit = 79
blank_line_before_nested_class_or_def = false
blank_line_before_module_docstring = true
split_before_logical_operator = true
split_before_first_argument = true
allow_split_before_dict_value = false
split_before_arithmetic_operator = true

+2 -169  .zuul.yaml

@@ -11,29 +11,12 @@
# limitations under the License.
- project:
    templates:
      - docs-on-readthedocs
    vars:
      rtd_webhook_id: '47687'
      rtd_project_name: 'airship-treasuremap'
    check:
      jobs:
        - treasuremap-seaworthy-site-lint
        - treasuremap-seaworthy-virt-site-lint
        - treasuremap-airskiff-ubuntu-site-lint
        - treasuremap-airskiff-suse-site-lint
        - treasuremap-airsloop-site-lint
        - treasuremap-aiab-site-lint
        - treasuremap-airskiff-deployment-ubuntu
        - treasuremap-airskiff-deployment-suse
        - noop
    gate:
      jobs:
        - treasuremap-seaworthy-site-lint
        - treasuremap-seaworthy-virt-site-lint
        - treasuremap-airskiff-ubuntu-site-lint
        - treasuremap-airskiff-suse-site-lint
        - treasuremap-airsloop-site-lint
        - treasuremap-aiab-site-lint
        - noop
    post:
      jobs:
        - treasuremap-upload-git-mirror

@@ -45,156 +28,6 @@
      - name: ubuntu-bionic
        label: ubuntu-bionic

- job:
    name: treasuremap-site-lint
    description:
      Lint a site using Pegleg. Default site is seaworthy.
    nodeset: treasuremap-single-node
    timeout: 900
    pre-run:
      - tools/gate/playbooks/install-docker.yaml
      - tools/gate/playbooks/git-config.yaml
    run: tools/gate/playbooks/site-lint.yaml
    vars:
      site: seaworthy
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$

- job:
    name: treasuremap-seaworthy-site-lint
    description: |
      Lint the seaworthy site using Pegleg.
    parent: treasuremap-site-lint
    vars:
      site: seaworthy
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy-virt/.*$
      - ^site/airskiff/.*$
      - ^site/airsloop/.*$
      - ^site/aiab/.*$

- job:
    name: treasuremap-seaworthy-virt-site-lint
    description: |
      Lint the seaworthy-virt site using Pegleg.
    parent: treasuremap-site-lint
    vars:
      site: seaworthy-virt
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/airskiff/.*$
      - ^site/airsloop/.*$
      - ^site/aiab/.*$

- job:
    name: treasuremap-airskiff-ubuntu-site-lint
    description: |
      Lint the airskiff site using Pegleg.
    parent: treasuremap-site-lint
    vars:
      site: airskiff
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/seaworthy-virt/.*$
      - ^site/airsloop/.*$
      - ^site/aiab/.*$

- job:
    name: treasuremap-airskiff-suse-site-lint
    description: |
      Lint the airskiff-suse site using Pegleg.
    parent: treasuremap-site-lint
    vars:
      site: airskiff-suse
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/seaworthy-virt/.*$
      - ^site/airsloop/.*$
      - ^site/aiab/.*$

- job:
    name: treasuremap-airsloop-site-lint
    description: |
      Lint the airsloop site using Pegleg.
    parent: treasuremap-site-lint
    vars:
      site: airsloop
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/seaworthy-virt/.*$
      - ^site/airskiff/.*$
      - ^site/aiab/.*$

- job:
    name: treasuremap-aiab-site-lint
    description: |
      Lint the aiab site using Pegleg.
    parent: treasuremap-site-lint
    pre-run:
      - tools/gate/playbooks/generate-certs.yaml
    vars:
      site: aiab
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/seaworthy-virt/.*$
      - ^site/airskiff/.*$
      - ^site/airsloop/.*$

- job:
    name: treasuremap-airskiff-deployment-ubuntu
    nodeset: treasuremap-single-node
    description: |
      Deploy Memcached using Airskiff and latest Treasuremap changes.
    voting: false
    timeout: 9600
    pre-run:
      - tools/gate/playbooks/git-config.yaml
      - tools/gate/playbooks/airskiff-reduce-site.yaml
    run: tools/gate/playbooks/airskiff-deploy-gate.yaml
    post-run: tools/gate/playbooks/debug-report.yaml
    vars:
      site: airskiff
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/airsloop/.*$
      - ^site/aiab/.*$

- job:
    name: treasuremap-airskiff-deployment-suse
    nodeset: treasuremap-single-node
    description: |
      Deploy Memcached using Airskiff-suse and latest Treasuremap changes.
    voting: false
    timeout: 9600
    pre-run:
      - tools/gate/playbooks/git-config.yaml
      - tools/gate/playbooks/airskiff-reduce-site.yaml
    run: tools/gate/playbooks/airskiff-deploy-gate.yaml
    vars:
      site: airskiff-suse
    post-run: tools/gate/playbooks/debug-report.yaml
    irrelevant-files:
      - ^.*\.rst$
      - ^doc/.*$
      - ^site/seaworthy/.*$
      - ^site/airsloop/.*$
      - ^site/aiab/.*$

- secret:
    name: airshipit-github-secret
    data:


+0 -2  doc/requirements.txt

@@ -1,2 +0,0 @@
sphinx>=1.6.2
sphinx_rtd_theme>=0.4.3

+0 -293  doc/source/airskiff.rst

@@ -1,293 +0,0 @@
Airskiff: Lightweight Airship for Dev
=====================================
* Skiff (n): a shallow, flat-bottomed, open boat
* Airskiff (n): a learning, development, and gating environment for Airship
What is Airskiff
----------------
Airskiff is an easy way to get started with the software delivery components
of Airship:
* `Armada`_
* `Deckhand`_
* `Pegleg`_
* `Shipyard`_
Airskiff is packaged with a set of deployment scripts modeled after the
`OpenStack-Helm project`_ for seamless developer setup.
These scripts:
* Download, build, and containerize the Airship components above from source.
* Deploy a Kubernetes cluster using Minikube.
* Deploy Armada, Deckhand, and Shipyard using the latest `Armada image`_.
* Deploy OpenStack using the Airskiff site and charts from the
`OpenStack-Helm project`_.
.. warning:: Airskiff is not safe for production use. These scripts are
only intended to deploy a minimal development environment.
Common Deployment Requirements
------------------------------
This section covers actions that may be required for some deployment scenarios.
Passwordless sudo
~~~~~~~~~~~~~~~~~
Airskiff relies on scripts that utilize the ``sudo`` command. Throughout this
guide, the user is assumed to be ``ubuntu``. It is advised to add the
following lines to ``/etc/sudoers``:
.. code-block:: bash
root ALL=(ALL) NOPASSWD: ALL
ubuntu ALL=(ALL) NOPASSWD: ALL
Proxy Configuration
~~~~~~~~~~~~~~~~~~~
.. note:: This section assumes you have properly defined the standard
``http_proxy``, ``https_proxy``, and ``no_proxy`` environment variables and
have followed the `Docker proxy guide`_ to create a systemd drop-in unit.
In order to deploy Airskiff behind proxy servers, define the following
environment variables:
.. code-block:: shell
export USE_PROXY=true
export PROXY=${http_proxy}
export no_proxy=${no_proxy},10.0.2.15,.svc.cluster.local
export NO_PROXY=${NO_PROXY},10.0.2.15,.svc.cluster.local
.. note:: The ``.svc.cluster.local`` address is required to allow the OpenStack
client to communicate without being routed through proxy servers. The IP
address ``10.0.2.15`` is the advertised IP address of the minikube Kubernetes
cluster. Replace the addresses if your configuration does not match the one
defined above.
Deploy Airskiff
---------------
Deploy Airskiff using the deployment scripts contained in the
``tools/deployment/airskiff`` directory of the `airship-treasuremap`_
repository.
.. note:: Scripts should be run from the root of the ``treasuremap`` repository.
Clone Dependencies
~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../tools/deployment/airskiff/developer/000-clone-dependencies.sh
:language: shell
:lines: 1,18-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/airskiff/developer/000-clone-dependencies.sh
Deploy Kubernetes with Minikube
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../tools/deployment/airskiff/developer/010-deploy-k8s.sh
:language: shell
:lines: 1,18-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/airskiff/developer/010-deploy-k8s.sh
Restart your shell session
~~~~~~~~~~~~~~~~~~~~~~~~~~
At this point, restart your shell session to complete adding ``$USER`` to the
``docker`` group.
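The ``docker`` group membership granted by the deployment script only takes
effect in a fresh login session. A minimal sketch for verifying (and, if
needed, applying) the group change manually:

.. code-block:: shell

  # Verify membership after re-login
  groups | grep -o docker

  # If missing, add the current user to the docker group and log in again
  sudo usermod -aG docker "$USER"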
Setup OpenStack Client
~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../tools/deployment/airskiff/developer/020-setup-client.sh
:language: shell
:lines: 1,18-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/airskiff/developer/020-setup-client.sh
Deploy Airship components using Armada
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../tools/deployment/airskiff/developer/030-armada-bootstrap.sh
:language: shell
:lines: 1,18-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/airskiff/developer/030-armada-bootstrap.sh
Deploy OpenStack using Airship
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../tools/deployment/airskiff/developer/100-deploy-osh.sh
:language: shell
:lines: 1,18-
Alternatively, this step can be performed by running the script directly:
.. code-block:: shell
./tools/deployment/airskiff/developer/100-deploy-osh.sh
Use Airskiff
------------
The Airskiff deployment scripts install and configure the OpenStack client for
usage on your host machine.
Airship Examples
~~~~~~~~~~~~~~~~
To use Airship services, set the ``OS_CLOUD`` environment variable to
``airship``.
.. code-block:: shell
export OS_CLOUD=airship
List the Airship service endpoints:
.. code-block:: shell
openstack endpoint list
.. note:: ``${SHIPYARD}`` is the path to a cloned `Shipyard`_ repository.
Run Helm tests for all deployed releases:
.. code-block:: shell
${SHIPYARD}/tools/shipyard.sh create action test_site
List all `Shipyard`_ actions:
.. code-block:: shell
${SHIPYARD}/tools/shipyard.sh get actions
For more information about Airship operations, see the
`Shipyard actions`_ documentation.
OpenStack Examples
~~~~~~~~~~~~~~~~~~
To use OpenStack services, set the ``OS_CLOUD`` environment variable to
``openstack``:
.. code-block:: shell
export OS_CLOUD=openstack
List the OpenStack service endpoints:
.. code-block:: shell
openstack endpoint list
List ``Glance`` images:
.. code-block:: shell
openstack image list
Issue a new ``Keystone`` token:
.. code-block:: shell
openstack token issue
.. note:: Airskiff deploys identity, network, cloudformation, placement,
compute, orchestration, and image services. You can deploy more services
by adding chart groups to
``site/airskiff/software/manifests/full-site.yaml``. For more information,
refer to the `site authoring and deployment guide`_.
Develop with Airskiff
---------------------
Once you have successfully deployed a running cluster, changes to Airship
and OpenStack components can be deployed using `Shipyard actions`_ or the
Airskiff deployment scripts.
This example demonstrates deploying `Armada`_ changes using the Airskiff
deployment scripts.
.. note:: ``${ARMADA}`` is the path to your cloned Armada repository that
contains the changes you wish to deploy. ``${TREASUREMAP}`` is the path to
your cloned Treasuremap repository.
Build Armada:
.. code-block:: shell
cd ${ARMADA}
make images
Update Airship components:
.. code-block:: shell
cd ${TREASUREMAP}
./tools/deployment/airskiff/developer/030-armada-bootstrap.sh
Troubleshooting
---------------
This section is intended to help you through the initial troubleshooting
process. If issues persist after following this guide, please join us on
`IRC`_: #airshipit (freenode)
``Missing value auth-url required for auth plugin password``
If this error message appears when using the OpenStack client, verify your
client is configured for authentication:
.. code-block:: shell
# For Airship services
export OS_CLOUD=airship
# For OpenStack services
export OS_CLOUD=openstack
.. _Docker proxy guide: https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
.. _OpenStack-Helm project: https://docs.openstack.org/openstack-helm/latest/install/developer/requirements-and-host-config.html
.. _Armada: https://opendev.org/airship/armada
.. _Deckhand: https://opendev.org/airship/deckhand
.. _Pegleg: https://opendev.org/airship/pegleg
.. _Shipyard: https://opendev.org/airship/shipyard
.. _Armada image: https://quay.io/repository/airshipit/armada?tab=tags
.. _airship-treasuremap: https://opendev.org/airship/treasuremap
.. _Shipyard actions: https://airship-shipyard.readthedocs.io/en/latest/action-commands.html
.. _IRC: irc://chat.freenode.net:6697/airshipit
.. _site authoring and deployment guide: https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html

+0 -631  doc/source/airsloop.rst

@@ -1,631 +0,0 @@
Airsloop: Simple Bare-Metal Airship
===================================
Airsloop is a two-server bare-metal site deployment reference.
The goal of this site is to serve as a reference for simplified Airship
deployments with one control node and one or more compute nodes.
It is recommended to get familiar with the `Site Authoring and Deployment Guide`_
documentation before deploying Airsloop in the lab. Most steps and concepts
including setting up the Genesis node are the same.
.. _Site Authoring and Deployment Guide: https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html
.. image:: diagrams/airsloop-architecture.png
Various resiliency and security features are tuned down via configuration.

* Two bare-metal server setup with 1 control and 1 compute node.
  Most components are scaled to a single replica and do not carry
  any HA, as there is only a single control plane host.
* No requirements for DNS/certificates.
  HTTP and internal cluster DNS are used.
* Ceph is set to use a single disk.
  This provides a minimalistic, no-touch Ceph deployment,
  with no replication of Ceph data (single copy).
* Simplified networking (no bonding).
  Two network interfaces are used by default (flat PXE, and DATA network
  with VLANs for OAM, Calico, Storage, and OpenStack Overlay).
* Generic hostnames (airsloop-control-1, airsloop-compute-1) are used,
  which simplifies generation of k8s certificates.
Airsloop site manifests are available at
`site/airsloop <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop>`__.
Hardware
--------
While HW configuration is flexible, Airsloop reference manifests
reflect a single control and a single compute node. The aim of
this is to create a minimalistic lab/demo reference environment.
Increasing the number of compute nodes will require site overrides
to align parts of the system such as Ceph OSDs, etcd, etc.
See host profiles for the servers
`here <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/profiles/host>`__.
+------------+-------------------------+
| Node | Hostnames |
+============+=========================+
| control | airsloop-control-1 |
+------------+-------------------------+
| compute | airsloop-compute-1 |
+------------+-------------------------+
Network
-------
Physical (underlay) networks are described in the Drydock site configuration
`here <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/networks/physical/networks.yaml>`__,
which defines OOB (iLO/IPMI), untagged PXE, and multiple tagged general-use networks.
No bonded interfaces are used in the Airsloop deployment.
The networking reference is simplified compared to the Airship Seaworthy
site: only two NICs are required (excluding oob), one for PXE and another
for the rest of the networks, separated using VLAN segmentation.
Below is the reference network configuration:
+------------+------------+-----------+---------------+
| NICs | VLANs | Names | CIDRs |
+============+============+===========+===============+
| oob | N/A | oob |10.22.104.0/24 |
+------------+------------+-----------+---------------+
| pxe | N/A | pxe |10.22.70.0/24 |
+------------+------------+-----------+---------------+
| | 71 | oam |10.22.71.0/24 |
| +------------+-----------+---------------+
| | 72 | calico |10.22.72.0/24 |
| data +------------+-----------+---------------+
| | 73 | storage |10.22.73.0/24 |
| +------------+-----------+---------------+
| | 74 | overlay |10.22.74.0/24 |
+------------+------------+-----------+---------------+
The Calico overlay for k8s pod networking uses an IPIP mesh.
Storage
-------
Because Airsloop is a minimalistic deployment, only one disk per node is required.
That disk is used not only by the OS, but also by the Ceph journals and OSDs.
This is achieved by using directories, rather than extra disks, for Ceph storage.
The Ceph OSD configuration can be changed in a `Ceph chart override <https://opendev.org/airship/treasuremap/src/branch/master/type/sloop/charts/ucp/ceph/ceph-osd.yaml>`__.
The following Ceph chart configuration is used:
.. code-block:: yaml

  osd:
    - data:
        type: directory
        location: /var/lib/openstack-helm/ceph/osd/osd-one
      journal:
        type: directory
        location: /var/lib/openstack-helm/ceph/osd/journal-one
Host Profiles
-------------
Host profiles in Airship are tightly coupled with hardware profiles:
every disk or interface described in a host profile must have a
corresponding entry in the hardware profile being used.
Airship identifies every NIC and disk by its PCI or SCSI address,
so the interfaces and disks defined in host and hardware profiles
must carry the correct PCI and SCSI addresses.
As an example, consider the host profile of the Airsloop site.
This `Host Profile <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/profiles/host/compute.yaml>`__
defines that the slave interface used for PXE boot is pxe_nic01.
A corresponding entry must therefore exist in this `Hardware Profile <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/profiles/hardware/dell_r720xd.yaml>`__,
which it does. When Drydock and MaaS deploy the node, they identify
the interface by the PCI address written in the hardware profile.
A simple way to find out which PCI or SCSI address corresponds to which
NIC or disk is the ``lshw`` command, documented
`here <https://linux.die.net/man/1/lshw>`__; a sample invocation is shown below.
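A minimal sketch (``-businfo`` prints bus addresses alongside logical device
names; output columns vary by system):

.. code-block:: bash

  # Map logical NIC names to their PCI addresses
  sudo lshw -businfo -class network

  # Map disks to their SCSI/PCI addresses
  sudo lshw -businfo -class disk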
Extend Cluster
--------------
This section describes what changes need to be made to the existing
manifests of Airsloop for the addition of an extra compute node to the
cluster.
First, add an extra section for the new compute node to the
`nodes.yaml <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/baremetal/nodes.yaml>`__ file.
The next step is to add a section, similar to the existing
airsloop-compute-1 section, to the `pki-catalog.yaml <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/pki/pki-catalog.yaml>`__.
This is essential for the correct generation of certificates and the
correct communication between the nodes in the cluster.
Also, every time an extra compute node is added to the cluster, the
number of OSDs managed by the `Ceph-client <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/software/charts/osh/ceph/ceph-client.yaml>`__
manifest should be increased by one.
The last step is to regenerate the certificates corresponding to the
`certificates.yaml <https://opendev.org/airship/treasuremap/src/branch/master/site/airsloop/secrets/certificates/certificates.yaml>`__
file, so that the changes in the pki-catalog.yaml file take effect.
This can be done through the Promenade CLI, as shown below.
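A minimal sketch, reusing the commands from the Getting Started section
below (the ``airsloop`` site name and ``/target`` paths are assumptions
from that section):

.. code-block:: bash

  # Re-collect site documents after updating the manifests
  tools/airship pegleg site -r /target collect airsloop -s collect

  # Regenerate certificates from the collected documents
  mkdir certs
  tools/airship promenade generate-certs -o /target/certs /target/collect/*.yaml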
Getting Started
---------------
**Update Site Manifests.**
Carefully review site manifests (site/airsloop) and update the configuration
to match the hardware, networking setup and other specifics of the lab.
See more details at `Site Authoring and Deployment Guide`_.
.. note:: Many manifest files (YAMLs) contain documentation in comments
that instruct what changes are required for specific sections.
1. Build Site Documents
.. code-block:: bash

  tools/airship pegleg site -r /target collect airsloop -s collect
  mkdir certs
  tools/airship promenade generate-certs -o /target/certs /target/collect/*.yaml
  mkdir bundle
  tools/airship promenade build-all -o /target/bundle /target/collect/*.yaml /target/certs/*.yaml
See more details at `Building Site documents`_, use site ``airsloop``.
.. _Building Site documents: https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html#building-site-documents
2. Deploy Genesis
Deploy the Genesis node, see more details at `Genesis node`_.
.. _Genesis node: https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html#genesis-node
Genesis is the first node in the cluster and serves as a control node.
In the Airsloop configuration, Genesis is the only control node (airsloop-control-1).
Airsloop uses non-bonded network interfaces:
.. code-block:: bash

  auto lo
  iface lo inet loopback

  auto eno1
  iface eno1 inet static
      address 10.22.70.21/24

  auto enp67s0f0
  iface enp67s0f0 inet manual

  auto enp67s0f0.71
  iface enp67s0f0.71 inet static
      address 10.22.71.21/24
      gateway 10.22.71.1
      dns-nameservers 8.8.8.8 8.8.4.4
      vlan-raw-device enp67s0f0
      vlan_id 71

  auto enp67s0f0.72
  iface enp67s0f0.72 inet static
      address 10.22.72.21/24
      vlan-raw-device enp67s0f0
      vlan_id 72

  auto enp67s0f0.73
  iface enp67s0f0.73 inet static
      address 10.22.73.21/24
      vlan-raw-device enp67s0f0
      vlan_id 73

  auto enp67s0f0.74
  iface enp67s0f0.74 inet static
      address 10.22.74.21/24
      vlan-raw-device enp67s0f0
      vlan_id 74
Execute the Genesis bootstrap script on the Genesis server.

.. code-block:: bash

  sudo ./genesis.sh
3. Deploy Site
.. code-block:: bash

  tools/airship shipyard create configdocs design --directory=/target/collect
  tools/airship shipyard commit configdocs
  tools/airship shipyard create action deploy_site
  tools/airship shipyard get actions
See more details at `Deploy Site with Shipyard`_.
.. _Deploy Site with Shipyard: https://airship-treasuremap.readthedocs.io/en/latest/authoring_and_deployment.html#deploy-site-with-shipyard
Deploying Behind a Proxy
------------------------
The following documents show the main changes you need to make in order to
run Airsloop behind a proxy.
.. note::
The "-" sign refers to a line that needs to be omitted (replaced), and the "+" sign refers to a
line replacing the omitted line, or simply a line that needs to be added to your yaml.
Under site/airsloop/software/charts/osh/openstack-glance/ create a glance.yaml file as follows:
.. code-block:: yaml

  ---
  schema: armada/Chart/v1
  metadata:
    schema: metadata/Document/v1
    replacement: true
    name: glance
    layeringDefinition:
      abstract: false
      layer: site
      parentSelector:
        name: glance-type
      actions:
        - method: merge
          path: .
    storagePolicy: cleartext
  data:
    test:
      enabled: false
  ...
Under site/airsloop/software/config/ create a versions.yaml file in the following format:
.. code-block:: yaml

  ---
  data:
    charts:
      kubernetes:
        apiserver:
          proxy_server: proxy.example.com:8080
        apiserver-htk:
          proxy_server: proxy.example.com:8080
        calico:
          calico:
            proxy_server: proxy.example.com:8080
          calico-htk:
            proxy_server: proxy.example.com:8080
          etcd:
            proxy_server: proxy.example.com:8080
          etcd-htk:
            proxy_server: proxy.example.com:8080
        controller-manager:
          proxy_server: proxy.example.com:8080
        controller-manager-htk:
          proxy_server: proxy.example.com:8080
        coredns:
          proxy_server: proxy.example.com:8080
        coredns-htk:
          proxy_server: proxy.example.com:8080
        etcd:
          proxy_server: proxy.example.com:8080
        etcd-htk:
          proxy_server: proxy.example.com:8080
        haproxy:
          proxy_server: proxy.example.com:8080
        haproxy-htk:
          proxy_server: proxy.example.com:8080
        ingress:
          proxy_server: proxy.example.com:8080
        ingress-htk:
          proxy_server: proxy.example.com:8080
        proxy:
          proxy_server: proxy.example.com:8080
        proxy-htk:
          proxy_server: proxy.example.com:8080
        scheduler:
          proxy_server: proxy.example.com:8080
        scheduler-htk:
          proxy_server: proxy.example.com:8080
      osh:
        barbican:
          proxy_server: proxy.example.com:8080
        cinder:
          proxy_server: proxy.example.com:8080
        cinder-htk:
          proxy_server: proxy.example.com:8080
        glance:
          proxy_server: proxy.example.com:8080
        glance-htk:
          proxy_server: proxy.example.com:8080
        heat:
          proxy_server: proxy.example.com:8080
        heat-htk:
          proxy_server: proxy.example.com:8080
        helm_toolkit:
          proxy_server: proxy.example.com:8080
        horizon:
          proxy_server: proxy.example.com:8080
        horizon-htk:
          proxy_server: proxy.example.com:8080
        ingress:
          proxy_server: proxy.example.com:8080
        ingress-htk:
          proxy_server: proxy.example.com:8080
        keystone:
          proxy_server: proxy.example.com:8080
        keystone-htk:
          proxy_server: proxy.example.com:8080
        libvirt:
          proxy_server: proxy.example.com:8080
        libvirt-htk:
          proxy_server: proxy.example.com:8080
        mariadb:
          proxy_server: proxy.example.com:8080
        mariadb-htk:
          proxy_server: proxy.example.com:8080
        memcached:
          proxy_server: proxy.example.com:8080
        memcached-htk:
          proxy_server: proxy.example.com:8080
        neutron:
          proxy_server: proxy.example.com:8080
        neutron-htk:
          proxy_server: proxy.example.com:8080
        nova:
          proxy_server: proxy.example.com:8080
        nova-htk:
          proxy_server: proxy.example.com:8080
        openvswitch:
          proxy_server: proxy.example.com:8080
        openvswitch-htk:
          proxy_server: proxy.example.com:8080
        rabbitmq:
          proxy_server: proxy.example.com:8080
        rabbitmq-htk:
          proxy_server: proxy.example.com:8080
        tempest:
          proxy_server: proxy.example.com:8080
        tempest-htk:
          proxy_server: proxy.example.com:8080
      osh_infra:
        elasticsearch:
          proxy_server: proxy.example.com:8080
        fluentbit:
          proxy_server: proxy.example.com:8080
        fluentd:
          proxy_server: proxy.example.com:8080
        grafana:
          proxy_server: proxy.example.com:8080
        helm_toolkit:
          proxy_server: proxy.example.com:8080
        kibana:
          proxy_server: proxy.example.com:8080
        nagios:
          proxy_server: proxy.example.com:8080
        nfs_provisioner:
          proxy_server: proxy.example.com:8080
        podsecuritypolicy:
          proxy_server: proxy.example.com:8080
        prometheus:
          proxy_server: proxy.example.com:8080
        prometheus_alertmanager:
          proxy_server: proxy.example.com:8080
        prometheus_kube_state_metrics:
          proxy_server: proxy.example.com:8080
        prometheus_node_exporter:
          proxy_server: proxy.example.com:8080
        prometheus_openstack_exporter:
          proxy_server: proxy.example.com:8080
        prometheus_process_exporter:
          proxy_server: proxy.example.com:8080
      ucp:
        armada:
          proxy_server: proxy.example.com:8080
        armada-htk:
          proxy_server: proxy.example.com:8080
        barbican:
          proxy_server: proxy.example.com:8080
        barbican-htk:
          proxy_server: proxy.example.com:8080
        ceph-client:
          proxy_server: proxy.example.com:8080
        ceph-htk:
          proxy_server: proxy.example.com:8080
        ceph-mon:
          proxy_server: proxy.example.com:8080
        ceph-osd:
          proxy_server: proxy.example.com:8080
        ceph-provisioners:
          proxy_server: proxy.example.com:8080
        ceph-rgw:
          proxy_server: proxy.example.com:8080
        deckhand:
          proxy_server: proxy.example.com:8080
        deckhand-htk:
          proxy_server: proxy.example.com:8080
        divingbell:
          proxy_server: proxy.example.com:8080
        divingbell-htk:
          proxy_server: proxy.example.com:8080
        drydock:
          proxy_server: proxy.example.com:8080
        drydock-htk:
          proxy_server: proxy.example.com:8080
        ingress:
          proxy_server: proxy.example.com:8080
        ingress-htk:
          proxy_server: proxy.example.com:8080
        keystone:
          proxy_server: proxy.example.com:8080
        keystone-htk:
          proxy_server: proxy.example.com:8080
        maas:
          proxy_server: proxy.example.com:8080
        maas-htk:
          proxy_server: proxy.example.com:8080
        mariadb:
          proxy_server: proxy.example.com:8080
        mariadb-htk:
          proxy_server: proxy.example.com:8080
        memcached:
          proxy_server: proxy.example.com:8080
        memcached-htk:
          proxy_server: proxy.example.com:8080
        postgresql:
          proxy_server: proxy.example.com:8080
        postgresql-htk:
          proxy_server: proxy.example.com:8080
        promenade:
          proxy_server: proxy.example.com:8080
        promenade-htk:
          proxy_server: proxy.example.com:8080
        rabbitmq:
          proxy_server: proxy.example.com:8080
        rabbitmq-htk:
          proxy_server: proxy.example.com:8080
        shipyard:
          proxy_server: proxy.example.com:8080
        shipyard-htk:
          proxy_server: proxy.example.com:8080
        tenant-ceph-client:
          proxy_server: proxy.example.com:8080
        tenant-ceph-htk:
          proxy_server: proxy.example.com:8080
        tenant-ceph-mon:
          proxy_server: proxy.example.com:8080
        tenant-ceph-osd:
          proxy_server: proxy.example.com:8080
        tenant-ceph-provisioners:
          proxy_server: proxy.example.com:8080
        tenant-ceph-rgw:
          proxy_server: proxy.example.com:8080
        tiller:
          proxy_server: proxy.example.com:8080
        tiller-htk:
          proxy_server: proxy.example.com:8080
  metadata:
    name: software-versions
    replacement: true
    layeringDefinition:
      abstract: false
      layer: site
      parentSelector:
        name: software-versions-global
      actions:
        - method: merge
          path: .
    storagePolicy: cleartext
    schema: metadata/Document/v1
  schema: pegleg/SoftwareVersions/v1
  ...
Update site/airsloop/networks/common-addresses.yaml to add the proxy information as follows:
.. code-block:: diff

   # settings are correct and reachable in your environment; otherwise update
   # them with the correct values for your environment.
   proxy:
  -  http: ""
  -  https: ""
  -  no_proxy: []
  +  http: "proxy.example.com:8080"
  +  https: "proxy.example.com:8080"
  +  no_proxy:
  +    - 127.0.0.1
Under site/airsloop/software/charts/ucp/ create the file maas.yaml with the following format:
.. code-block:: yaml

  ---
  # This file defines site-specific deviations for MaaS.
  schema: armada/Chart/v1
  metadata:
    schema: metadata/Document/v1
    replacement: true
    name: ucp-maas
    layeringDefinition:
      abstract: false
      layer: site
      parentSelector:
        name: ucp-maas-type
      actions:
        - method: merge
          path: .
    storagePolicy: cleartext
  data:
    values:
      conf:
        maas:
          proxy:
            proxy_enabled: true
            peer_proxy_enabled: true
            proxy_server: 'http://proxy.example.com:8080'
  ...
Under site/airsloop/software/charts/ucp/ create a promenade.yaml file in the following format:
.. code-block:: yaml

  ---
  # This file defines site-specific deviations for Promenade.
  schema: armada/Chart/v1
  metadata:
    schema: metadata/Document/v1
    replacement: true
    name: ucp-promenade
    layeringDefinition:
      abstract: false
      layer: site
      parentSelector:
        name: ucp-promenade-type
      actions:
        - method: merge
          path: .
    storagePolicy: cleartext
  data:
    values:
      pod:
        env:
          promenade_api:
            - name: http_proxy
              value: http://proxy.example.com:8080
            - name: https_proxy
              value: http://proxy.example.com:8080
            - name: no_proxy
              value: "127.0.0.1,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,.cluster.local"
            - name: HTTP_PROXY
              value: http://proxy.example.com:8080
            - name: HTTPS_PROXY
              value: http://proxy.example.com:8080
            - name: NO_PROXY
              value: "127.0.0.1,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local,.cluster.local"
  ...

+0 -770  doc/source/authoring_and_deployment.rst

@@ -1,770 +0,0 @@
Site Authoring and Deployment Guide
===================================
This document contains the instructions for standing up a greenfield
Airship site. This can be broken down into two high-level pieces:
1. **Site authoring guide(s)**: Describes how to craft site manifests
and configs required to perform a deployment. The primary site
authoring guide is for deploying Airship sites, where OpenStack
is the target platform deployed on top of Airship.
2. **Deployment guide(s)**: Describes how to apply site manifests for a
given site.
This document is an "all-in-one" site authoring guide + deployment guide
for a standard Airship deployment. For the most part, the site
authoring guidance lives within the ``seaworthy`` reference site in the
form of YAML comments.
Support
-------
Bugs may be viewed and reported at the following locations, depending on
the component:
- OpenStack Helm: `OpenStack Storyboard group
<https://storyboard.openstack.org/#!/project_group/64>`__
- Airship: Bugs may be filed using OpenStack Storyboard for specific
projects in `Airship
group <https://storyboard.openstack.org/#!/project_group/85>`__:
- `Airship Armada <https://storyboard.openstack.org/#!/project/1002>`__
- `Airship
Deckhand <https://storyboard.openstack.org/#!/project/1004>`__
- `Airship
Divingbell <https://storyboard.openstack.org/#!/project/1001>`__
- `Airship
Drydock <https://storyboard.openstack.org/#!/project/1005>`__
- `Airship MaaS <https://storyboard.openstack.org/#!/project/1007>`__
- `Airship Pegleg <https://storyboard.openstack.org/#!/project/1008>`__
- `Airship
Promenade <https://storyboard.openstack.org/#!/project/1009>`__
- `Airship
Shipyard <https://storyboard.openstack.org/#!/project/1010>`__
- `Airship Treasuremap
<https://storyboard.openstack.org/#!/project/airship/treasuremap>`__
Terminology
-----------
**Cloud**: A platform that provides a standard set of interfaces for
`IaaS <https://en.wikipedia.org/wiki/Infrastructure_as_a_service>`__
consumers.
**OSH**: (`OpenStack Helm <https://docs.openstack.org/openstack-helm/latest/>`__) is a
collection of Helm charts used to deploy OpenStack on Kubernetes.
**Helm**: (`Helm <https://helm.sh/>`__) is a package manager for Kubernetes.
Helm Charts help you define, install, and upgrade Kubernetes applications.
**Undercloud/Overcloud**: Terms used to distinguish which cloud is
deployed on top of the other. In Airship sites, OpenStack (overcloud)
is deployed on top of Kubernetes (undercloud).
**Airship**: A specific implementation of OpenStack Helm charts that deploy
Kubernetes. This deployment is the primary focus of this document.
**Control Plane**: From the point of view of the cloud service provider,
the control plane refers to the set of resources (hardware, network,
storage, etc.) configured to provide cloud services for customers.
**Data Plane**: From the point of view of the cloud service provider,
the data plane is the set of resources (hardware, network, storage,
etc.) configured to run consumer workloads. When used in this document,
"data plane" refers to the data plane of the overcloud (OSH).
**Host Profile**: A host profile is a standard way of configuring a bare
metal host. It encompasses items such as the number of bonds, bond slaves,
physical storage mapping and partitioning, and kernel parameters.
Versioning
----------
Airship reference manifests are delivered monthly as release tags in the
`Treasuremap <https://github.com/airshipit/treasuremap/releases>`__.
The releases are verified by `Seaworthy
<https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html>`__,
`Airsloop
<https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html>`__,
and `Airship-in-a-Bottle
<https://github.com/airshipit/treasuremap/blob/master/tools/deployment/aiab/README.rst>`__
pipelines before delivery and are recommended for deployments instead of using
the master branch directly.
Component Overview
------------------
.. image:: diagrams/component_list.png
Node Overview
-------------
This document refers to several types of nodes, which vary in their
purpose, and to some degree in their orchestration / setup:
- **Build node**: This refers to the environment where configuration
documents are built for your environment (e.g., your laptop)
- **Genesis node**: The "genesis" or "seed node" refers to a node used
to get a new deployment off the ground, and is the first node built
in a new deployment environment
- **Control / Master nodes**: The nodes that make up the control
plane. (Note that the genesis node will be one of the controller
nodes)
- **Compute / Worker Nodes**: The nodes that make up the data
plane
Hardware Preparation
--------------------
The Seaworthy site reference shows a production-worthy deployment that includes
multiple disks, as well as redundant/bonded network configuration.
Airship hardware requirements are flexible, and the system can be deployed
with very minimal requirements if needed (e.g., single disk, single network).
For simplified non-bonded, and single disk examples, see
`Airsloop <https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html>`__.
BIOS and IPMI
~~~~~~~~~~~~~
1. Virtualization enabled in BIOS
2. IPMI enabled in server BIOS (e.g., IPMI over LAN option enabled)
3. IPMI IPs assigned, and routed to the environment you will deploy into
Note: Firmware bugs related to IPMI are common. Ensure you are running the
latest firmware version for your hardware. Otherwise, it is recommended to
perform an iLo/iDrac reset, as IPMI bugs with long-running firmware are not
uncommon.
4. Set PXE as first boot device and ensure the correct NIC is selected for PXE.
Disk
~~~~
1. For servers that are in the control plane (including genesis):
- Two-disk RAID-1: Operating System
- Two disks JBOD: Ceph Journal/Meta for control plane
- Remaining disks JBOD: Ceph OSD for control plane
2. For servers that are in the tenant data plane (compute nodes):
- Two-disk RAID-1: Operating System
- Two disks JBOD: Ceph Journal/Meta for tenant-ceph
- Two disks JBOD: Ceph OSD for tenant-ceph
- Remaining disks configured according to the host profile target
for each given server (e.g., RAID-10 for OpenStack ephemeral).
Network
~~~~~~~
1. You have a dedicated PXE interface on untagged/native VLAN,
1x1G interface (eno1)
2. You have VLAN segmented networks,
2x10G bonded interfaces (enp67s0f0 and enp68s0f1)
- Management network (routed/OAM)
- Calico network (Kubernetes control channel)
- Storage network
- Overlay network
- Public network
See detailed network configuration in the
``site/${NEW_SITE}/networks/physical/networks.yaml`` configuration file.
Hardware sizing and minimum requirements
----------------------------------------
+-----------------+----------+----------+----------+
| Node | Disk | Memory | CPU |
+=================+==========+==========+==========+
| Build (laptop) | 10 GB | 4 GB | 1 |
+-----------------+----------+----------+----------+
| Genesis/Control | 500 GB | 64 GB | 24 |
+-----------------+----------+----------+----------+
| Compute | N/A* | N/A* | N/A* |
+-----------------+----------+----------+----------+
* Workload driven (determined by host profile)
See detailed hardware configuration in the
``site/${NEW_SITE}/networks/profiles`` folder.
Establishing build node environment
-----------------------------------
1. On the machine you wish to use to generate deployment files, install required
tooling
.. code-block:: bash
sudo apt -y install docker.io git
2. Clone the ``treasuremap`` git repo as follows
.. code-block:: bash
git clone https://opendev.org/airship/treasuremap.git
cd treasuremap && git checkout <release-tag>
Building site documents
-----------------------
This section goes over how to put together site documents according to
your specific environment and generate the initial Promenade bundle
needed to start the site deployment.
Preparing deployment documents
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In its current form, Pegleg provides an organized structure for YAML
elements that separates common site elements (i.e., ``global``
folder) from unique site elements (i.e., ``site`` folder).
To gain a full understanding of the Pegleg structure, it is highly
recommended to read the Pegleg documentation on this topic
`here <https://airship-pegleg.readthedocs.io/>`__.
The ``seaworthy`` site may be used as a reference site. It is the
principal pipeline for integration and continuous deployment testing of Airship.
Change directory to the ``site`` folder and copy the
``seaworthy`` site as follows:
.. code-block:: bash
NEW_SITE=mySite # replace with the name of your site
cd treasuremap/site
cp -r seaworthy $NEW_SITE
Remove ``seaworthy`` specific certificates.
.. code-block:: bash
rm -f site/${NEW_SITE}/secrets/certificates/certificates.yaml
You will then need to manually make changes to these files. These site
manifests are heavily commented to explain parameters, and more importantly
identify all of the parameters that need to change when authoring a new
site.
These areas which must be updated for a new site are flagged with the
label ``NEWSITE-CHANGEME`` in YAML comments. Search for all instances
of ``NEWSITE-CHANGEME`` in your new site definition. Then follow the
instructions that accompany the tag in order to make all needed changes
to author your new Airship site.
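As a sketch, a recursive grep will surface every flagged location:

.. code-block:: bash

  grep -rn 'NEWSITE-CHANGEME' site/${NEW_SITE}/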
Because some files depend on (or will repeat) information from others,
the order in which you should build your site files is as follows:
1. site/$NEW\_SITE/networks/physical/networks.yaml
2. site/$NEW\_SITE/baremetal/nodes.yaml
3. site/$NEW\_SITE/networks/common-addresses.yaml
4. site/$NEW\_SITE/pki/pki-catalog.yaml
5. All other site files
Register DNS names
~~~~~~~~~~~~~~~~~~
Airship has two virtual IPs.
See the ``data.vip`` section of the
``site/${NEW_SITE}/networks/common-addresses.yaml`` configuration file.
Both are implemented via Kubernetes ingress controller and require FQDNs/DNS.
Register the following list of DNS names:
::
  +---+---------------------------+-------------+
  | A | iam-sw.DOMAIN             | ingress-vip |
  | A | shipyard-sw.DOMAIN        | ingress-vip |
  +---+---------------------------+-------------+
  | A | cloudformation-sw.DOMAIN  | ingress-vip |
  | A | compute-sw.DOMAIN         | ingress-vip |
  | A | dashboard-sw.DOMAIN       | ingress-vip |
  | A | grafana-sw.DOMAIN         | ingress-vip |
  +---+---------------------------+-------------+
  | A | identity-sw.DOMAIN        | ingress-vip |
  | A | image-sw.DOMAIN           | ingress-vip |
  | A | kibana-sw.DOMAIN          | ingress-vip |
  | A | nagios-sw.DOMAIN          | ingress-vip |
  | A | network-sw.DOMAIN         | ingress-vip |
  | A | nova-novncproxy-sw.DOMAIN | ingress-vip |
  | A | object-store-sw.DOMAIN    | ingress-vip |
  | A | orchestration-sw.DOMAIN   | ingress-vip |
  | A | placement-sw.DOMAIN       | ingress-vip |
  | A | volume-sw.DOMAIN          | ingress-vip |
  +---+---------------------------+-------------+
  | A | maas-sw.DOMAIN            | maas-vip    |
  | A | drydock-sw.DOMAIN         | maas-vip    |
  +---+---------------------------+-------------+
Here ``DOMAIN`` is the name of the ingress domain; you can find it in the
``data.dns.ingress_domain`` section of the
``site/${NEW_SITE}/secrets/certificates/ingress.yaml`` configuration file.
Run the following command to get an up-to-date list of required DNS names:
.. code-block:: bash
grep -E 'host: .+DOMAIN' site/${NEW_SITE}/software/config/endpoints.yaml | \
sort -u | awk '{print $2}'
Update Secrets
~~~~~~~~~~~~~~
Replace passphrases under ``site/${NEW_SITE}/secrets/passphrases/``
with random generated ones:
- Passphrases generation ``openssl rand -hex 10``
- UUID generation ``uuidgen`` (e.g., for Ceph filesystem ID)
- Update ``secrets/passphrases/ipmi_admin_password.yaml`` with IPMI password
- Update ``secrets/passphrases/ubuntu_crypt_password.yaml`` with password hash:
.. code-block:: python
python3 -c "from crypt import *; print(crypt('<YOUR_PASSWORD>', METHOD_SHA512))"
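For example, generating the random values referenced above (a sketch; place
the output into the corresponding secrets files):

.. code-block:: bash

  # Random passphrase
  openssl rand -hex 10

  # UUID (e.g., for the Ceph filesystem ID)
  uuidgen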
Configure certificates in ``site/${NEW_SITE}/secrets/certificates/ingress.yaml``;
they need to be issued for the domains configured in the ``Register DNS names`` section.
.. caution::
It is required to configure valid certificates. Self-signed certificates
are not supported.
Control Plane & Tenant Ceph Cluster Notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configuration variables for ceph control plane are located in:
- ``site/${NEW_SITE}/software/charts/ucp/ceph/ceph-osd.yaml``
- ``site/${NEW_SITE}/software/charts/ucp/ceph/ceph-client.yaml``
Configuration variables for tenant ceph are located in:
- ``site/${NEW_SITE}/software/charts/osh/openstack-tenant-ceph/ceph-osd.yaml``
- ``site/${NEW_SITE}/software/charts/osh/openstack-tenant-ceph/ceph-client.yaml``
Configuration summary:
- data/values/conf/storage/osd[\*]/data/location: The block device that
will be formatted by the Ceph chart and used as a Ceph OSD disk
- data/values/conf/storage/osd[\*]/journal/location: The block device
backing the ceph journal used by this OSD. Refer to the journal
paradigm below.
- data/values/conf/pool/target/osd: Number of OSD disks on each node
Assumptions:
1. Ceph OSD disks are not configured for any type of RAID. Instead, they
are configured as JBOD when connected through a RAID controller.
If the RAID controller does not support JBOD, put each disk in its
own RAID-0 and enable RAID cache and write-back cache if the
RAID controller supports it.
2. Ceph disk mapping, disk layout, journal and OSD setup is the same
across Ceph nodes, with only their role differing. Out of the 4
control plane nodes, we expect to have 3 actively participating in
the Ceph quorum, and the remaining 1 node designated as a standby
Ceph node which uses a different control plane profile
(cp\_*-secondary) than the other three (cp\_*-primary).
3. If performing a fresh install, ensure disks are unlabeled or not labeled from a
previous Ceph install, so that the Ceph chart will not fail disk
initialization.
.. important::
It is highly recommended to use SSD devices for Ceph Journal partitions.
If you have an operating system available on the target hardware, you
can determine HDD and SSD devices with:
.. code-block:: bash
lsblk -d -o name,rota
where a ``rota`` (rotational) value of ``1`` indicates a spinning HDD,
and where a value of ``0`` indicates non-spinning disk (i.e., SSD). (Note:
Some SSDs still report a value of ``1``, so it is best to go by your
server specifications).
For OSDs, pass in the whole block device (e.g., ``/dev/sdd``), and the
Ceph chart will take care of disk partitioning, formatting, mounting,
etc.
For Ceph Journals, you can pass in a specific partition (e.g., ``/dev/sdb1``).
Note that it's not required to pre-create these partitions. The Ceph chart
will create journal partitions automatically if they don't exist.
By default the size of every journal partition is 10G. Make sure
there is enough space available to allocate all journal partitions.
Consider the following example where:
- /dev/sda is an operating system RAID-1 device (SSDs for OS root)
- /dev/sd[bc] are SSDs for ceph journals
- /dev/sd[efgh] are HDDs for OSDs
The data section of this file would look like:
.. code-block:: yaml

  data:
    values:
      conf:
        storage:
          osd:
            - data:
                type: block-logical
                location: /dev/sde
              journal:
                type: block-logical
                location: /dev/sdb1
            - data:
                type: block-logical
                location: /dev/sdf
              journal:
                type: block-logical
                location: /dev/sdb2
            - data:
                type: block-logical
                location: /dev/sdg
              journal:
                type: block-logical
                location: /dev/sdc1
            - data:
                type: block-logical
                location: /dev/sdh
              journal:
                type: block-logical
                location: /dev/sdc2
Manifest linting and combining layers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After constituent YAML configurations are finalized, use Pegleg to lint
your manifests. Resolve any issues that result from linting before
proceeding:
.. code-block:: bash
sudo tools/airship pegleg site -r /target lint $NEW_SITE
Note: ``P001`` and ``P005`` linting errors are expected for missing
certificates, as they are not generated until the next section. You may
suppress these warnings by appending ``-x P001 -x P005`` to the lint
command.
Next, use Pegleg to perform the merge that will yield the combined
global + site type + site YAML:
.. code-block:: bash
sudo tools/airship pegleg site -r /target collect $NEW_SITE
Perform a visual inspection of the output. If any errors are discovered,
you may fix your manifests and re-run the ``lint`` and ``collect``
commands.
Once you have error-free output, save the resulting YAML as follows:
.. code-block:: bash
sudo tools/airship pegleg site -r /target collect $NEW_SITE \
-s ${NEW_SITE}_collected
This output is required for subsequent steps.
Lastly, you should also perform a ``render`` on the documents. The
resulting render from Pegleg will not be used as input in subsequent
steps, but is useful for understanding what the document will look like
once Deckhand has performed all substitutions, replacements, etc. This
is also useful for troubleshooting and addressing any Deckhand errors
prior to submitting via Shipyard:
.. code-block:: bash
sudo tools/airship pegleg site -r /target render $NEW_SITE
Inspect the rendered document for any errors. If there are errors,
address them in your manifests and re-run this section of the document.
Building the Promenade bundle
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create an output directory for Promenade certs and run
.. code-block:: bash
mkdir ${NEW_SITE}_certs
sudo tools/airship promenade generate-certs \
-o /target/${NEW_SITE}_certs /target/${NEW_SITE}_collected/*.yaml
Estimated runtime: About **1 minute**
After the certificates have been successfully created, copy the generated
certificates into the security folder. Example:
.. code-block:: bash
mkdir -p site/${NEW_SITE}/secrets/certificates
sudo cp ${NEW_SITE}_certs/certificates.yaml \
site/${NEW_SITE}/secrets/certificates/certificates.yaml
Regenerate collected YAML files to include copied certificates:
.. code-block:: bash
sudo rm -rf ${NEW_SITE}_collected ${NEW_SITE}_certs
sudo tools/airship pegleg site -r /target collect $NEW_SITE \
-s ${NEW_SITE}_collected
Finally, create the Promenade bundle:
.. code-block:: bash
mkdir ${NEW_SITE}_bundle
sudo tools/airship promenade build-all --validators \
-o /target/${NEW_SITE}_bundle /target/${NEW_SITE}_collected/*.yaml
Genesis node
------------
Initial setup
~~~~~~~~~~~~~
Before starting, ensure that the BIOS and IPMI settings match those
stated previously in this document. Also ensure that the hardware RAID
is set up for this node per the control plane disk configuration stated
previously in this document.
Then, start with a manual install of Ubuntu 16.04 on the genesis node, the node
you will use to seed the rest of your environment. Use standard `Ubuntu
ISO <http://releases.ubuntu.com/16.04>`__.
Ensure to select the following:
- UTC timezone
- Hostname that matches the genesis hostname given in
``data.genesis.hostname`` in
``site/${NEW_SITE}/networks/common-addresses.yaml``.
- At the ``Partition Disks`` screen, select ``Manual`` so that you can
setup the same disk partitioning scheme used on the other control
plane nodes that will be deployed by MaaS. Select the first logical
device that corresponds to one of the RAID-1 arrays already set up in
the hardware controller. On this device, setup partitions matching
those defined for the ``bootdisk`` in your control plane host profile
found in ``site/${NEW_SITE}/profiles/host``.
(e.g., 30G for /, 1G for /boot, 100G for /var/log, and all remaining
storage for /var). Note that the volume size syntax looking like
``>300g`` in Drydock means that all remaining disk space is allocated
to this volume, and that volume needs to be at least 300G in
size.
- When you get to the prompt, "How do you want to manage upgrades on
this system?", choose "No automatic updates" so that packages are
only updated at the time of our choosing (e.g., maintenance windows).
- Ensure the grub bootloader is also installed to the same logical
device as in the previous step (this should be default behavior).
After installation, ensure the host has outbound internet access and can
resolve public DNS entries (e.g., ``nslookup google.com``,
``curl https://www.google.com``).
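A quick sketch of those checks:

.. code-block:: bash

  # Verify public DNS resolution and outbound HTTPS access
  nslookup google.com
  curl -sS https://www.google.com > /dev/null && echo "outbound access OK"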
Ensure that the deployed genesis hostname matches the hostname in
``data.genesis.hostname`` in
``site/${NEW_SITE}/networks/common-addresses.yaml``.
If it does not match, then either change the hostname of the node to
match the configuration documents, or re-generate the configuration with
the correct hostname.
To change the hostname of the deployed node, you may run the following:
.. code-block:: bash
sudo hostname $NEW_HOSTNAME
sudo sh -c "echo $NEW_HOSTNAME > /etc/hostname"
sudo vi /etc/hosts # Anywhere the old hostname appears in the file, replace
# with the new hostname
Or, as an alternative, update the genesis hostname
in the site definition and then repeat the steps in the previous two sections,
"Manifest linting and combining layers" and "Building the Promenade bundle".
Installing matching kernel version
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Install the same kernel version on the genesis host that MaaS will use
to deploy new baremetal nodes.
To do this, first you must determine the kernel version that
will be deployed to those nodes. Start by looking at the host profile
definition used to deploy other control plane nodes by searching for
``control-plane: enabled``. Most likely this will be a file under
``global/profiles/host``. In this file, find the kernel info. Example:
.. code-block:: yaml

  platform:
    image: 'xenial'
    kernel: 'hwe-16.04'
    kernel_params:
      kernel_package: 'linux-image-4.15.0-46-generic'
It is recommended to install a matching (and previously tested) kernel:
.. code-block:: bash
sudo apt-get install linux-image-4.15.0-46-generic
Check the installed packages on the genesis host with ``dpkg --list``.
If there are any later kernel versions installed, remove them with
``sudo apt remove``, so that the newly installed kernel is the latest
available. Boot the genesis node using the installed kernel.
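A sketch of checking and trimming kernel packages (the package name in the
removal example is a placeholder, not a real version to remove):

.. code-block:: bash

  # List installed kernel images
  dpkg --list | grep linux-image

  # Example only: remove any kernel newer than the MaaS-deployed version
  sudo apt remove linux-image-<newer-version>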
Install ntpdate/ntp
~~~~~~~~~~~~~~~~~~~
Install and run ntpdate to ensure a reasonably sane time on the genesis
host before proceeding:
.. code-block:: bash
sudo apt -y install ntpdate
sudo ntpdate ntp.ubuntu.com
If your network policy does not allow time sync with external time
sources, specify a local NTP server instead of using ``ntp.ubuntu.com``.
Then, install the NTP client:
.. code-block:: bash
sudo apt -y install ntp
Add the list of NTP servers specified in ``data.ntp.servers_joined`` in
file
``site/${NEW_SITE}/networks/common-addresses.yaml``
to ``/etc/ntp.conf`` as follows:
::
pool NTP_SERVER1 iburst
pool NTP_SERVER2 iburst
(repeat for each NTP server with correct NTP IP or FQDN)
Then, restart the NTP service:
.. code-block:: bash
sudo service ntp restart
If you cannot get good time to your selected time servers,
consider using alternate time sources for your deployment.
Disable the apparmor profile for ntpd:
.. code-block:: bash
sudo ln -s /etc/apparmor.d/usr.sbin.ntpd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.ntpd
This prevents an issue with the MaaS containers, which otherwise get
permission denied errors from apparmor when the MaaS container tries to
leverage libc6 for /bin/sh while the container's ntpd is forcefully
disabled.
Promenade bootstrap
~~~~~~~~~~~~~~~~~~~
Copy the ``${NEW_SITE}_bundle`` directory from the build node to the genesis
node, into the home directory of the user there (e.g., ``/home/ubuntu``).
Then, run the following script as sudo on the genesis node:
.. code-block:: bash
cd ${NEW_SITE}_bundle
sudo ./genesis.sh
Estimated runtime: **1h**
Following completion, run the ``validate-genesis.sh`` script to ensure
correct provisioning of the genesis node:
.. code-block:: bash
cd ${NEW_SITE}_bundle
sudo ./validate-genesis.sh
Estimated runtime: **2m**
Deploy Site with Shipyard
-------------------------
Export valid login credentials for one of the Airship Keystone users defined
for the site. Currently there are no authorization checks in place, so
the credentials for any of the site-defined users will work. For
example, we can use the ``shipyard`` user, with the password that was
defined in
``site/${NEW_SITE}/secrets/passphrases/ucp_shipyard_keystone_password.yaml``.
Example:
.. code-block:: bash
export OS_AUTH_URL="https://iam-sw.DOMAIN:443/v3"
export OS_USERNAME=shipyard
export OS_PASSWORD=password123
Next, load collected site manifests to Shipyard
.. code-block:: bash
sudo -E tools/airship shipyard create configdocs ${NEW_SITE} \
--directory=/target/${NEW_SITE}_collected
sudo tools/airship shipyard commit configdocs
Estimated runtime: **3m**
Now deploy the site with shipyard:
.. code-block:: bash
tools/airship shipyard create action deploy_site
Estimated runtime: **3h**
Check periodically for successful deployment:
.. code-block:: bash