
Updates on site authoring and deployment guide

 - Order of few sections re-arranged
 - Use of tools/airship over cloning various repos for
   Pegleg, Promenade, and Shipyard
 - Additional info on Airship VIPs
 - Multiple grammar fixes after reviews

Change-Id: Icb18ad77844038d61046670cb327d27cfcabded3
tags/v1.4
Kaspars Skels, 2 weeks ago
parent commit 59a4dc2dd6
1 changed file with 289 additions and 305 deletions

doc/source/authoring_and_deployment.rst

@@ -16,6 +16,35 @@ for a standard Airship deployment. For the most part, the site
 authoring guidance lives within ``seaworthy`` reference site in the
 form of YAML comments.
 
+Support
+-------
+
+Bugs may be viewed and reported at the following locations, depending on
+the component:
+
+-  OpenStack Helm: `OpenStack Storyboard group
+   <https://storyboard.openstack.org/#!/project_group/64>`__
+
+-  Airship: Bugs may be filed using OpenStack Storyboard for specific
+   projects in the `Airship
+   group <https://storyboard.openstack.org/#!/project_group/85>`__:
+
+    -  `Airship Armada <https://storyboard.openstack.org/#!/project/1002>`__
+    -  `Airship
+       Deckhand <https://storyboard.openstack.org/#!/project/1004>`__
+    -  `Airship
+       Divingbell <https://storyboard.openstack.org/#!/project/1001>`__
+    -  `Airship
+       Drydock <https://storyboard.openstack.org/#!/project/1005>`__
+    -  `Airship MaaS <https://storyboard.openstack.org/#!/project/1007>`__
+    -  `Airship Pegleg <https://storyboard.openstack.org/#!/project/1008>`__
+    -  `Airship
+       Promenade <https://storyboard.openstack.org/#!/project/1009>`__
+    -  `Airship
+       Shipyard <https://storyboard.openstack.org/#!/project/1010>`__
+    -  `Airship Treasuremap
+       <https://storyboard.openstack.org/#!/project/airship/treasuremap>`__
+
 Terminology
 -----------
 
@@ -23,37 +52,56 @@ Terminology
 `IaaS <https://en.wikipedia.org/wiki/Infrastructure_as_a_service>`__
 consumers.
 
-**OSH**: (`OpenStack
-Helm <https://docs.openstack.org/openstack-helm/latest/>`__) is a
-collection of Helm charts used to deploy OpenStack on kubernetes.
+**OSH**: (`OpenStack Helm <https://docs.openstack.org/openstack-helm/latest/>`__) is a
+collection of Helm charts used to deploy OpenStack on Kubernetes.
+
+**Helm**: (`Helm <https://helm.sh/>`__) is a package manager for Kubernetes.
+Helm Charts help you define, install, and upgrade Kubernetes applications.
 
 **Undercloud/Overcloud**: Terms used to distinguish which cloud is
 deployed on top of the other. In Airship sites, OpenStack (overcloud)
 is deployed on top of Kubernetes (undercloud).
 
-**Airship**: A specific implementation of OpenStack Helm charts onto
-kubernetes, the deployment of which is the primary focus of this document.
+**Airship**: A specific implementation of OpenStack Helm charts that deploy
+Kubernetes. This deployment is the primary focus of this document.
 
 **Control Plane**: From the point of view of the cloud service provider,
 the control plane refers to the set of resources (hardware, network,
-storage, etc) sourced to run cloud services.
+storage, etc.) configured to provide cloud services for customers.
 
 **Data Plane**: From the point of view of the cloud service provider,
 the data plane is the set of resources (hardware, network, storage,
-etc.) sourced to run consumer workloads. When used in this document,
+etc.) configured to run consumer workloads. When used in this document,
 "data plane" refers to the data plane of the overcloud (OSH).
 
 **Host Profile**: A host profile is a standard way of configuring a bare
-metal host. Encompasses items such as the number of bonds, bond slaves,
+metal host. It encompasses items such as the number of bonds, bond slaves,
 physical storage mapping and partitioning, and kernel parameters.
 
+Versioning
+----------
+
+Airship reference manifests are delivered monthly as release tags in the
+`Treasuremap <https://github.com/airshipit/treasuremap/releases>`__ repository.
+
+The releases are verified by `Seaworthy
+<https://airship-treasuremap.readthedocs.io/en/latest/seaworthy.html>`__,
+`Airsloop
+<https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html>`__,
+and `Airship-in-a-Bottle
+<https://github.com/airshipit/treasuremap/blob/master/tools/deployment/aiab/README.rst>`__
+pipelines before delivery and are recommended for deployments instead of using
+the master branch directly.
+
+
 Component Overview
-~~~~~~~~~~~~~~~~~~
+------------------
 
 .. image:: diagrams/component_list.png
 
+
 Node Overview
-~~~~~~~~~~~~~
+-------------
 
 This document refers to several types of nodes, which vary in their
 purpose, and to some degree in their orchestration / setup:
@@ -62,53 +110,41 @@ purpose, and to some degree in their orchestration / setup:
   documents are built for your environment (e.g., your laptop)
 -  **Genesis node**: The "genesis" or "seed node" refers to a node used
   to get a new deployment off the ground, and is the first node built
-   in a new deployment environment.
--  **Control / Controller nodes**: The nodes that make up the control
-   plane. (Note that the Genesis node will be one of the controller
-   nodes.)
--  **Compute nodes / Worker Nodes**: The nodes that make up the data
+   in a new deployment environment
+-  **Control / Master nodes**: The nodes that make up the control
+   plane. (Note that the genesis node will be one of the controller
+   nodes)
+-  **Compute / Worker Nodes**: The nodes that make up the data
    plane
 
-Support
--------
+Hardware Preparation
+--------------------
 
-Bugs may be viewed and reported at the following locations, depending on
-the component:
+The Seaworthy site reference shows a production-worthy deployment that includes
+multiple disks, as well as redundant/bonded network configuration.
 
--  OpenStack Helm: `OpenStack Storyboard group
-   <https://storyboard.openstack.org/#!/project_group/64>`__
-
--  Airship: Bugs may be filed using OpenStack Storyboard for specific
-   projects in `Airship
-   group <https://storyboard.openstack.org/#!/project_group/85>`__:
+Airship hardware requirements are flexible, and the system can be deployed
+with very minimal requirements if needed (e.g., single disk, single network).
 
-    -  `Airship Armada <https://storyboard.openstack.org/#!/project/1002>`__
-    -  `Airship Berth <https://storyboard.openstack.org/#!/project/1003>`__
-    -  `Airship
-       Deckhand <https://storyboard.openstack.org/#!/project/1004>`__
-    -  `Airship
-       Divingbell <https://storyboard.openstack.org/#!/project/1001>`__
-    -  `Airship
-       Drydock <https://storyboard.openstack.org/#!/project/1005>`__
-    -  `Airship MaaS <https://storyboard.openstack.org/#!/project/1007>`__
-    -  `Airship Pegleg <https://storyboard.openstack.org/#!/project/1008>`__
-    -  `Airship
-       Promenade <https://storyboard.openstack.org/#!/project/1009>`__
-    -  `Airship
-       Shipyard <https://storyboard.openstack.org/#!/project/1010>`__
-    -  `Airship in a
-       Bottle <https://storyboard.openstack.org/#!/project/1006>`__
+For simplified non-bonded, single-disk examples, see
+`Airsloop <https://airship-treasuremap.readthedocs.io/en/latest/airsloop.html>`__.
 
-    -  `Airship Treasuremap
-       <https://storyboard.openstack.org/#!/project/openstack/airship-treasuremap>`__
+BIOS and IPMI
+~~~~~~~~~~~~~
 
-Hardware Prep
--------------
+1. Virtualization enabled in BIOS
+2. IPMI enabled in server BIOS (e.g., IPMI over LAN option enabled)
+3. IPMI IPs assigned, and routed to the environment you will deploy into.
+   Note: Firmware bugs related to IPMI are common. Ensure you are running the
+   latest firmware version for your hardware. Otherwise, it is recommended to
+   perform an iLO/iDRAC reset, as IPMI bugs with long-running firmware are not
+   uncommon.
+4. Set PXE as first boot device and ensure the correct NIC is selected for PXE.
 
 Disk
 ~~~~
 
-1. For servers that are in the control plane (including Genesis):
+1. For servers that are in the control plane (including genesis):
 
   - Two-disk RAID-1: Operating System
   - Two disks JBOD: Ceph Journal/Meta for control plane
@@ -119,111 +155,106 @@ Disk
   - Two-disk RAID-1: Operating System
   - Two disks JBOD: Ceph Journal/Meta for tenant-ceph
   - Two disks JBOD: Ceph OSD for tenant-ceph
-   - Remaining disks need to be configured according to the host profile target
-     for each given server (e.g. RAID-10 for OpenStack Ephemeral).
-
-BIOS and IPMI
-~~~~~~~~~~~~~
-
-1. Virtualization enabled in BIOS
-2. IPMI enabled in server BIOS (e.g., IPMI over LAN option enabled)
-3. IPMI IPs assigned, and routed to the environment you will deploy into
-   Note: Firmware bugs related to IPMI are common. Ensure you are running the
-   latest firmware version for your hardware. Otherwise, it is recommended to
-   perform an iLo/iDrac reset, as IPMI bugs with long-running firmware are not
-   uncommon.
-4. Set PXE as first boot device and ensure the correct NIC is selected for PXE
+   - Remaining disks configured according to the host profile target
+     for each given server (e.g., RAID-10 for OpenStack ephemeral).
 
 Network
 ~~~~~~~
 
-1. You have a network you can successfully PXE boot with your network topology
-   and bonding settings (dedicated PXE interace on untagged/native VLAN in this
-   example)
-2. You have (VLAN) segmented, routed networks accessible by all nodes for:
-
-   1. Management network(s) (k8s control channel)
-   2. Calico network(s)
-   3. Storage network(s)
-   4. Overlay network(s)
-   5. Public network(s)
-
-HW Sizing and minimum requirements
-----------------------------------
-
-+----------+----------+----------+----------+
-|  Node    |   disk   |  memory  |   cpu    |
-+==========+==========+==========+==========+
-|  Build   |   10 GB  |  4 GB    |   1      |
-+----------+----------+----------+----------+
-| Genesis  |   100 GB |  16 GB   |   8      |
-+----------+----------+----------+----------+
-| Control  |   10 TB  |  128 GB  |   24     |
-+----------+----------+----------+----------+
-| Compute  |   N/A*   |  N/A*    |   N/A*   |
-+----------+----------+----------+----------+
+1. You have a dedicated PXE interface on an untagged/native VLAN,
+   1x1G interface (eno1)
+2. You have VLAN-segmented networks,
+   2x10G bonded interfaces (enp67s0f0 and enp68s0f1)
+
+    - Management network (routed/OAM)
+    - Calico network (Kubernetes control channel)
+    - Storage network
+    - Overlay network
+    - Public network
+
+See detailed network configuration in the
+``site/${NEW_SITE}/networks/physical/networks.yaml`` configuration file.
+
+Hardware sizing and minimum requirements
+----------------------------------------
+
++-----------------+----------+----------+----------+
+|  Node           |   Disk   |  Memory  |   CPU    |
++=================+==========+==========+==========+
+| Build (laptop)  |   10 GB  |  4 GB    |   1      |
++-----------------+----------+----------+----------+
+| Genesis/Control |   500 GB |  64 GB   |   24     |
++-----------------+----------+----------+----------+
+| Compute         |   N/A*   |  N/A*    |   N/A*   |
++-----------------+----------+----------+----------+
 
 * Workload driven (determined by host profile)
 
+See detailed hardware configuration in the
+``site/${NEW_SITE}/networks/profiles`` folder.
 
 Establishing build node environment
 -----------------------------------
 
 1. On the machine you wish to use to generate deployment files, install required
-   tooling::
+   tooling
+
+.. code-block:: bash
 
    sudo apt -y install docker.io git
 
-2. Clone and link the required git repos as follows::
+2. Clone the ``treasuremap`` git repo as follows
+
+.. code-block:: bash
 
-    git clone https://git.openstack.org/openstack/airship-pegleg
-    git clone https://git.openstack.org/openstack/airship-treasuremap
+    git clone https://opendev.org/airship/treasuremap.git
+    cd treasuremap && git checkout <release-tag>
 
-Building Site documents
+Building site documents
 -----------------------
 
 This section goes over how to put together site documents according to
-your specific environment, and generate the initial Promenade bundle
+your specific environment and generate the initial Promenade bundle
 needed to start the site deployment.
 
 Preparing deployment documents
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-In its current form, pegleg provides an organized structure for YAML
-elements, in order to separate common site elements (i.e., ``global``
+In its current form, Pegleg provides an organized structure for YAML
+elements that separates common site elements (i.e., ``global``
 folder) from unique site elements (i.e., ``site`` folder).
 
-To gain a full understanding of the pegleg structure, it is highly
-recommended to read pegleg documentation on this
+To gain a full understanding of the Pegleg structure, it is highly
+recommended to read the Pegleg documentation on this topic
 `here <https://airship-pegleg.readthedocs.io/>`__.
 
 The ``seaworthy`` site may be used as reference site. It is the
 principal pipeline for integration and continuous deployment testing of Airship.
 
-Change directory to the ``airship-treasuremap/site`` folder and copy the
+Change directory to the ``site`` folder and copy the
 ``seaworthy`` site as follows:
 
-::
+.. code-block:: bash
 
    NEW_SITE=mySite # replace with the name of your site
-    cd airship-treasuremap/site
+    cd treasuremap/site
    cp -r seaworthy $NEW_SITE
 
 Remove ``seaworthy`` specific certificates.
 
-::
+.. code-block:: bash
 
-    rm -f airship-treasuremap/site/${NEW_SITE}/secrets/certificates/certificates.yaml
+    rm -f site/${NEW_SITE}/secrets/certificates/certificates.yaml
 
 
 You will then need to manually make changes to these files. These site
-manifests are heavily commented to explain parameters, and importantly
+manifests are heavily commented to explain parameters, and more importantly
 identify all of the parameters that need to change when authoring a new
 site.
 
 These areas which must be updated for a new site are flagged with the
-label ``NEWSITE-CHANGEME`` in YAML commentary. Search for all instances
-of ``NEWSITE-CHANGEME`` in your new site definition, and follow the
+label ``NEWSITE-CHANGEME`` in YAML comments. Search for all instances
+of ``NEWSITE-CHANGEME`` in your new site definition. Then follow the
 instructions that accompany the tag in order to make all needed changes
 to author your new Airship site.
 
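The ``NEWSITE-CHANGEME`` search described above can be done in one pass with ``grep``; a minimal sketch, where the scratch directory and the site name ``mySite`` are illustrative stand-ins for a real treasuremap checkout:

```shell
# List every NEWSITE-CHANGEME flag left in a copied site, with file
# names and line numbers. A scratch site is created here so the example
# is self-contained; in a real checkout, run the grep from the
# treasuremap root against site/${NEW_SITE}.
NEW_SITE=mySite
ROOT=$(mktemp -d)
mkdir -p "$ROOT/site/$NEW_SITE"
printf '%s\n' '# NEWSITE-CHANGEME: set the ingress domain' \
    > "$ROOT/site/$NEW_SITE/example.yaml"

grep -rn 'NEWSITE-CHANGEME' "$ROOT/site/$NEW_SITE/"
```

An empty result from the real grep means every flagged parameter has been addressed.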
@@ -239,36 +270,47 @@ the order in which you should build your site files is as follows:
 Register DNS names
 ~~~~~~~~~~~~~~~~~~
 
+Airship has two virtual IPs.
+
+See the ``data.vip`` section of the
+``site/${NEW_SITE}/networks/common-addresses.yaml`` configuration file.
+Both are implemented via the Kubernetes ingress controller and require FQDNs/DNS.
+
 Register the following list of DNS names:
 
 ::
 
-    cloudformation.DOMAIN
-    compute.DOMAIN
-    dashboard.DOMAIN
-    drydock.DOMAIN
-    grafana.DOMAIN
-    iam.DOMAIN
-    identity.DOMAIN
-    image.DOMAIN
-    kibana.DOMAIN
-    maas.DOMAIN
-    nagios.DOMAIN
-    network.DOMAIN
-    nova-novncproxy.DOMAIN
-    object-store.DOMAIN
-    orchestration.DOMAIN
-    placement.DOMAIN
-    shipyard.DOMAIN
-    volume.DOMAIN
+    +---+---------------------------+-------------+
+    | A |             iam-sw.DOMAIN | ingress-vip |
+    | A |        shipyard-sw.DOMAIN | ingress-vip |
+    +---+---------------------------+-------------+
+    | A |  cloudformation-sw.DOMAIN | ingress-vip |
+    | A |         compute-sw.DOMAIN | ingress-vip |
+    | A |       dashboard-sw.DOMAIN | ingress-vip |
+    | A |         grafana-sw.DOMAIN | ingress-vip |
+    +---+---------------------------+-------------+
+    | A |        identity-sw.DOMAIN | ingress-vip |
+    | A |           image-sw.DOMAIN | ingress-vip |
+    | A |          kibana-sw.DOMAIN | ingress-vip |
+    | A |          nagios-sw.DOMAIN | ingress-vip |
+    | A |         network-sw.DOMAIN | ingress-vip |
+    | A | nova-novncproxy-sw.DOMAIN | ingress-vip |
+    | A |    object-store-sw.DOMAIN | ingress-vip |
+    | A |   orchestration-sw.DOMAIN | ingress-vip |
+    | A |       placement-sw.DOMAIN | ingress-vip |
+    | A |          volume-sw.DOMAIN | ingress-vip |
+    +---+---------------------------+-------------+
+    | A |            maas-sw.DOMAIN | maas-vip    |
+    | A |         drydock-sw.DOMAIN | maas-vip    |
+    +---+---------------------------+-------------+
 
 Here ``DOMAIN`` is a name of ingress domain, you can find it in the
 ``data.dns.ingress_domain`` section of
 ``site/${NEW_SITE}/secrets/certificates/ingress.yaml`` configuration file.
 
-Run the following command to get up to date list of required DNS names:
+Run the following command to get an up-to-date list of required DNS names:
 
-::
+.. code-block:: bash
 
    grep -E 'host: .+DOMAIN' site/${NEW_SITE}/software/config/endpoints.yaml | \
       sort -u | awk '{print $2}'
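As a sketch of what that pipeline does, here it is run against a tiny stand-in for ``endpoints.yaml`` (the two ``host:`` entries are illustrative; the real file is ``site/${NEW_SITE}/software/config/endpoints.yaml``):

```shell
# Extract the unique FQDNs from host: entries, exactly as the grep
# pipeline above does, but against a canned two-entry sample file.
ENDPOINTS=$(mktemp)
cat > "$ENDPOINTS" <<'EOF'
          host: identity-sw.DOMAIN
          host: shipyard-sw.DOMAIN
          host: identity-sw.DOMAIN
EOF
grep -E 'host: .+DOMAIN' "$ENDPOINTS" | sort -u | awk '{print $2}'
# prints:
# identity-sw.DOMAIN
# shipyard-sw.DOMAIN
```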
@@ -279,21 +321,21 @@ Update Secrets
 Replace passphrases under ``site/${NEW_SITE}/secrets/passphrases/``
 with random generated ones:
 
-- Passpharses generation ``openssl rand -hex 10``
-- UUID generation ``uuidgen`` (e.g. for Ceph filesystem ID)
+- Passphrase generation: ``openssl rand -hex 10``
+- UUID generation: ``uuidgen`` (e.g., for Ceph filesystem ID)
 - Update ``secrets/passphrases/ipmi_admin_password.yaml`` with IPMI password
 - Update ``secrets/passphrases/ubuntu_crypt_password.yaml`` with password hash:
 
-::
+.. code-block:: python
 
    python3 -c "from crypt import *; print(crypt('<YOUR_PASSWORD>', METHOD_SHA512))"
 
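A sketch of the generation commands above, assuming ``openssl`` is available (the ``/proc`` fallback for the UUID is a convenience for hosts without ``uuidgen``); writing the values into the passphrase YAML files is left to the operator:

```shell
# Generate one random passphrase (10 bytes -> 20 hex characters) and
# one UUID, e.g. for the Ceph filesystem ID.
PASSPHRASE=$(openssl rand -hex 10)
FS_UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "passphrase: $PASSPHRASE"
echo "fs uuid:    $FS_UUID"
```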
 Configure certificates in ``site/${NEW_SITE}/secrets/certificates/ingress.yaml``,
-they need to be issued for the domains configured in ``Register DNS names`` section.
+which must be issued for the domains configured in the ``Register DNS names`` section.
 
 .. caution::
 
-    It is required to configure valid certificates, self-signed certificates
+    It is required to configure valid certificates. Self-signed certificates
    are not supported.
 
 Control Plane & Tenant Ceph Cluster Notes
@@ -309,7 +351,7 @@ Configuration variables for tenant ceph are located in:
 - ``site/${NEW_SITE}/software/charts/osh/openstack-tenant-ceph/ceph-osd.yaml``
 - ``site/${NEW_SITE}/software/charts/osh/openstack-tenant-ceph/ceph-client.yaml``
 
-Setting highlights:
+Configuration summary:
 
 -  data/values/conf/storage/osd[\*]/data/location: The block device that
   will be formatted by the Ceph chart and used as a Ceph OSD disk
@@ -320,43 +362,46 @@ Setting highlights:
 
 Assumptions:
 
-1. Ceph OSD disks are not configured for any type of RAID, they
+1. Ceph OSD disks are not configured for any type of RAID. Instead, they
   are configured as JBOD when connected through a RAID controller.
-   If RAID controller does not support JBOD, put each disk in its
+   If the RAID controller does not support JBOD, put each disk in its
   own RAID-0 and enable RAID cache and write-back cache if the
   RAID controller supports it.
 2. Ceph disk mapping, disk layout, journal and OSD setup is the same
   across Ceph nodes, with only their role differing. Out of the 4
   control plane nodes, we expect to have 3 actively participating in
-   the Ceph quorom, and the remaining 1 node designated as a standby
+   the Ceph quorum, and the remaining 1 node designated as a standby
   Ceph node which uses a different control plane profile
   (cp\_*-secondary) than the other three (cp\_*-primary).
-3. If doing a fresh install, disk are unlabeled or not labeled from a
+3. If performing a fresh install, disks are unlabeled or not labeled from a
   previous Ceph install, so that Ceph chart will not fail disk
   initialization.
 
-It's highly recommended to use SSD devices for Ceph Journal partitions.
+.. important::
+
+    It is highly recommended to use SSD devices for Ceph Journal partitions.
 
 If you have an operating system available on the target hardware, you
 can determine HDD and SSD devices with:
 
-::
+
+.. code-block:: bash
 
    lsblk -d -o name,rota
 
 where a ``rota`` (rotational) value of ``1`` indicates a spinning HDD,
-and where a value of ``0`` indicates non-spinning disk (i.e. SSD). (Note
-- Some SSDs still report a value of ``1``, so it is best to go by your
+and where a value of ``0`` indicates non-spinning disk (i.e., SSD). (Note:
+Some SSDs still report a value of ``1``, so it is best to go by your
 server specifications).
 
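A sketch of filtering that output for SSD candidates; canned ``lsblk``-style output is used here so the example runs anywhere, and on a real host you would pipe ``lsblk -d -o name,rota`` in instead:

```shell
# Print only the non-rotational (rota == 0) devices, skipping the
# header row; these are the SSD candidates for Ceph journals.
lsblk_out='NAME ROTA
sda 1
sdb 0
sdc 0'
echo "$lsblk_out" | awk 'NR > 1 && $2 == 0 {print $1}'
# prints:
# sdb
# sdc
```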
 For OSDs, pass in the whole block device (e.g., ``/dev/sdd``), and the
 Ceph chart will take care of disk partitioning, formatting, mounting,
 etc.
 
-For Ceph Journals, you can pass in a specific partition (e.g., ``/dev/sdb1``),
-note that it's not required to pre-create these partitions, Ceph chart
+For Ceph Journals, you can pass in a specific partition (e.g., ``/dev/sdb1``).
+Note that it's not required to pre-create these partitions. The Ceph chart
 will create journal partitions automatically if they don't exist.
-By default the size of every journal partition is 10G, make sure
+By default the size of every journal partition is 10G. Make sure
 there is enough space available to allocate all journal partitions.
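The space check is simple arithmetic: one 10G journal partition per OSD on the shared journal device. A sketch with illustrative numbers (four OSD disks sharing one SSD journal device):

```shell
# Minimum space needed on a journal device = OSD count x 10G default.
OSDS_PER_JOURNAL_DEV=4   # OSD data disks sharing this journal device
JOURNAL_PART_GB=10       # chart default journal partition size
NEEDED_GB=$(( OSDS_PER_JOURNAL_DEV * JOURNAL_PART_GB ))
echo "journal device needs at least ${NEEDED_GB}G free"
# prints: journal device needs at least 40G free
```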
 
 Consider the following example where:
@@ -367,7 +412,7 @@ Consider the following example where:
 
 The data section of this file would look like:
 
-::
+.. code-block:: yaml
 
    data:
      values:
@@ -403,54 +448,48 @@ Manifest linting and combining layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 After constituent YAML configurations are finalized, use Pegleg to lint
-your manifests, and resolve any issues that result from linting before
+your manifests. Resolve any issues that result from linting before
 proceeding:
 
-::
+.. code-block:: bash
 
-    sudo airship-pegleg/tools/pegleg.sh repo \
-      -r airship-treasuremap lint
+    sudo tools/airship pegleg site -r /target lint $NEW_SITE
 
-Note: ``P001`` and ``P003`` linting errors are expected for missing
+Note: ``P001`` and ``P005`` linting errors are expected for missing
 certificates, as they are not generated until the next section. You may
-suppress these warnings by appending ``-x P001 -x P003`` to the lint
+suppress these warnings by appending ``-x P001 -x P005`` to the lint
 command.
 
-Next, use pegleg to perform the merge that will yield the combined
+Next, use Pegleg to perform the merge that will yield the combined
 global + site type + site YAML:
 
-::
+.. code-block:: bash
 
-    sudo sh airship-pegleg/tools/pegleg.sh site \
-      -r airship-treasuremap \
-      collect $NEW_SITE
+    sudo tools/airship pegleg site -r /target collect $NEW_SITE
 
 Perform a visual inspection of the output. If any errors are discovered,
 you may fix your manifests and re-run the ``lint`` and ``collect``
 commands.
 
-After you have an error-free output, save the resulting YAML as follows:
+Once you have error-free output, save the resulting YAML as follows:
 
-::
+.. code-block:: bash
 
-    sudo airship-pegleg/tools/pegleg.sh site \
-      -r airship-treasuremap \
-      collect $NEW_SITE -s ${NEW_SITE}_collected
+    sudo tools/airship pegleg site -r /target collect $NEW_SITE \
+        -s ${NEW_SITE}_collected
 
-It is this output which will be used in subsequent steps.
+This output is required for subsequent steps.
 
 Lastly, you should also perform a ``render`` on the documents. The
 resulting render from Pegleg will not be used as input in subsequent
 steps, but is useful for understanding what the document will look like
 once Deckhand has performed all substitutions, replacements, etc. This
-is also useful for troubleshooting, and addressing any Deckhand errors
+is also useful for troubleshooting and addressing any Deckhand errors
 prior to submitting via Shipyard:
 
-::
+.. code-block:: bash
 
-    sudo airship-pegleg/tools/pegleg.sh site \
-      -r airship-treasuremap \
-      render $NEW_SITE
+    sudo tools/airship pegleg site -r /target render $NEW_SITE
 
 Inspect the rendered document for any errors. If there are errors,
 address them in your manifests and re-run this section of the document.
@@ -458,64 +497,41 @@ address them in your manifests and re-run this section of the document.
 Building the Promenade bundle
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Clone the Promenade repo, if not already cloned:
-
-::
-
-    git clone https://opendev.org/airship/promenade
-
-Refer to the ``data/charts/ucp/promenade/reference`` field in
-``airship-treasuremap/global/software/config/versions.yaml``. If
-this is a pinned reference (i.e., any reference that's not ``master``),
-then you should checkout the same version of the Promenade repository.
-For example, if the Promenade reference was ``86c3c11...`` in the
-versions file, checkout the same version of the Promenade repo which was
-cloned previously:
-
-::
+Create an output directory for Promenade certs and run:
 
-    (cd promenade && git checkout 86c3c11)
+.. code-block:: bash
 
-Likewise, before running the ``simple-deployment.sh`` script, you should
-refer to the ``data/images/ucp/promenade/promenade`` field in
-``~/airship-treasuremap/global/software/config/versions.yaml``. If
-there is a pinned reference (i.e., any image reference that's not
-``latest``), then this reference should be used to set the
-``IMAGE_PROMENADE`` environment variable. For example, if the Promenade
-image was pinned to ``quay.io/airshipit/promenade:d30397f...`` in
-the versions file, then export the previously mentioned environment
-variable like so:
+    mkdir ${NEW_SITE}_certs
+    sudo tools/airship promenade generate-certs \
+      -o /target/${NEW_SITE}_certs /target/${NEW_SITE}_collected/*.yaml
 
-::
-
-    export IMAGE_PROMENADE=quay.io/airshipit/promenade:d30397f...
+Estimated runtime: About **1 minute**
 
-Now, create an output directory for Promenade bundles and run the
-``simple-deployment.sh`` script:
+After the certificates have been successfully created, copy the generated
+certificates into the security folder. Example:
 
-::
+.. code-block:: bash
 
-    mkdir ${NEW_SITE}_bundle
-    sudo -E promenade/tools/simple-deployment.sh ${NEW_SITE}_collected ${NEW_SITE}_bundle
+    mkdir -p site/${NEW_SITE}/secrets/certificates
+    sudo cp ${NEW_SITE}_certs/certificates.yaml \
+      site/${NEW_SITE}/secrets/certificates/certificates.yaml
 
-Estimated runtime: About **1 minute**
+Regenerate collected YAML files to include copied certificates:
 
-After the bundle has been successfully created, copy the generated
-certificates into the security folder. Ex:
+.. code-block:: bash
 
-::
+    sudo rm -rf ${NEW_SITE}_collected ${NEW_SITE}_certs
+    sudo tools/airship pegleg site -r /target collect $NEW_SITE \
+        -s ${NEW_SITE}_collected
 
-    mkdir -p airship-treasuremap/site/${NEW_SITE}/secrets/certificates
-    sudo cp ${NEW_SITE}_bundle/certificates.yaml \
-      airship-treasuremap/site/${NEW_SITE}/secrets/certificates/certificates.yaml
+Finally, create the Promenade bundle:
 
-Regenerate collected YAML files to include copied certificates:
+.. code-block:: bash
 
-::
+   mkdir ${NEW_SITE}_bundle
+   sudo tools/airship promenade build-all --validators \
+     -o /target/${NEW_SITE}_bundle /target/${NEW_SITE}_collected/*.yaml
 
-    sudo airship-pegleg/tools/pegleg.sh site \
-      -r airship-treasuremap \
-      collect $NEW_SITE -s ${NEW_SITE}_collected
 
 Genesis node
 ------------
@@ -528,32 +544,30 @@ stated previously in this document. Also ensure that the hardware RAID
 is setup for this node per the control plane disk configuration stated
 previously in this document.
 
-Then, start with a manual install of Ubuntu 16.04 on the node you wish
-to use to seed the rest of your environment standard `Ubuntu
+Then, start with a manual install of Ubuntu 16.04 on the genesis node, the node
+you will use to seed the rest of your environment. Use the standard `Ubuntu
 ISO <http://releases.ubuntu.com/16.04>`__.
 Ensure to select the following:
 
 -  UTC timezone
--  Hostname that matches the Genesis hostname given in
-   ``/data/genesis/hostname`` in
-   ``airship-treasuremap/site/${NEW_SITE}/networks/common-addresses.yaml``.
+-  Hostname that matches the genesis hostname given in
+   ``data.genesis.hostname`` in
+   ``site/${NEW_SITE}/networks/common-addresses.yaml``.
 -  At the ``Partition Disks`` screen, select ``Manual`` so that you can
   setup the same disk partitioning scheme used on the other control
   plane nodes that will be deployed by MaaS. Select the first logical
   device that corresponds to one of the RAID-1 arrays already setup in
   the hardware controller. On this device, setup partitions matching
   those defined for the ``bootdisk`` in your control plane host profile
-   found in ``airship-treasuremap/site/${NEW_SITE}/profiles/host``.
+   found in ``site/${NEW_SITE}/profiles/host``.
   (e.g., 30G for /, 1G for /boot, 100G for /var/log, and all remaining
   storage for /var). Note that the volume size syntax looking like
   ``>300g`` in Drydock means that all remaining disk space is allocated
   to this volume, and that volume needs to be at least 300G in
   size.
--  Ensure that OpenSSH and Docker (Docker is needed because of
-   miniMirror) are included as installed packages
 -  When you get to the prompt, "How do you want to manage upgrades on
   this system?", choose "No automatic updates" so that packages are
-   only updated at the time of our choosing (e.g. maintenance windows).
+   only updated at the time of our choosing (e.g., maintenance windows).
 -  Ensure the grub bootloader is also installed to the same logical
   device as in the previous step (this should be default behavior).
 
@@ -561,38 +575,39 @@ After installation, ensure the host has outbound internet access and can
 resolve public DNS entries (e.g., ``nslookup google.com``,
 ``curl https://www.google.com``).
 
-Ensure that the deployed Genesis hostname matches the hostname in
-``data/genesis/hostname`` in
-``airship-treasuremap/site/${NEW_SITE}/networks/common-addresses.yaml``.
+Ensure that the deployed genesis hostname matches the hostname in
+``data.genesis.hostname`` in
+``site/${NEW_SITE}/networks/common-addresses.yaml``.
 If it does not match, then either change the hostname of the node to
 match the configuration documents, or re-generate the configuration with
-the correct hostname. In order to change the hostname of the deployed
-node, you may run the following:
+the correct hostname.
 
-::
+To change the hostname of the deployed node, you may run the following:
+
+.. code-block:: bash
 
     sudo hostname $NEW_HOSTNAME
     sudo sh -c "echo $NEW_HOSTNAME > /etc/hostname"
     sudo vi /etc/hosts # Anywhere the old hostname appears in the file, replace
                        # with the new hostname
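The interactive ``vi`` step above can also be done non-interactively with
``sed``. The following is an illustrative sketch, not part of the Airship
tooling; the hostnames are hypothetical placeholders, and it runs against a
scratch copy of the hosts file so the substitution can be checked first (on
the real node you would target ``/etc/hosts`` with ``sudo``, assuming GNU
sed):

```shell
# Hypothetical hostnames, for illustration only.
OLD_HOSTNAME=old-genesis
NEW_HOSTNAME=cab23-genesis

# Work on a scratch copy; on the genesis node, target /etc/hosts with sudo.
printf '127.0.0.1 localhost\n127.0.1.1 %s\n' "$OLD_HOSTNAME" > hosts.copy

# Replace whole-word occurrences of the old hostname with the new one
# (\b word boundaries are a GNU sed feature).
sed -i "s/\b${OLD_HOSTNAME}\b/${NEW_HOSTNAME}/g" hosts.copy
cat hosts.copy
```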
578 593
 
579
-Or to regenerate manifests, re-run the previous two sections with the
580
-after updating the genesis hostname in the site definition.
594
+Or, as an alternative, update the genesis hostname
595
+in the site definition and then repeat the steps in the previous two sections,
596
+"Manifest linting and combining layers" and "Building the Promenade bundle".
581 597
 
582 598
 Installing matching kernel version
583 599
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
584 600
 
585
-Install the same kernel version on the Genesis host that MaaS will use
601
+Install the same kernel version on the genesis host that MaaS will use
586 602
 to deploy new baremetal nodes.
587 603
 
588
-In order to do this, first you must determine the kernel version that
604
+To do this, first you must determine the kernel version that
589 605
 will be deployed to those nodes. Start by looking at the host profile
590 606
 definition used to deploy other control plane nodes by searching for
591 607
 ``control-plane: enabled``. Most likely this will be a file under
592
-``global/profiles/host``. In this file, find the kernel info -
593
-e.g.:
608
+``global/profiles/host``. In this file, find the kernel info. Example:
594 609
 
595
-::
610
+.. code-block:: bash
596 611
 
597 612
   platform:
598 613
     image: 'xenial'
@@ -601,29 +616,29 @@ e.g.:
       kernel_package: 'linux-image-4.15.0-46-generic'
 
 It is recommended to install the latest kernel. Check the latest
-available kernel, update the site specs and Regenerate collected
+available kernel, update the site specs and regenerate collected
 YAML files.
 
 Define any proxy environment variables needed for your environment to
 reach public Ubuntu package repos, and install the matching kernel on the
-Genesis host (make sure to run on Genesis host, not on the build host):
+genesis host (make sure to run on genesis host, not on the build host):
 
 To install the latest hwe-16.04 kernel:
 
-::
+.. code-block:: bash
 
     sudo apt-get install --install-recommends linux-generic-hwe-16.04
 
 To install the latest ga-16.04 kernel:
 
-::
+.. code-block:: bash
 
     sudo apt-get install --install-recommends linux-generic
 
 Check the installed packages on the genesis host with ``dpkg --list``.
 If there are any later kernel versions installed, remove them with
-``sudo apt remove``, so that the newly install kernel is the latest
-available. Boot the genesis node using install kernel.
+``sudo apt remove``, so that the newly installed kernel is the latest
+available. Boot the genesis node using the installed kernel.
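To confirm that the genesis host booted into the intended kernel, the package
name from the host profile can be compared against ``uname -r``. A minimal
sketch, assuming the ``kernel_package`` value from the example above (in
practice, read it from your site's host profile):

```shell
# kernel_package value copied from the example host profile above.
KERNEL_PACKAGE='linux-image-4.15.0-46-generic'

# Strip the package-name prefix to get the kernel release string.
KERNEL_VERSION="${KERNEL_PACKAGE#linux-image-}"
echo "$KERNEL_VERSION"   # 4.15.0-46-generic

# On the genesis host, compare with the running kernel.
if [ "$(uname -r)" = "$KERNEL_VERSION" ]; then
    echo "running kernel matches the host profile"
else
    echo "running kernel differs; reboot into the installed kernel"
fi
```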
 
 Install ntpdate/ntp
 ~~~~~~~~~~~~~~~~~~~
@@ -631,7 +646,7 @@ Install ntpdate/ntp
 Install and run ntpdate, to ensure a reasonably sane time on genesis
 host before proceeding:
 
-::
+.. code-block:: bash
 
     sudo apt -y install ntpdate
     sudo ntpdate ntp.ubuntu.com
@@ -641,13 +656,13 @@ sources, specify a local NTP server instead of using ``ntp.ubuntu.com``.
 
 Then, install the NTP client:
 
-::
+.. code-block:: bash
 
     sudo apt -y install ntp
 
-Add the list of NTP servers specified in ``data/ntp/servers_joined`` in
+Add the list of NTP servers specified in ``data.ntp.servers_joined`` in
 file
-``airship-treasuremap/site/${NEW_SITE}/networks/common-address.yaml``
+``site/${NEW_SITE}/networks/common-addresses.yaml``
 to ``/etc/ntp.conf`` as follows:
 
 ::
@@ -658,7 +673,7 @@ to ``/etc/ntp.conf`` as follows:
 
 Then, restart the NTP service:
 
-::
+.. code-block:: bash
 
     sudo service ntp restart
 
@@ -667,7 +682,7 @@ consider using alternate time sources for your deployment.
 
 Disable the apparmor profile for ntpd:
 
-::
+.. code-block:: bash
 
     sudo ln -s /etc/apparmor.d/usr.sbin.ntpd /etc/apparmor.d/disable/
     sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.ntpd
@@ -680,22 +695,21 @@ disabled.
 
 Promenade bootstrap
 ~~~~~~~~~~~~~~~~~~~
 
-Copy the ``${NEW_SITE}_bundle`` and ``${NEW_SITE}_collected``
-directories from the build node to the genesis node, into the home
-directory of the user there (e.g., ``/home/ubuntu``). Then, run the
-following script as sudo on the genesis node:
+Copy the ``${NEW_SITE}_bundle`` directory from the build node to the genesis
+node, into the home directory of the user there (e.g., ``/home/ubuntu``).
+Then, run the following script as sudo on the genesis node:
 
-::
+.. code-block:: bash
 
     cd ${NEW_SITE}_bundle
     sudo ./genesis.sh
 
-Estimated runtime: **40m**
+Estimated runtime: **1h**
 
 Following completion, run the ``validate-genesis.sh`` script to ensure
 correct provisioning of the genesis node:
 
-::
+.. code-block:: bash
 
     cd ${NEW_SITE}_bundle
     sudo ./validate-genesis.sh
@@ -705,88 +719,57 @@ Estimated runtime: **2m**
 Deploy Site with Shipyard
 -------------------------
 
-Start by cloning the shipyard repository to the Genesis node:
-
-::
-
-    git clone https://opendev.org/airship/shipyard
-
-Refer to the ``data/charts/ucp/shipyard/reference`` field in
-``airship-treasuremap/global/software/config/versions.yaml``. If
-this is a pinned reference (i.e., any reference that's not ``master``),
-then you should checkout the same version of the Shipyard repository.
-For example, if the Shipyard reference was ``7046ad3...`` in the
-versions file, checkout the same version of the Shipyard repo which was
-cloned previously:
-
-::
-
-    (cd shipyard && git checkout 7046ad3)
-
-Likewise, before running the ``deckhand_load_yaml.sh`` script, you
-should refer to the ``data/images/ucp/shipyard/shipyard`` field in
-``airship-treasuremap/global/software/config/versions.yaml``. If
-there is a pinned reference (i.e., any image reference that's not
-``latest``), then this reference should be used to set the
-``SHIPYARD_IMAGE`` environment variable. For example, if the Shipyard
-image was pinned to ``quay.io/airshipit/shipyard@sha256:dfc25e1...`` in
-the versions file, then export the previously mentioned environment
-variable:
-
-::
-
-    export SHIPYARD_IMAGE=quay.io/airshipit/shipyard@sha256:dfc25e1...
-
 Export valid login credentials for one of the Airship Keystone users defined
-for the site. Currently there is no authorization checks in place, so
+for the site. Currently there are no authorization checks in place, so
 the credentials for any of the site-defined users will work. For
 example, we can use the ``shipyard`` user, with the password that was
 defined in
-``airship-treasuremap/site/${NEW_SITE}/secrets/passphrases/ucp_shipyard_keystone_password.yaml``.
-Ex:
+``site/${NEW_SITE}/secrets/passphrases/ucp_shipyard_keystone_password.yaml``.
+Example:
 
-::
+.. code-block:: bash
+
+    export OS_AUTH_URL="https://iam-sw.DOMAIN:443/v3"
 
     export OS_USERNAME=shipyard
-    export OS_PASSWORD=46a75e4...
+    export OS_PASSWORD=password123
 
-(Note: Default auth variables are defined
-`here <https://opendev.org/airship/shipyard/src/branch/master/tools/shipyard_docker_base_command.sh>`__,
-and should otherwise be correct, barring any customizations of these
-site parameters).
+Next, load the collected site manifests into Shipyard:
 
-Next, run the deckhand\_load\_yaml.sh script providing an absolute path
-to a directory that contains collected manifests:
+.. code-block:: bash
 
-::
+    sudo -E tools/airship shipyard create configdocs ${NEW_SITE} \
+      --directory=/target/${NEW_SITE}_collected
 
-    sudo -E shipyard/tools/deckhand_load_yaml.sh ${NEW_SITE} $(pwd)/${NEW_SITE}_collected
+    sudo tools/airship shipyard commit configdocs
 
 Estimated runtime: **3m**
 
 Now deploy the site with shipyard:
 
-::
+.. code-block:: bash
+
+    tools/airship shipyard create action deploy_site
 
-    cd shipyard/tools/
-    sudo -E ./deploy_site.sh
+Estimated runtime: **3h**
 
-Estimated runtime: **1h30m**
+Check periodically for successful deployment:
 
-The message ``Site Successfully Deployed`` is the expected output at the
-end of a successful deployment. In this example, this means that Airship and
-OSH should be fully deployed.
+.. code-block:: bash
 
-Disable password-based login on Genesis
+    tools/airship shipyard get actions
+    tools/airship shipyard describe action/<ACTION>
+
+Disable password-based login on genesis
 ---------------------------------------
 
-Before proceeding, verify that your SSH access to the Genesis node is
+Before proceeding, verify that your SSH access to the genesis node is
 working with your SSH key (i.e., not using password-based
 authentication).
 
-Then, disable password-based SSH authentication on Genesis in
+Then, disable password-based SSH authentication on genesis in
 ``/etc/ssh/sshd_config`` by uncommenting the ``PasswordAuthentication``
-and setting its value to ``no``. Ex:
+and setting its value to ``no``. Example:
 
 ::
 
@@ -798,3 +781,4 @@ Then, restart the ssh service:
     sudo systemctl restart ssh
 
 
+
