Replicate Proxy updates in R4 install guides

This review is a follow-on to: 714538, 720252, 726468
Copied updates from R3 install guides into R4 guides.
Changed tabs to spaces in some guides.

Change-Id: I28191b0cd69c0a86d134a2fe9453928d02ce9042
Signed-off-by: MCamp859 <maryx.camp@intel.com>
Author: MCamp859
Date: 2020-05-15 16:52:35 -04:00
parent 8bfe26449b
commit 7158f15e6c
16 changed files with 420 additions and 253 deletions


@@ -38,8 +38,8 @@ The items labeled *a* and *b* in the figure indicate two configuration files:
* {controller OAM gateway IP/floating IP/host IP}
* {controller management floating IP/host IP}
* {controller cluster gateway IP}
* 10.96.0.1 {apiserver cluster IP for Kubernetes}
* 10.96.0.10 {coredns cluster IP for Kubernetes}
* `*.cluster.local`
* Configuration file *b* lists container runtime proxy variables
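The entries above can be sketched as a single no_proxy environment variable. This is a hedged illustration, not the installer's exact mechanism; the two controller IPs are placeholders, not values from any deployment:

```shell
# Sketch only (not from the guide): compose the no_proxy list from the
# documented entries. Both controller IPs below are placeholders.
OAM_FLOATING_IP="10.10.10.2"       # placeholder
MGMT_FLOATING_IP="192.168.204.2"   # placeholder
no_proxy="localhost,127.0.0.1,${OAM_FLOATING_IP},${MGMT_FLOATING_IP},10.96.0.1,10.96.0.10,.cluster.local"
export no_proxy
echo "$no_proxy"
```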


@@ -116,11 +116,20 @@ Bootstrap system on controller-0
admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -250,6 +259,18 @@ Configure controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************


@@ -131,11 +131,20 @@ Bootstrap system on controller-0
admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -256,6 +265,18 @@ Configure controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************


@@ -137,11 +137,20 @@ Bootstrap system on controller-0
admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -165,7 +174,7 @@ Configure controller-0
::
source /etc/platform/openrc
#. Configure the OAM and MGMT interfaces of controller-0 and specify the
attached networks. Use the OAM and MGMT port names, for example eth0, that are
@@ -173,24 +182,24 @@ Configure controller-0
::
OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host
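The awk filter in the step above selects interface-network UUIDs by column position. A standalone sketch, using a fabricated stand-in for the `system interface-network-list` table, shows how the positional fields line up:

```shell
# Fabricated stand-in for `system interface-network-list controller-0` output;
# real column content may differ, only the positional extraction is shown.
MOCK_TABLE='| 1 | aaaa-1111 | lo | mgmt |
| 2 | bbbb-2222 | eth0 | oam |'
# With default whitespace splitting, the pipes count as fields too:
# $4 is the uuid column and $6 the interface name, as in the guide's command.
IFNET_UUIDS=$(echo "$MOCK_TABLE" | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
  echo "would remove: ${UUID}"  # stands in for: system interface-network-remove ${UUID}
done
```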
#. Configure NTP servers for network time synchronization:
::
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. Configure Ceph storage backend
@@ -206,6 +215,18 @@ Configure controller-0
system storage-backend-add ceph --confirmed
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************
@@ -322,12 +343,12 @@ Install software on controller-1 and worker nodes
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
@@ -361,16 +382,16 @@ Install software on controller-1 and worker nodes
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | worker-0 | worker | locked | disabled | online |
| 4 | worker-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
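A hedged sketch of pulling one host's availability out of a table like the one above, using a fabricated two-row stand-in for `system host-list` output:

```shell
# Fabricated stand-in for `system host-list` output; shows how a single
# host's availability column can be extracted by splitting on the pipes.
HOST_TABLE='| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |'
# With -F'|', field 3 is the hostname and field 7 the availability;
# gsub strips the padding spaces.
AVAIL=$(echo "$HOST_TABLE" | awk -F'|' '$3 ~ /controller-1/ {gsub(/ /,"",$7); print $7}')
echo "$AVAIL"
```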
----------------------
Configure controller-1
@@ -387,11 +408,11 @@ install procedure.)
::
OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 $MGMT_IF cluster-host
*************************************
OpenStack-specific host configuration
@@ -407,7 +428,7 @@ of installing the stx-openstack manifest and helm-charts later.
::
system host-label-assign controller-1 openstack-control-plane=enabled
.. incl-config-controller-1-end:
@@ -421,7 +442,7 @@ Unlock controller-1 in order to bring it into service:
::
system host-unlock controller-1
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
@@ -440,22 +461,22 @@ Configure worker nodes
::
system ceph-mon-add worker-0
#. Wait for the worker node monitor to complete configuration:
::
system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid | ceph_ | hostname | state | task |
| | mon_g | | | |
| | ib | | | |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | worker-0 | configured | None |
+--------------------------------------+-------+--------------+------------+------+
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
@@ -464,19 +485,19 @@ Configure worker nodes
::
for NODE in worker-0 worker-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
.. important::
This step is **required** for OpenStack.
This step is optional for Kubernetes: Do this step if using SRIOV network
attachments in hosted application containers.
For Kubernetes SRIOV network attachments:
@@ -484,55 +505,55 @@ Configure worker nodes
::
for NODE in worker-0 worker-1; do
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both NUMA nodes:
::
for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100
system host-memory-modify ${NODE} 1 -1G 100
done
For both Kubernetes and OpenStack:
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# configure the datanetworks in sysinv, prior to referencing them
# in the ``system host-if-modify`` command.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
for NODE in worker-0 worker-1; do
echo "Configuring interface for: $NODE"
set -ex
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
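The per-port variable assignments in the loop above all key off fixed column positions in the --nowrap tables. A minimal standalone sketch, with a fabricated one-row stand-in for `system host-port-list` output, shows the $8 (PCI address), $2 (uuid), and $4 (name) extractions:

```shell
# Fabricated one-row stand-in for `system host-port-list <node> --nowrap`;
# only the column positions used by the guide's loop are illustrated.
SPL=$(mktemp)
cat > "$SPL" <<'EOF'
| f3a1b2c4-0000-1111-2222-333344445555 | eth1000 | ethernet | 0000:02:01.0 |
EOF
DATA0IF=eth1000
DATA0PCIADDR=$(grep "$DATA0IF" "$SPL" | awk '{print $8}')        # PCI address
DATA0PORTUUID=$(grep "$DATA0PCIADDR" "$SPL" | awk '{print $2}')  # port uuid
DATA0PORTNAME=$(grep "$DATA0PCIADDR" "$SPL" | awk '{print $4}')  # port name
echo "$DATA0PCIADDR $DATA0PORTUUID $DATA0PORTNAME"
rm -f "$SPL"
```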
*************************************
OpenStack-specific host configuration
@@ -548,27 +569,27 @@ OpenStack-specific host configuration
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
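The grep -ow pattern above carves the uuid row out of the partition-add table. A standalone sketch with a fabricated NOVA_PARTITION value (flattened onto one line, as echoing the unquoted variable produces):

```shell
# Fabricated stand-in for `system host-disk-partition-add` output, flattened
# to one line; only the uuid extraction from the guide is illustrated.
NOVA_PARTITION='| device_path | /dev/sda5 | | uuid | 0a1b2c3d-4e5f-6071-8293-a4b5c6d7e8f9 | | size_gib | 10 |'
# grep -o prints just the matched "| uuid | <value> |" cell pair,
# and awk field 4 is then the uuid value itself.
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
echo "$NOVA_PARTITION_UUID"
```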
--------------------
Unlock worker nodes
@@ -578,9 +599,9 @@ Unlock worker nodes in order to bring them into service:
::
for NODE in worker-0 worker-1; do
system host-unlock $NODE
done
The worker nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
@@ -593,39 +614,39 @@ Add Ceph OSDs to controllers
.. important::
This step requires a configured Ceph storage backend.
::
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
#. Add OSDs to controller-1. The following example adds OSDs to the `sdb` disk:
.. important::
This step requires a configured Ceph storage backend.
::
HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
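The one-line while loop above is a poll-until-done pattern. A self-contained sketch, with a stub standing in for `system host-stor-list` whose state flips from configuring to configured after about a second:

```shell
# Stub for `system host-stor-list ${HOST}`: reports "configuring" until a
# background job flips the state file to "configured".
STATE_FILE=$(mktemp)
echo configuring > "$STATE_FILE"
( sleep 1; echo configured > "$STATE_FILE" ) &
mock_stor_list() { echo "| osd.0 | /dev/sdb | $(cat "$STATE_FILE") |"; }
# Same shape as the guide's loop: break once grep no longer finds "configuring".
while true; do
  mock_stor_list | grep configuring
  if [ $? -ne 0 ]; then break; fi
  sleep 1
done
FINAL_STATE=$(cat "$STATE_FILE")
echo "final state: $FINAL_STATE"
rm -f "$STATE_FILE"
```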
----------
Next steps


@@ -45,11 +45,11 @@ Configure controller-0
Unlock controller-0
-------------------
.. important::
Make sure the Ceph storage backend is configured. If it is
not configured, you will not be able to configure storage
nodes.
Unlock controller-0 in order to bring it into service:
@@ -75,13 +75,13 @@ Install software on controller-1, storage nodes, and worker nodes
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
@@ -101,14 +101,14 @@ Install software on controller-1, storage nodes, and worker nodes
::
system host-update 3 personality=storage
Repeat for storage-1. Power on storage-1 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 4 personality=storage
This initiates the software installation on storage-0 and storage-1.
This can take 5-10 minutes, depending on the performance of the host machine.
@@ -128,7 +128,7 @@ Install software on controller-1, storage nodes, and worker nodes
::
system host-update 6 personality=worker hostname=worker-1
This initiates the install of software on worker-0 and worker-1.
@@ -138,17 +138,17 @@ Install software on controller-1, storage nodes, and worker nodes
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | storage-0 | storage | locked | disabled | online |
| 4 | storage-1 | storage | locked | disabled | online |
| 5 | worker-0 | worker | locked | disabled | online |
| 6 | worker-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
@@ -177,39 +177,39 @@ Configure storage nodes
::
for NODE in storage-0 storage-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Add OSDs to storage-0. The following example adds OSDs to the `sdb` disk:
::
HOST=storage-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
#. Add OSDs to storage-1. The following example adds OSDs to the `sdb` disk:
::
HOST=storage-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
--------------------
Unlock storage nodes
@@ -219,9 +219,9 @@ Unlock storage nodes in order to bring them into service:
::
for STORAGE in storage-0 storage-1; do
system host-unlock $STORAGE
done
The storage nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the
@@ -238,9 +238,9 @@ Configure worker nodes
::
for NODE in worker-0 worker-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
@@ -258,55 +258,55 @@ Configure worker nodes
::

    for NODE in worker-0 worker-1; do
        system host-label-assign ${NODE} sriovdp=enabled
    done

* If planning on running DPDK in containers on this host, configure the number
  of 1G Huge pages required on both NUMA nodes:

  ::

    for NODE in worker-0 worker-1; do
        system host-memory-modify ${NODE} 0 -1G 100
        system host-memory-modify ${NODE} 1 -1G 100
    done
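As a back-of-envelope check, the two ``host-memory-modify`` calls above reserve 100 huge pages of 1 GiB on each of the two NUMA nodes, i.e. 200 GiB per worker:

```shell
# Arithmetic sanity check for the huge-page reservation above.
PAGES_PER_NODE=100   # the "100" argument to host-memory-modify
PAGE_SIZE_GIB=1      # -1G huge pages
NUMA_NODES=2         # NUMA node 0 and NUMA node 1

TOTAL_GIB=$((PAGES_PER_NODE * PAGE_SIZE_GIB * NUMA_NODES))
echo "1G huge pages reserved per worker: ${TOTAL_GIB} GiB"   # prints 200 GiB
```

Size this reservation against the physical memory actually installed in your workers.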
For both Kubernetes and OpenStack:

::

    DATA0IF=<DATA-0-PORT>
    DATA1IF=<DATA-1-PORT>
    PHYSNET0='physnet0'
    PHYSNET1='physnet1'
    SPL=/tmp/tmp-system-port-list
    SPIL=/tmp/tmp-system-host-if-list

    # Configure the datanetworks in sysinv, prior to referencing them
    # in the ``system host-if-modify`` command.
    system datanetwork-add ${PHYSNET0} vlan
    system datanetwork-add ${PHYSNET1} vlan

    for NODE in worker-0 worker-1; do
        echo "Configuring interface for: $NODE"
        set -ex
        system host-port-list ${NODE} --nowrap > ${SPL}
        system host-if-list -a ${NODE} --nowrap > ${SPIL}
        DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
        DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
        DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
        DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
        DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
        DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
        DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
        DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
        system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
        system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
        system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
        system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
        set +ex
    done
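The loop above scrapes the tabular CLI output with ``grep`` and ``awk``: on a whitespace split of a table row, field ``$2`` is the UUID column, ``$4`` the port name, and ``$8`` the PCI address. The extraction can be illustrated on a mock row (the UUID, port name, and PCI address below are hypothetical example values, not real output):

```shell
# Mock row in the pipe-and-space table layout that host-port-list prints;
# all values are hypothetical, for illustration only.
ROW='| f5d1e5b2-0001-4c2a-9ab3-0c9d1e2f3a4b | eth1000 | ethernet | 0000:02:03.0 |'

PCIADDR=$(echo "$ROW" | awk '{print $8}')   # 8th whitespace field: PCI address
PORTUUID=$(echo "$ROW" | awk '{print $2}')  # 2nd field: UUID
PORTNAME=$(echo "$ROW" | awk '{print $4}')  # 4th field: port name
echo "$PORTNAME ($PORTUUID) is at $PCIADDR"
```

Because the ``|`` separators count as whitespace-delimited fields, the data columns land on the even field numbers used in the guide.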
*************************************
OpenStack-specific host configuration
@@ -322,27 +322,27 @@ OpenStack-specific host configuration
::

    for NODE in worker-0 worker-1; do
        system host-label-assign $NODE openstack-compute-node=enabled
        system host-label-assign $NODE openvswitch=enabled
        system host-label-assign $NODE sriov=enabled
    done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
   which is needed for stx-openstack nova ephemeral disks.

   ::

     for NODE in worker-0 worker-1; do
         echo "Configuring Nova local for: $NODE"
         ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
         ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
         PARTITION_SIZE=10
         NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
         NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
         system host-lvg-add ${NODE} nova-local
         system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
     done
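The ``grep -ow "| uuid | [a-z0-9\-]* |"`` step above pulls the partition UUID out of the property table that ``system host-disk-partition-add`` prints. The pattern can be exercised on a mock output fragment (the UUID value is hypothetical):

```shell
# Mock fragment of the property table printed by the partition command;
# the UUID is a made-up example. Note that the grep pattern relies on
# single-space separators around the field name.
OUT='| device_node | /dev/sdb1 |
| uuid | 9f3c2d10-0a1b-4c2d-8e3f-123456789abc |'

UUID=$(echo "$OUT" | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
echo "partition uuid: $UUID"
```

``grep -o`` prints only the matched table cell, and ``awk '{print $4}'`` then selects the UUID value from it.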
-------------------
Unlock worker nodes
@@ -352,9 +352,9 @@ Unlock worker nodes in order to bring them into service:
::

    for NODE in worker-0 worker-1; do
        system host-unlock $NODE
    done

The worker nodes will reboot in order to apply configuration changes and come
into service. This can take 5-10 minutes, depending on the performance of the


@@ -14,8 +14,14 @@ End user applications can be deployed on bare metal servers (instead of
virtual machines) by configuring OpenStack Ironic and deploying a pool of 1 or
more bare metal servers.

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-ironic.png
   :scale: 50%
   :alt: Standard with Ironic deployment configuration

   *Figure 1: Standard with Ironic deployment configuration*


@@ -16,6 +16,12 @@ An AIO-DX configuration provides the following benefits:
* All controller HA services go active on the remaining healthy server
* All virtual machines are recovered on the remaining healthy server

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-duplex.png
   :scale: 50%
   :alt: All-in-one Duplex deployment configuration


@@ -7,6 +7,12 @@ following benefits:
  single pair of physical servers
* A storage backend solution using a single-node CEPH deployment

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-simplex.png
   :scale: 50%
   :alt: All-in-one Simplex deployment configuration


@@ -15,6 +15,12 @@ A Standard with Controller Storage configuration provides the following benefits
* On overall worker node failure, virtual machines and containers are
  recovered on the remaining healthy worker nodes

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-controller-storage.png
   :scale: 50%
   :alt: Standard with Controller Storage deployment configuration


@@ -8,7 +8,14 @@ A Standard with Dedicated Storage configuration provides the following benefits:
  across the controller nodes in either active/active or active/standby mode
* A storage back end solution using a two-to-9x node HA CEPH storage cluster
  that supports a replication factor of two or three
* Up to four groups of 2x storage nodes, or up to three groups of 3x storage
  nodes

.. note::

   If you are behind a corporate firewall or proxy, you need to set proxy
   settings. Refer to :doc:`/../../configuration/docker_proxy_config` for
   details.

.. figure:: ../figures/starlingx-deployment-options-dedicated-storage.png
   :scale: 50%



@@ -25,7 +25,7 @@ Install application manifest and helm-charts
::

    system application-upload stx-openstack-<version>-centos-stable-latest.tgz

This will:
@@ -36,11 +36,20 @@ Install application manifest and helm-charts
   recommended StarlingX configuration of OpenStack services.

#. Apply the stx-openstack application in order to bring StarlingX OpenStack
   into service. If your environment is preconfigured with a proxy server,
   make sure the HTTPS proxy is set before applying stx-openstack.
::

    system application-apply stx-openstack

.. note::

   To set the HTTPS proxy at bootstrap time, refer to
   `Ansible Bootstrap Configurations <https://docs.starlingx.io/deploy_install_guides/r4_release/ansible_bootstrap_configs.html#docker-proxy>`_.

   To set the HTTPS proxy after installation, refer to
   `Docker Proxy Configuration <https://docs.starlingx.io/configuration/docker_proxy_config.html>`_.
#. Wait for the activation of stx-openstack to complete.
@@ -50,7 +59,7 @@ Install application manifest and helm-charts
::

    watch -n 5 system application-list

----------
Next steps


@@ -117,11 +117,20 @@ On virtual controller-0:
      admin_username: admin
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>

      # Add these lines to configure Docker to use a proxy server
      # docker_http_proxy: http://my.proxy.com:1080
      # docker_https_proxy: https://my.proxy.com:1443
      # docker_no_proxy:
      #   - 1.2.3.4

      EOF
   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as configuring Docker proxies when
   deploying behind a firewall. Refer to
   :doc:`/../../configuration/docker_proxy_config` for details about Docker
   proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -233,7 +242,7 @@ On virtual controller-0:
   .. important::

      This step requires a configured Ceph storage backend.
   ::
@@ -241,6 +250,19 @@ On virtual controller-0:
     system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
     system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
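The ``system service-parameter-list platform docker`` listing above prints a pipe-separated table of parameters. Filtering the proxy entries out of such output can be sketched on a mock listing (the column order and all values below are hypothetical, for illustration only):

```shell
# Mock of two rows in a service-parameter table layout; values and
# column order are hypothetical. Field 5 holds the parameter name here.
LIST='| uuid-1 | platform | docker | http_proxy | http://my.proxy.com:1080 |
| uuid-2 | platform | docker | https_proxy | https://my.proxy.com:1443 |'

# Split on "|", strip padding spaces, and keep rows whose name matches.
echo "$LIST" | awk -F'|' '{gsub(/ /, "", $5); if ($5 ~ /proxy/) print $5}'
```

On real output, pipe the listing through the same kind of filter to confirm which proxy parameters are currently set.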
*************************************
OpenStack-specific host configuration
*************************************
@@ -313,7 +335,7 @@ Install software on controller-1 node
    | id | hostname     | personality | administrative | operational | availability |
    +----+--------------+-------------+----------------+-------------+--------------+
    | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
    | 2  | controller-1 | controller  | locked         | disabled    | online       |
    +----+--------------+-------------+----------------+-------------+--------------+

----------------------


@@ -116,11 +116,20 @@ On virtual controller-0:
      admin_username: admin
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>

      # Add these lines to configure Docker to use a proxy server
      # docker_http_proxy: http://my.proxy.com:1080
      # docker_https_proxy: https://my.proxy.com:1443
      # docker_no_proxy:
      #   - 1.2.3.4

      EOF
   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as configuring Docker proxies when
   deploying behind a firewall. Refer to
   :doc:`/../../configuration/docker_proxy_config` for details about Docker
   proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -232,6 +241,18 @@ On virtual controller-0:
     system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
     system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************


@@ -122,11 +122,20 @@ On virtual controller-0:
      admin_username: admin
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>

      # Add these lines to configure Docker to use a proxy server
      # docker_http_proxy: http://my.proxy.com:1080
      # docker_https_proxy: https://my.proxy.com:1443
      # docker_no_proxy:
      #   - 1.2.3.4

      EOF
   Refer to :doc:`/deploy_install_guides/r4_release/ansible_bootstrap_configs`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios, such as configuring Docker proxies when
   deploying behind a firewall. Refer to
   :doc:`/../../configuration/docker_proxy_config` for details about Docker
   proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -197,6 +206,18 @@ On virtual controller-0:
     system storage-backend-add ceph --confirmed
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************


@@ -61,11 +61,11 @@ Configure controller-0
Unlock controller-0
-------------------

.. important::

   Make sure the Ceph storage backend is configured. If it is
   not configured, you will not be able to configure storage
   nodes.

Unlock virtual controller-0 in order to bring it into service: