Merge "Updating install guides to point to Docker proxy info"

This commit is contained in:
Zuul
2020-04-16 11:44:09 +00:00
committed by Gerrit Code Review
7 changed files with 250 additions and 140 deletions

View File

@@ -116,11 +116,20 @@ Bootstrap system on controller-0
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
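The playbook invocation itself falls outside this hunk; a typical run, assuming
the standard StarlingX playbook path, looks like:
::
# playbook path assumed per the standard StarlingX bootstrap procedure
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml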
@@ -232,6 +241,19 @@ Configure controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server (see the sketch after this list).
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
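If the proxy was not configured at bootstrap, the same values can be added
afterwards as ``platform docker`` service parameters. The following is a
minimal sketch using the placeholder endpoints from the bootstrap example
above; treat the exact parameter names and the apply step as per the Docker
proxy guide referenced above.
::
# placeholder endpoints -- substitute real proxy addresses; parameter names
# assumed to match the platform/docker section listed above
system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
system service-parameter-add platform docker https_proxy=https://my.proxy.com:1443
system service-parameter-add platform docker no_proxy=1.2.3.4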
*************************************
OpenStack-specific host configuration
*************************************

View File

@@ -131,11 +131,20 @@ Bootstrap system on controller-0
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -261,6 +270,18 @@ Configure controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************

View File

@@ -137,11 +137,20 @@ Bootstrap system on controller-0
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -165,7 +174,7 @@ Configure controller-0
::
source /etc/platform/openrc
#. Configure the OAM and MGMT interfaces of controller-0 and specify the
attached networks. Use the OAM and MGMT port names, for example eth0, that are
applicable to your deployment environment.
@@ -173,24 +182,36 @@ Configure controller-0
::
OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host
#. Configure NTP Servers for network time synchronization:
::
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
@@ -308,12 +329,12 @@ Install software on controller-1 and worker nodes
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
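The command itself falls outside this hunk. A typical invocation, assuming the
newly discovered host has id 2 as shown in the listing above, would be:
::
# '2' is the id of the hostname=None entry in the host-list output above
system host-update 2 personality=controller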
@@ -347,16 +368,16 @@ Install software on controller-1 and worker nodes
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | worker-0 | worker | locked | disabled | online |
| 4 | worker-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
@@ -373,11 +394,11 @@ install procedure.)
::
OAM_IF=<OAM-PORT>
MGMT_IF=<MGMT-PORT>
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 $MGMT_IF cluster-host
*************************************
OpenStack-specific host configuration
@@ -393,7 +414,7 @@ of installing the stx-openstack manifest and helm-charts later.
::
system host-label-assign controller-1 openstack-control-plane=enabled
.. incl-config-controller-1-end:
@@ -407,7 +428,7 @@ Unlock controller-1 in order to bring it into service:
::
system host-unlock controller-1
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
@@ -426,22 +447,22 @@ Configure worker nodes
::
system ceph-mon-add worker-0
#. Wait for the worker node monitor to complete configuration:
::
system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid | ceph_ | hostname | state | task |
| | mon_g | | | |
| | ib | | | |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | worker-0 | configured | None |
+--------------------------------------+-------+--------------+------------+------+
#. Assign the cluster-host network to the MGMT interface for the worker nodes:
@@ -450,9 +471,9 @@ Configure worker nodes
::
for NODE in worker-0 worker-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Configure data interfaces for worker nodes. Use the DATA port names, for
example eth0, that are applicable to your deployment environment.
@@ -470,55 +491,55 @@ Configure worker nodes
::
for NODE in worker-0 worker-1; do
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running DPDK in containers on this host, configure the number
of 1G Huge pages required on both NUMA nodes:
::
for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100
system host-memory-modify ${NODE} 1 -1G 100
done
For both Kubernetes and OpenStack:
::
DATA0IF=<DATA-0-PORT>
DATA1IF=<DATA-1-PORT>
PHYSNET0='physnet0'
PHYSNET1='physnet1'
SPL=/tmp/tmp-system-port-list
SPIL=/tmp/tmp-system-host-if-list
# configure the datanetworks in sysinv, prior to referencing them
# in the ``system host-if-modify`` command.
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
for NODE in worker-0 worker-1; do
echo "Configuring interface for: $NODE"
set -ex
system host-port-list ${NODE} --nowrap > ${SPL}
system host-if-list -a ${NODE} --nowrap > ${SPIL}
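# resolve each data port's PCI address, port UUID/name, and the UUID of the interface that owns it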
DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
set +ex
done
*************************************
OpenStack-specific host configuration
@@ -534,27 +555,27 @@ OpenStack-specific host configuration
::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
system host-label-assign $NODE openvswitch=enabled
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Set up disk partition for nova-local volume group,
which is needed for stx-openstack nova ephemeral disks.
::
for NODE in worker-0 worker-1; do
echo "Configuring Nova local for: $NODE"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
PARTITION_SIZE=10
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${PARTITION_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
done
--------------------
Unlock worker nodes
@@ -564,9 +585,9 @@ Unlock worker nodes in order to bring them into service:
::
for NODE in worker-0 worker-1; do
system host-unlock $NODE
done
The worker nodes will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host machine.
@@ -579,31 +600,31 @@ Add Ceph OSDs to controllers
::
HOST=controller-0
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
#. Add OSDs to controller-1. The following example adds OSDs to the `sdb` disk:
::
HOST=controller-1
DISKS=$(system host-disk-list ${HOST})
TIERS=$(system storage-tier-list ceph_cluster)
OSDs="/dev/sdb"
for OSD in $OSDs; do
system host-stor-add ${HOST} $(echo "$DISKS" | grep "$OSD" | awk '{print $2}') --tier-uuid $(echo "$TIERS" | grep storage | awk '{print $2}')
while true; do system host-stor-list ${HOST} | grep ${OSD} | grep configuring; if [ $? -ne 0 ]; then break; fi; sleep 1; done
done
system host-stor-list $HOST
----------
Next steps

View File

@@ -52,7 +52,8 @@ Unlock controller-0 in order to bring it into service:
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
-----------------------------------------------------------------
Install software on controller-1, storage nodes, and worker nodes
@@ -64,8 +65,8 @@ Install software on controller-1, storage nodes, and worker nodes
#. As controller-1 boots, a message appears on its console instructing you to
configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered
controller-1 host (hostname=None):
::
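# as in the equivalent host-list steps shown earlier in this change
system host-list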
@@ -256,8 +257,8 @@ Configure worker nodes
system host-label-assign ${NODE} sriovdp=enabled
done
* If planning on running DPDK in containers on this host, configure the
number of 1G Huge pages required on both NUMA nodes:
::
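# sketch only -- mirrors the 1G huge page loop shown earlier in this change;
# adjust the node list and page counts for your deployment
for NODE in worker-0 worker-1; do
system host-memory-modify ${NODE} 0 -1G 100
system host-memory-modify ${NODE} 1 -1G 100
done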

View File

@@ -13,10 +13,12 @@ on a **StarlingX R3.0 virtual All-in-one Duplex** deployment configuration.
Install software on controller-0
--------------------------------
In the last step of :doc:`aio_duplex_environ`, the controller-0 virtual server
'duplex-controller-0' was started by the :command:`setup_configuration.sh`
command.
On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.
.. note::
@@ -117,11 +119,20 @@ On virtual controller-0:
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -223,6 +234,19 @@ On virtual controller-0:
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************
@@ -295,7 +319,7 @@ Install software on controller-1 node
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------

View File

@@ -18,8 +18,8 @@ In the last step of :doc:`controller_storage_environ`, the controller-0 virtual
server 'controllerstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.
On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.
.. note::
@@ -122,11 +122,20 @@ On virtual controller-0:
admin_username: admin
admin_password: <sysadmin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
Refer to :doc:`/deploy_install_guides/r3_release/ansible_bootstrap_configs`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios, such as Docker proxies when deploying behind a
firewall. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
@@ -183,6 +192,18 @@ On virtual controller-0:
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. If required, and not already done as part of bootstrap, configure Docker to
use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :doc:`/../../configuration/docker_proxy_config` for
details about Docker proxy settings.
*************************************
OpenStack-specific host configuration
*************************************

View File

@@ -18,8 +18,8 @@ In the last step of :doc:`dedicated_storage_environ`, the controller-0 virtual
server 'dedicatedstorage-controller-0' was started by the
:command:`setup_configuration.sh` command.
On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.
.. note::