Merge "Make hidden code appear in admin guide"

This commit is contained in:
Zuul 2022-01-13 10:13:55 +00:00 committed by Gerrit Code Review
commit 4ff19a840c
1 changed file with 27 additions and 27 deletions


@@ -32,7 +32,7 @@ internal IP. The option is ignored if ``proxy_command`` is not also set.
 For example, the following parameter in the sahara configuration file
 would be used if instances are accessed through a relay machine:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     proxy_command='ssh relay-machine-{tenant_id} nc {host} {port}'
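As an aside on how these templates behave: sahara substitutes the documented keys (``{tenant_id}``, ``{network_id}``, ``{host}``, ``{port}``) into ``proxy_command`` before executing it. A minimal sketch of that expansion in Python (the helper name is hypothetical, not sahara's actual code):

```python
# Illustrative sketch of proxy_command template expansion; the key names
# come from the documentation above, the helper itself is hypothetical.
def expand_proxy_command(template: str, **keys: str) -> str:
    """Fill the placeholders documented for proxy_command."""
    return template.format(**keys)

cmd = expand_proxy_command(
    "ssh relay-machine-{tenant_id} nc {host} {port}",
    tenant_id="abc123", host="10.0.0.5", port="22",
)
print(cmd)  # -> ssh relay-machine-abc123 nc 10.0.0.5 22
```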
@@ -40,7 +40,7 @@ would be used if instances are accessed through a relay machine:
 Whereas the following shows an example of accessing instances through
 a custom network namespace:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     proxy_command='ip netns exec ns_for_{network_id} nc {host} {port}'
@@ -73,7 +73,7 @@ because some plugins (e.g. ``HDP``) determine hostname by ip.
 Sahara should also be properly configured. In ``sahara.conf`` you must specify
 two config properties:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     # Use Designate for internal and external hostname resolution:
@@ -109,7 +109,7 @@ through a few configuration parameters and user defined topology files.
 To enable data-locality, set the ``enable_data_locality`` parameter to
 ``true`` in the sahara configuration file:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     enable_data_locality=true
@@ -118,7 +118,7 @@ With data locality enabled, you must now specify the topology files
 for the Compute and Object Storage services. These files are
 specified in the sahara configuration file as follows:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     compute_topology_file=/etc/sahara/compute.topology
@@ -127,7 +127,7 @@ specified in the sahara configuration file as follows:
 The ``compute_topology_file`` should contain mappings between Compute
 nodes and racks in the following format:
 
-.. sourcecode:: cfg
+.. code-block::
 
     compute1 /rack1
     compute2 /rack2
@@ -139,7 +139,7 @@ OpenStack (``host`` column in admin list for instances).
 The ``swift_topology_file`` should contain mappings between Object Storage
 nodes and racks in the following format:
 
-.. sourcecode:: cfg
+.. code-block::
 
     node1 /rack1
     node2 /rack2
@@ -243,7 +243,7 @@ to use periodic tasks distribution, the following steps are required:
 ``periodic_coordinator_backend_url`` parameter. For example, if the
 ZooKeeper backend is being used:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     periodic_coordinator_backend_url=kazoo://IP:PORT
@ -251,7 +251,7 @@ to use periodic tasks distribution, the following steps are required:
* Tooz extras should be installed. When using Zookeeper as coordination * Tooz extras should be installed. When using Zookeeper as coordination
backend, ``kazoo`` library should be installed. It can be done with pip: backend, ``kazoo`` library should be installed. It can be done with pip:
.. sourcecode:: console .. code-block::
pip install tooz[zookeeper] pip install tooz[zookeeper]
@@ -260,7 +260,7 @@ to use periodic tasks distribution, the following steps are required:
 ``periodic_workers_number`` parameter (only 1 thread will be launched by
 default). Example:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     periodic_workers_number=2
@@ -269,7 +269,7 @@ to use periodic tasks distribution, the following steps are required:
 heartbeat execution (1 second by default). Heartbeats are needed to make
 sure that the connection to the coordination backend is active. Example:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     coordinator_heartbeat_interval=2
@@ -279,7 +279,7 @@ to use periodic tasks distribution, the following steps are required:
 belongs to a particular engine. A larger number of replicas leads to better
 task distribution across the set of engines (40 by default). Example:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     hash_ring_replicas_count=100
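For context on why more replicas give a smoother spread: each engine is placed on a consistent-hash ring ``hash_ring_replicas_count`` times, and each periodic task is owned by the engine whose ring point follows the task's hash. A toy illustration (not sahara's actual hash ring code):

```python
import bisect
import hashlib

def _point(value: str) -> int:
    # Stable hash so the ring looks identical on every engine.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def build_ring(engines, replicas):
    # Each engine contributes `replicas` points; more points per engine
    # means many small arcs each, which smooths the task distribution.
    return sorted((_point(f"{name}-{i}"), name)
                  for name in engines for i in range(replicas))

def owner(ring, task):
    points = [p for p, _ in ring]
    idx = bisect.bisect(points, _point(task)) % len(ring)
    return ring[idx][1]

ring = build_ring(["engine-a", "engine-b", "engine-c"], replicas=40)
counts = {}
for t in range(1000):
    name = owner(ring, f"task-{t}")
    counts[name] = counts.get(name, 0) + 1
print(counts)  # three counts summing to 1000, roughly balanced
```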
@@ -302,7 +302,7 @@ to interface with the OpenStack Key Manager service. This library provides
 configurable access to a key manager. To configure sahara to use barbican as
 the key manager, edit the sahara configuration file as follows:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     use_barbican_key_manager=true
@@ -314,7 +314,7 @@ attempt to find barbican in the Identity service's service catalog.
 For added control of the barbican server location, optional configuration
 values may be added to specify the URL for the barbican API server.
 
-.. sourcecode:: cfg
+.. code-block::
 
     [castellan]
     barbican_api_endpoint=http://{barbican controller IP:PORT}/
@@ -360,7 +360,7 @@ instance of sahara should have the ``os_region_name=<region>``
 parameter set in the configuration file. The following example demonstrates
 configuring sahara to use the ``RegionOne`` region:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     os_region_name=RegionOne
@@ -376,7 +376,7 @@ command), rootwrap functionality is provided to allow users other than
 ``root`` access to the needed operating system facilities. To use rootwrap,
 the following configuration parameter must be set:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     use_rootwrap=true
@@ -390,7 +390,7 @@ steps:
 ``/etc/sudoers.d``. This file is set up to allow a user named ``sahara``
 access to the rootwrap script. It contains the following:
 
-.. sourcecode:: cfg
+.. code-block::
 
     sahara ALL = (root) NOPASSWD: /usr/bin/sahara-rootwrap /etc/sahara/rootwrap.conf *
@@ -409,7 +409,7 @@ change user in script from ``sahara`` to ``stack``.
 (depending on ``proxy_command`` you may need to set additional filters).
 It should look similar to the following:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [Filters]
     ip: IpNetnsExecFilter, ip, root
@@ -419,7 +419,7 @@ change user in script from ``sahara`` to ``stack``.
 If you wish to use a rootwrap command other than ``sahara-rootwrap``, you can
 set the following parameter in your sahara configuration file:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     rootwrap_command='sudo sahara-rootwrap /etc/sahara/rootwrap.conf'
@@ -467,7 +467,7 @@ this example let's assume that the name of the proxy domain is
 ``sahara_proxy`` and the roles needed by proxy users will be ``member`` and
 ``SwiftUser``.
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     use_domain_for_proxy_users=true
@@ -524,7 +524,7 @@ set to ``true`` and some extra configurations are needed:
 * by designating an account with privileged rights in the cinder
   configuration:
 
-  .. sourcecode:: cfg
+  .. code-block::
 
      os_privileged_user_name =
      os_privileged_user_password =
@@ -561,7 +561,7 @@ server or disable NTP service using the examples below.
 If you want to enable configuring the NTP service, you should specify the
 following configs for the cluster:
 
-.. sourcecode:: json
+.. code-block::
 
     {
         "cluster_configs": {
@@ -574,7 +574,7 @@ following configs for the cluster:
 If you want to disable configuring the NTP service, you should specify the
 following configs for the cluster:
 
-.. sourcecode:: json
+.. code-block::
 
     {
         "cluster_configs": {
@@ -596,7 +596,7 @@ To get started quickly, use the example configuration block below, replacing
 the :code:`allowed_origin` field with the host(s) from which your API expects
 access.
 
-.. sourcecode:: cfg
+.. code-block::
 
     [cors]
     allowed_origin=https://we.example.com:443
@@ -627,7 +627,7 @@ means that automatic clean up is disabled). For example, if you want a cluster to
 be deleted after 3 hours if it didn't leave the "Starting" state, then you should
 specify:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [DEFAULT]
     cleanup_time_for_incomplete_clusters = 3
@@ -640,7 +640,7 @@ may be bigger than the default values configured in ``neutron.conf``. Then the
 default limit should be raised to a larger value, proportional to
 the number of cluster node groups. You can change it in the ``neutron.conf`` file:
 
-.. sourcecode:: cfg
+.. code-block::
 
     [quotas]
     quota_security_group = 1000
@@ -648,6 +648,6 @@ the number of cluster node groups. You can change it in the ``neutron.conf`` file:
 Or you can execute the openstack CLI command:
 
-.. sourcecode:: console
+.. code-block::
 
     openstack quota set --secgroups 1000 --secgroup-rules 10000 $PROJECT_ID