docs: remove ALL the unnecessary blockquotes

Due to missing/superfluous spaces, some of the content in the
guide is displayed as a blockquote even though it is not actually
quoted content. This change adds/removes spaces to remove ALL the
generated HTML blockquotes.

Change-Id: I25b0d9fa64cd474a844b5f3e6c126395a4e80f2c
Markus Zoeller 2017-09-15 14:19:35 -06:00
parent 2aa2da1d77
commit 4592ed06f9
13 changed files with 244 additions and 242 deletions


@@ -43,7 +43,7 @@ be modified or removed.

The steps to define your custom roles configuration are:

1. Copy the default roles provided by `tripleo-heat-templates`::

     mkdir ~/roles
     cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
@@ -57,18 +57,18 @@ match the name of the role. For example if adding a new role named `Galera`,
the role file name should be `Galera.yaml`. The file should at least contain
the following items:

* name: Name of the role e.g "CustomController", mandatory
* ServicesDefault: List of services, optional, defaults to an empty list.
  See the default roles_data.yaml or overcloud-resource-registry-puppet.j2.yaml
  for the list of supported services. Both files can be found in the top
  tripleo-heat-templates folder.

Additional items like the ones below should be included as well:

* CountDefault: Default number of nodes, defaults to zero
* HostnameFormatDefault: Format string for hostname, optional
* Description: A few sentences describing the role and information
  pertaining to the usage of the role.

The role file format is a basic yaml structure. The expectation is that there
is a single role per file. See the roles `README.rst` for additional details. For
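
For illustration, a minimal sketch of such a role file (the service list,
count, and hostname format below are assumptions, not values from the guide)::

   - name: Galera
     CountDefault: 1
     HostnameFormatDefault: '%stackname%-galera-%index%'
     Description: >
       Example standalone database role, for illustration only.
     ServicesDefault:
       - OS::TripleO::Services::MySQL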


@@ -141,11 +141,11 @@ Deploying the Overcloud with an External Backend

#. Copy the Manila driver-specific configuration file to your home directory:

   - Generic driver::

       sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-generic-config.yaml ~

   - NetApp driver::

       sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml ~
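
   Either file is then passed to the deploy command after editing it for the
   chosen backend; a minimal sketch, assuming the NetApp file was copied::

     openstack overcloud deploy --templates \
       -e ~/manila-netapp-config.yaml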


@@ -137,10 +137,10 @@ integration points for additional third-party services, drivers or plugins.
The following interfaces are available:

* `OS::TripleO::ControllerExtraConfigPre`: Controller node additional configuration
* `OS::TripleO::ComputeExtraConfigPre`: Compute node additional configuration
* `OS::TripleO::CephStorageExtraConfigPre`: CephStorage node additional configuration
* `OS::TripleO::NodeExtraConfig`: additional configuration applied to all nodes (all roles).

Below is an example of a per-node configuration template that shows additional node configuration
via standard heat SoftwareConfig_ resources::
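
   # Illustrative sketch only, not the guide's full example; a template
   # registered to one of the interfaces above receives the server to
   # configure and applies a SoftwareConfig/SoftwareDeployment pair.
   heat_template_version: 2014-10-16

   parameters:
     server:
       type: string

   resources:
     NodeConfig:
       type: OS::Heat::SoftwareConfig
       properties:
         group: script
         config: |
           #!/bin/bash
           echo "extra per-node configuration ran" > /tmp/extra_config_done

     NodeDeployment:
       type: OS::Heat::SoftwareDeployment
       properties:
         config: {get_resource: NodeConfig}
         server: {get_param: server}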


@@ -31,12 +31,12 @@ value for compute nodes::
The parameters available are:

* `ExtraConfig`: Apply the data to all nodes, e.g all roles
* `ComputeExtraConfig`: Apply the data only to Compute nodes
* `ControllerExtraConfig`: Apply the data only to Controller nodes
* `BlockStorageExtraConfig`: Apply the data only to BlockStorage nodes
* `ObjectStorageExtraConfig`: Apply the data only to ObjectStorage nodes
* `CephStorageExtraConfig`: Apply the data only to CephStorage nodes

For any custom roles (defined via roles_data.yaml) the parameter name will
be RoleNameExtraConfig where RoleName is the name specified in roles_data.yaml.
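
A hedged sketch of an environment file using one of these parameters (the
hiera key and value shown are illustrative assumptions)::

   parameter_defaults:
     ComputeExtraConfig:
       nova::compute::reserved_host_memory: 2048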


@@ -51,11 +51,11 @@ deploy command.
The same approach is possible for each role via these parameters:

* ControllerSchedulerHints
* ComputeSchedulerHints
* BlockStorageSchedulerHints
* ObjectStorageSchedulerHints
* CephStorageSchedulerHints

For custom roles (defined via roles_data.yaml) the parameter will be named
RoleNameSchedulerHints, where RoleName is the name specified in roles_data.yaml.
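
For illustration, an environment file pinning a role to specific nodes might
look like this (the capability value is an assumption and must match what was
set on the Ironic nodes)::

   parameter_defaults:
     ControllerSchedulerHints:
       'capabilities:node': 'controller-%index%'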


@@ -71,7 +71,7 @@ Before deploying the Overcloud

1. Install client packages on overcloud-full image:

   - Prepare installation script::

       cat >install.sh<<EOF
       #!/usr/bin/sh
@@ -79,90 +79,90 @@ Before deploying the Overcloud
       yum install -y sensu fluentd collectd
       EOF

   - Run the script using virt-customize::

       LIBGUESTFS_BACKEND=direct virt-customize -a /path/to/overcloud-full.qcow2 \
         --upload install.sh:/tmp/install.sh \
         --run-command "sh /tmp/install.sh" \
         --selinux-relabel

   - Upload new image to undercloud image registry::

       openstack overcloud image upload --update-existing

2. Operational tools configuration files:

   The files contain documentation about the parameters that need to be configured.

   - Availability Monitoring::

       /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml

   - Centralized Logging::

       /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml

   - Performance Monitoring::

       /usr/share/openstack-tripleo-heat-templates/environments/collectd-environment.yaml

3. Configure the environment

   The easiest way to configure the environment is to create a parameter file,
   here called parameters.yaml, with all the parameters defined.

   - Availability Monitoring::

       MonitoringRabbitHost: server_ip            # Server where the rabbitmq was installed
       MonitoringRabbitPort: 5672                 # Rabbitmq port
       MonitoringRabbitUserName: sensu_user       # The rabbitmq user to be used by sensu
       MonitoringRabbitPassword: sensu_password   # The password of the sensu user
       MonitoringRabbitUseSSL: false              # Set to false
       MonitoringRabbitVhost: "/sensu_vhost"      # The virtual host of the rabbitmq

   - Centralized Logging::

       LoggingServers:            # The servers
       - host: server_ip          # The ip of the server
         port: 24224              # Port to send the logs [ 24224 plain & 24284 SSL ]
       LoggingUsesSSL: false      # Plain or SSL connections
                                  # If LoggingUsesSSL is set to false the following lines can
                                  # be deleted
       LoggingSharedKey: secret   # The key
       LoggingSSLCertificate: |   # The content of the SSL Certificate
         -----BEGIN CERTIFICATE-----
         ...contents of server.pem here...
         -----END CERTIFICATE-----

   - Performance Monitoring::

       CollectdServer: collectd0.example.com   # Collectd server, where the data is going to be sent
       CollectdServerPort: 25826               # Collectd port
       # CollectdSecurityLevel: None           # Security defaults to None; the other values are
                                               # Encrypt & Sign, but the two following parameters
                                               # need to be set too
       # CollectdUsername: user                # User to connect to the server
       # CollectdPassword: password            # Password to connect to the server

       # Collectd, by default, comes with several plugins;
       # extra plugins can be added with this parameter
       CollectdExtraPlugins:
         - disk                                # disk plugin
         - df                                  # df plugin
       ExtraConfig:                            # If the plugins need to be configured, this is the location
         collectd::plugin::disk::disks:
           - "/^[vhs]d[a-f][0-9]?$/"
         collectd::plugin::df::mountpoints:
           - "/"
         collectd::plugin::df::ignoreselected: false

4. Continue following the TripleO instructions for deploying an overcloud::

     openstack overcloud deploy --templates \
       [-e /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml] \
       [-e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml] \
       [-e /usr/share/openstack-tripleo-heat-templates/environments/collectd-environment.yaml] \
       -e parameters.yaml

5. Wait for the completion of the overcloud deployment process.
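
   Progress can be followed from the undercloud while waiting, for example
   (assuming the default stack name ``overcloud``)::

     openstack stack show overcloud -c stack_status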


@@ -11,9 +11,9 @@ Execute below command to create the ``roles_data.yaml``::
Once a roles file is created, the following changes are required:

- Deploy Command
- Parameters
- Network Config

Deploy Command
--------------
@@ -45,11 +45,11 @@ Parameters
The following parameters need to be provided when deploying with OVS DPDK
support.

* OvsPmdCoreList: List of Logical CPUs to be allocated for Poll Mode Driver
* OvsDpdkCoreList: List of Logical CPUs to be allocated for the openvswitch
  host process (lcore list)
* OvsDpdkMemoryChannels: Number of memory channels
* OvsDpdkSocketMemory: Socket memory list per NUMA node

Example::
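
   # Illustrative values only (assumptions); derive the CPU lists and memory
   # settings from the NUMA topology of the compute hosts
   parameter_defaults:
     OvsPmdCoreList: "2,3,18,19"
     OvsDpdkCoreList: "0,1,16,17"
     OvsDpdkMemoryChannels: "4"
     OvsDpdkSocketMemory: "1024,1024"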
@@ -76,9 +76,9 @@ DPDK supported network interfaces should be specified in the network config
templates to configure OVS DPDK on the node. The following new network config
types have been added to support DPDK.

- ovs_user_bridge
- ovs_dpdk_port
- ovs_dpdk_bond

Example::
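
   # Illustrative sketch only; the bridge, port, and NIC names are assumptions
   - type: ovs_user_bridge
     name: br-link
     use_dhcp: false
     members:
       - type: ovs_dpdk_port
         name: dpdk0
         members:
           - type: interface
             name: nic3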


@@ -99,7 +99,7 @@ created on the undercloud, one should use a non-root user.

3. Export environment variables

   ::

     export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean*"
@@ -145,21 +145,21 @@ created on the undercloud, one should use a non-root user.

#. Build the required images:

   .. admonition:: RHEL
      :class: rhel

      Download the RHEL 7.3 cloud image or copy it over from a different location,
      for example:
      ``https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.3/x86_64/product-downloads``,
      and define the needed environment variables for RHEL 7.3 prior to running
      ``tripleo-build-images``::

        export DIB_LOCAL_IMAGE=rhel-guest-image-7.3-35.x86_64.qcow2

   .. admonition:: RHEL Portal Registration
      :class: portal

      To register the image builds to the Red Hat Portal define the following variables::

        export REG_METHOD=portal
        export REG_USER="[your username]"
@@ -169,27 +169,27 @@ created on the undercloud, one should use a non-root user.
        export REG_REPOS="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-ha-for-rhel-7-server-rpms \
          rhel-7-server-optional-rpms rhel-7-server-openstack-6.0-rpms"

   .. admonition:: Ceph
      :class: ceph

      If using Ceph, additional channels need to be added to `REG_REPOS`.
      Enable the appropriate channels for the desired release, as indicated below.
      Do not enable any other channels not explicitly marked for that release.

      ::

        rhel-7-server-rhceph-2-mon-rpms
        rhel-7-server-rhceph-2-osd-rpms
        rhel-7-server-rhceph-2-tools-rpms

   .. admonition:: RHEL Satellite Registration
      :class: satellite

      To register the image builds to a Satellite define the following
      variables. Only using an activation key is supported when registering to
      Satellite; username/password is not supported for security reasons. The
      activation key must enable the repos shown::

        export REG_METHOD=satellite
        # REG_SAT_URL should be in the format of:
@@ -206,25 +206,27 @@ created on the undercloud, one should use a non-root user.
        # rhel-7-server-rhceph-{2,1.3}-tools-rpms
        export REG_ACTIVATION_KEY="[activation key]"

   ::

     openstack overcloud image build

   ..

   .. admonition:: RHEL
      :class: rhel

      ::

        openstack overcloud image build --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml --config-file $OS_YAML

   See the help for ``openstack overcloud image build`` for further options.

   The YAML files are cumulative. Order on the command line is important. The
   packages, elements, and options sections will append. All others will overwrite
   previously read values.

   .. note::
      This command will build **overcloud-full** images (\*.qcow2, \*.initrd,
      \*.vmlinuz) and **ironic-python-agent** images (\*.initramfs, \*.kernel)
@@ -363,8 +365,8 @@ subnet. If needed, define the nameserver to be used for the environment::

.. admonition:: Stable Branch
   :class: stable

   For the Mitaka release and older, the subnet commands are executed within the
   `neutron` command::

     neutron subnet-list
     neutron subnet-update <subnet-uuid> --dns-nameserver <nameserver-ip>


@@ -68,9 +68,9 @@ the following command on the undercloud::

    ssh -Nf user@virthost -L 0.0.0.0:443:192.168.24.2:443    # If SSL
    ssh -Nf user@virthost -L 0.0.0.0:3000:192.168.24.1:3000  # If no SSL

.. note:: Quickstart started creating the tunnel automatically
   during Pike. If using an older version you will have to create
   the tunnel manually, for example::

     ssh -F /root/.quickstart/ssh.config.ansible undercloud -L 0.0.0.0:443:192.168.24.2:443
@@ -189,10 +189,10 @@ deployment in general, as well as for each individual environment.

.. admonition:: Newton
   :class: newton

   In Newton it was not possible to configure individual
   environments. The environment templates should be updated
   directly with the required parameters before uploading a new
   plan.

Individual roles can also be configured by clicking on the Pencil icon
beside the role name on each card.
@@ -203,8 +203,8 @@ beside the role name on each card.

.. admonition:: Newton
   :class: newton

   In Newton, you may need to assign at least one node to the role
   before the related configuration options are loaded.

Assign Nodes


@@ -131,55 +131,55 @@ Each service may define output variable(s) which control config file generation,
initialization, and stepwise deployment of all the containers for this service.
The following sections are available (a combined sketch follows this list):

* config_settings: This setting is generally inherited from the
  puppet/services templates and may be appended to if required
  to support the docker specific config settings.

* step_config: This setting controls the manifest that is used to
  create docker config files via puppet. The puppet tags below are
  used along with this manifest to generate a config directory for
  this container.

* kolla_config: Contains YAML that represents how to map config files
  into the kolla container. This config file is typically mapped into
  the container itself at the /var/lib/kolla/config_files/config.json
  location and drives how kolla's external config mechanisms work.

* docker_config: Data that is passed to the docker-cmd hook to configure
  a container, or step of containers at each step. See the available steps
  below and the related docker-cmd hook documentation in the heat-agents
  project.

* puppet_config: This section is a nested set of key value pairs
  that drive the creation of config files using puppet.
  Required parameters include:

  * puppet_tags: Puppet resource tag names that are used to generate config
    files with puppet. Only the named config resources are used to generate
    a config file. Any service that specifies tags will have the default
    tags of 'file,concat,file_line,augeas,cron' appended to the setting.
    Example: keystone_config

  * config_volume: The name of the volume (directory) where config files
    will be generated for this service. Use this as the location to
    bind mount into the running Kolla container for configuration.

  * config_image: The name of the docker image that will be used for
    generating configuration files. This is often the same container
    that the runtime service uses. Some services share a common set of
    config files which are generated in a common base container.

  * step_config: This setting controls the manifest that is used to
    create docker config files via puppet. The puppet tags below are
    used along with this manifest to generate a config directory for
    this container.

* docker_puppet_tasks: This section provides data to drive the
  docker-puppet.py tool directly. The task is executed only once
  within the cluster (not on each node) and is useful for several
  puppet snippets we require for initialization of things like
  keystone endpoints, database users, etc. See docker-puppet.py
  for formatting.
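
Tying these together, a hedged sketch of the outputs section of a
containerized service template (all names, images, and values here are
illustrative assumptions, not a real service definition)::

   outputs:
     role_data:
       description: Role data for the example service
       value:
         config_settings:
           example::bind_host: {get_param: [ServiceNetMap, ExampleNetwork]}
         step_config: |
           include ::tripleo::profile::base::example
         kolla_config:
           /var/lib/kolla/config_files/example.json:
             command: /usr/bin/example-server
         puppet_config:
           config_volume: example
           puppet_tags: example_config
           config_image: {get_param: DockerExampleConfigImage}
           step_config: |
             include ::tripleo::profile::base::example
         docker_config:
           step_4:
             example:
               image: {get_param: DockerExampleImage}
               restart: always
               volumes:
                 - /var/lib/kolla/config_files/example.json:/var/lib/kolla/config_files/config.json:ro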
Docker steps


@@ -142,12 +142,12 @@ it can be changed if they are all consistent. This will be the plan name.

1. Create the Swift container.

   .. code-block:: bash

       openstack action execution run tripleo.plan.create_container \
           '{"container":"my_cloud"}'

   .. note::

      Creating a swift container directly isn't sufficient, as this Mistral
      action also sets metadata on the container and may include further
@@ -155,7 +155,7 @@ it can be changed if they are all consistent. This will be the plan name.

2. Upload the files to Swift.

   .. code-block:: bash

       swift upload my_cloud path/to/tripleo/templates
@@ -163,7 +163,7 @@ it can be changed if they are all consistent. This will be the plan name.
   for the uploaded templates, do some initial template processing and generate
   the passwords.

   .. code-block:: bash

       openstack workflow execution create tripleo.plan_management.v1.create_deployment_plan \
           '{"container":"my_cloud"}'


@@ -30,31 +30,31 @@ Upgrading the Undercloud

1. Disable the old OpenStack release repositories and enable new
   release repositories on the undercloud:

   .. admonition:: Mitaka to Newton
      :class: mton

      ::

        export CURRENT_VERSION=mitaka
        export NEW_VERSION=newton

   .. admonition:: Newton to Ocata
      :class: ntoo

      ::

        export CURRENT_VERSION=newton
        export NEW_VERSION=ocata

   Back up and disable the current repos. Note that the repository files might be
   named differently depending on your installation::

     mkdir /home/stack/REPOBACKUP
     sudo mv /etc/yum.repos.d/delorean* /home/stack/REPOBACKUP/

   Get and enable new repos for `NEW_VERSION`:

   .. include:: ../repositories.txt

2. Run undercloud upgrade:
@@ -71,37 +71,37 @@ Upgrading the Undercloud

   .. admonition:: Mitaka to Newton
      :class: mton

      In the first release of instack-undercloud newton (5.0.0), the undercloud
      telemetry services are **disabled** by default. In order to maintain the
      telemetry services during the mitaka to newton upgrade the operator must
      explicitly enable them **before** running the undercloud upgrade. This
      is done by adding::

        enable_telemetry = true

      in the [DEFAULT] section of the undercloud.conf configuration file.

      If you are using any newer newton release, this option is switched back
      to **enabled** by default to make the upgrade experience better. Hence, if
      you are using a later newton release you don't need to explicitly enable
      this option.

   .. admonition:: Ocata to Pike
      :class: mton

      Prior to Pike, TripleO deployed Ceph with puppet-ceph. With the
      Pike release it is possible to use TripleO to deploy Ceph with
      either ceph-ansible or puppet-ceph, though puppet-ceph is
      deprecated. To use ceph-ansible, the CentOS Storage SIG Ceph
      repository must be enabled on the undercloud and the
      ceph-ansible package must then be installed::

        sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
        sudo yum -y install ceph-ansible

      It is not yet possible to migrate an existing puppet-ceph
      deployment to a ceph-ansible deployment. Only new deployments
      are currently possible with ceph-ansible.

   The following commands will upgrade the undercloud:
@@ -298,12 +298,12 @@ Upgrading the Overcloud to Newton and earlier
   :class: mton

   **Deliver the migration for ceilometer to run under httpd.**

   This is to deliver the migration for ceilometer to be run under httpd (apache)
   rather than eventlet as was the case before. To execute this step run
   `overcloud deploy`, passing in the full set of environment files plus
   `major-upgrade-ceilometer-wsgi-mitaka-newton.yaml`::

     openstack overcloud deploy --templates \
       -e <full environment> \
@@ -354,19 +354,19 @@ Upgrading the Overcloud to Newton and earlier

.. admonition:: Mitaka to Newton
   :class: mton

   **Explicitly disable sahara services if so desired:**

   As discussed at bug1630247_ sahara services are disabled by default
   in the Newton overcloud deployment. This special case is handled for
   the duration of the upgrade by defaulting to 'keep sahara-\*'.

   That is, by default sahara services are restarted after the mitaka to
   newton upgrade of controller nodes and sahara config is re-applied
   during the final upgrade converge step.

   If an operator wishes to **disable** sahara services as part of the mitaka
   to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_
   environment file during the controller upgrade step as well as during
   the converge step later::

     openstack overcloud deploy --templates \
       -e <full environment> \
@@ -419,19 +419,19 @@ Upgrading the Overcloud to Newton and earlier

.. admonition:: Mitaka to Newton
   :class: mton

   **Explicitly disable sahara services if so desired:**

   As discussed at bug1630247_ sahara services are disabled by default
   in the Newton overcloud deployment. This special case is handled for
   the duration of the upgrade by defaulting to 'keep sahara-\*'.

   That is, by default sahara services are restarted after the mitaka to
   newton upgrade of controller nodes and sahara config is re-applied
   during the final upgrade converge step.

   If an operator wishes to **disable** sahara services as part of the mitaka
   to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_
   environment file during the controller upgrade earlier and the converge
   step here::

     openstack overcloud deploy --templates \
       -e <full environment> \
@@ -461,13 +461,13 @@ Upgrading the Overcloud to Newton and earlier
   :class: mton

   **Deliver the data migration for aodh.**

   This is to deliver the data migration for aodh. In Newton, aodh uses its
   own mysql backend. This step migrates all the existing alarm data from
   mongodb to the new mysql backend. To execute this step run
   `overcloud deploy`, passing in the full set of environment files plus
   `major-upgrade-aodh-migration.yaml`::

     openstack overcloud deploy --templates \
       -e <full environment> \


@@ -6,7 +6,7 @@ Create a snapshot of a running server

Create a new image by taking a snapshot of a running server and download the
image.

::

    nova image-create instance_name image_name
    glance image-download image_name --file exported_vm.qcow2
@@ -15,7 +15,7 @@ Import an image into Overcloud and launch an instance
-----------------------------------------------------

Upload the exported image into glance in Overcloud and launch a new instance.

::

    glance image-create --name imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare
    nova boot --poll --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id imported