docs: remove ALL the unnecessary blockquotes

Due to missing/superfluous spaces, some of the content in the
guide is displayed as a blockquote without really being quoted
content. This change adds/removes spaces to remove ALL the
generated HTML blockquotes.

Change-Id: I25b0d9fa64cd474a844b5f3e6c126395a4e80f2c
This commit is contained in:
Markus Zoeller 2017-09-15 14:19:35 -06:00
parent 2aa2da1d77
commit 4592ed06f9
13 changed files with 244 additions and 242 deletions


@@ -43,7 +43,7 @@ be modified or removed.
The steps to define your custom roles configuration are:

1. Copy the default roles provided by `tripleo-heat-templates`::

    mkdir ~/roles
    cp /usr/share/openstack-tripleo-heat-templates/roles/* ~/roles
@@ -57,18 +57,18 @@ match the name of the role. For example if adding a new role named `Galera`,
the role file name should be `Galera.yaml`. The file should at least contain
the following items:

* name: Name of the role e.g "CustomController", mandatory
* ServicesDefault: List of services, optional, defaults to an empty list.
  See the default roles_data.yaml or overcloud-resource-registry-puppet.j2.yaml
  for the list of supported services. Both files can be found in the top
  tripleo-heat-templates folder.
Additional items like the ones below should be included as well:

* CountDefault: Default number of nodes, defaults to zero
* HostnameFormatDefault: Format string for hostname, optional
* Description: A few sentences describing the role and information
  pertaining to the usage of the role.
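Putting these items together, a custom role file might look like the following sketch (the role name, count, hostname format, and service list are hypothetical, not values from this change):

```yaml
# ~/roles/Galera.yaml -- hypothetical example of a minimal custom role
- name: Galera
  description: |
    Standalone role hosting a database cluster; all values here are
    illustrative only.
  CountDefault: 3
  HostnameFormatDefault: '%stackname%-galera-%index%'
  ServicesDefault:
    - OS::TripleO::Services::MySQL
```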
The role file format is a basic yaml structure. The expectation is that there
is a single role per file. See the roles `README.rst` for additional details. For


@@ -141,11 +141,11 @@ Deploying the Overcloud with an External Backend
#. Copy the Manila driver-specific configuration file to your home directory:

   - Generic driver::

       sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-generic-config.yaml ~

   - NetApp driver::

       sudo cp /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml ~


@@ -137,10 +137,10 @@ integration points for additional third-party services, drivers or plugins.
The following interfaces are available:

* `OS::TripleO::ControllerExtraConfigPre`: Controller node additional configuration
* `OS::TripleO::ComputeExtraConfigPre`: Compute node additional configuration
* `OS::TripleO::CephStorageExtraConfigPre`: CephStorage node additional configuration
* `OS::TripleO::NodeExtraConfig`: additional configuration applied to all nodes (all roles).
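One of these interfaces is typically enabled by mapping it to a custom template in an environment file's resource_registry; as a sketch (both file paths are hypothetical):

```yaml
# extra-config-env.yaml -- hypothetical environment file
resource_registry:
  # Apply a custom extra-config template to every node
  OS::TripleO::NodeExtraConfig: /home/stack/templates/my-extra-config.yaml
```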
Below is an example of a per-node configuration template that shows additional node configuration
via standard heat SoftwareConfig_ resources::


@@ -31,12 +31,12 @@ value for compute nodes::
The parameters available are:

* `ExtraConfig`: Apply the data to all nodes, e.g. all roles
* `ComputeExtraConfig`: Apply the data only to Compute nodes
* `ControllerExtraConfig`: Apply the data only to Controller nodes
* `BlockStorageExtraConfig`: Apply the data only to BlockStorage nodes
* `ObjectStorageExtraConfig`: Apply the data only to ObjectStorage nodes
* `CephStorageExtraConfig`: Apply the data only to CephStorage nodes
For any custom roles (defined via roles_data.yaml) the parameter name will
be RoleNameExtraConfig where RoleName is the name specified in roles_data.yaml.
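As a sketch, such a parameter carries arbitrary hieradata in an environment file; the hiera key below is only an assumed example, not part of this change:

```yaml
# compute-extra-config.yaml -- hypothetical environment file
parameter_defaults:
  ComputeExtraConfig:
    # Hieradata applied only to Compute nodes
    nova::compute::reserved_host_memory: 2048
```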


@@ -51,11 +51,11 @@ deploy command.
The same approach is possible for each role via these parameters:

* ControllerSchedulerHints
* ComputeSchedulerHints
* BlockStorageSchedulerHints
* ObjectStorageSchedulerHints
* CephStorageSchedulerHints
For custom roles (defined via roles_data.yaml) the parameter will be named
RoleNameSchedulerHints, where RoleName is the name specified in roles_data.yaml.
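For example, a hints parameter can pin a role to ironic nodes tagged with a matching capability; this is only a sketch, and the capability value is an assumption:

```yaml
# scheduler-hints-env.yaml -- hypothetical environment file
parameter_defaults:
  ControllerSchedulerHints:
    # Match nodes whose ironic capabilities include node=controller-<index>
    'capabilities:node': 'controller-%index%'
```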


@@ -71,7 +71,7 @@ Before deploying the Overcloud
1. Install client packages on overcloud-full image:

   - Prepare installation script::

       cat >install.sh<<EOF
       #!/usr/bin/sh
@@ -79,90 +79,90 @@ Before deploying the Overcloud
       yum install -y sensu fluentd collectd
       EOF

   - Run the script using virt-customize::

       LIBGUESTFS_BACKEND=direct virt-customize -a /path/to/overcloud-full.qcow2 \
         --upload install.sh:/tmp/install.sh \
         --run-command "sh /tmp/install.sh" \
         --selinux-relabel

   - Upload new image to undercloud image registry::

       openstack overcloud image upload --update-existing

2. Operational tools configuration files:

   The files have some documentation about the parameters that need to be configured.

   - Availability Monitoring::

       /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml

   - Centralized Logging::

       /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml

   - Performance Monitoring::

       /usr/share/openstack-tripleo-heat-templates/environments/collectd-environment.yaml

3. Configure the environment

   The easiest way to configure our environment is to create a parameter file, let's call it parameters.yaml, with all the parameters defined.

   - Availability Monitoring::

       MonitoringRabbitHost: server_ip          # Server where rabbitmq was installed
       MonitoringRabbitPort: 5672               # Rabbitmq port
       MonitoringRabbitUserName: sensu_user     # The rabbitmq user to be used by sensu
       MonitoringRabbitPassword: sensu_password # The password of the sensu user
       MonitoringRabbitUseSSL: false            # Set to false
       MonitoringRabbitVhost: "/sensu_vhost"    # The virtual host of the rabbitmq

   - Centralized Logging::

       LoggingServers:            # The servers
         - host: server_ip        # The IP of the server
           port: 24224            # Port to send the logs [ 24224 plain & 24284 SSL ]
       LoggingUsesSSL: false      # Plain or SSL connections
       # If LoggingUsesSSL is set to false the following lines can
       # be deleted
       LoggingSharedKey: secret   # The key
       LoggingSSLCertificate: |   # The content of the SSL Certificate
         -----BEGIN CERTIFICATE-----
         ...contents of server.pem here...
         -----END CERTIFICATE-----

   - Performance Monitoring::

       CollectdServer: collectd0.example.com # Collectd server, where the data is going to be sent
       CollectdServerPort: 25826             # Collectd port
       # CollectdSecurityLevel: None         # Security level; defaults to None, the other values are
       #                                     # Encrypt & Sign, but the two following parameters
       #                                     # need to be set too
       # CollectdUsername: user              # User to connect to the server
       # CollectdPassword: password          # Password to connect to the server
       # Collectd, by default, comes with several plugins;
       # extra plugins can be added via this parameter
       CollectdExtraPlugins:
         - disk                 # disk plugin
         - df                   # df plugin
       ExtraConfig:             # If the plugins need to be configured, this is the location
         collectd::plugin::disk::disks:
           - "/^[vhs]d[a-f][0-9]?$/"
         collectd::plugin::df::mountpoints:
           - "/"
         collectd::plugin::df::ignoreselected: false

4. Continue following the TripleO instructions for deploying an overcloud::

    openstack overcloud deploy --templates \
      [-e /usr/share/openstack-tripleo-heat-templates/environments/monitoring-environment.yaml] \
      [-e /usr/share/openstack-tripleo-heat-templates/environments/logging-environment.yaml] \
      [-e /usr/share/openstack-tripleo-heat-templates/environments/collectd-environment.yaml] \
      -e parameters.yaml

5. Wait for the completion of the overcloud deployment process.


@@ -11,9 +11,9 @@ Execute below command to create the ``roles_data.yaml``::
Once a roles file is created, the following changes are required:

- Deploy Command
- Parameters
- Network Config
Deploy Command
----------------
@@ -45,11 +45,11 @@ Parameters
Following are the list of parameters which need to be provided for deploying
with OVS DPDK support.

* OvsPmdCoreList: List of Logical CPUs to be allocated for Poll Mode Driver
* OvsDpdkCoreList: List of Logical CPUs to be allocated for the openvswitch
  host process (lcore list)
* OvsDpdkMemoryChannels: Number of memory channels
* OvsDpdkSocketMemory: Socket memory list per NUMA node
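These parameters might be collected in an environment file as in the sketch below; all CPU, channel, and memory values here are hypothetical and must be tuned to the host's NUMA topology:

```yaml
# ovs-dpdk-env.yaml -- hypothetical values only
parameter_defaults:
  OvsPmdCoreList: "2,3,18,19"      # logical CPUs dedicated to PMD threads
  OvsDpdkCoreList: "0,1,16,17"     # lcores for the openvswitch host process
  OvsDpdkMemoryChannels: "4"       # memory channels
  OvsDpdkSocketMemory: "1024,1024" # socket memory (MB) per NUMA node
```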
Example::
@@ -76,9 +76,9 @@ DPDK supported network interfaces should be specified in the network config
templates to configure OVS DPDK on the node. The following new network config
types have been added to support DPDK.

- ovs_user_bridge
- ovs_dpdk_port
- ovs_dpdk_bond
Example::


@@ -99,7 +99,7 @@ created on the undercloud, one should use a non-root user.
3. Export environment variables

   ::
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean*"
@@ -145,21 +145,21 @@ created on the undercloud, one should use a non-root user.
#. Build the required images:

   .. admonition:: RHEL
      :class: rhel

      Download the RHEL 7.3 cloud image or copy it over from a different location,
      for example:
      ``https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.3/x86_64/product-downloads``,
      and define the needed environment variables for RHEL 7.3 prior to running
      ``tripleo-build-images``::
export DIB_LOCAL_IMAGE=rhel-guest-image-7.3-35.x86_64.qcow2
   .. admonition:: RHEL Portal Registration
      :class: portal

      To register the image builds to the Red Hat Portal define the following variables::
export REG_METHOD=portal
export REG_USER="[your username]"
@@ -169,27 +169,27 @@ created on the undercloud, one should use a non-root user.
export REG_REPOS="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-ha-for-rhel-7-server-rpms \
rhel-7-server-optional-rpms rhel-7-server-openstack-6.0-rpms"
   .. admonition:: Ceph
      :class: ceph

      If using Ceph, additional channels need to be added to `REG_REPOS`.
      Enable the appropriate channels for the desired release, as indicated below.
      Do not enable any other channels not explicitly marked for that release.

      ::
rhel-7-server-rhceph-2-mon-rpms
rhel-7-server-rhceph-2-osd-rpms
rhel-7-server-rhceph-2-tools-rpms
   .. admonition:: RHEL Satellite Registration
      :class: satellite

      To register the image builds to a Satellite define the following
      variables. Only using an activation key is supported when registering to
      Satellite; username/password is not supported for security reasons. The
      activation key must enable the repos shown::
export REG_METHOD=satellite
# REG_SAT_URL should be in the format of:
@ -206,25 +206,27 @@ created on the undercloud, one should use a non-root user.
# rhel-7-server-rhceph-{2,1.3}-tools-rpms
export REG_ACTIVATION_KEY="[activation key]"
   ::

       openstack overcloud image build
   .. admonition:: RHEL
      :class: rhel

      ::
openstack overcloud image build --config-file /usr/share/openstack-tripleo-common/image-yaml/overcloud-images.yaml --config-file $OS_YAML
See the help for ``openstack overcloud image build`` for further options.
The YAML files are cumulative. Order on the command line is important. The
packages, elements, and options sections will append. All others will overwrite
previously read values.
.. note::
This command will build **overcloud-full** images (\*.qcow2, \*.initrd,
\*.vmlinuz) and **ironic-python-agent** images (\*.initramfs, \*.kernel)
@@ -363,8 +365,8 @@ subnet. If needed, define the nameserver to be used for the environment::
.. admonition:: Stable Branch
:class: stable
   For Mitaka release and older, the subnet commands are executed within the
   `neutron` command::
neutron subnet-list
neutron subnet-update <subnet-uuid> --dns-nameserver <nameserver-ip>


@@ -68,9 +68,9 @@ the following command on the undercloud::
ssh -Nf user@virthost -L 0.0.0.0:443:192.168.24.2:443 # If SSL
ssh -Nf user@virthost -L 0.0.0.0:3000:192.168.24.1:3000 # If no SSL
.. note:: Quickstart started creating the tunnel automatically
during Pike. If using an older version you will have to create
the tunnel manually, for example::
ssh -F /root/.quickstart/ssh.config.ansible undercloud -L 0.0.0.0:443:192.168.24.2:443
@@ -189,10 +189,10 @@ deployment in general, as well as for each individual environment.
.. admonition:: Newton
:class: newton
In Newton it was not possible to configure individual
environments. The environment templates should be updated
directly with the required parameters before uploading a new
plan.
Individual roles can also be configured by clicking on the Pencil icon
beside the role name on each card.
@@ -203,8 +203,8 @@ beside the role name on each card.
.. admonition:: Newton
:class: newton
In Newton, you may need to assign at least one node to the role
before the related configuration options are loaded.
Assign Nodes


@@ -131,55 +131,55 @@ Each service may define output variable(s) which control config file generation,
initialization, and stepwise deployment of all the containers for this service.
The following sections are available:
* config_settings: This setting is generally inherited from the
puppet/services templates and may be appended to if required
to support the docker specific config settings.
* step_config: This setting controls the manifest that is used to
create docker config files via puppet. The puppet tags below are
used along with this manifest to generate a config directory for
this container.
* kolla_config: Contains YAML that represents how to map config files
into the kolla container. This config file is typically mapped into
the container itself at the /var/lib/kolla/config_files/config.json
location and drives how kolla's external config mechanisms work.
* docker_config: Data that is passed to the docker-cmd hook to configure
a container, or step of containers at each step. See the available steps
below and the related docker-cmd hook documentation in the heat-agents
project.
* puppet_config: This section is a nested set of key value pairs
that drive the creation of config files using puppet.
Required parameters include:
* puppet_tags: Puppet resource tag names that are used to generate config
files with puppet. Only the named config resources are used to generate
a config file. Any service that specifies tags will have the default
tags of 'file,concat,file_line,augeas,cron' appended to the setting.
Example: keystone_config
* config_volume: The name of the volume (directory) where config files
will be generated for this service. Use this as the location to
bind mount into the running Kolla container for configuration.
* config_image: The name of the docker image that will be used for
generating configuration files. This is often the same container
that the runtime service uses. Some services share a common set of
config files which are generated in a common base container.
* step_config: This setting controls the manifest that is used to
create docker config files via puppet. The puppet tags below are
used along with this manifest to generate a config directory for
this container.
* docker_puppet_tasks: This section provides data to drive the
docker-puppet.py tool directly. The task is executed only once
within the cluster (not on each node) and is useful for several
puppet snippets we require for initialization of things like
keystone endpoints, database users, etc. See docker-puppet.py
for formatting.
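Schematically, the sections above combine in a service template's outputs roughly as follows; the service name, manifest include, and image below are hypothetical placeholders, not a real template:

```yaml
# Hypothetical sketch of a containerized service's role_data output
outputs:
  role_data:
    value:
      config_settings:
        myservice::bind_port: 8080          # inherited/extended puppet settings
      kolla_config:
        /var/lib/kolla/config_files/myservice.json:
          command: /usr/sbin/myservice-daemon
      docker_config:
        step_4:
          myservice:
            image: example-registry/myservice:latest
            volumes:
              - /var/lib/config-data/myservice:/var/lib/kolla/config_files:ro
      puppet_config:
        puppet_tags: myservice_config       # resources used to generate config files
        config_volume: myservice            # directory bind-mounted into the container
        config_image: example-registry/myservice:latest
        step_config: include ::tripleo::profile::base::myservice
```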
Docker steps


@@ -142,12 +142,12 @@ it can be changed if they are all consistent. This will be the plan name.
1. Create the Swift container.
.. code-block:: bash
openstack action execution run tripleo.plan.create_container \
'{"container":"my_cloud"}'
.. note::
Creating a swift container directly isn't sufficient, as this Mistral
action also sets metadata on the container and may include further
@@ -155,7 +155,7 @@ it can be changed if they are all consistent. This will be the plan name.
2. Upload the files to Swift.
.. code-block:: bash
swift upload my_cloud path/to/tripleo/templates
@@ -163,7 +163,7 @@ it can be changed if they are all consistent. This will be the plan name.
for the uploaded templates, do some initial template processing and generate
the passwords.
.. code-block:: bash
openstack workflow execution create tripleo.plan_management.v1.create_deployment_plan \
'{"container":"my_cloud"}'


@@ -30,31 +30,31 @@ Upgrading the Undercloud
1. Disable the old OpenStack release repositories and enable new
release repositories on the undercloud:
.. admonition:: Mitaka to Newton
:class: mton
::
export CURRENT_VERSION=mitaka
export NEW_VERSION=newton
.. admonition:: Newton to Ocata
:class: ntoo
::
export CURRENT_VERSION=newton
export NEW_VERSION=ocata
Backup and disable current repos. Note that the repository files might be
named differently depending on your installation::
mkdir /home/stack/REPOBACKUP
sudo mv /etc/yum.repos.d/delorean* /home/stack/REPOBACKUP/
Get and enable new repos for `NEW_VERSION`:
.. include:: ../repositories.txt
2. Run undercloud upgrade:
@@ -71,37 +71,37 @@ Upgrading the Undercloud
.. admonition:: Mitaka to Newton
:class: mton
In the first release of instack-undercloud newton(5.0.0), the undercloud
telemetry services are **disabled** by default. In order to maintain the
telemetry services during the mitaka to newton upgrade the operator must
explicitly enable them **before** running the undercloud upgrade. This
is done by adding::
enable_telemetry = true
in the [DEFAULT] section of the undercloud.conf configuration file.
If you are using any newer newton release, this option is switched back
to **enabled** by default to make upgrade experience better. Hence, if
you are using a later newton release you don't need to explicitly enable
this option.
.. admonition:: Ocata to Pike
:class: mton
Prior to Pike, TripleO deployed Ceph with puppet-ceph. With the
Pike release it is possible to use TripleO to deploy Ceph with
either ceph-ansible or puppet-ceph, though puppet-ceph is
deprecated. To use ceph-ansible, the CentOS Storage SIG Ceph
repository must be enabled on the undercloud and the
ceph-ansible package must then be installed::
sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
sudo yum -y install ceph-ansible
It is not yet possible to migrate an existing puppet-ceph
deployment to a ceph-ansible deployment. Only new deployments
are currently possible with ceph-ansible.
The following commands will upgrade the undercloud:
@@ -298,12 +298,12 @@ Upgrading the Overcloud to Newton and earlier
:class: mton
**Deliver the migration for ceilometer to run under httpd.**
This is to deliver the migration for ceilometer to be run under httpd (apache)
rather than eventlet as was the case before. To execute this step run
`overcloud deploy`, passing in the full set of environment files plus
`major-upgrade-ceilometer-wsgi-mitaka-newton.yaml`::
openstack overcloud deploy --templates \
-e <full environment> \
@@ -354,19 +354,19 @@ Upgrading the Overcloud to Newton and earlier
.. admonition:: Mitaka to Newton
:class: mton
**Explicitly disable sahara services if so desired:**
As discussed at bug1630247_ sahara services are disabled by default
in the Newton overcloud deployment. This special case is handled for
the duration of the upgrade by defaulting to 'keep sahara-\*'.
That is by default sahara services are restarted after the mitaka to
newton upgrade of controller nodes and sahara config is re-applied
during the final upgrade converge step.
If an operator wishes to **disable** sahara services as part of the mitaka
to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_
environment file during the controller upgrade step as well as during
the converge step later::
openstack overcloud deploy --templates \
-e <full environment> \
@@ -419,19 +419,19 @@ Upgrading the Overcloud to Newton and earlier
.. admonition:: Mitaka to Newton
:class: mton
**Explicitly disable sahara services if so desired:**
As discussed at bug1630247_ sahara services are disabled by default
in the Newton overcloud deployment. This special case is handled for
the duration of the upgrade by defaulting to 'keep sahara-\*'.
That is by default sahara services are restarted after the mitaka to
newton upgrade of controller nodes and sahara config is re-applied
during the final upgrade converge step.
If an operator wishes to **disable** sahara services as part of the mitaka
to newton upgrade they need to include the major-upgrade-remove-sahara.yaml_
environment file during the controller upgrade earlier and converge
step here::
openstack overcloud deploy --templates \
-e <full environment> \
@@ -461,13 +461,13 @@ Upgrading the Overcloud to Newton and earlier
:class: mton
**Deliver the data migration for aodh.**
This is to deliver the data migration for aodh. In Newton, aodh uses its
own mysql backend. This step migrates all the existing alarm data from
mongodb to the new mysql backend. To execute this step run
`overcloud deploy`, passing in the full set of environment files plus
`major-upgrade-aodh-migration.yaml`::
openstack overcloud deploy --templates \
-e <full environment> \


@@ -6,7 +6,7 @@ Create a snapshot of a running server
Create a new image by taking a snapshot of a running server and download the
image.
::
nova image-create instance_name image_name
glance image-download image_name --file exported_vm.qcow2
@@ -15,7 +15,7 @@ Import an image into Overcloud and launch an instance
-----------------------------------------------------
Upload the exported image into glance in Overcloud and launch a new instance.
::
glance image-create --name imported_image --file exported_vm.qcow2 --disk-format qcow2 --container-format bare
nova boot --poll --key-name default --flavor m1.demo --image imported_image --nic net-id=net_id imported