Complete documentation for advanced profile flow

There were some gaps in how we actually use the AHC data to do more advanced
deployments. This patch tries to address those gaps.

Co-Authored-By: Dmitry Tantsur <dtantsur@redhat.com>
Change-Id: I52b0224feb41887c105fa93e90c36be7bc87c7fb
John Trowbridge 2015-06-04 15:20:50 -04:00 committed by Dmitry Tantsur
parent b8c353a98e
commit 459b496f82
5 changed files with 214 additions and 57 deletions


@ -6,6 +6,7 @@ In this chapter you will find advanced deployment of various RDO-Manager areas.
.. toctree::

   Advanced Profile Matching <profile_matching>
   Ready-States (BIOS, RAID) <ready_states>
   Automated Health Check <automated_health_check>


@ -1,25 +1,26 @@
Automated Health Check (AHC)
============================
Additional setup steps to take advantage of the AHC features.
Start with matching the nodes to profiles as described in
:doc:`profile_matching`.

Enable running benchmarks during discovery
------------------------------------------

By default, the benchmark tests do not run during the discovery process.
You can enable this feature by setting *discovery_runbench = true* in the
**undercloud.conf** file prior to installing the undercloud.
If you want to enable this feature after installing the undercloud, you can set
*discovery_runbench = true* in **undercloud.conf** and re-run
``openstack undercloud install``.
Analyze the collected benchmark data
------------------------------------
After discovery has completed, we can do analysis on the benchmark data.
* Install the ahc-tools package::

    sudo yum install -y ahc-tools
* Run the ``ahc-report`` script to see a general overview of the hardware
::
@ -255,40 +256,13 @@ After discovery has completed, we can do analysis on the benchmark data.
However, we can see that the variance of the ``standalone_randread_4k_KBps``
metric was above the threshold, so the group is marked as unstable.
.. _ahc_matching:
Exclude outliers from deployment
--------------------------------
We will use the sample reports above to construct some matching rules
for our deployment. Refer to :doc:`profile_matching` for details.
* Add a rule to the **control.specs** file to match the system with two CPUs
::
@ -299,7 +273,8 @@ We will use the sample reports above to construct some matching rules for our de
('memory', 'total', 'size', 'ge(4294967296)'),
]
* Add a rule to the **control.specs** file to exclude systems with below
average disk performance from the control role
::
@ -311,8 +286,5 @@ We will use the sample reports above to construct some matching rules for our de
('memory', 'total', 'size', 'ge(4294967296)'),
]
* Now rerun the matching and proceed with remaining steps from
:doc:`profile_matching`.


@ -0,0 +1,192 @@
Advanced Profile Matching
=========================
Here are additional setup steps to take advantage of the advanced profile
matching and the AHC features.
Enable advanced profile matching
--------------------------------
* Install the ahc-tools package::

    sudo yum install -y ahc-tools
* Add the credentials for Ironic and Swift to the
**/etc/ahc-tools/ahc-tools.conf** file.
These will be the same credentials that ironic-discoverd uses,
and can be copied from **/etc/ironic-discoverd/discoverd.conf**::
    $ sudo -i
    # mkdir -p /etc/ahc-tools
    # sed 's/\[discoverd/\[ironic/' /etc/ironic-discoverd/discoverd.conf > /etc/ahc-tools/ahc-tools.conf
    # chmod 0600 /etc/ahc-tools/ahc-tools.conf
    # exit
  Example::

    [ironic]
    os_auth_url = http://192.0.2.1:5000/v2.0
    os_username = ironic
    os_password = <PASSWORD>
    os_tenant_name = service

    [swift]
    os_auth_url = http://192.0.2.1:5000/v2.0
    os_username = ironic
    os_password = <PASSWORD>
    os_tenant_name = service
Accessing additional introspection data
---------------------------------------
Every introspection run (as described in
:doc:`../basic_deployment/basic_deployment`) collects a lot of additional
facts about the hardware and stores them as JSON in Swift. The Swift container
name is ``ironic-inspector`` and can be modified in
**/etc/ironic-discoverd/discoverd.conf**. The Swift object name is stored
under the ``hardware_swift_object`` key in the Ironic node's ``extra`` field.
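As a sketch of the lookup described above (the node record and object name
here are made up for illustration; only the fields relevant to the lookup are
shown), the Swift object name can be read from the node's ``extra`` field:

```python
import json

# Hypothetical Ironic node record, reduced to the fields used by the lookup.
node_json = '''
{
  "uuid": "b73fb5fa-1a2c-49c6-b38e-8de41e3c0532",
  "extra": {"hardware_swift_object": "extra-hardware-example-object"}
}
'''

node = json.loads(node_json)
# The Swift object holding the introspection facts lives in the
# "ironic-inspector" container under this name.
object_name = node["extra"]["hardware_swift_object"]
print(object_name)
```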
State file
----------
The configuration file **/etc/ahc-tools/edeploy/state** defines how many nodes
of each profile we want to match. This file contains a list of tuples, each
with a profile name and the number of nodes for that profile. The ``*`` symbol
can be used to match any number of nodes, but make sure such a tuple goes last.

For example, to start with 1 control node and any number of compute nodes,
populate this file with the following contents::

    [('control', '1'), ('compute', '*')]
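Since the state file is valid Python literal syntax, reading it can be
sketched as follows (a minimal illustration, not the actual ahc-tools
implementation; ``parse_state`` is a hypothetical helper):

```python
import ast

def parse_state(text):
    """Parse an edeploy state file: a list of (profile, count) tuples,
    where count is a decimal string or '*' meaning "any number"."""
    entries = ast.literal_eval(text.strip())
    parsed = [(profile, count if count == '*' else int(count))
              for profile, count in entries]
    # A '*' entry matches any number of nodes, so it must come last;
    # otherwise it would swallow nodes meant for later profiles.
    if any(count == '*' for _, count in parsed[:-1]):
        raise ValueError("'*' entries must come last in the state file")
    return parsed

print(parse_state("[('control', '1'), ('compute', '*')]"))
# -> [('control', 1), ('compute', '*')]
```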
Matching rules
--------------
These matching rules will determine what profile gets assigned to each node
and are stored in files named **/etc/ahc-tools/edeploy/PROFILE.specs** for
each profile defined in **/etc/ahc-tools/edeploy/state**.
Open the **/etc/ahc-tools/edeploy/control.specs** file.
This is a Python-like list of tuples that might look like this::

    [
     ('disk', '$disk', 'size', 'gt(4)'),
     ('network', '$eth', 'ipv4', 'network(192.0.2.0/24)'),
     ('memory', 'total', 'size', 'ge(4294967296)'),
    ]
These rules match on the data collected during introspection.
Note that disk size is in GiB, while memory size is in bytes
(so ``ge(4294967296)`` requires at least 4 GiB of RAM).
There is a set of helper functions to make matching more flexible:

* network(): the network interface shall be in the specified network
* gt(), ge(), lt(), le(): greater than (or equal), less than (or equal)
* in(): the item to match shall be in a specified set
* regexp(): match a regular expression
* or(), and(), not(): boolean functions. or() and and() take two parameters,
  not() takes one.

There are also placeholders, *$disk* and *$eth* in the above example.
These will store the matched value under that name for later use.
* For example, if we had a "fact" from discovery::

    ('disk', 'sda', 'size', '40')

  This would match the first rule in the above **control.specs** file,
  and we would store ``"disk": "sda"``.
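The helpers and placeholders above can be illustrated with a small sketch.
This is a simplified stand-in for the real eDeploy matcher (which lives in
the hardware/ahc-tools packages), supporting only a few of the helpers:

```python
import ipaddress
import re

def _matches(spec, value, placeholders):
    """Match one spec field against one fact field."""
    if spec.startswith('$'):              # placeholder: capture for later use
        placeholders[spec[1:]] = value
        return True
    call = re.match(r'(\w+)\((.*)\)$', spec)
    if call is None:                      # plain string: exact comparison
        return spec == value
    func, arg = call.groups()
    if func == 'gt':
        return float(value) > float(arg)
    if func == 'ge':
        return float(value) >= float(arg)
    if func == 'lt':
        return float(value) < float(arg)
    if func == 'le':
        return float(value) <= float(arg)
    if func == 'network':                 # interface address in given network
        return ipaddress.ip_address(value) in ipaddress.ip_network(arg)
    raise NotImplementedError(func)

def rule_matches(rule, fact, placeholders):
    """A rule matches a fact when every field matches pairwise."""
    return all(_matches(s, v, placeholders) for s, v in zip(rule, fact))

placeholders = {}
fact = ('disk', 'sda', 'size', '40')
print(rule_matches(('disk', '$disk', 'size', 'gt(4)'), fact, placeholders))
print(placeholders)   # -> {'disk': 'sda'}
```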
Running advanced profile matching
---------------------------------
* After adjusting the matching rules, we are ready to do the matching::

    sudo ahc-match
* This will attempt to match all of the available nodes to the roles
we have defined in the **/etc/ahc-tools/edeploy/state** file.
When a node matches a role, the role is added to the node in Ironic in
the form of a capability. We can check this with ``ironic node-show``::
    [stack@instack ~]# ironic node-show b73fb5fa-1a2c-49c6-b38e-8de41e3c0532 | grep properties -A2
    | properties | {u'memory_mb': u'4096', u'cpu_arch': u'x86_64', u'local_gb': u'40', |
    | | u'cpus': u'1', u'capabilities': u'profile:control,boot_option:local'} |
    | instance_uuid | None
* In the above output, we can see that the control profile is added
as a capability to the node. Next we will need to create flavors in Nova
that actually map to these profiles.
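The capabilities value shown above is a flat ``key:value,key:value`` string.
Unpacking it can be sketched like this (``parse_capabilities`` is a
hypothetical helper for illustration, not part of Ironic's API):

```python
def parse_capabilities(caps):
    """Turn Ironic's 'k1:v1,k2:v2' capabilities string into a dict."""
    return dict(item.split(':', 1) for item in caps.split(',') if item)

caps = parse_capabilities('profile:control,boot_option:local')
print(caps['profile'])   # -> control
```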
Create flavors to use advanced matching
---------------------------------------
In order to use the profiles assigned to the Ironic nodes, Nova needs to have
flavors with the property ``capabilities:profile`` set to the intended profile.
For example, with just the compute and control profiles:
* Create the flavors::

    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 control
    openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 compute
.. note::

    The values for ram, disk, and vcpus should be set to a minimal lower bound,
    as Nova will still check that the Ironic nodes have at least this much
    even if we set lower properties in the **.specs** files.
* Assign the properties::

    openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="compute" compute
    openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" --property "capabilities:profile"="control" control
Use the flavors to deploy
-------------------------
By default, all nodes are deployed to the **baremetal** flavor.
The RDO-Manager CLI has options to support more advanced role matching.
Continuing with the example with only a control and compute profile:
* Get the Tuskar plan ID::

    tuskar plan-list

* Deploy the overcloud::

    openstack overcloud deploy --control-flavor control --compute-flavor compute --plan-uuid <UUID from above>
Use the flavors to scale
-------------------------
The process to scale an overcloud that uses our advanced profiles is the same
as the process used when we only have the **baremetal** flavor.
.. note::

    The original overcloud must have been deployed as above in order to scale
    using advanced profiles, as the flavor-to-role mapping happens then.
* Update the **/etc/ahc-tools/edeploy/state** file to match the number
  of nodes we want to match to each role.

* Run ``sudo ahc-match`` to match available nodes to the defined roles.

* Scale the overcloud (the example below adds two more nodes to the compute
  role)::

    openstack overcloud scale stack overcloud overcloud -r Compute-1 -n 2


@ -1,18 +1,9 @@
Ready-State (BIOS, RAID)
========================
Match deployment profiles
-------------------------
Start with matching the nodes to profiles as described in
:doc:`profile_matching`.
Ready-state configuration
-------------------------
Then trigger the BIOS and RAID configuration based on the deployment profile::

    instack-ironic-deployment --configure-nodes


@ -255,7 +255,8 @@ unusable.
This second mode allows users to ensure that their different hardware types end
up running their intended role, though requires manual configuration of the role
definitions and role matching via the ahc-match tool (see
:doc:`../advanced_deployment/profile_matching`).