[ansible] Major changes in playbooks "API"
Possibly existing out-of-tree playbooks will be incompatible with this version and must be rewritten! Changes include:

- all info passed into ansible playbooks from ironic is now available in the playbooks as elements of the 'ironic' dictionary, to better differentiate it from other vars possibly created/set inside playbooks.
- any field of a node's instance_info having the form "image_<field>" is now available in playbooks as the "ironic.image.<field>" var.
- the 'parted' tag in playbooks is removed; differentiation between partition and whole-disk images is instead done based on the value of the ironic.image.type var.
- the 'shutdown' tag is removed, and soft power-off is moved to a separate playbook, defined by the new driver_info field 'ansible_shutdown_playbook' ('shutdown.yaml' by default).
- the default 'deploy' role is split into smaller roles, each targeting a separate stage of the deployment process, to facilitate customization and re-use:

  - discover - e.g. set root device and image target
  - prepare - if needed, prepare the system, e.g. create partitions
  - deploy - download/convert/write user image and configdrive
  - configure - post-deployment steps, e.g. installing the bootloader

Documentation is updated.

Change-Id: I158a96d26dc9a114b6b607267c13e3ee1939cac9
parent c67e89c1f1
commit b963a18c63
@@ -9,7 +9,7 @@ and requiring no agents running on the node being configured.
 All communications with the node are by default performed over secure SSH
 transport.
 
-This deployment driver is using Ansible playbooks to define the
+The Ansible-deploy deployment driver is using Ansible playbooks to define the
 deployment logic. It is not based on `Ironic Python Agent`_ (IPA)
 and does not generally need it to be running in the deploy ramdisk.
 
@@ -44,8 +44,7 @@ CLI command via Python's ``subprocess`` library.
 
 Each action (deploy, clean) is described by single playbook with roles,
 which is run whole during deployment, or tag-wise during cleaning.
-Control of deployment types and cleaning steps is through tags and
-auxiliary steps file for cleaning.
+Control of cleaning steps is through tags and auxiliary clean steps file.
 The playbooks for actions can be set per-node, as is cleaning steps
 file.
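(Editor's note) Running a single clean step "tag-wise" maps to an `ansible-playbook --tags=...` invocation. The following is a hedged sketch only — the playbook path and tag name are hypothetical examples, not values from this change:

```python
import json

# Hypothetical example: running one clean step by limiting the clean
# playbook run to a single tag, the way the driver invokes ansible-playbook.
extra_vars = {"ironic": {"nodes": []}}
cmd = [
    "ansible-playbook", "/opt/stack/playbooks/clean.yaml",
    "-i", "inventory",
    "-e", json.dumps(extra_vars),
    "--tags=erase_devices_metadata",
]
print(" ".join(cmd))
```

Skipping a step instead would use `--skip-tags` with the same tag names.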
 
@@ -76,7 +75,7 @@ Configdrive partition
 ~~~~~~~~~~~~~~~~~~~~~
 
 Creating a configdrive partition is supported for both whole disk
-and partition images, on both ``msdos`` and ``GPT`` labeled disks.
+and partition images.
 
 Root device hints
 ~~~~~~~~~~~~~~~~~
@@ -107,9 +106,9 @@ Logging
 
 Logging is implemented as custom Ansible callback module,
 that makes use of ``oslo.log`` and ``oslo.config`` libraries
-and can interleave Ansible event log into the log file configured in
-main ironic configuration file (``/etc/ironic/ironic.conf`` by default),
-or use a separate file to log Ansible events into.
+and can re-use logging configuration defined in the main ironic configuration
+file (``/etc/ironic/ironic.conf`` by default) to set logging for Ansible
+events, or use a separate file for this purpose.
 
 .. note::
     Currently this has some quirks in DevStack - due to default
@@ -118,13 +117,11 @@ or use a separate file to log Ansible events into.
     DevStack in 'developer' mode using ``screen``.
 
 
-
 Requirements
 ============
 
 ironic
-    Requires ironic API ≥ 1.22 when using callback functionality.
-    For better logging, ironic should be > 6.1.0 release.
+    Requires ironic of Newton release or newer.
 
 Ansible
     Tested with and targets Ansible ≥ 2.1
@@ -144,7 +141,7 @@ Bootstrap image requirements
 - python-netifaces (for ironic callback)
 
 Set of scripts to build a suitable deploy ramdisk based on TinyCore Linux,
-and an element for ``diskimage-builder`` will be provided.
+and an element for ``diskimage-builder`` is provided.
 
 Setting up your environment
 ===========================
@@ -280,6 +277,11 @@ ansible_deploy_playbook
     to use when deploying this node.
     Default is ``deploy.yaml``.
 
+ansible_shutdown_playbook
+    Name of the playbook file inside the ``playbooks_path`` folder
+    to use to gracefully shutdown the node in-band.
+    Default is ``shutdown.yaml``.
+
 ansible_clean_playbook
     Name of the playbook file inside the ``playbooks_path`` folder
     to use when cleaning the node.
@@ -336,6 +338,23 @@ add-ironic-nodes.yaml
     as well as some per-node variables.
     Include it in all your custom playbooks as the first play.
 
+The default ``deploy.yaml`` playbook is using several smaller roles that
+correspond to particular stages of deployment process:
+
+- ``discover`` - e.g. set root device and image target
+- ``prepare`` - if needed, prepare system, for example create partitions
+- ``deploy`` - download/convert/write user image and configdrive
+- ``configure`` - post-deployment steps, e.g. installing the bootloader
+
+Some more included roles are:
+
+- ``wait`` - used when the driver is configured to not use callback from
+  node to start the deployment. This role waits for OpenSSH server to
+  become available on the node to connect to.
+- ``shutdown`` - used to gracefully power the node off in-band
+- ``clean`` - defines cleaning procedure, with each clean step defined
+  as separate playbook tag.
+
 Extending playbooks
 -------------------
 
@@ -344,14 +363,19 @@ Most probably you'd start experimenting like this:
 
 #. Create a copy of ``deploy.yaml`` playbook, name it distinctively.
 #. Create Ansible roles with your customized logic in ``roles`` folder.
 
-   A. Add the role with logic to be run *before* image download/writing
-      as the first role in your playbook. This is a good place to
-      set facts overriding those provided/omitted by the driver,
-      like ``ironic_partitions`` or ``ironic_root_device``.
-   B. Add the role with logic to be run *after* image is written to disk
-      as second-to-last role in the playbook (right before ``shutdown`` role).
+   A. In your custom deploy playbook, replace the ``prepare`` role
+      with your own one that defines steps to be run
+      *before* image download/writing.
+      This is a good place to set facts overriding those provided/omitted
+      by the driver, like ``ironic_partitions`` or ``ironic_root_device``,
+      and create custom partitions or (software) RAIDs.
+   B. In your custom deploy playbook, replace the ``configure`` role
+      with your own one that defines steps to be run
+      *after* image is written to disk.
+      This is a good place for example to configure the bootloader and
+      add kernel options to avoid additional reboots.
 
-#. Assign the playbook you've created to the node's
+#. Assign the custom deploy playbook you've created to the node's
    ``driver_info/ansible_deploy_playbook`` field.
 #. Run deployment.
 
@@ -364,93 +388,82 @@ Most probably you'd start experimenting like this:
 
 Variables you have access to
 ----------------------------
 
-This driver will pass the following extra arguments to ``ansible-playbook``
-invocation which you can use in your plays as well
+This driver will pass the single JSON-ified extra var argument to
+Ansible (as ``ansible-playbook -e ..``).
+Those values are then accessible in your plays as well
 (some of them are optional and might not be defined):
 
-``image``
-    Dictionary of the following structure:
-
-    .. code-block:: json
-
-       {"image": {
-           "url": "<url-to-user-image>",
-           "disk_format": "<qcow|raw|..>",
-           "checksum": "<hash-algo:hash>",
-           "mem_req": 12345
-           }
-       }
-
-    where
-
-    - ``url`` - URL to download the target image from as set in
-      ``instance_info/image_url``.
-    - ``disk_format`` - fetched from Glance or set in
-      ``instance_info/image_disk_format``.
-      Mainly used to distinguish ``raw`` images that can be streamed directly
-      to disk.
-    - ``checksum`` - (optional) image checksum as fetched from Glance or set
-      in ``instance_info/image_checksum``. Used to verify downloaded image.
-      When deploying from Glance, this will always be ``md5`` checksum.
-      When deploying standalone, can also be set in the form ``<algo>:<hash>``
-      to specify another hashing algorithm, which must be supported by
-      Python ``hashlib`` package from standard library.
-    - ``mem_req`` - (optional) required available memory on the node to fit
-      the target image when not streamed to disk directly.
-      Calculated from the image size and ``[ansible]extra_memory``
-      config option.
-
-``configdrive``
-    Optional. When defined in ``instance_info`` is a dictionary
-    of the following structure:
-
-    .. code-block:: json
-
-       {"configdrive": {
-           "type": "<url|file>",
-           "location": "<local-path-or-url>"
-           }
-       }
+.. code-block:: yaml
+
+   ironic:
+     nodes:
+     - ip: <IPADDRESS>
+       name: <NODE_UUID>
+       user: <USER ANSIBLE WILL USE>
+       extra: <COPY OF NODE's EXTRA FIELD>
+     image:
+       url: <URL TO FETCH THE USER IMAGE FROM>
+       disk_format: <qcow2|raw|...>
+       container_format: <bare|...>
+       checksum: <hash-algo:hashstring>
+       mem_req: <REQUIRED FREE MEMORY TO DOWNLOAD IMAGE TO RAM>
+       tags: <LIST OF IMAGE TAGS AS DEFINED IN GLANCE>
+       properties: <DICT OF IMAGE PROPERTIES AS DEFINED IN GLANCE>
+     configdrive:
+       type: <url|file>
+       location: <URL OR PATH ON CONDUCTOR>
+     partition_info:
+       preserve_ephemeral: <bool>
+       ephemeral_format: <FILESYSTEM TO CREATE ON EPHEMERAL PARTITION>
+       partitions: <LIST OF PARTITIONS IN FORMAT EXPECTED BY PARTED MODULE>
+
+Some more explanations:
+
+``ironic.nodes``
+    List of dictionaries (currently of only one element) that will be used by
+    ``add-ironic-nodes.yaml`` play to populate in-memory inventory.
+    It also contains a copy of node's ``extra`` field so you can access it in
+    the playbooks. The Ansible's host is set to node's UUID.
+
+``ironic.image``
+    All fields of node's ``instance_info`` that start with ``image_`` are
+    passed inside this variable. Some extra notes and fields:
+
+    - ``mem_req`` is calculated from image size (if available) and config
+      option ``[ansible]extra_memory``.
+    - if ``checksum`` initially does not start with ``hash-algo:``, hashing
+      algorithm is assumed to be ``md5`` (default in Glance).
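(Editor's note) The two notes above can be sketched in Python. This is a hedged reading of the documented behaviour, not the driver's exact code; the rounding in `mem_req_mb` and its argument names are assumptions for illustration:

```python
def normalize_checksum(checksum):
    # If no '<algo>:' prefix is present, assume md5 (Glance's default).
    if checksum and ':' not in checksum:
        return 'md5:%s' % checksum
    return checksum


def mem_req_mb(image_size_bytes, extra_memory_mb):
    # Free RAM needed to hold the downloaded image plus the configured
    # safety margin ([ansible]extra_memory); integer MiB rounding assumed.
    return image_size_bytes // (1024 * 1024) + extra_memory_mb
```

So a bare Glance checksum like `abcdef` becomes `md5:abcdef`, while an explicit `sha256:...` value is passed through untouched.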
 
-    where
-
-    - ``type`` - either ``url`` or ``file``
-    - ``location`` - depending on ``type``, either a URL or path to file
-      stored on ironic-conductor node to fetch the content
-      of configdrive partition from.
-
-``ironic_partitions``
+``ironic.partition_info.partitions``
    Optional. List of dictionaries defining partitions to create on the node
    in the form:
 
-    .. code-block:: json
-
-       {"ironic_partitions": [
-           {
-               "name": "<partition name>",
-               "size_mib": 12345,
-               "boot": "yes|no|..",
-               "swap": "yes|no|.."
-           }
-       ]}
+    .. code-block:: yaml
+
+       partitions:
+       - name: <NAME OF PARTITION>
+         size_mib: <SIZE OF THE PARTITION>
+         boot: <bool>
+         swap: <bool>
 
    The driver will populate this list from ``root_gb``, ``swap_mb`` and
    ``ephemeral_gb`` fields of ``instance_info``.
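(Editor's note) A hedged sketch of how such a list could be derived from ``instance_info`` sizing fields — simplified and not the driver's exact code (the real code also handles GiB-to-MiB conversion and boot/swap flags per partition type):

```python
def build_partitions(instance_info):
    # Sketch only: all sizes here are taken as MiB already.
    partitions = [{'name': 'root',
                   'size_mib': instance_info['root_mb'],
                   'boot': 'yes',
                   'swap': 'no'}]
    if instance_info.get('swap_mb'):
        partitions.append({'name': 'swap',
                           'size_mib': instance_info['swap_mb'],
                           'boot': 'no',
                           'swap': 'yes'})
    if instance_info.get('ephemeral_mb'):
        partitions.append({'name': 'ephemeral',
                           'size_mib': instance_info['ephemeral_mb'],
                           'boot': 'no',
                           'swap': 'no'})
    # Wrapped the way the playbooks now expect it under 'ironic'.
    return {'partition_info': {'partitions': partitions}}
```

The resulting dictionaries match the shape consumed by the custom ``parted`` module.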
 
-``ephemeral_format``
+    Please read the documentation included in the ``parted`` module's source
+    for more info on the module and its arguments.
+
+``ironic.partition_info.ephemeral_format``
    Optional. Taken from ``instance_info``, it defines file system to be
    created on the ephemeral partition.
    Defaults to the value of ``[pxe]default_ephemeral_format`` option
    in ironic configuration file.
 
-``preserve_ephemeral``
+``ironic.partition_info.preserve_ephemeral``
    Optional. Taken from the ``instance_info``, it specifies if the ephemeral
    partition must be preserved or rebuilt. Defaults to ``no``.
 
-``ironic_extra``
-    Dictionary holding a copy of ``extra`` field of ironic node,
-    with any per-node information.
-
 As usual for Ansible playbooks, you also have access to standard
 Ansible facts discovered by ``setup`` module.
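(Editor's note) To make the layout above concrete, here is a hedged sketch of the single extra var the driver serializes and passes as ``ansible-playbook -e``. All values are hypothetical placeholders, not values from this change:

```python
import json

# Hypothetical payload following the documented 'ironic' variable layout.
ironic_vars = {
    'ironic': {
        'nodes': [{'name': '<node-uuid>', 'ip': '192.0.2.10',
                   'user': 'tc', 'extra': {}}],
        'image': {'url': 'http://203.0.113.1/my-image.qcow2',
                  'disk_format': 'qcow2',
                  'checksum': 'md5:c8f42f15d917bd1e2e82c2b4b7482bbd',
                  'mem_req': 4096},
        'configdrive': {'type': 'url',
                        'location': 'http://203.0.113.1/cd'},
        'partition_info': {'preserve_ephemeral': 'no',
                           'partitions': [{'name': 'root',
                                           'size_mib': 4096,
                                           'boot': 'yes',
                                           'swap': 'no'}]},
    }
}
# What actually follows -e on the ansible-playbook command line:
extra_arg = json.dumps(ironic_vars)
```

Inside a play, `{{ ironic.image.url }}` then resolves to the image URL, and so on for the other keys.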
 
@@ -458,17 +471,20 @@ Included custom Ansible modules
 -------------------------------
 
 The provided ``playbooks_path/library`` folder includes several custom
-Ansible modules used by default implementation of ``deploy`` role.
+Ansible modules used by default implementation of ``deploy`` and
+``prepare`` roles.
 You can use these modules in your playbooks as well.
 
 ``stream_url``
     Streaming download from HTTP(S) source to the disk device directly,
-    tries to be compatible with Ansible-core ``get_url`` module in terms of
+    tries to be compatible with Ansible's ``get_url`` module in terms of
     module arguments.
     Due to the low level of such operation it is not idempotent.
 
 ``parted``
     creates partition tables and partitions with ``parted`` utility.
     Due to the low level of such operation it is not idempotent.
+    Please read the documentation included in the module's source
+    for more information about this module and its arguments.
 
 .. _Ironic Python Agent: http://docs.openstack.org/developer/ironic-python-agent
@@ -108,6 +108,7 @@ METRICS = metrics_utils.get_metrics_logger(__name__)
 
 DEFAULT_PLAYBOOKS = {
     'deploy': 'deploy.yaml',
+    'shutdown': 'shutdown.yaml',
     'clean': 'clean.yaml'
 }
 DEFAULT_CLEAN_STEPS = 'clean_steps.yaml'
@@ -126,6 +127,10 @@ OPTIONAL_PROPERTIES = {
     'ansible_deploy_playbook': _('Name of the Ansible playbook used for '
                                  'deployment. Default is %s. Optional.'
                                  ) % DEFAULT_PLAYBOOKS['deploy'],
+    'ansible_shutdown_playbook': _('Name of the Ansible playbook used to '
+                                   'power off the node in-band. '
+                                   'Default is %s. Optional.'
+                                   ) % DEFAULT_PLAYBOOKS['shutdown'],
     'ansible_clean_playbook': _('Name of the Ansible playbook used for '
                                 'cleaning. Default is %s. Optional.'
                                 ) % DEFAULT_PLAYBOOKS['clean'],
@@ -189,7 +194,7 @@ def _prepare_extra_vars(host_list, variables=None):
     nodes_var = []
     for node_uuid, ip, user, extra in host_list:
         nodes_var.append(dict(name=node_uuid, ip=ip, user=user, extra=extra))
-    extra_vars = dict(ironic_nodes=nodes_var)
+    extra_vars = dict(nodes=nodes_var)
     if variables:
         extra_vars.update(variables)
     return extra_vars
@@ -198,9 +203,10 @@ def _prepare_extra_vars(host_list, variables=None):
 def _run_playbook(name, extra_vars, key, tags=None, notags=None):
     """Execute ansible-playbook."""
     playbook = os.path.join(CONF.ansible.playbooks_path, name)
+    ironic_vars = {'ironic': extra_vars}
     args = [CONF.ansible.ansible_playbook_script, playbook,
             '-i', INVENTORY_FILE,
-            '-e', json.dumps(extra_vars),
+            '-e', json.dumps(ironic_vars),
             ]
 
     if CONF.ansible.config_file_path:
@@ -242,7 +248,6 @@ def _parse_partitioning_info(node):
 
     info = node.instance_info
     i_info = {}
-
     partitions = []
     root_partition = {'name': 'root',
                       'size_mib': info['root_mb'],
@@ -270,19 +275,20 @@ def _parse_partitioning_info(node):
         i_info['preserve_ephemeral'] = (
             'yes' if info['preserve_ephemeral'] else 'no')
 
-    i_info['ironic_partitions'] = partitions
-    return i_info
+    i_info['partitions'] = partitions
+    return {'partition_info': i_info}
 
 
 def _prepare_variables(task):
     node = task.node
     i_info = node.instance_info
-    image = {
-        'url': i_info['image_url'],
-        'mem_req': _calculate_memory_req(task),
-        'disk_format': i_info.get('image_disk_format'),
-    }
-    checksum = i_info.get('image_checksum')
+    image = {}
+    for i_key, i_value in i_info.items():
+        if i_key.startswith('image_'):
+            image[i_key[6:]] = i_value
+    image['mem_req'] = _calculate_memory_req(task)
+
+    checksum = image.get('checksum')
     if checksum:
         # NOTE(pas-ha) checksum can be in <algo>:<checksum> format
         # as supported by various Ansible modules, mostly good for
@@ -290,8 +296,7 @@ def _prepare_variables(task):
         # With no <algo> we take that instance_info is populated from Glance,
         # where API reports checksum as MD5 always.
         if ':' not in checksum:
-            checksum = 'md5:%s' % checksum
-        image['checksum'] = checksum
+            image['checksum'] = 'md5:%s' % checksum
     variables = {'image': image}
     configdrive = i_info.get('configdrive')
     if configdrive:
@@ -416,17 +421,12 @@ class AnsibleDeploy(agent_base.HeartbeatMixin, base.DeployInterface):
 
     def _ansible_deploy(self, task, node_address):
         """Internal function for deployment to a node."""
-        notags = ['shutdown']
-        if CONF.ansible.use_ramdisk_callback:
-            notags.append('wait')
+        notags = ['wait'] if CONF.ansible.use_ramdisk_callback else []
         node = task.node
         LOG.debug('IP of node %(node)s is %(ip)s',
                   {'node': node.uuid, 'ip': node_address})
         variables = _prepare_variables(task)
-        iwdi = node.driver_internal_info.get('is_whole_disk_image')
-        if iwdi:
-            notags.append('parted')
-        else:
+        if not node.driver_internal_info.get('is_whole_disk_image'):
             variables.update(_parse_partitioning_info(task.node))
         playbook, user, key = _parse_ansible_driver_info(task.node)
         node_list = [(node.uuid, node_address, user, node.extra)]
@@ -648,11 +648,10 @@ class AnsibleDeploy(agent_base.HeartbeatMixin, base.DeployInterface):
         try:
             node_address = _get_node_ip(task)
             playbook, user, key = _parse_ansible_driver_info(
-                node)
+                node, action='shutdown')
             node_list = [(node.uuid, node_address, user, node.extra)]
             extra_vars = _prepare_extra_vars(node_list)
-            _run_playbook(playbook, extra_vars, key,
-                          tags=['shutdown'])
+            _run_playbook(playbook, extra_vars, key)
             _wait_until_powered_off(task)
         except Exception as e:
             LOG.warning(
@@ -7,5 +7,5 @@
         ansible_host: "{{ item.ip }}"
         ansible_user: "{{ item.user }}"
         ironic_extra: "{{ item.extra | default({}) }}"
-    with_items: "{{ ironic_nodes }}"
+    with_items: "{{ ironic.nodes }}"
     tags: always
@@ -9,6 +9,10 @@
 
 - hosts: ironic
   roles:
-    - role: deploy
-    - role: shutdown
-      tags: shutdown
+    - discover
+    - prepare
+    - deploy
+    - configure
+  post_tasks:
+    - name: flush disk state
+      command: sync
@@ -5,8 +5,11 @@ readonly target_disk=$1
 readonly root_part=$2
 readonly root_part_mount=/mnt/rootfs
 
-# We need to run partprobe to ensure all partitions are visible
+# We need to run partprobe to ensure all partitions are visible.
+# On some test environments this is too fast
+# and kernel does not have time to react to changes
 partprobe $target_disk
+sleep 5
 
 mkdir -p $root_part_mount
@@ -1,3 +1,3 @@
-- name: configure bootloader
+- name: install grub
   become: yes
   script: install_grub.sh {{ ironic_root_device }} {{ ironic_image_target }}
@@ -0,0 +1,2 @@
+- include: grub.yaml
+  when: "{{ ironic.image.type | default('whole-disk-image') == 'partition' }}"
@@ -16,9 +16,6 @@
 
 # NOTE(pas-ha) this is mostly copied over from Ironic Python Agent
 # compared to the original file in IPA,
 # all logging is disabled to let Ansible output the full trace.
-# The places that log to fail are commented out to be replaced later
-# with different handler when making this script a real Ansible module
-
 # TODO(pas-ha) rewrite this shell script to be a proper Ansible module
 
@@ -46,7 +43,7 @@ DEVICE="$1"
 
 # We need to run partx -u to ensure all partitions are visible so the
 # following blkid command returns partitions just imaged to the device
-partx -u $DEVICE # || fail "running partx -u $DEVICE"
+partx -u $DEVICE || fail "running partx -u $DEVICE"
 
 # todo(jayf): partx -u doesn't work in all cases, but partprobe fails in
 # devstack. We run both commands now as a temporary workaround for bug 1433812
|
||||
|
||||
# Check for preexisting partition for configdrive
|
||||
EXISTING_PARTITION=`/sbin/blkid -l -o device $DEVICE -t LABEL=config-2`
|
||||
if [ $? = 0 ]; then
|
||||
#log "Existing configdrive found on ${DEVICE} at ${EXISTING_PARTITION}"
|
||||
ISO_PARTITION=$EXISTING_PARTITION
|
||||
else
|
||||
|
||||
if [ -z $EXISTING_PARTITION ]; then
|
||||
# Check if it is GPT partition and needs to be re-sized
|
||||
partprobe $DEVICE print 2>&1 | grep "fix the GPT to use all of the space"
|
||||
if [ $? = 0 ]; then
|
||||
#log "Fixing GPT to use all of the space on device $DEVICE"
|
||||
sgdisk -e $DEVICE #|| fail "move backup GPT data structures to the end of ${DEVICE}"
|
||||
if [ `partprobe $DEVICE print 2>&1 | grep "fix the GPT to use all of the space"` ]; then
|
||||
log "Fixing GPT to use all of the space on device $DEVICE"
|
||||
sgdisk -e $DEVICE || fail "move backup GPT data structures to the end of ${DEVICE}"
|
||||
|
||||
# Need to create new partition for config drive
|
||||
# Not all images have partion numbers in a sequential numbers. There are holes.
|
||||
@ -77,15 +69,15 @@ else
|
||||
gdisk -l $DEVICE | grep -A$MAX_DISK_PARTITIONS "Number Start" | grep -v "Number Start" > $EXISTING_PARTITION_LIST
|
||||
|
||||
# Create small partition at the end of the device
|
||||
#log "Adding configdrive partition to $DEVICE"
|
||||
sgdisk -n 0:-64MB:0 $DEVICE #|| fail "creating configdrive on ${DEVICE}"
|
||||
log "Adding configdrive partition to $DEVICE"
|
||||
sgdisk -n 0:-64MB:0 $DEVICE || fail "creating configdrive on ${DEVICE}"
|
||||
|
||||
gdisk -l $DEVICE | grep -A$MAX_DISK_PARTITIONS "Number Start" | grep -v "Number Start" > $UPDATED_PARTITION_LIST
|
||||
|
||||
CONFIG_PARTITION_ID=`diff $EXISTING_PARTITION_LIST $UPDATED_PARTITION_LIST | tail -n1 |awk '{print $2}'`
|
||||
ISO_PARTITION="${DEVICE}${CONFIG_PARTITION_ID}"
|
||||
else
|
||||
#log "Working on MBR only device $DEVICE"
|
||||
log "Working on MBR only device $DEVICE"
|
||||
|
||||
# get total disk size, to detect if that exceeds 2TB msdos limit
|
||||
disksize_bytes=$(blockdev --getsize64 $DEVICE)
|
||||
@ -99,16 +91,19 @@ else
|
||||
endlimit=$(($MAX_MBR_SIZE_MB - 1))
|
||||
fi
|
||||
|
||||
#log "Adding configdrive partition to $DEVICE"
|
||||
parted -a optimal -s -- $DEVICE mkpart primary ext2 $startlimit $endlimit #|| fail "creating configdrive on ${DEVICE}"
|
||||
log "Adding configdrive partition to $DEVICE"
|
||||
parted -a optimal -s -- $DEVICE mkpart primary fat32 $startlimit $endlimit || fail "creating configdrive on ${DEVICE}"
|
||||
|
||||
# Find partition we just created
|
||||
# Dump all partitions, ignore empty ones, then get the last partition ID
|
||||
ISO_PARTITION=`sfdisk --dump $DEVICE | grep -v ' 0,' | tail -n1 | awk -F ':' '{print $1}' | sed -e 's/\s*$//'` #|| fail "finding ISO partition created on ${DEVICE}"
|
||||
ISO_PARTITION=`sfdisk --dump $DEVICE | grep -v ' 0,' | tail -n1 | awk -F ':' '{print $1}' | sed -e 's/\s*$//'` || fail "finding ISO partition created on ${DEVICE}"
|
||||
|
||||
# Wait for udev to pick up the partition
|
||||
udevadm settle --exit-if-exists=$ISO_PARTITION
|
||||
fi
|
||||
else
|
||||
log "Existing configdrive found on ${DEVICE} at ${EXISTING_PARTITION}"
|
||||
ISO_PARTITION=$EXISTING_PARTITION
|
||||
fi
|
||||
|
||||
# Output the created/discovered partition for configdrive
|
||||
|
@@ -1,37 +1,43 @@
 - name: download configdrive data
   get_url:
-    url: "{{ configdrive.location }}"
+    url: "{{ ironic.configdrive.location }}"
     dest: /tmp/{{ inventory_hostname }}.gz.base64
   async: 600
   poll: 15
-  when: "{{ configdrive.type|default('') == 'url' }}"
+  when: "{{ ironic.configdrive.type|default('') == 'url' }}"
 
 - block:
   - name: copy configdrive file to node
     copy:
-      src: "{{ configdrive.location }}"
+      src: "{{ ironic.configdrive.location }}"
       dest: /tmp/{{ inventory_hostname }}.gz.base64
   - name: remove configdrive from conductor
     delegate_to: conductor
     file:
-      path: "{{ configdrive.location }}"
+      path: "{{ ironic.configdrive.location }}"
      state: absent
-  when: "{{ configdrive.type|default('') == 'file' }}"
+  when: "{{ ironic.configdrive.type|default('') == 'file' }}"
 
 - name: unpack configdrive
   shell: cat /tmp/{{ inventory_hostname }}.gz.base64 | base64 --decode | gunzip > /tmp/{{ inventory_hostname }}.cndrive
 
-- name: prepare config drive partition
-  become: yes
-  script: partition_configdrive.sh {{ ironic_root_device }}
-  register: configdrive_partition_output
+- block:
+  - name: prepare config drive partition
+    become: yes
+    script: partition_configdrive.sh {{ ironic_root_device }}
+    register: configdrive_partition_output
 
-- name: test the output of configdrive partitioner
-  assert:
-    that:
-      - "{{ (configdrive_partition_output.stdout_lines | last).split() | length == 2 }}"
-      - "{{ (configdrive_partition_output.stdout_lines | last).split() | first == 'configdrive' }}"
+  - name: test the output of configdrive partitioner
+    assert:
+      that:
+        - "{{ (configdrive_partition_output.stdout_lines | last).split() | length == 2 }}"
+        - "{{ (configdrive_partition_output.stdout_lines | last).split() | first == 'configdrive' }}"
+
+  - name: store configdrive partition
+    set_fact:
+      ironic_configdrive_target: "{{ (configdrive_partition_output.stdout_lines | last).split() | last }}"
+  when: "{{ ironic_configdrive_target is undefined }}"
 
 - name: write configdrive
   become: yes
-  command: dd if=/tmp/{{ inventory_hostname }}.cndrive of={{ (configdrive_partition_output.stdout_lines | last).split() | last }} bs=64K oflag=direct
+  command: dd if=/tmp/{{ inventory_hostname }}.cndrive of={{ ironic_configdrive_target }} bs=64K oflag=direct
@@ -1,11 +1,12 @@
-- name: fail if not enough memory to store downloaded image
-  fail:
+- name: check that downloaded image will fit into memory
+  assert:
+    that: "{{ ansible_memfree_mb }} >= {{ ironic.image.mem_req }}"
     msg: "The image size is too big, no free memory available"
-  when: "{{ ansible_memfree_mb }} < {{ image.mem_req }}"
 
 - name: download image with checksum validation
   get_url:
-    url: "{{ image.url }}"
+    url: "{{ ironic.image.url }}"
     dest: /tmp/{{ inventory_hostname }}.img
-    checksum: "{{ image.checksum|default(omit) }}"
+    checksum: "{{ ironic.image.checksum|default(omit) }}"
   async: 600
   poll: 15
@@ -1,20 +1,7 @@
 - include: root-device.yaml
 
-- include: parted.yaml
-  tags:
-    - parted
-
 - include: download.yaml
-  when: "{{ image.disk_format != 'raw' }}"
+  when: "{{ ironic.image.disk_format != 'raw' }}"
 
 - include: write.yaml
 
 - include: configdrive.yaml
-  when: configdrive is defined
-
-- include: grub.yaml
-  tags:
-    - parted
-
-- name: flush
-  command: sync
+  when: "{{ ironic.configdrive is defined }}"
@@ -3,17 +3,17 @@
   command: qemu-img convert -t directsync -O host_device /tmp/{{ inventory_hostname }}.img {{ ironic_image_target }}
   async: 400
   poll: 10
-  when: "{{ image.disk_format != 'raw' }}"
+  when: "{{ ironic.image.disk_format != 'raw' }}"
 
 - name: stream to target
   become: yes
   stream_url:
-    url: "{{ image.url }}"
+    url: "{{ ironic.image.url }}"
     dest: "{{ ironic_image_target }}"
-    checksum: "{{ image.checksum }}"
+    checksum: "{{ ironic.image.checksum|default(omit) }}"
   async: 600
   poll: 15
-  when: "{{ image.disk_format == 'raw' }}"
+  when: "{{ ironic.image.disk_format == 'raw' }}"
 
 - name: flush
   command: sync
@@ -0,0 +1,2 @@
+- include: parted.yaml
+  when: "{{ ironic.image.type | default('whole-disk-image') == 'partition' }}"
@@ -1,16 +1,16 @@
 - name: erase partition table
   become: yes
   command: dd if=/dev/zero of={{ ironic_root_device }} bs=512 count=36
-  when: "{{ not preserve_ephemeral|default('no')|bool }}"
+  when: "{{ not ironic.partition_info.preserve_ephemeral|default('no')|bool }}"
 
 - name: run parted
   become: yes
   parted:
     device: "{{ ironic_root_device }}"
-    dryrun: "{{ preserve_ephemeral|default('no')|bool }}"
-    new_label: yes
     label: msdos
-    partitions: "{{ ironic_partitions }}"
+    new_label: yes
+    dryrun: "{{ ironic.partition_info.preserve_ephemeral|default('no')|bool }}"
+    partitions: "{{ ironic.partition_info.partitions }}"
   register: parts
 
 - name: reset image target to root partition
@@ -24,5 +24,9 @@
 
 - name: format ephemeral partition
   become: yes
-  command: mkfs -F -t {{ ephemeral_format }} -L ephemeral0 {{ parts.created.ephemeral }}
-  when: "{{ parts.created.ephemeral is defined and not preserve_ephemeral|default('no')|bool }}"
+  filesystem:
+    dev: "{{ parts.created.ephemeral }}"
+    fstype: "{{ ironic.partition_info.ephemeral_format }}"
+    force: yes
+    opts: "-L ephemeral0"
+  when: "{{ parts.created.ephemeral is defined and not ironic.partition_info.preserve_ephemeral|default('no')|bool }}"
ironic_staging_drivers/ansible/playbooks/shutdown.yaml (new file, 6 lines)
@@ -0,0 +1,6 @@
+---
+- include: add-ironic-nodes.yaml
+
+- hosts: ironic
+  roles:
+    - shutdown
@ -156,7 +156,7 @@ class TestAnsibleMethods(db_base.DbTestCase):
|
||||
execute_mock.assert_called_once_with(
|
||||
'env', 'ANSIBLE_CONFIG=/path/to/config',
|
||||
'ansible-playbook', '/path/to/playbooks/deploy', '-i',
|
||||
ansible_deploy.INVENTORY_FILE, '-e', '{"foo": "bar"}',
|
||||
ansible_deploy.INVENTORY_FILE, '-e', '{"ironic": {"foo": "bar"}}',
|
||||
'--tags=spam', '--skip-tags=ham',
|
||||
'--private-key=/path/to/key', '-vvv', '--timeout=100')
|
||||
|
||||
@@ -173,7 +173,7 @@ class TestAnsibleMethods(db_base.DbTestCase):
         execute_mock.assert_called_once_with(
             'env', 'ANSIBLE_CONFIG=/path/to/config',
             'ansible-playbook', '/path/to/playbooks/deploy', '-i',
-            ansible_deploy.INVENTORY_FILE, '-e', '{"foo": "bar"}',
+            ansible_deploy.INVENTORY_FILE, '-e', '{"ironic": {"foo": "bar"}}',
             '--private-key=/path/to/key')

    @mock.patch.object(com_utils, 'execute', return_value=('out', 'err'),
@@ -189,7 +189,7 @@ class TestAnsibleMethods(db_base.DbTestCase):
         execute_mock.assert_called_once_with(
             'env', 'ANSIBLE_CONFIG=/path/to/config',
             'ansible-playbook', '/path/to/playbooks/deploy', '-i',
-            ansible_deploy.INVENTORY_FILE, '-e', '{"foo": "bar"}',
+            ansible_deploy.INVENTORY_FILE, '-e', '{"ironic": {"foo": "bar"}}',
             '--private-key=/path/to/key', '-vvvv')

    @mock.patch.object(com_utils, 'execute',
@@ -209,56 +209,51 @@ class TestAnsibleMethods(db_base.DbTestCase):
         execute_mock.assert_called_once_with(
             'env', 'ANSIBLE_CONFIG=/path/to/config',
             'ansible-playbook', '/path/to/playbooks/deploy', '-i',
-            ansible_deploy.INVENTORY_FILE, '-e', '{"foo": "bar"}',
+            ansible_deploy.INVENTORY_FILE, '-e', '{"ironic": {"foo": "bar"}}',
             '--private-key=/path/to/key')

-    def test__parse_partitioning_info(self):
+    def test__parse_partitioning_info_root_only(self):
         expected_info = {
-            'ironic_partitions':
-                [{'boot': 'yes', 'swap': 'no',
-                  'size_mib': INSTANCE_INFO['root_mb'],
-                  'name': 'root'}]}
+            'partition_info': {
+                'partitions': [
+                    {'name': 'root',
+                     'size_mib': INSTANCE_INFO['root_mb'],
+                     'boot': 'yes',
+                     'swap': 'no'}
+                ]}}

         i_info = ansible_deploy._parse_partitioning_info(self.node)

         self.assertEqual(expected_info, i_info)

-    def test__parse_partitioning_info_swap(self):
+    def test__parse_partitioning_info_all(self):
         in_info = dict(INSTANCE_INFO)
         in_info['swap_mb'] = 128
-        self.node.instance_info = in_info
-        self.node.save()
-
-        expected_info = {
-            'ironic_partitions':
-                [{'boot': 'yes', 'swap': 'no',
-                  'size_mib': INSTANCE_INFO['root_mb'],
-                  'name': 'root'},
-                 {'boot': 'no', 'swap': 'yes',
-                  'size_mib': 128, 'name': 'swap'}]}
-
-        i_info = ansible_deploy._parse_partitioning_info(self.node)
-
-        self.assertEqual(expected_info, i_info)
-
-    def test__parse_partitioning_info_ephemeral(self):
-        in_info = dict(INSTANCE_INFO)
-        in_info['ephemeral_mb'] = 128
+        in_info['ephemeral_mb'] = 256
         in_info['ephemeral_format'] = 'ext4'
         in_info['preserve_ephemeral'] = True
         self.node.instance_info = in_info
         self.node.save()

         expected_info = {
-            'ironic_partitions':
-                [{'boot': 'yes', 'swap': 'no',
-                  'size_mib': INSTANCE_INFO['root_mb'],
-                  'name': 'root'},
-                 {'boot': 'no', 'swap': 'no',
-                  'size_mib': 128, 'name': 'ephemeral'}],
-            'ephemeral_format': 'ext4',
-            'preserve_ephemeral': 'yes'
-            }
+            'partition_info': {
+                'ephemeral_format': 'ext4',
+                'preserve_ephemeral': 'yes',
+                'partitions': [
+                    {'name': 'root',
+                     'size_mib': INSTANCE_INFO['root_mb'],
+                     'boot': 'yes',
+                     'swap': 'no'},
+                    {'name': 'swap',
+                     'size_mib': 128,
+                     'boot': 'no',
+                     'swap': 'yes'},
+                    {'name': 'ephemeral',
+                     'size_mib': 256,
+                     'boot': 'no',
+                     'swap': 'no'},
+                ]}}

         i_info = ansible_deploy._parse_partitioning_info(self.node)

         self.assertEqual(expected_info, i_info)
@@ -282,7 +277,7 @@ class TestAnsibleMethods(db_base.DbTestCase):
                          ('other-uuid', '5.6.7.8', 'eggs', 'vikings')]
         ansible_vars = {"foo": "bar"}
         self.assertEqual(
-            {"ironic_nodes": [
+            {"nodes": [
                 {"name": "fake-uuid", "ip": '1.2.3.4',
                  "user": "spam", "extra": "ham"},
                 {"name": "other-uuid", "ip": '5.6.7.8',
@@ -293,7 +288,9 @@ class TestAnsibleMethods(db_base.DbTestCase):
    @mock.patch.object(ansible_deploy, '_calculate_memory_req', autospec=True,
                       return_value=2000)
    def test__prepare_variables(self, mem_req_mock):
-        expected = {"image": {"url": "http://image", "mem_req": 2000,
+        expected = {"image": {"url": "http://image",
+                              "source": "fake-image",
+                              "mem_req": 2000,
                              "disk_format": "qcow2",
                              "checksum": "md5:checksum"}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
@@ -307,7 +304,9 @@ class TestAnsibleMethods(db_base.DbTestCase):
        i_info['image_checksum'] = 'sha256:checksum'
        self.node.instance_info = i_info
        self.node.save()
-        expected = {"image": {"url": "http://image", "mem_req": 2000,
+        expected = {"image": {"url": "http://image",
+                              "source": "fake-image",
+                              "mem_req": 2000,
                              "disk_format": "qcow2",
                              "checksum": "sha256:checksum"}}
        with task_manager.acquire(self.context, self.node.uuid) as task:
@@ -321,7 +320,9 @@ class TestAnsibleMethods(db_base.DbTestCase):
        i_info['configdrive'] = 'http://configdrive_url'
        self.node.instance_info = i_info
        self.node.save()
-        expected = {"image": {"url": "http://image", "mem_req": 2000,
+        expected = {"image": {"url": "http://image",
+                              "source": "fake-image",
+                              "mem_req": 2000,
                              "disk_format": "qcow2",
                              "checksum": "md5:checksum"},
                    'configdrive': {'type': 'url',
@@ -338,7 +339,9 @@ class TestAnsibleMethods(db_base.DbTestCase):
        self.node.instance_info = i_info
        self.node.save()
        self.config(tempdir='/path/to/tmpfiles')
-        expected = {"image": {"url": "http://image", "mem_req": 2000,
+        expected = {"image": {"url": "http://image",
+                              "source": "fake-image",
+                              "mem_req": 2000,
                              "disk_format": "qcow2",
                              "checksum": "md5:checksum"},
                    'configdrive': {'type': 'file',
@@ -793,7 +796,7 @@ class TestAnsibleDeploy(db_base.DbTestCase):
            (self.node['uuid'],
             DRIVER_INTERNAL_INFO['ansible_cleaning_ip'],
             'test_u')]}, 'test_k',
-            notags=['shutdown', 'wait'])
+            notags=['wait'])

    @mock.patch.object(ansible_deploy, '_run_playbook', autospec=True)
    @mock.patch.object(ansible_deploy, '_prepare_extra_vars', autospec=True)
@@ -835,7 +838,7 @@ class TestAnsibleDeploy(db_base.DbTestCase):
            (self.node['uuid'],
             DRIVER_INTERNAL_INFO['ansible_cleaning_ip'],
             'test_u')]}, 'test_k',
-            notags=['shutdown', 'wait', 'parted'])
+            notags=['wait'])

    @mock.patch.object(fake.FakePower, 'get_power_state',
                       return_value=states.POWER_OFF)
@@ -898,8 +901,8 @@ class TestAnsibleDeploy(db_base.DbTestCase):
            ((task, states.POWER_ON),)]
        self.assertEqual(expected_power_calls,
                         power_action_mock.call_args_list)
-        ansible_mock.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY,
-                                             tags=['shutdown'])
+        ansible_mock.assert_called_once_with('shutdown.yaml',
+                                             mock.ANY, mock.ANY)

    @mock.patch.object(ansible_deploy, '_get_node_ip_heartbeat', autospec=True,
                       return_value='1.2.3.4')
39 releasenotes/notes/ansible-change-api-510961a1132a2ced.yaml Normal file
@@ -0,0 +1,39 @@
+---
+features:
+  - |
+    Ansible-deploy driver has considerably changed in terms of playbook
+    structure and accepted incoming variables.
+
+    + all info passed into Ansible playbooks from ironic is now available in
+      the playbooks as elements of ``ironic`` dictionary to better
+      differentiate those from other vars possibly created/set
+      inside playbooks.
+
+    + any field of node's instance_info having a form of ``image_<field>``
+      is now available in playbooks as ``ironic.image.<field>`` variable.
+
+    + ``parted`` tag in playbooks is removed; differentiation between
+      partition and whole-disk images is now done based on the
+      ``ironic.image.type`` variable value.
+
+    + ``shutdown`` tag is removed, and soft power-off is moved to a separate
+      playbook, defined by new optional ``driver_info`` field
+      ``ansible_shutdown_playbook`` (the default ``shutdown.yaml``
+      is provided in the code tree).
+
+    + default ``deploy`` role is split into smaller roles, each targeting
+      a separate stage of the deployment process to facilitate
+      customization and re-use:
+
+      - ``discover`` - e.g. set root device and image target
+      - ``prepare`` - if needed, prepare system, e.g. create partitions
+      - ``deploy`` - download/convert/write user image and configdrive
+      - ``configure`` - post-deployment steps, e.g. installing the bootloader
+
+upgrade:
+  - |
+    Ansible-deploy driver has considerably changed in terms of playbook
+    structure and accepted incoming variables.
+
+    **Any out-of-tree playbooks written for previous versions are incompatible
+    with this release and must be changed at least to accept new variables!**
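To make the migration for out-of-tree playbooks concrete, a hypothetical custom task would change roughly as follows. The task itself is an invented example (not taken from the in-tree roles); only the variable renaming, from top-level vars to the ``ironic`` namespace, reflects this change:

```yaml
# Before (pre-change variable layout):
- name: download user image
  get_url:
    url: "{{ image.url }}"
    dest: /tmp/user-image

# After (all ironic-supplied info lives under the 'ironic' dictionary):
- name: download user image
  get_url:
    url: "{{ ironic.image.url }}"
    dest: /tmp/user-image
```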