
.. WARNING: Add no lines of text between the label immediately following
.. and the title.

.. _add-certificate-to-uefi-secure-boot-database-a474c0b1acfc:

============================================================================
Add Certificate to UEFI Secure Boot Database on an Already Installed System
============================================================================

The playbook described in this section can be used to add a new certificate to
the database (signature database) section of |UEFI| Secure Boot on an already
installed system. This |UEFI| Secure Boot signature database stores the public
keys and certificates that are used to verify signed |EFI| binaries during
the boot process.

As previously described in :ref:`use-uefi-secure-boot`, the Secure Boot
keys/certificates are installed on the |prod| hosts prior to the installation
of |prod|, using board-vendor-specific tools in the firmware's setup menus.
However, in some cases you need to update the |UEFI| Secure Boot keys on a
running system, for example, when a certificate expires.

The playbook will add the certificate to all the online hosts
in a system. In a |prod-dc| environment, the playbook can optionally iterate
through all the online subclouds and add the certificate to their respective
online hosts.

.. rubric:: |context|

The playbook can be run on any deployment configuration supported by
|prod| (|AIO|, standard, and distributed cloud configurations), as long as it
is run from the active controller. Any host that does not use |UEFI| or has
Secure Boot disabled will be skipped by this procedure.

.. rubric:: |prereq|

You need to have the private key of a |KEK| installed on all hosts of the
system. The |KEK| is part of the |UEFI| Secure Boot key hierarchy; it is
used to authorize updates to the Secure Boot signature databases. This |KEK|
would have been created and used when initially setting up |UEFI| Secure Boot
on the server(s).

.. rubric:: |proc|

#. Create an inventory file.

   You must create an inventory file to specify the main playbook parameters,
   similar to the `update_platform_certificates` playbook. The inventory file
   includes sensitive data such as the |KEK|, so use ansible-vault to store
   its contents securely. Ansible vault will prompt for a password, which is
   used for subsequent ansible-vault and ansible-playbook commands.

   Example:

   .. code-block:: none

      ~(keystone_admin)]$ ansible-vault create secure-boot-cert-inventory.yml

   This will open an editor in which you need to manually add or paste your
   inventory parameters, as specified in the example below.

   Example parameters:

   .. code-block:: none

      all:
        vars:
          secure_boot_cert: <base64_cert>
          key_exchange_key: <base64_key>
        children:
          target_group:
            vars:
              # SSH password to connect to all subclouds
              ansible_ssh_user: sysadmin
              ansible_ssh_pass: <sysadmin-passwd>
              # Sudo password
              ansible_become_pass: <sysadmin-passwd>

   where:

   - ``secure_boot_cert`` is the certificate that will be installed in the
     |UEFI| Secure Boot database.

   - ``key_exchange_key`` is the private key used to authorize updates to the
     |UEFI| Secure Boot database.

   .. note::

      For both the Secure Boot certificate and the |KEK|, the value must be
      provided as the single-line base64 encoding of the respective |PEM|
      file (that is, the output of ``base64 -w0 <pem-file>``). If the file is
      in another format, such as DER, convert it first, for example with
      ``openssl x509 -inform DER -in mycert.crt | base64 -w0``.

   - ``ansible_ssh_user``: The username to use to connect to the target
     system using SSH.

   - ``ansible_ssh_pass``: The password to use to connect to the target
     system using SSH.

   - ``ansible_become_pass``: The ansible_ssh_user's sudo password.

   If a separate set of overrides is required for a group of hosts, add
   ``children`` groups under ``target_group``.

   The following example shows one set of SSH/sudo passwords for
   ``subcloud1`` and ``subcloud2``, and another set of SSH/sudo passwords for
   ``subcloud3``.

   .. code-block:: none

      all:
        vars:
          ...
        children:
          target_group:
            vars:
              ...
            children:
              different_password_group:
                vars:
                  ansible_ssh_user: sysadmin
                  ansible_ssh_pass: <sysadmin-passwd>
                  ansible_become_pass: <sysadmin-passwd>
                hosts:
                  subcloud1:
                  subcloud2:
              different_password_group2:
                vars:
                  ansible_ssh_user: sysadmin
                  ansible_ssh_pass: <different-sysadmin-passwd>
                  ansible_become_pass: <different-sysadmin-passwd>
                hosts:
                  subcloud3:

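   The single-line base64 values for the inventory can be produced as in the
   following sketch (the file name is a placeholder, not a fixed name):

   ```shell
   # Write a placeholder PEM file to stand in for the real certificate.
   printf '%s\n' '-----BEGIN CERTIFICATE-----' 'MIIBplaceholder' '-----END CERTIFICATE-----' > /tmp/example-cert.pem

   # Single-line base64 encoding, suitable for the secure_boot_cert field.
   secure_boot_cert=$(base64 -w0 /tmp/example-cert.pem)

   # The encoding is reversible: decoding yields the original PEM content.
   printf '%s' "$secure_boot_cert" | base64 -d
   ```

   The same encoding applies to the |KEK| private key for the
   ``key_exchange_key`` field.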
#. Run the playbook.

   Run the Ansible playbook to start the certificate update. You will be
   prompted for the vault password created in the previous step.

   Example:

   .. code-block:: none

      ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/update_secure_boot_certificate.yml -i secure-boot-cert-inventory.yml --extra-vars "target_list=localhost,subcloud1" --ask-vault-pass

   .. note::

      - Always run the playbook from the active controller.

      - In |prod-dc| systems, the playbook can be run from the system
        controller, and the ``target_list`` parameter can be used to target
        the desired subclouds.

      - If the ``target_list`` parameter is ``localhost``, not only will the
        current host be updated, but also all the online hosts listed in
        ``system host-list``.

   The target of the playbook can be set using the ``target_list`` option
   under ``--extra-vars``, with the following possible values:

   - ``localhost``: Targets the localhost (standalone system or system
     controller), along with all the hosts listed in ``system host-list``.

   - ``subcloud1,subcloud2``: A comma-separated list of subclouds the
     playbook will target. The playbook runs from each subcloud's active
     controller and updates all the online hosts in that subcloud.

   - ``all_online_subclouds``: Queries :command:`dcmanager subcloud list`
     and retrieves the list of online subclouds to target.

Playbook Running on Multiple Subclouds
--------------------------------------

This playbook takes about one minute per subcloud when it is run serially
from the system controller. To reduce the total run time, it is recommended
to use the ``-f (--forks)`` option, which allows Ansible to process multiple
subclouds in parallel.

A fork count of 40 is recommended. This runs up to 40 subclouds concurrently,
with each batch completing in about one minute, depending on system
performance and network conditions.
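
Combining the options above, a parallel run across all online subclouds might
look like the following sketch (the fork count is the recommended value; the
target list is illustrative):

```shell
# Hypothetical invocation from the system controller: target every online
# subcloud and process up to 40 of them in parallel.
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/update_secure_boot_certificate.yml \
    -i secure-boot-cert-inventory.yml \
    --extra-vars "target_list=all_online_subclouds" \
    --ask-vault-pass -f 40
```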