Add a Certificate to the UEFI Secure Boot Database on an Already Installed System

The playbook described in this section can be used to add a new certificate to the db (signature database) section of Secure Boot on an already installed system. The Secure Boot signature database stores the public keys and certificates that are used to verify signed binaries during the boot process.
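You can inspect the certificates currently enrolled in a host's signature database before and after running the playbook, for example with the mokutil utility. This is a minimal sketch, assuming mokutil is available on the host; it is not part of the playbook itself:

    # List the certificates currently enrolled in the Secure Boot signature database (db)
    $ mokutil --db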

As previously described in use-uefi-secure-boot, the Secure Boot keys/certificates are installed on the hosts prior to the installation of the platform software, using board vendor-specific tools in the firmware's setup menus. However, in some cases you need to update the Secure Boot keys on a running system, for example, when a certificate expires.

The playbook will add the certificate to all the online hosts in a system. In a Distributed Cloud environment, the playbook can optionally iterate through all the online subclouds and add the certificate to their respective online hosts.

The playbook can be run on any supported deployment configuration (All-in-One, standard, and Distributed Cloud) as long as it is run from the active controller. Any host that does not use UEFI or has Secure Boot disabled will be skipped by this procedure.
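To check whether a given host boots with UEFI Secure Boot enabled, and would therefore be updated rather than skipped, you can query its Secure Boot state. A sketch, assuming the mokutil utility is available on the host:

    # Reports "SecureBoot enabled" or "SecureBoot disabled" on UEFI systems
    $ mokutil --sb-state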

You need the private key of a Key Exchange Key (KEK) that is installed on all hosts of the system. The KEK is part of the Secure Boot key hierarchy and is used to authorize updates to the Secure Boot signature databases. This key would have been created and used when Secure Boot was initially set up on the server(s).

  1. Create an inventory file.

    You must create an inventory file to specify the main playbook parameters, similar to the update_platform_certificates playbook. Because the inventory file includes sensitive data such as passwords, use ansible-vault to store its contents securely. Ansible vault will prompt for a password, which is used for subsequent ansible-vault and ansible-playbook commands.

    Example:

    ~(keystone_admin)]$ ansible-vault create secure-boot-cert-inventory.yml

    This will open an editor in which you need to manually add or paste your inventory parameters, as specified in the example below.

    Example parameters:

    all:
      vars:
        secure_boot_cert: <base64_cert>
        key_exchange_key: <base64_key>
      children:
        target_group:
          vars:
            # SSH password to connect to all subclouds
            ansible_ssh_user: sysadmin
            ansible_ssh_pass: <sysadmin-passwd>
            # Sudo password
            ansible_become_pass: <sysadmin-passwd>

    Where,

    secure_boot_cert is the certificate that will be installed in the Secure Boot database.

    key_exchange_key is the private key used to authorize updates to the Secure Boot database.

    Note

    For both the Secure Boot certificate and the key exchange key, the values must be provided as a single-line base64 encoding of the respective PEM file (that is, the output of base64 -w0 <pem-file>). If the file is in another format, such as DER, first convert it to PEM, for example with openssl x509 -inform DER -in mycert.crt | base64 -w0.
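    The following sketch shows one way to produce these single-line values, assuming the certificate and the KEK private key are available locally as secure_boot_cert.pem and kek.key (hypothetical file names):

    # Single-line base64 value for secure_boot_cert
    base64 -w0 secure_boot_cert.pem

    # Single-line base64 value for key_exchange_key
    base64 -w0 kek.key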

    • ansible_ssh_user: The username to use to connect to the target system using ssh.
    • ansible_ssh_pass: The password to use to connect to the target system using ssh.
    • ansible_become_pass: The ansible_ssh_user's sudo password.

    If a separate set of overrides is required for a group of hosts, add children groups under target_group.

    The following example shows one set of ssh/sudo passwords for subcloud1 and subcloud2, and another set of ssh/sudo passwords for subcloud3.

    all:
      vars:
        ...
      children:
        target_group:
          vars:
            ...
          children:
            different_password_group:
              vars:
                ansible_ssh_user: sysadmin
                ansible_ssh_pass: <sysadmin-passwd>
                ansible_become_pass: <sysadmin-passwd>
              hosts:
                subcloud1:
                subcloud2:
            different_password_group2:
              vars:
                ansible_ssh_user: sysadmin
                ansible_ssh_pass: <different-sysadmin-passwd>
                ansible_become_pass: <different-sysadmin-passwd>
              hosts:
                subcloud3:
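    If you need to revise the encrypted inventory later, for example to add another group of subclouds, you can reopen it with ansible-vault (a usage sketch; you will be prompted for the vault password created above):

    ~(keystone_admin)]$ ansible-vault edit secure-boot-cert-inventory.yml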
  2. Run the playbook.

    Run the Ansible playbook to start the certificate update process. You will be prompted for the vault password created in the previous step.

    Example:

    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/update_secure_boot_certificate.yml -i secure-boot-cert-inventory.yml --extra-vars "target_list=localhost,subcloud1" --ask-vault-pass

    Note

    • Always run the playbook from the active controller.
    • In Distributed Cloud systems, the playbook can be run from the System Controller and the target_list parameter can be used to target the desired subclouds.
    • If the target_list parameter is localhost, not only the current host but also all the online hosts listed in system host-list will be updated.

    The target of the playbook can be set using the target_list option under --extra-vars with the following possible values:

    • localhost: Will target the localhost (standalone systems or the System Controller), along with all the hosts listed in system host-list.
    • subcloud1, subcloud2: A comma-separated list of hosts the playbook will target. The playbook will also run from each subcloud's active controller, then update all the online hosts in each subcloud.
    • all_online_subclouds: Will query dcmanager subcloud list and retrieve a list of online subclouds to target.
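    For example, to target every online subcloud from the System Controller, reusing the inventory file from the previous step:

    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/update_secure_boot_certificate.yml -i secure-boot-cert-inventory.yml --extra-vars "target_list=all_online_subclouds" --ask-vault-pass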

Running the Playbook on Multiple Subclouds

This playbook takes about 1 minute per subcloud when it is run serially from a System Controller. To optimize the run time, it is recommended to use the -f (--forks) option, which allows Ansible to process multiple subclouds in parallel.

A fork count of 40 is recommended. This runs up to 40 subclouds concurrently, with each batch completing in about one minute, depending on system performance and network conditions.
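For example, the invocation from the previous procedure can be combined with a fork count of 40 (a sketch reusing the same playbook path and inventory file):

    ~(keystone_admin)]$ ansible-playbook /usr/share/ansible/stx-ansible/playbooks/update_secure_boot_certificate.yml -i secure-boot-cert-inventory.yml --extra-vars "target_list=all_online_subclouds" --ask-vault-pass -f 40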