Install and configure a compute node for Ubuntu

This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note

This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.

Install and configure components

  1. Install the packages:

    # apt install nova-compute
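
    A quick, optional check (not part of the upstream procedure) that the package installed and registered its service with the init system:

    # service nova-compute status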
  2. Edit the /etc/nova/nova.conf file and complete the following actions:

    • In the [DEFAULT] section, configure RabbitMQ message queue access:

      [DEFAULT]
      # ...
      transport_url = rabbit://openstack:RABBIT_PASS@controller

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
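
      Before moving on, you can optionally sanity-check that the compute node can reach RabbitMQ on the controller with a simple port probe, assuming netcat is installed:

      $ nc -zv controller 5672

      A connection failure here usually points to the firewall issue described in the note under Finalize installation below.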

    • In the [api] and [keystone_authtoken] sections, configure Identity service access:

      [api]
      # ...
      auth_strategy = keystone
      
      [keystone_authtoken]
      # ...
      www_authenticate_uri = http://controller:5000/
      auth_url = http://controller:5000/
      memcached_servers = controller:11211
      auth_type = password
      project_domain_name = Default
      user_domain_name = Default
      project_name = service
      username = nova
      password = NOVA_PASS

      Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

      Note

      Comment out or remove any other options in the [keystone_authtoken] section.

    • In the [service_user] section, configure service user tokens:

      [service_user]
      send_service_user_token = true
      auth_url = http://controller:5000/
      auth_strategy = keystone
      auth_type = password
      project_domain_name = Default
      project_name = service
      user_domain_name = Default
      username = nova
      password = NOVA_PASS

      Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
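
      The [keystone_authtoken] and [service_user] sections use the same nova service credentials. If the python3-openstackclient package is installed, you can verify those credentials against the Identity service with an optional sanity check; enter NOVA_PASS when prompted:

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name service --os-username nova token issue

      If the command prints a token, the account and password are valid.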

    • In the [DEFAULT] section, configure the my_ip option:

      [DEFAULT]
      # ...
      my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.
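
      If you are not sure which address belongs to the management interface, list the IPv4 addresses configured on the node (interface names vary by system):

      $ ip -4 addr show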

    • Configure the [neutron] section of /etc/nova/nova.conf. Refer to the compute node section of the Networking service install guide for more details.

    • In the [vnc] section, enable and configure remote console access:

      [vnc]
      # ...
      enabled = true
      server_listen = 0.0.0.0
      server_proxyclient_address = $my_ip
      novncproxy_base_url = http://controller:6080/vnc_auto.html

      The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.

      Note

      If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
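
      Once an instance is running on this compute node, you can retrieve the console URL that this configuration produces by running the openstack client from a host with suitable credentials (INSTANCE_NAME is a placeholder for your instance name or ID):

      $ openstack console url show INSTANCE_NAME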

    • In the [glance] section, configure the location of the Image service API:

      [glance]
      # ...
      api_servers = http://controller:9292
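
      As an optional check, you can confirm the Image service endpoint is reachable from the compute node by fetching its version document, assuming curl is installed:

      $ curl http://controller:9292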
    • In the [oslo_concurrency] section, configure the lock path:

      [oslo_concurrency]
      # ...
      lock_path = /var/lib/nova/tmp
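
      The lock directory is normally created automatically. If the service later logs permission errors for this path, you can create it by hand, owned by the nova user (an optional repair step, not required on a standard install):

      # install -d -o nova -g nova /var/lib/nova/tmp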
    • In the [placement] section, configure the Placement API:

      [placement]
      # ...
      region_name = RegionOne
      project_domain_name = Default
      project_name = service
      auth_type = password
      user_domain_name = Default
      auth_url = http://controller:5000/v3
      username = placement
      password = PLACEMENT_PASS

      Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service. Comment out any other options in the [placement] section.
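
      As with the Image service, you can optionally confirm the Placement API is reachable from this node by fetching its version document on the default Placement port, 8778 (assuming curl is installed):

      $ curl http://controller:8778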

Finalize installation

  1. Determine whether your compute node supports hardware acceleration for virtual machines:

    $ egrep -c '(vmx|svm)' /proc/cpuinfo

    If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.

    If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

    • Edit the [libvirt] section in the /etc/nova/nova-compute.conf file as follows:

      [libvirt]
      # ...
      virt_type = qemu
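
    As a cross-check of the cpuinfo test above, Ubuntu ships a kvm-ok helper in the cpu-checker package; it also detects cases where virtualization support exists but is disabled in the firmware:

    # apt install cpu-checker
    # kvm-ok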
  2. Restart the Compute service:

    # service nova-compute restart
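
    After the restart, you can watch the service start up and connect to the message queue by following its log (press Ctrl+C to stop):

    # tail -f /var/log/nova/nova-compute.log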

Note

If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart the nova-compute service on the compute node.
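
For example, if the controller node runs ufw, opening the AMQP port would look like this (run on the controller; adapt to whatever firewall you use):

  # ufw allow 5672/tcp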

Add the compute node to the cell database

Important

Run the following commands on the controller node.

  1. Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:

    $ . admin-openrc
    
    $ openstack compute service list --service nova-compute
    +----+-------+--------------+------+-------+---------+----------------------------+
    | ID | Host  | Binary       | Zone | State | Status  | Updated At                 |
    +----+-------+--------------+------+-------+---------+----------------------------+
    | 1  | node1 | nova-compute | nova | up    | enabled | 2017-04-14T15:30:44.000000 |
    +----+-------+--------------+------+-------+---------+----------------------------+
  2. Discover compute hosts:

    # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    
    Found 2 cell mappings.
    Skipping cell0 since it does not contain hosts.
    Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
    Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
    Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
    Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

    Note

    When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:

    [scheduler]
    discover_hosts_in_cells_interval = 300
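
    Either way, you can confirm that a host mapping now exists by listing the hosts known to the cell database (run on the controller):

    # su -s /bin/sh -c "nova-manage cell_v2 list_hosts" nova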