Install and configure a compute node

This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note: This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section, using the same networking service as your existing environment. For either networking service, follow the NTP configuration and OpenStack packages instructions. For OpenStack Networking (neutron), also follow the OpenStack Networking compute node instructions. For legacy networking (nova-network), also follow the legacy networking compute node instructions. Each additional compute node requires unique IP addresses.

To install and configure the Compute hypervisor components

Note: Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.

1. Install the packages for your distribution.

   On Ubuntu:

       # apt-get install nova-compute sysfsutils

   On Red Hat Enterprise Linux and CentOS:

       # yum install openstack-nova-compute sysfsutils

   On SUSE:

       # zypper install openstack-nova-compute genisoimage kvm libvirt

2. Edit the /etc/nova/nova.conf file and complete the following actions:

   a. In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

          [DEFAULT]
          ...
          rpc_backend = rabbit

          [oslo_messaging_rabbit]
          ...
          rabbit_host = controller
          rabbit_userid = openstack
          rabbit_password = RABBIT_PASS

      Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

   b. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

          [DEFAULT]
          ...
          auth_strategy = keystone

          [keystone_authtoken]
          ...
          auth_uri = http://controller:5000
          auth_url = http://controller:35357
          auth_plugin = password
          project_domain_id = default
          user_domain_id = default
          project_name = service
          username = nova
          password = NOVA_PASS

      Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

      Note: Comment out or remove any other options in the [keystone_authtoken] section.

   c. In the [DEFAULT] section, configure the my_ip option:

          [DEFAULT]
          ...
          my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

   d. In the [DEFAULT] section, enable and configure remote console access:

          [DEFAULT]
          ...
          vnc_enabled = True
          vncserver_listen = 0.0.0.0
          vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
          novncproxy_base_url = http://controller:6080/vnc_auto.html

      The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.

      Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

      Note: If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node. An optional connectivity check for these settings appears after this procedure.

   e. In the [glance] section, configure the location of the Image service:

          [glance]
          ...
          host = controller

   f. In the [oslo_concurrency] section, configure the lock path. The packaged default differs by distribution, so use the variant that matches your distribution's packaging:

          [oslo_concurrency]
          ...
          lock_path = /var/run/nova

      or:

          [oslo_concurrency]
          ...
          lock_path = /var/lib/nova/tmp

   g. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

          [DEFAULT]
          ...
          verbose = True

3. Ensure the nbd kernel module is loaded:

       # modprobe nbd

   Ensure the module is loaded on every boot by adding nbd to the /etc/modules-load.d/nbd.conf file.
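Before continuing, it can help to confirm that the compute node can actually reach the services configured above. The following commands are an optional sanity check and not part of the guide itself; they assume the controller hostname resolves from the compute node, that the nc (netcat) utility is installed, and that the standard ports are in use (5672 is RabbitMQ's default AMQP port; 35357 and 6080 appear in the configuration above):

    # Confirm that "controller" resolves from this node
    $ getent hosts controller
    # Probe the message queue, Identity admin endpoint, and noVNC proxy ports
    $ nc -zv controller 5672
    $ nc -zv controller 35357
    $ nc -zv controller 6080

If any probe fails, fix name resolution or firewall rules before starting the nova-compute service; otherwise the service cannot connect to the message queue.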
To install and configure the Compute hypervisor components (Debian)

On Debian, the packages handle much of this configuration for you through debconf prompts.

1. Install the packages:

       # apt-get install nova-compute

2. Respond to the prompts for database management, Identity service credentials, service endpoint registration, and message queue credentials.

To finalize installation

1. Determine whether your compute node supports hardware acceleration for virtual machines:

       $ egrep -c '(vmx|svm)' /proc/cpuinfo

   If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.

   If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM. Edit the [libvirt] section in the /etc/nova/nova-compute.conf file (Ubuntu and Debian) or the /etc/nova/nova.conf file (Red Hat Enterprise Linux, CentOS, and SUSE) as follows:

       [libvirt]
       ...
       virt_type = qemu

2. On Ubuntu and Debian, restart the Compute service:

       # service nova-compute restart

   On Red Hat Enterprise Linux, CentOS, and SUSE, start the Compute service including its dependencies and configure them to start automatically when the system boots:

       # systemctl enable libvirtd.service openstack-nova-compute.service
       # systemctl start libvirtd.service openstack-nova-compute.service

3. By default, the Ubuntu packages create an SQLite database. Because this configuration uses a SQL database server, you can remove the SQLite database file:

       # rm -f /var/lib/nova/nova.sqlite
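As an optional verification sketch rather than part of this section's steps, you can confirm from the controller node that the new compute node registered itself with the Compute service. This assumes the admin credentials file created earlier in this guide (named admin-openrc.sh in the example architecture) and the nova command-line client installed on the controller:

    # Load admin credentials, then list the Compute service agents
    $ source admin-openrc.sh
    $ nova service-list

The new compute node should appear in the output with the nova-compute binary and, once it has connected to the message queue, a state of "up".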