Install and configure a compute node
This section describes how to install and configure the Compute
service on a compute node. The service supports several hypervisors to
deploy instances or virtual machines (VMs). For simplicity, this
configuration uses the Quick EMUlator (QEMU) hypervisor with the
kernel-based VM (KVM) extension on compute nodes that support hardware
acceleration for virtual machines. On legacy hardware, this
configuration uses the generic QEMU hypervisor.
You can follow these instructions with minor modifications to
horizontally scale your environment with additional compute nodes.
Note
This section assumes that you are following the instructions in this
guide step-by-step to configure the first compute node. If you want to
configure additional compute nodes, prepare them in a similar fashion to
the first compute node in the example architectures section. Each additional
compute node requires a unique IP address.
Install and configure components
obs
Install the packages:
# zypper install openstack-nova-compute genisoimage qemu-kvm libvirt
rdo
Install the packages:
# yum install openstack-nova-compute
ubuntu or debian
Install the packages:
# apt install nova-compute
debian
Respond to prompts for debconf.
Edit the /etc/nova/nova.conf file and complete the following actions:
rdo or obs
In the [DEFAULT] section, enable only the compute and metadata APIs:
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
obs
In the [DEFAULT] section, set the compute_driver option:
[DEFAULT]
# ...
compute_driver = libvirt.LibvirtDriver
In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.
In the [api] and [keystone_authtoken] sections, configure Identity service access:
[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
Note
Comment out or remove any other options in the [keystone_authtoken] section.
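As an optional sanity check (not part of the upstream steps), you can confirm that the compute node can reach the Identity service endpoint configured above; any HTTP response describing the available API versions indicates basic connectivity:
$ curl http://controller:5000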
debian
In the [DEFAULT] section, check that the my_ip option is correctly set (this value is handled by the config and postinst scripts of the nova-common package using debconf):
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.
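To double-check the value, you can list the IPv4 addresses configured on the compute node and confirm that my_ip matches the address on the management interface (the interface name varies per deployment):
$ ip -4 addr show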
obs or rdo or ubuntu
In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.
In the [DEFAULT] section, enable support for the Networking service:
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note
By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
In the [vnc] section, enable and configure remote console access:
[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
Note
If the web browser used to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
In the [glance] section, configure the location of the Image service API:
[glance]
# ...
api_servers = http://controller:9292
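As with the Identity service, an optional connectivity check (an assumption, not an upstream step) is to request the Image service endpoint from the compute node; a response listing the available API versions confirms it is reachable:
$ curl http://controller:9292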
obs
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/run/nova
rdo or ubuntu
In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
ubuntu
Due to a packaging bug, remove the log_dir option from the [DEFAULT] section.
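One way to locate the option before removing it (a sketch, assuming the default configuration file path):
# grep -n log_dir /etc/nova/nova.conf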
obs or debian
Ensure the kernel module nbd is loaded:
# modprobe nbd
Ensure the module loads on every boot by adding nbd to the /etc/modules-load.d/nbd.conf file.
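A minimal sketch of both steps, assuming the file should contain just the module name, followed by a check that the module is loaded:
# echo nbd > /etc/modules-load.d/nbd.conf
# lsmod | grep nbd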
Finalize installation
Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
obs or rdo
Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:
[libvirt]
# ...
virt_type = qemu
ubuntu
Edit the [libvirt] section in the /etc/nova/nova-compute.conf file as follows:
[libvirt]
# ...
virt_type = qemu
debian
Replace the nova-compute-kvm package with nova-compute-qemu, which automatically changes the /etc/nova/nova-compute.conf file and installs the necessary dependencies:
# apt install nova-compute-qemu
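If the earlier egrep check reported hardware acceleration, you can optionally confirm that the KVM kernel modules are loaded (an extra check, not an upstream step); on capable hosts the output should include kvm_intel or kvm_amd:
$ lsmod | grep kvm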
obs or rdo
Start the Compute service including its dependencies and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
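To verify that both services are running (an optional check), query their status:
# systemctl status libvirtd.service openstack-nova-compute.service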
ubuntu or debian
Restart the Compute service:
# service nova-compute restart
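To verify that the service restarted cleanly (an optional check), query its status and review the log if needed:
# service nova-compute status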
Note
If the nova-compute service fails to start, check
/var/log/nova/nova-compute.log. The error message
AMQP server on controller:5672 is unreachable likely
indicates that the firewall on the controller node is preventing access
to port 5672. Configure the firewall to open port 5672 on the controller
node and restart the nova-compute service on the compute
node.
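If the controller node uses firewalld (an assumption; adapt to whatever firewall tooling your deployment uses), opening the AMQP port might look like the following, run on the controller node:
# firewall-cmd --permanent --add-port=5672/tcp
# firewall-cmd --reload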