Install and configure a compute node
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or VMs. For simplicity, this configuration uses the QEMU hypervisor with the KVM extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.
Note
This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section using the same networking service as your existing environment. For either networking service, follow the NTP configuration and OpenStack packages instructions. For OpenStack Networking (neutron), also follow the OpenStack Networking compute node instructions. For legacy networking (nova-network), also follow the legacy networking compute node instructions. Each additional compute node requires unique IP addresses.
To install and configure the Compute hypervisor components
Note
Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain.
obs
Install the packages:
# zypper install openstack-nova-compute genisoimage kvm libvirt
rdo
Install the packages:
# yum install openstack-nova-compute sysfsutils
ubuntu
Install the packages:
# apt-get install nova-compute sysfsutils
Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] and [oslo_messaging_rabbit] sections, configure RabbitMQ message queue access:

[DEFAULT]
...
rpc_backend = rabbit

[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

In the [DEFAULT] section, enable and configure remote console access:

[DEFAULT]
...
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses, and the proxy component listens only on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

Note

If the web browser used to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.

In the [glance] section, configure the location of the Image service:

[glance]
...
host = controller
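Taken together, the edits above leave /etc/nova/nova.conf with a fragment along the following lines. This is a consolidated sketch for reference only; RABBIT_PASS, NOVA_PASS, and MANAGEMENT_INTERFACE_IP_ADDRESS are the placeholders described in the preceding steps, and any default options already present in your file should be retained.

```ini
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
novncproxy_base_url = http://controller:6080/vnc_auto.html

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = NOVA_PASS

[glance]
host = controller
```

The [oslo_concurrency] lock path and optional verbose logging from the steps that follow are distribution-specific and are not included here.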
obs
In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/run/nova
rdo or ubuntu
In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True
obs
Ensure the kernel module nbd is loaded:

# modprobe nbd

Ensure the module loads on every boot by adding nbd to the /etc/modules-load.d/nbd.conf file.
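You can confirm the module is present before continuing. A minimal check, assuming lsmod is available as on any standard Linux host (the nbd_state variable and the echo line are illustrative, not part of the guide):

```shell
# Check whether the nbd kernel module is currently loaded.
# grep -qw exits 0 only when a line in lsmod output starts with "nbd".
if lsmod 2>/dev/null | grep -qw '^nbd'; then
    nbd_state=loaded
else
    nbd_state=missing
fi
echo "nbd module: $nbd_state"
```

If the module is missing after a reboot, re-check the entry in /etc/modules-load.d/nbd.conf.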
To finalize installation
Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.

If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

obs or rdo
Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:

[libvirt]
...
virt_type = qemu
ubuntu
Edit the [libvirt] section in the /etc/nova/nova-compute.conf file as follows:

[libvirt]
...
virt_type = qemu
obs or rdo
Start the Compute service including its dependencies and configure them to start automatically when the system boots:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
ubuntu
Restart the Compute service:
# service nova-compute restart
By default, the Ubuntu packages create an SQLite database.
Because this configuration uses a SQL database server, you can remove the SQLite database file:
# rm -f /var/lib/nova/nova.sqlite