diff --git a/doc/common/samples/nova.conf b/doc/common/samples/nova.conf
index 530ca24128..a5e4a341ee 100644
--- a/doc/common/samples/nova.conf
+++ b/doc/common/samples/nova.conf
@@ -14,7 +14,6 @@ compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# configured in cinder.conf
# COMPUTE
-libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
@@ -67,3 +66,7 @@ signing_dirname = /tmp/keystone-signing-nova
# DATABASE
[database]
connection=mysql://nova:yourpassword@192.168.206.130/nova
+
+# LIBVIRT
+[libvirt]
+virt_type=qemu
\ No newline at end of file
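The relocation above (flat `libvirt_type` in `[DEFAULT]` becoming `virt_type` in a `[libvirt]` group) can be sanity-checked with any stock INI parser. A minimal sketch using Python's `configparser` against an illustrative fragment, not the real `/etc/nova/nova.conf`:

```python
import configparser

# Illustrative fragment mirroring the change above: virt_type now lives
# in a [libvirt] group instead of a flat libvirt_type key in [DEFAULT].
SAMPLE = """\
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = qemu
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

# The old flat option is gone from [DEFAULT] ...
assert not parser.has_option("DEFAULT", "libvirt_type")
# ... and the namespaced replacement is readable.
print(parser.get("libvirt", "virt_type"))  # qemu
```

Only the section name and key changed; any INI-aware tool reads the new group the same way.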
diff --git a/doc/common/section_support-compute.xml b/doc/common/section_support-compute.xml
index 1c65f45103..3179af03f9 100644
--- a/doc/common/section_support-compute.xml
+++ b/doc/common/section_support-compute.xml
@@ -45,7 +45,7 @@
based on configuration settings. In
nova.conf, include the
logfile option to enable logging.
- Alternatively you can set use_syslog=1
+ Alternatively, you can set use_syslog = 1
so that the nova daemon logs to syslog.
@@ -217,9 +217,10 @@
Injection problems
If instances do not boot or boot slowly, investigate
file injection as a cause.
- To disable injection in libvirt, set
- to
- -2.
+ To disable injection in libvirt, set the following in
+ nova.conf:
+ [libvirt]
+inject_partition = -2
If you have not enabled the configuration drive and
you want to make user-specified files available from
diff --git a/doc/config-reference/compute/section_compute-configure-xen.xml b/doc/config-reference/compute/section_compute-configure-xen.xml
index 422d865110..9890b1b65f 100644
--- a/doc/config-reference/compute/section_compute-configure-xen.xml
+++ b/doc/config-reference/compute/section_compute-configure-xen.xml
@@ -12,10 +12,10 @@
XenAPI driver. To enable the XenAPI driver, add the following
configuration options to /etc/nova/nova.conf
and restart the nova-compute service:
- compute_driver=xenapi.XenAPIDriver
-xenapi_connection_url=http://your_xenapi_management_ip_address
-xenapi_connection_username=root
-xenapi_connection_password=your_password
+ compute_driver = xenapi.XenAPIDriver
+xenapi_connection_url = http://your_xenapi_management_ip_address
+xenapi_connection_username = root
+xenapi_connection_password = your_password
These connection details are used by the OpenStack Compute
service to contact your hypervisor and are the same details
you use to connect XenCenter, the XenServer management
@@ -27,13 +27,13 @@ internal network IP address (169.254.0.1) to contact XenAPI, this does not
allow live migration between hosts, and other functionalities like host aggregates
do not work.
-It is possible to manage Xen using libvirt, though this is not
- well-tested or supported.
- To experiment using Xen through libvirt add the following
- configuration options
- /etc/nova/nova.conf:
- compute_driver=libvirt.LibvirtDriver
-libvirt_type=xen
+It is possible to manage Xen using libvirt, though this is not well-tested or supported. To
+ experiment using Xen through libvirt add the following configuration options
+ /etc/nova/nova.conf:
+ compute_driver = libvirt.LibvirtDriver
+
+[libvirt]
+virt_type = xen
Agent
@@ -42,12 +42,11 @@ Generally a large timeout is required for Windows instances, but you may want to
Firewall
-
-If using nova-network, IPTables is supported:
-firewall_driver=nova.virt.firewall.IptablesFirewallDriver
- Alternately, doing the isolation in Dom0:
-firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
-
+If using nova-network, IPTables is supported:
+ firewall_driver = nova.virt.firewall.IptablesFirewallDriver
+ Alternatively, to do the isolation in Dom0:
+ firewall_driver = nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
+ VNC proxy address
@@ -57,16 +56,16 @@ and XenServer is on the address: 169.254.0.1, you can use the following:
Storage
-
-You can specify which Storage Repository to use with nova by looking at the
-following flag. The default is to use the local-storage setup by the default installer:
-sr_matching_filter="other-config:i18n-key=local-storage"
-Another good alternative is to use the "default" storage (for example
- if you have attached NFS or any other shared storage):
-sr_matching_filter="default-sr:true"
-To use a XenServer pool, you must create the pool
-by using the Host Aggregates feature.
-
+You can specify which Storage Repository to use with nova by setting the following flag.
+ The default is to use the local-storage SR set up by the default installer:
+ sr_matching_filter = "other-config:i18n-key=local-storage"
+ Another good alternative is to use the "default" storage (for example if you
+ have attached NFS or any other shared storage):
+ sr_matching_filter = "default-sr:true"
+
+ To use a XenServer pool, you must create the pool by using the
+ Host Aggregates feature.
+
+ Xen configuration reference
To customize the Xen driver, use the configuration option settings
diff --git a/doc/config-reference/compute/section_compute-hypervisors.xml b/doc/config-reference/compute/section_compute-hypervisors.xml
index e1bc45ba10..93a69ec2f0 100644
--- a/doc/config-reference/compute/section_compute-hypervisors.xml
+++ b/doc/config-reference/compute/section_compute-hypervisors.xml
@@ -80,12 +80,10 @@
>nova-compute service is installed and
running is the machine that runs all the virtual machines,
referred to as the compute node in this guide.
- By default, the selected hypervisor is KVM. To change to
- another hypervisor, change the
- libvirt_type option in
- nova.conf and restart the
- nova-compute
- service.
+ By default, the selected hypervisor is KVM. To change to another hypervisor, change
+ the virt_type option in the [libvirt] section of
+ nova.conf and restart the nova-compute service.
Here are the general nova.conf
options that are used to configure the compute node's
hypervisor: .
diff --git a/doc/config-reference/compute/section_hypervisor_kvm.xml b/doc/config-reference/compute/section_hypervisor_kvm.xml
index 22bd8d48a4..28ba1dce06 100644
--- a/doc/config-reference/compute/section_hypervisor_kvm.xml
+++ b/doc/config-reference/compute/section_hypervisor_kvm.xml
@@ -14,8 +14,10 @@
To enable KVM explicitly, add the following configuration options to the
/etc/nova/nova.conf file:
- compute_driver=libvirt.LibvirtDriver
-libvirt_type=kvm
+ compute_driver = libvirt.LibvirtDriver
+
+[libvirt]
+virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
@@ -93,17 +95,18 @@ libvirt_type=kvm
CPU model names. These models are defined in the
/usr/share/libvirt/cpu_map.xml file. Check this file to
determine which models are supported by your local installation.
- Two Compute configuration options define which type of CPU model is exposed to the
- hypervisor when using KVM: libvirt_cpu_mode and
- libvirt_cpu_model.
- The libvirt_cpu_mode option can take one of the following values:
+ Two Compute configuration options in the [libvirt] group of
+ nova.conf define which type of CPU model is exposed to the
+ hypervisor when using KVM: cpu_mode and
+ cpu_model.
+ The cpu_mode option can take one of the following values:
none, host-passthrough,
host-model, and custom.
Host model (default for KVM & QEMU)
If your nova.conf file contains
- libvirt_cpu_mode=host-model, libvirt identifies the CPU model
- in /usr/share/libvirt/cpu_map.xml file that most closely
+ cpu_mode=host-model, libvirt identifies the CPU model in
+ the /usr/share/libvirt/cpu_map.xml file that most closely
matches the host, and requests additional CPU flags to complete the match. This
configuration provides the maximum functionality and performance and maintains good
reliability and compatibility if the guest is migrated to another host with slightly
@@ -112,29 +115,30 @@ libvirt_type=kvm
Host pass throughIf your nova.conf file contains
- libvirt_cpu_mode=host-passthrough, libvirt tells KVM to pass
- through the host CPU with no modifications. The difference to host-model, instead of
- just matching feature flags, every last detail of the host CPU is matched. This
- gives absolutely best performance, and can be important to some apps which check low
- level CPU details, but it comes at a cost with respect to migration: the guest can
- only be migrated to an exactly matching host CPU.
+ cpu_mode=host-passthrough, libvirt tells KVM to pass through
+ the host CPU with no modifications. Unlike host-model, which matches only feature
+ flags, every last detail of the host CPU is matched. This gives the best possible
+ performance, and can be important for applications that check low-level CPU
+ details, but it comes at a cost with respect to migration: the guest can only be
+ migrated to an exactly matching host CPU.
Custom
If your nova.conf file contains
- libvirt_cpu_mode=custom, you can explicitly specify one of
- the supported named model using the libvirt_cpu_model configuration option. For
- example, to configure the KVM guests to expose Nehalem CPUs, your
- nova.conf file should contain:
- libvirt_cpu_mode=custom
-libvirt_cpu_model=Nehalem
+ cpu_mode=custom, you can explicitly specify one of the
+ supported named models using the cpu_model configuration option. For example, to
+ configure the KVM guests to expose Nehalem CPUs, your nova.conf
+ file should contain:
+ [libvirt]
+cpu_mode = custom
+cpu_model = Nehalem
None (default for all libvirt-driven hypervisors other than KVM &
QEMU)
If your nova.conf file contains
- libvirt_cpu_mode=none, libvirt does not specify a CPU model.
- Instead, the hypervisor chooses the default model.
+ cpu_mode=none, libvirt does not specify a CPU model. Instead,
+ the hypervisor chooses the default model.
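The renamed `cpu_mode` option accepts only the four values listed above, and `cpu_model` is meaningful only with `cpu_mode=custom`. A hedged illustration of those constraints (this is not nova's actual validation code; the host-model fallback simply mirrors the KVM/QEMU default described above):

```python
import configparser

# Documented values for cpu_mode in the [libvirt] group.
VALID_CPU_MODES = {"none", "host-passthrough", "host-model", "custom"}

def check_cpu_mode(conf_text):
    """Return (cpu_mode, cpu_model) from a nova.conf fragment,
    enforcing the documented constraints. Illustrative only."""
    cp = configparser.ConfigParser()
    cp.read_string(conf_text)
    mode = cp.get("libvirt", "cpu_mode", fallback="host-model")
    if mode not in VALID_CPU_MODES:
        raise ValueError("unknown cpu_mode: %s" % mode)
    model = cp.get("libvirt", "cpu_model", fallback=None)
    # cpu_model only makes sense together with cpu_mode=custom.
    if model and mode != "custom":
        raise ValueError("cpu_model requires cpu_mode=custom")
    return mode, model

print(check_cpu_mode("[libvirt]\ncpu_mode = custom\ncpu_model = Nehalem\n"))
# ('custom', 'Nehalem')
```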
diff --git a/doc/config-reference/compute/section_hypervisor_lxc.xml b/doc/config-reference/compute/section_hypervisor_lxc.xml
index 24b019974a..d85603eb1f 100644
--- a/doc/config-reference/compute/section_hypervisor_lxc.xml
+++ b/doc/config-reference/compute/section_hypervisor_lxc.xml
@@ -21,8 +21,10 @@ xml:id="lxc">
To enable LXC, ensure the following options are set in
/etc/nova/nova.conf on all hosts running the nova-compute
- service.compute_driver=libvirt.LibvirtDriver
-libvirt_type=lxc
+ service.
+compute_driver = libvirt.LibvirtDriver
+
+[libvirt]
+virt_type = lxc
On Ubuntu 12.04, enable LXC support in OpenStack by installing the
nova-compute-lxc package.
diff --git a/doc/config-reference/compute/section_hypervisor_qemu.xml b/doc/config-reference/compute/section_hypervisor_qemu.xml
index 8d9efc7cb5..34a6005a2b 100644
--- a/doc/config-reference/compute/section_hypervisor_qemu.xml
+++ b/doc/config-reference/compute/section_hypervisor_qemu.xml
@@ -22,10 +22,11 @@
virtualization for guests.
-
- To enable QEMU, add these settings to
- nova.conf:compute_driver=libvirt.LibvirtDriver
-libvirt_type=qemu
+ To enable QEMU, add these settings to
+ nova.conf:
+compute_driver = libvirt.LibvirtDriver
+
+[libvirt]
+virt_type = qemu
For some operations you may also have to install the guestmount utility:
On Ubuntu:
@@ -62,7 +63,7 @@ libvirt_type=qemu
with no overcommit.
The second command, setsebool, may take a while.
- #openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
+ #openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
#setsebool -P virt_use_execmem on
#ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
#service libvirtd restart
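The `openstack-config --set` call above rewrites one key in one section of an INI file. Roughly the same edit, sketched in Python against a scratch file (illustrative only; this is not the real tool and deliberately does not touch `/etc/nova/nova.conf`):

```python
import configparser
import os
import tempfile

def ini_set(path, section, key, value):
    """Emulate `openstack-config --set PATH SECTION KEY VALUE`:
    create the section if needed, set the key, write the file back."""
    cp = configparser.ConfigParser()
    cp.read(path)
    if section != "DEFAULT" and not cp.has_section(section):
        cp.add_section(section)
    cp.set(section, key, value)
    with open(path, "w") as f:
        cp.write(f)

# Scratch copy standing in for /etc/nova/nova.conf.
path = os.path.join(tempfile.mkdtemp(), "nova.conf")
with open(path, "w") as f:
    f.write("[DEFAULT]\ncompute_driver = libvirt.LibvirtDriver\n")

ini_set(path, "libvirt", "virt_type", "qemu")

check = configparser.ConfigParser()
check.read(path)
print(check.get("libvirt", "virt_type"))  # qemu
```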