Fixing deprecated libvirt options for nova.conf

[DEFAULT] => [libvirt]
libvirt_type => virt_type
libvirt_NAME => NAME
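For example (an illustrative snippet, not taken from any particular nova.conf),
options move out of [DEFAULT] into the new [libvirt] group and drop the libvirt_ prefix:

    # before
    [DEFAULT]
    libvirt_type=qemu
    libvirt_inject_partition=-2

    # after
    [libvirt]
    virt_type=qemu
    inject_partition=-2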

Closes-bug: #1253812

Change-Id: I154ff62954bda5562f7e9c9ca1e56feecf18faf1
Summer Long 2014-05-03 21:48:36 +10:00
parent 78c3142d8a
commit fa6a754723
7 changed files with 75 additions and 67 deletions


@@ -14,7 +14,6 @@ compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
# configured in cinder.conf
# COMPUTE
libvirt_type=qemu
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
@@ -67,3 +66,7 @@ signing_dirname = /tmp/keystone-signing-nova
# DATABASE
[database]
connection=mysql://nova:yourpassword@192.168.206.130/nova
# LIBVIRT
[libvirt]
virt_type=qemu


@@ -45,7 +45,7 @@
based on configuration settings. In
<filename>nova.conf</filename>, include the
<literal>logfile</literal> option to enable logging.
Alternatively you can set <literal>use_syslog=1</literal>
Alternatively you can set <literal>use_syslog = 1</literal>
so that the nova daemon logs to syslog.</para>
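<para>As a sketch only (the log file path here is an assumption, not a value
taken from this guide), the two approaches look like this in
<filename>nova.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
# log to a file
logfile = /var/log/nova/nova-compute.log
# or send the logs to syslog instead
# use_syslog = 1</programlisting>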
</section>
<section xml:id="section_compute-GuruMed-reports">
@@ -217,9 +217,10 @@
<title>Injection problems</title>
<para>If instances do not boot or boot slowly, investigate
file injection as a cause.</para>
<para>To disable injection in libvirt, set
<option>libvirt_inject_partition</option> to
<literal>-2</literal>.</para>
<para>To disable injection in libvirt, set the following in
<filename>nova.conf</filename>:</para>
<programlisting language="ini">[libvirt]
inject_partition = -2</programlisting>
<note>
<para>If you have not enabled the configuration drive and
you want to make user-specified files available from


@@ -12,10 +12,10 @@
XenAPI driver. To enable the XenAPI driver, add the following
configuration options to <filename>/etc/nova/nova.conf</filename>
and restart the <systemitem class="service">nova-compute</systemitem> service:</para>
<programlisting language="ini">compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=http://your_xenapi_management_ip_address
xenapi_connection_username=root
xenapi_connection_password=your_password</programlisting>
<programlisting language="ini">compute_driver = xenapi.XenAPIDriver
xenapi_connection_url = http://your_xenapi_management_ip_address
xenapi_connection_username = root
xenapi_connection_password = your_password</programlisting>
<para>These connection details are used by the OpenStack Compute
service to contact your hypervisor and are the same details
you use to connect XenCenter, the XenServer management
@@ -27,13 +27,13 @@ internal network IP Address (169.254.0.1) to contact XenAPI, this does not
allow live migration between hosts, and other functionalities like host aggregates
do not work.
</para></note>
<para>It is possible to manage Xen using libvirt, though this is not
well-tested or supported.
To experiment using Xen through libvirt add the following
configuration options
<filename>/etc/nova/nova.conf</filename>:
<programlisting language="ini">compute_driver=libvirt.LibvirtDriver
libvirt_type=xen</programlisting></para>
<para>It is possible to manage Xen using libvirt, though this is not well-tested or supported. To
experiment with Xen through libvirt, add the following configuration options to
<filename>/etc/nova/nova.conf</filename>:
<programlisting language="ini">compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = xen</programlisting></para>
<section xml:id="xen-agent">
<title>Agent</title>
<para>
@@ -42,12 +42,11 @@ Generally a large timeout is required for Windows instances, but you may want to
</para></section>
<section xml:id="xen-firewall">
<title>Firewall</title>
<para>
If using nova-network, IPTables is supported:
<programlisting language="ini">firewall_driver=nova.virt.firewall.IptablesFirewallDriver</programlisting>
Alternately, doing the isolation in Dom0:
<programlisting language="ini">firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver</programlisting>
</para></section>
<para>If using nova-network, IPTables is supported:
<programlisting language="ini">firewall_driver = nova.virt.firewall.IptablesFirewallDriver</programlisting>
Alternatively, to perform the isolation in Dom0:
<programlisting language="ini">firewall_driver = nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver</programlisting>
</para></section>
<section xml:id="xen-vnc">
<title>VNC proxy address</title>
<para>
@@ -57,16 +56,16 @@ and XenServer is on the address: 169.254.0.1, you can use the following:
</para></section>
<section xml:id="xen-storage">
<title>Storage</title>
<para>
You can specify which Storage Repository to use with nova by looking at the
following flag. The default is to use the local-storage setup by the default installer:
<programlisting language="ini">sr_matching_filter="other-config:i18n-key=local-storage"</programlisting>
Another good alternative is to use the "default" storage (for example
if you have attached NFS or any other shared storage):
<programlisting language="ini">sr_matching_filter="default-sr:true"</programlisting>
<note><para>To use a XenServer pool, you must create the pool
by using the Host Aggregates feature.</para></note>
</para></section>
<para>You can specify which Storage Repository to use with nova by setting the following flag.
The default is to use the local-storage repository set up by the default installer:
<programlisting language="ini">sr_matching_filter = "other-config:i18n-key=local-storage"</programlisting>
Another good alternative is to use the "default" storage (for example if you
have attached NFS or any other shared storage): <programlisting language="ini">sr_matching_filter = "default-sr:true"</programlisting>
<note>
<para>To use a XenServer pool, you must create the pool by using the
Host Aggregates feature.</para>
</note>
</para></section>
<section xml:id="xen-config-reference-table">
<title>Xen configuration reference</title>
<para>To customize the Xen driver, use the configuration option settings


@@ -80,12 +80,10 @@
>nova-compute</systemitem> service is installed and
running is the machine that runs all the virtual machines,
referred to as the compute node in this guide.</para>
<para>By default, the selected hypervisor is KVM. To change to
another hypervisor, change the
<literal>libvirt_type</literal> option in
<filename>nova.conf</filename> and restart the
<systemitem class="service">nova-compute</systemitem>
service.</para>
<para>By default, the selected hypervisor is KVM. To change to another hypervisor, change
the <literal>virt_type</literal> option in the <literal>[libvirt]</literal> section of
<filename>nova.conf</filename> and restart the <systemitem class="service"
>nova-compute</systemitem> service.</para>
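<para>For example, to switch such a node from KVM to QEMU you would end up with
something like the following (a sketch; <literal>qemu</literal> is only one of the
supported values):</para>
<programlisting language="ini">[libvirt]
virt_type = qemu</programlisting>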
<para>Here are the general <filename>nova.conf</filename>
options that are used to configure the compute node's
hypervisor: <xref linkend="config_table_nova_hypervisor"/>.</para>


@@ -14,8 +14,10 @@
</note>
<para>To enable KVM explicitly, add the following configuration options to the
<filename>/etc/nova/nova.conf</filename> file:</para>
<programlisting language="ini">compute_driver=libvirt.LibvirtDriver
libvirt_type=kvm</programlisting>
<programlisting language="ini">compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = kvm</programlisting>
<para>The KVM hypervisor supports the following virtual machine image formats:</para>
<itemizedlist>
<listitem>
@@ -93,17 +95,18 @@ libvirt_type=kvm</programlisting>
CPU model names. These models are defined in the
<filename>/usr/share/libvirt/cpu_map.xml</filename> file. Check this file to
determine which models are supported by your local installation.</para>
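<para>One quick way to list the named models defined there (assuming a standard
libvirt installation path) is:</para>
<screen><prompt>$</prompt> <userinput>grep "model name" /usr/share/libvirt/cpu_map.xml</userinput></screen>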
<para>Two Compute configuration options define which type of CPU model is exposed to the
hypervisor when using KVM: <literal>libvirt_cpu_mode</literal> and
<literal>libvirt_cpu_model</literal>.</para>
<para>The <literal>libvirt_cpu_mode</literal> option can take one of the following values:
<para>Two Compute configuration options in the <literal>[libvirt]</literal> group of
<filename>nova.conf</filename> define which type of CPU model is exposed to the
hypervisor when using KVM: <literal>cpu_mode</literal> and
<literal>cpu_model</literal>.</para>
<para>The <literal>cpu_mode</literal> option can take one of the following values:
<literal>none</literal>, <literal>host-passthrough</literal>,
<literal>host-model</literal>, and <literal>custom</literal>.</para>
<simplesect>
<title>Host model (default for KVM &amp; QEMU)</title>
<para>If your <filename>nova.conf</filename> file contains
<literal>libvirt_cpu_mode=host-model</literal>, libvirt identifies the CPU model
in <filename>/usr/share/libvirt/cpu_map.xml</filename> file that most closely
<literal>cpu_mode=host-model</literal>, libvirt identifies the CPU model in the
<filename>/usr/share/libvirt/cpu_map.xml</filename> file that most closely
matches the host, and requests additional CPU flags to complete the match. This
configuration provides the maximum functionality and performance and maintains good
reliability and compatibility if the guest is migrated to another host with slightly
@@ -112,29 +115,30 @@ libvirt_type=kvm</programlisting>
<simplesect>
<title>Host pass through</title>
<para>If your <filename>nova.conf</filename> file contains
<literal>libvirt_cpu_mode=host-passthrough</literal>, libvirt tells KVM to pass
through the host CPU with no modifications. The difference to host-model, instead of
just matching feature flags, every last detail of the host CPU is matched. This
gives absolutely best performance, and can be important to some apps which check low
level CPU details, but it comes at a cost with respect to migration: the guest can
only be migrated to an exactly matching host CPU.</para>
<literal>cpu_mode=host-passthrough</literal>, libvirt tells KVM to pass through
the host CPU with no modifications. The difference to host-model is that instead of
just matching feature flags, every last detail of the host CPU is matched. This gives
the best possible performance, and can be important for some applications that check
low-level CPU details, but it comes at a cost with respect to migration: the guest can
only be migrated to an exactly matching host CPU.</para>
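<para>As with the other modes, this is set in the <literal>[libvirt]</literal>
group of <filename>nova.conf</filename> (illustrative snippet):</para>
<programlisting language="ini">[libvirt]
cpu_mode = host-passthrough</programlisting>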
</simplesect>
<simplesect>
<title>Custom</title>
<para>If your <filename>nova.conf</filename> file contains
<literal>libvirt_cpu_mode=custom</literal>, you can explicitly specify one of
the supported named model using the libvirt_cpu_model configuration option. For
example, to configure the KVM guests to expose Nehalem CPUs, your
<filename>nova.conf</filename> file should contain:</para>
<programlisting language="ini">libvirt_cpu_mode=custom
libvirt_cpu_model=Nehalem</programlisting>
<literal>cpu_mode=custom</literal>, you can explicitly specify one of the
supported named models using the <literal>cpu_model</literal> configuration option.
For example, to configure the KVM guests to expose Nehalem CPUs, your
<filename>nova.conf</filename> file should contain:</para>
<programlisting language="ini">[libvirt]
cpu_mode = custom
cpu_model = Nehalem</programlisting>
</simplesect>
<simplesect>
<title>None (default for all libvirt-driven hypervisors other than KVM &amp;
QEMU)</title>
<para>If your <filename>nova.conf</filename> file contains
<literal>libvirt_cpu_mode=none</literal>, libvirt does not specify a CPU model.
Instead, the hypervisor chooses the default model.</para>
<literal>cpu_mode=none</literal>, libvirt does not specify a CPU model. Instead,
the hypervisor chooses the default model.</para>
</simplesect>
</section>
<section xml:id="kvm-guest-agent-support">


@@ -21,8 +21,10 @@ xml:id="lxc">
<para>To enable LXC, ensure the following options are set in
<filename>/etc/nova/nova.conf</filename> on all hosts running the <systemitem class="service"
>nova-compute</systemitem>
service.<programlisting language="ini">compute_driver=libvirt.LibvirtDriver
libvirt_type=lxc</programlisting></para>
service.<programlisting language="ini">compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = lxc</programlisting></para>
<para>On Ubuntu 12.04, enable LXC support in OpenStack by installing the
<literal>nova-compute-lxc</literal> package.</para>
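<para>For example (assuming the package name above is available in your
configured repositories):</para>
<screen><prompt>#</prompt> <userinput>apt-get install nova-compute-lxc</userinput></screen>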
</section>


@@ -22,10 +22,11 @@
virtualization for guests.</para>
</listitem>
</itemizedlist></para>
<para>
To enable QEMU, add these settings to
<filename>nova.conf</filename>:<programlisting language="ini">compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu</programlisting></para>
<para>To enable QEMU, add these settings to
<filename>nova.conf</filename>:<programlisting language="ini">compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = qemu</programlisting></para>
<para>
For some operations you may also have to install the <command>guestmount</command> utility:</para>
<para>On Ubuntu:
@@ -62,7 +63,7 @@ libvirt_type=qemu</programlisting></para>
with no overcommit.</para>
<note><para>The second command, <command>setsebool</command>, may take a while.
</para></note>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu</userinput>
<screen><prompt>#</prompt> <userinput>openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu</userinput>
<prompt>#</prompt> <userinput>setsebool -P virt_use_execmem on</userinput>
<prompt>#</prompt> <userinput>ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64</userinput>
<prompt>#</prompt> <userinput>service libvirtd restart</userinput></screen>