connection_type => compute_driver

Convert examples that use connection_type to compute_driver with the
appropriate driver strings. The change itself probably needs slightly
more explanation in the docs. Additional tables are left unmodified by
this patch because it was unclear whether they are autogenerated.

The patchset also updates the hypervisor config table and the common
config table (pulled in via xi:include) with the new compute_driver
settings.
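
For example, a KVM compute node converts like this (the other
hypervisors follow the same pattern with their own driver strings):

    old: connection_type=libvirt
    new: compute_driver=libvirt.LibvirtDriver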

Change-Id: I5cdfb1a59c9029760de4796127bfbe16f4306d4c
Sean Dague 2012-07-10 19:36:21 -04:00 committed by annegentle
parent a9b78d704b
commit e421478fad
9 changed files with 60 additions and 150 deletions

View File

@@ -9,7 +9,7 @@
<para>The recommended way to use Xen with OpenStack is through the XenAPI
driver. To enable the XenAPI driver, add the following configuration options to
<filename>/etc/nova/nova.conf</filename>:
<programlisting>connection_type=xenapi
<programlisting>compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=http://your_xenapi_management_ip_address
xenapi_connection_username=root
xenapi_connection_password=your_password</programlisting>
@@ -39,7 +39,7 @@ Linux or Oracle Linux. Unfortunately, this is not well tested or supported
as of the Essex release.
To experiment using Xen through libvirt, add the following configuration options to
<filename>/etc/nova/nova.conf</filename>:
<programlisting>connection_type=libvirt
<programlisting>compute_driver=libvirt.LibvirtDriver
libvirt_type=xen</programlisting></para>
<para>The rest of this section describes Xen, XCP, and XenServer, the
@@ -344,3 +344,4 @@ XenAPI plugins Readme</link>.
</section>
</section>

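The two Xen paths shown above now differ only in their driver settings; a
side-by-side sketch of the converted options:

    # XenAPI (recommended)
    compute_driver=xenapi.XenAPIDriver
    # Xen through libvirt (experimental as of Essex)
    compute_driver=libvirt.LibvirtDriver
    libvirt_type=xen
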
View File

@@ -7,7 +7,7 @@
<title>KVM</title>
<para>KVM is configured as the default hypervisor for Compute. To enable KVM explicitly, add the
following configuration options to
<filename>/etc/nova/nova.conf</filename>:<programlisting>connection_type=libvirt
<filename>/etc/nova/nova.conf</filename>:<programlisting>compute_driver=libvirt.LibvirtDriver
libvirt_type=kvm</programlisting>
The KVM hypervisor supports the following virtual machine image formats:<itemizedlist>
<listitem>
@@ -127,3 +127,4 @@ kvm-amd</programlisting></para>
This is a symptom that the KVM kernel modules have not been loaded.</para>
</section>
</section>

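If instances fail to start as described above, one quick check that the
KVM kernel modules are loaded (a sketch, not part of this patch; use
kvm-intel or kvm-amd to match your CPU):

    $ lsmod | grep kvm
    $ sudo modprobe kvm-intel   # or: sudo modprobe kvm-amd
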
View File

@@ -16,9 +16,10 @@ xml:id="lxc">
<para>To enable LXC, ensure the following options are set in
<filename>/etc/nova/nova.conf</filename> on all hosts running the <systemitem class="service"
>nova-compute</systemitem>
service.<programlisting>connection_type=libvirt
service.<programlisting>compute_driver=libvirt.LibvirtDriver
libvirt_type=lxc</programlisting></para>
<para>On Ubuntu 12.04, enable LXC support in OpenStack by installing the
<literal>nova-compute-lxc</literal> package.</para>
</section>

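Putting the LXC pieces together on Ubuntu 12.04, a minimal sketch based on
the options and package named above:

    $ sudo apt-get install nova-compute-lxc
    # then in /etc/nova/nova.conf:
    compute_driver=libvirt.LibvirtDriver
    libvirt_type=lxc
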
View File

@@ -24,7 +24,7 @@
(e.g., if you are running Compute inside a VM and the hypervisor does not expose the
required hardware support), you can use QEMU instead. KVM and QEMU have the same level of
support in OpenStack, but KVM will provide better performance. To enable
QEMU:<programlisting>connection_type=libvirt
QEMU:<programlisting>compute_driver=libvirt.LibvirtDriver
libvirt_type=qemu</programlisting>
The QEMU hypervisor supports the following virtual machine image formats:<itemizedlist>
@@ -47,7 +47,8 @@ libvirt_type=qemu</programlisting>
<prompt>$></prompt> <userinput>setsebool -P virt_use_execmem on</userinput>
<prompt>$></prompt> <userinput>sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64</userinput>
<prompt>$></prompt> <userinput>sudo service libvirtd restart</userinput></screen>
</section>
</section>

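A common way to decide between KVM and QEMU is to check whether the host
exposes hardware virtualization at all (a general-purpose check, not taken
from this guide; a result of 0 suggests falling back to QEMU):

    $ egrep -c '(vmx|svm)' /proc/cpuinfo
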
View File

@@ -11,10 +11,10 @@
<para>The OpenStack system has several key projects that are separate
installations but can work together depending on your cloud needs: OpenStack
Compute, OpenStack Object Storage, and OpenStack Image Store. There are basic configuration
decisions to make, and the <link xlink:href="http://docs.openstack.org/trunk/openstack-compute/install/content/">OpenStack Install Guide</link>
covers a basic walkthrough.</para>
<section xml:id="configuring-openstack-compute-basics">
<?dbhtml stop-chunking?>
<title>Post-Installation Configuration for OpenStack Compute</title>
@@ -332,10 +332,10 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova-manage version list</userinput></screen>
</section>
<section xml:id="diagnose-compute">
<title>Diagnose your compute nodes</title>
<para>You can obtain extra information about the running
virtual machines: their CPU usage, memory, disk I/O, and
network I/O, per instance, by running the <command>nova
@@ -493,7 +493,7 @@ sql_connection=mysql://root:&lt;password&gt;@127.0.0.1/nova
network_manager=nova.network.manager.FlatManager
image_service=nova.image.glance.GlanceImageService
flat_network_bridge=xenbr0
connection_type=xenapi
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://&lt;XenServer IP&gt;
xenapi_connection_username=root
xenapi_connection_password=supersecret
@@ -561,7 +561,7 @@ xenapi_remap_vbd_dev=true
<xref linkend="ch-identity-mgmt-config"/> for additional information. </para>
<para>To customize authorization settings for Compute, see these
configuration settings in <filename>nova.conf</filename>.</para>
<xi:include href="tables/auth-nova-conf.xml"/>
<para>To customize certificate authority settings for Compute, see these configuration settings in <filename>nova.conf</filename>.</para>
<xi:include href="tables/ca-nova-conf.xml"/>
@@ -597,7 +597,7 @@ xenapi_remap_vbd_dev=true
networking:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install -y radvd</userinput>
<prompt>$</prompt> <userinput>sudo bash -c "echo 1 &gt; /proc/sys/net/ipv6/conf/all/forwarding"</userinput>
<prompt>$</prompt> <userinput>sudo bash -c "echo 1 &gt; /proc/sys/net/ipv6/conf/all/forwarding"</userinput>
<prompt>$</prompt> <userinput>sudo bash -c "echo 0 &gt; /proc/sys/net/ipv6/conf/all/accept_ra"</userinput></screen>
<para>Edit the <filename>nova.conf</filename> file on all nodes to
@@ -628,7 +628,7 @@ xenapi_remap_vbd_dev=true
<para>Note that <literal>vlan_start</literal> and <literal>vpn_start</literal> parameters are not used by
FlatDHCPManager.</para>
<xi:include href="tables/ipv6-nova-conf.xml"/>
</section>
@@ -669,7 +669,7 @@ xenapi_remap_vbd_dev=true
<listitem>
<para><emphasis role="bold">Shared storage:</emphasis>
NOVA-INST-DIR/instances/ (e.g., /var/lib/nova/instances) has to be
mounted via shared storage. This guide uses NFS but other options,
including the
<link xlink:href="http://gluster.org/community/documentation//index.php/OSConnect">OpenStack Gluster Connector</link>
are available.</para>
@@ -687,7 +687,7 @@ xenapi_remap_vbd_dev=true
</itemizedlist>
<note><para>
This guide assumes the default value for instances_path in your nova.conf
("NOVA-INST-DIR/instances"). If you have changed the state_path or
("NOVA-INST-DIR/instances"). If you have changed the state_path or
instances_path variables, please modify accordingly
</para></note>
<note><para>This feature is for cloud administrators only, since the use of nova-manage is necessary.
@@ -742,12 +742,12 @@ xenapi_remap_vbd_dev=true
<link xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo">the Ubuntu NFS HowTo to
set up an NFS server on HostA and NFS clients on HostB and HostC.</link> </para>
<para> Our aim is to export NOVA-INST-DIR/instances from HostA,
and have it readable and writable by the nova user on HostB and HostC.</para>
</listitem>
<listitem>
<para>
Using your knowledge from the Ubuntu documentation, configure the
NFS server at HostA by adding a line to <filename>/etc/exports</filename>
<programlisting>NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)</programlisting>
</para>
@@ -864,7 +864,7 @@ root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
</listitem>
</orderedlist></para>
<xi:include href="tables/live-migration-nova-conf.xml"/>
</section>
<section xml:id="configuring-database-connections">
@@ -1463,3 +1463,4 @@ limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT
</simplesect>
</section>
</chapter>

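For the shared-storage setup in the live migration section above, the export
on HostA pairs with a client-side mount on HostB and HostC along these lines
(a sketch; the exact mount options depend on your distribution and NFS
version, and fsid=0 in the export implies an NFSv4-style root):

    $ sudo mount -t nfs4 HostA:/ NOVA-INST-DIR/instances
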
View File

@@ -7,7 +7,7 @@
<para>This section assumes you have a working installation of OpenStack Compute and want to
select a particular hypervisor or run with multiple hypervisors. Before you try to get a VM
running within OpenStack Compute, be sure you have installed a hypervisor and used the
hypervisor's documentation to run a test VM and get it working.</para>
<section xml:id="selecting-a-hypervisor">
<title>Selecting a Hypervisor</title>
<para>OpenStack Compute supports many hypervisors, an array of which must provide a bit of
@@ -34,132 +34,17 @@
<listitem><para><link xlink:href="http://www.vmware.com/products/vsphere-hypervisor/support.html">VMWare
ESX/ESXi</link> 4.1 update 1, runs VMWare-based Linux and Windows images
through a connection with the ESX server.</para></listitem>
<listitem><para><link xlink:href="http://www.xen.org">Xen</link> - XenServer,
<listitem><para><link xlink:href="http://www.xen.org">Xen</link> - XenServer,
Xen Cloud Platform (XCP), used to run Linux or Windows virtual machines. You must
install the nova-compute service in a para-virtualized VM.</para></listitem></itemizedlist>
</section>
</section>
<section xml:id="hypervisor-configuration-basics"><title>Hypervisor Configuration Basics</title>
<para>The node where the nova-compute service is installed and running is the machine that
runs all the virtual machines, referred to as the compute node in this guide. </para>
<para>By default, the selected hypervisor is KVM. To change to another hypervisor, change
the libvirt_type option in nova.conf and restart the nova-compute service. </para>
<para>Here are the nova.conf options that are used to configure the compute node.</para>
<table rules="all">
<caption>Description of nova.conf configuration options for the compute
node</caption>
<thead>
<tr>
<td>Configuration Option</td>
<td>Default</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr><td>connection_type</td>
<td>default: 'libvirt'</td>
<td>libvirt, xenapi, or fake; Value that indicates the virtualization
connection type</td></tr>
<tr>
<td>compute_manager</td>
<td>default: 'nova.compute.manager.ComputeManager'</td>
<td>String value; Manager to use for nova-compute</td>
</tr>
<tr>
<td>compute_driver</td>
<td>default: 'nova.virt.connection.get_connection'</td>
<td>String value; Driver to use for controlling virtualization</td>
</tr>
<tr>
<td>images_path</td>
<td>default: '$state_path/images'</td>
<td>Directory; Location where decrypted images are stored on disk (when not
using Glance)</td>
</tr>
<tr>
<td>instances_path</td>
<td>default: '$state_path/instances'</td>
<td>Directory; Location where instances are stored on disk (when not using
Glance)</td>
</tr>
<tr>
<td>libvirt_type</td>
<td>default: 'kvm'</td>
<td>String; Libvirt domain type (valid options are: kvm, qemu, uml, xen) </td>
</tr>
<tr>
<td>allow_project_net_traffic</td>
<td>default: 'true'</td>
<td>true or false; Indicates whether to allow in-project network traffic </td>
</tr>
<tr>
<td>firewall_driver</td>
<td>default: 'nova.virt.libvirt_conn.IptablesFirewallDriver'</td>
<td>String; Firewall driver for instances, defaults to iptables</td>
</tr>
<tr>
<td>injected_network_template</td>
<td>default: ''</td>
<td>Directory and file name; Template file for injected network
information</td>
</tr>
<tr>
<td>libvirt_uri</td>
<td>default: empty string</td>
<td>String; Override the default libvirt URI (which is dependent on libvirt_type)</td>
</tr>
<tr>
<td>libvirt_xml_template</td>
<td>default: ''</td>
<td>Directory and file name; Libvirt XML template</td>
</tr>
<tr>
<td>libvirt_inject_password</td>
<td>default: 'false'</td>
<td>When set, libvirt will inject the admin password into instances before startup.
An agent is not required in the instance.
The admin password is specified as part of the server create API call. If no password is
specified, then a randomly generated password is used.</td>
</tr>
<tr>
<td>use_cow_images</td>
<td>default: 'true'</td>
<td>true or false; Indicates whether to use copy-on-write (qcow2) images.
If set to false and using qemu or kvm, backing files will not be used.</td>
</tr>
<tr>
<td>force_raw_images</td>
<td>default: 'true'</td>
<td>true or false; If true, backing image files will be converted to
raw image format.</td>
</tr>
<tr>
<td>rescue_image_id</td>
<td>default: 'ami-rescue'</td>
<td>String; AMI image to use for rescue</td>
</tr>
<tr>
<td>rescue_kernel_id</td>
<td>default: 'aki-rescue'</td>
<td>String; AKI image to use for rescue</td>
</tr>
<tr>
<td>rescue_ramdisk_id</td>
<td>default: 'ari-rescue'</td>
<td>String; ARI image to use for rescue</td>
</tr>
<tr>
<td>libvirt_nonblocking</td>
<td>default: 'false'</td>
<td>When set to 'true', libvirt APIs will be called in a separate OS thread pool to avoid blocking the main thread.
This feature is especially desirable if you use the snapshot feature, which has a notably long execution time, or have many instances in a given compute node.
The feature is experimental and is disabled by default.
</td>
</tr>
</tbody></table>
<xi:include href="tables/hypervisors-nova-conf.xml"/>
</section>
<xi:include href="../common/kvm.xml" />
@@ -168,3 +53,4 @@
<xi:include href="../common/lxc.xml" />
<xi:include href="../common/vmware.xml" />
</chapter>

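As the removed section above notes, switching the compute node to another
libvirt-based hypervisor only requires changing libvirt_type and restarting
the service; a sketch for moving from KVM to QEMU:

    # /etc/nova/nova.conf
    compute_driver=libvirt.LibvirtDriver
    libvirt_type=qemu

    $ sudo service nova-compute restart
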
View File

@@ -83,9 +83,28 @@
default files used are: [] </para></td>
</tr>
<tr>
<td><para>connection_type=&lt;None&gt; </para></td>
<td><para> (StrOpt) Virtualization API connection type :
libvirt, xenapi, or fake </para></td>
<td><para>compute_driver=nova.virt.connection.get_connection </para></td>
<td><para> (StrOpt) Driver to use for controlling
virtualization. For convenience, if the driver
exists under the nova.virt namespace, the
nova.virt prefix can be omitted. There are five
drivers in core OpenStack: fake.FakeDriver,
libvirt.LibvirtDriver,
baremetal.BareMetalDriver, xenapi.XenAPIDriver,
and vmwareapi.VMWareESXDriver. If nothing is
specified, the older connection_type mechanism
is used. Be aware that this mechanism will be
removed after the Folsom release.
</para></td>
</tr>
<tr>
<td><para>connection_type=libvirt (Deprecated) </para></td>
<td><para> (StrOpt) libvirt, xenapi, or fake; value that
indicates the virtualization connection
type. Deprecated as of Folsom; will be removed in
the G release.</para></td>
</tr>
<tr>
<td><para>

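Per the new table entry, the nova.virt prefix is optional, so these two
settings should be equivalent (a sketch illustrating the shorthand):

    compute_driver=nova.virt.libvirt.LibvirtDriver
    compute_driver=libvirt.LibvirtDriver
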
View File

@@ -151,31 +151,31 @@
<td> vmwareapi_api_retry_count=10 </td>
<td> (FloatOpt) The number of times we retry on
failures, e.g., socket error, etc. Used only if
connection_type is vmwareapi </td>
compute_driver is vmwareapi.VMWareESXDriver. </td>
</tr>
<tr>
<td>vmwareapi_host_ip=&lt;None&gt; </td>
<td> (StrOpt) URL for connection to VMWare ESX
host.Required if connection_type is vmwareapi.
host. Required if compute_driver is vmwareapi.VMWareESXDriver.
</td>
</tr>
<tr>
<td>vmwareapi_host_password=&lt;None&gt; </td>
<td> (StrOpt) Password for connection to VMWare ESX
host. Used only if connection_type is vmwareapi.
host. Used only if compute_driver is vmwareapi.VMWareESXDriver.
</td>
</tr>
<tr>
<td>vmwareapi_host_username=&lt;None&gt; </td>
<td> (StrOpt) Username for connection to VMWare ESX
host. Used only if connection_type is vmwareapi.
host. Used only if compute_driver is vmwareapi.VMWareESXDriver.
</td>
</tr>
<tr>
<td> vmwareapi_task_poll_interval=5.0 </td>
<td> (FloatOpt) The interval used for polling of
remote tasks. Used only if connection_type is
vmwareapi </td>
remote tasks. Used only if compute_driver is
vmwareapi.VMWareESXDriver. </td>
</tr>
<tr>
<td> vmwareapi_vlan_interface=vmnic0 </td>

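Combining the options in this table, a VMware-backed compute node would look
roughly like this (a sketch; the host address and credentials are
placeholders):

    compute_driver=vmwareapi.VMWareESXDriver
    vmwareapi_host_ip=your_esx_host_ip
    vmwareapi_host_username=your_esx_username
    vmwareapi_host_password=your_esx_password
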
View File

@@ -19,7 +19,7 @@ sql_connection=mysql://nova:yourpassword@192.168.206.130/nova
# COMPUTE
libvirt_type=qemu
connection_type=libvirt
compute_driver=libvirt.LibvirtDriver
instance_name_template=instance-%08x
api_paste_config=/etc/nova/api-paste.ini
allow_resize_to_same_host=True