New VM Image Guide
This is a proposal for a new manual that focuses entirely on creating and modifying virtual machine images. Change-Id: If7979572c9251e19b8ceb992767c6073394f81a2
@@ -54,239 +54,34 @@
<section xml:id="starting-images">
    <title>Getting virtual machine images</title>
    <?dbhtml stop-chunking?>
    <section xml:id="cirros-images">
        <title>CirrOS (test) images</title>
        <para>Scott Moser maintains a set of small virtual machine images that
            are designed for testing. These images use
            <literal>cirros</literal> as the login user. They are hosted under
            the CirrOS project on Launchpad and <link
            xlink:href="https://launchpad.net/cirros/+download">are available
            for download</link>.</para>
        <para>If your deployment uses QEMU or KVM, we recommend using the
            images in QCOW2 format. The most recent 64-bit QCOW2 image as of
            this writing is <link
            xlink:href="https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"
            >cirros-0.3.0-x86_64-disk.img</link>.</para>
    </section>
    <section xml:id="ubuntu-images">
        <title>Ubuntu images</title>
        <para>Canonical maintains an <link
            xlink:href="http://uec-images.ubuntu.com">official set of
            Ubuntu-based images</link>. These images use
            <literal>ubuntu</literal> as the login user.</para>
        <para>If your deployment uses QEMU or KVM, we recommend using the
            images in QCOW2 format. The most recent version of the 64-bit QCOW2
            image for Ubuntu 12.04 is <link
            xlink:href="http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"
            >precise-server-cloudimg-amd64-disk1.img</link>.</para>
    </section>
    <section xml:id="fedora-images">
        <title>Fedora images</title>
        <para>The Fedora project maintains prebuilt Fedora JEOS (Just Enough
            OS) images for download at <link
            xlink:href="http://berrange.fedorapeople.org/images"
            >http://berrange.fedorapeople.org/images</link>.</para>
        <para>A 64-bit QCOW2 image for Fedora 16, <link
            xlink:href="http://berrange.fedorapeople.org/images/2012-02-29/f16-x86_64-openstack-sda.qcow2"
            >f16-x86_64-openstack-sda.qcow2</link>, is available for
            download.</para>
    </section>
    <section xml:id="suse-sles-images">
        <title>openSUSE and SLES 11 images</title>
        <para><link xlink:href="http://susestudio.com">SUSE Studio</link> is an
            easy way to build virtual appliances for openSUSE and SLES 11 (SUSE
            Linux Enterprise Server) that are compatible with OpenStack. Free
            registration is required to download or build images.</para>
        <para>For example, Christian Berendt used openSUSE to create <link
            xlink:href="http://susestudio.com/a/YRUrwO/testing-instance-for-openstack-opensuse-121"
            >a test openSUSE 12.1 (JeOS) image</link>.</para>
    </section>
    <section xml:id="rcb-images">
        <title>Rackspace Cloud Builders (multiple distros) images</title>
        <para>Rackspace Cloud Builders maintains a list of prebuilt images for
            various distributions (Red Hat, CentOS, Fedora, Ubuntu) at <link
            xlink:href="https://github.com/rackerjoe/oz-image-build"
            >rackerjoe/oz-image-build on GitHub</link>.</para>
    </section>
</section>
<para>Refer to the <link
    xlink:href="../openstack-image/admin/content/ch_obtaining_images.html"
    >OpenStack Virtual Machine Image Guide</link> for detailed
    information.</para>
<section xml:id="tool-support-creating-new-images">
    <?dbhtml stop-chunking?>
    <title>Tool support for creating images</title>
    <para>Several open-source third-party tools are available that simplify
        the task of creating new virtual machine images.</para>
    <section xml:id="oz">
        <title>Oz (KVM)</title>
        <para><link xlink:href="http://aeolusproject.org/oz.html">Oz</link> is
            a command-line tool that can create images for common Linux
            distributions. Rackspace Cloud Builders uses Oz to create virtual
            machines; see <link
            xlink:href="https://github.com/rackerjoe/oz-image-build"
            >rackerjoe/oz-image-build on GitHub</link> for their Oz templates.
            For an example from the Fedora Project wiki, see <link
            xlink:href="https://fedoraproject.org/wiki/Getting_started_with_OpenStack_Nova#Building_an_Image_With_Oz"
            >Building an image with Oz</link>.</para>
        <para>Refer to the <link
            xlink:href="../openstack-image/admin/content/ch_creating_images_automatically.html"
            >OpenStack Virtual Machine Image Guide</link> for detailed
            information.</para>
    </section>
    <section xml:id="ubuntu-vm-builder">
        <title>VMBuilder (KVM, Xen)</title>
        <para><link xlink:href="https://launchpad.net/vmbuilder"
            >VMBuilder</link> can be used to create virtual machine images for
            different hypervisors.</para>
        <para>The <link
            xlink:href="https://help.ubuntu.com/12.04/serverguide/jeos-and-vmbuilder.html"
            >Ubuntu 12.04 server guide</link> has documentation on how to use
            VMBuilder.</para>
    </section>
    <section xml:id="boxgrinder">
        <title>BoxGrinder (KVM, Xen, VMware)</title>
        <para><link xlink:href="http://boxgrinder.org">BoxGrinder</link> is
            another tool for creating virtual machine images, which it calls
            appliances. BoxGrinder can create Fedora, Red Hat Enterprise
            Linux, or CentOS images. BoxGrinder is currently only supported on
            Fedora.</para>
    </section>
    <section xml:id="veewee">
        <title>VeeWee (KVM)</title>
        <para><link xlink:href="https://github.com/jedi4ever/veewee"
            >VeeWee</link> is often used to build <link
            xlink:href="http://vagrantup.com">Vagrant</link> boxes, but it can
            also be used to build KVM images.</para>
        <para>See the <link
            xlink:href="https://github.com/jedi4ever/veewee/blob/master/doc/definition.md"
            >doc/definition.md</link> and <link
            xlink:href="https://github.com/jedi4ever/veewee/blob/master/doc/template.md"
            >doc/template.md</link> VeeWee documentation files for more
            details.</para>
    </section>
    <section xml:id="imagefactory">
        <title>imagefactory</title>
        <para><link xlink:href="http://imgfac.org/">imagefactory</link> is a
            new tool from the <link xlink:href="http://www.aeolusproject.org/"
            >Aeolus</link> project designed to automate building, converting,
            and uploading images to different cloud providers. It includes
            support for OpenStack-based clouds.</para>
    </section>
</section>
<section xml:id="image-customizing-what-you-need-to-know">
    <?dbhtml stop-chunking?>
    <title>Customizing an image for OpenStack</title>
    <para>The <link
        xlink:href="../openstack-image/admin/content/ch_openstack_images.html"
        >OpenStack Virtual Machine Image Guide</link> describes what
        customizations you should make to your image to maximize compatibility
        with OpenStack.</para>
    <section xml:id="support-metadata-or-config-drive">
        <title>Support metadata service or config drive</title>
        <para>An image needs to be able to retrieve information from
            OpenStack, such as the SSH public key and <link
            linkend="user-data">user data</link> that the user supplied when
            requesting the instance. This information is accessible via the
            metadata service or the config drive. The easiest way to support
            this is to install the <link
            xlink:href="http://launchpad.net/cloud-init">cloud-init</link>
            package into your image.</para>
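As a sketch of what such an image does at boot, the key retrieval amounts to one HTTP request against the EC2-compatible metadata endpoint that OpenStack provides (the address and API version are standard; running this outside an instance simply times out):

```shell
# Illustrative only: inside a running instance, fetch the SSH public key
# from the metadata service. 169.254.169.254 is the fixed link-local
# address of the EC2-compatible metadata API (version 2009-04-04).
METADATA_URL="http://169.254.169.254/2009-04-04/meta-data"
# -m 10 bounds the wait; the trailing "|| true" keeps the sketch a
# harmless no-op on machines with no metadata service.
curl -m 10 -s "${METADATA_URL}/public-keys/0/openssh-key" || true
```

cloud-init performs this lookup (and much more) for you, which is why installing it is the recommended approach.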
    </section>
    <section xml:id="support-resizing">
        <title>Support resizing</title>
        <para>The size of the disk in a virtual machine image is determined
            when you initially create the image. However, OpenStack lets you
            launch instances with different disk sizes by specifying different
            flavors. For example, if your image was created with a 5 GB disk
            and you launch an instance with the <literal>m1.small</literal>
            flavor, the resulting virtual machine instance will have a primary
            disk of 10 GB. When an instance's disk is resized up, zeros are
            just added to the end.</para>
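To make the numbers concrete, the arithmetic for the example above can be written out as shell (the 10 GB root disk for m1.small matches the example; treat the figures as illustrative):

```shell
# The flavor, not the image, determines the instance's primary disk size.
IMAGE_DISK_GB=5     # disk size the image was created with
FLAVOR_DISK_GB=10   # root disk of the m1.small flavor in this example

# The difference is appended to the disk as zeros; the guest must claim
# it by growing its partition table and filesystem at boot.
GROWTH_GB=$((FLAVOR_DISK_GB - IMAGE_DISK_GB))
echo "unclaimed space the guest must grow into: ${GROWTH_GB} GB"
```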
        <para>Your image needs to be able to resize its partitions on boot to
            match the size requested by the user. Otherwise, whenever the disk
            size associated with the flavor exceeds the disk size your image
            was created with, you will need to resize the partitions manually
            after the instance boots in order to access the additional
            storage.</para>
        <para>Your image must be configured to deal with two issues:<itemizedlist>
                <listitem>
                    <para>The image's partition table describes the original
                        size of the image</para>
                </listitem>
                <listitem>
                    <para>The image's filesystem fills the original size of
                        the image</para>
                </listitem>
            </itemizedlist></para>
        <simplesect>
            <title>Adjusting the partition table on instance boot</title>
            <para>Your image will need to run a script on boot to modify the
                partition table. Due to a limitation in the Linux kernel, you
                cannot modify the partition table of a disk that has a
                partition currently mounted (you can for LVM, but not for "raw
                disks"), so the partition adjustment has to happen inside the
                initramfs before the root volume is mounted, or a reboot has
                to be done to free the mount of
                <filename>/</filename>.</para>
            <para>Ubuntu cloud images and CirrOS images use a tool called
                <command>growpart</command> that is part of the <link
                xlink:href="http://launchpad.net/cloud-utils"
                >cloud-utils</link> package.</para>
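A hedged sketch of the growpart step those images run (the device name and partition number are illustrative, and the guards make it a no-op on machines without growpart or the device):

```shell
# Grow partition 1 of /dev/vda to fill the whole disk. With --dry-run,
# growpart only reports what it would change; the real boot scripts run
# it without the flag and then grow the filesystem separately.
DISK=/dev/vda
PART=1
if command -v growpart >/dev/null 2>&1 && [ -b "$DISK" ]; then
    growpart --dry-run "$DISK" "$PART"
else
    echo "growpart or $DISK not available; nothing to do"
fi
```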
        </simplesect>
        <simplesect>
            <title>Adjusting the filesystem</title>
            <para>You will need to resize the filesystem in addition to the
                partition table. If you have cloud-init installed, it will do
                the resize, assuming the partition table has been adjusted
                properly. CirrOS images run <command>resize2fs</command> on
                the root partition on boot.</para>
            <note>
                <para>If you are using XenServer as your hypervisor, the above
                    steps are not needed, as the Compute service will
                    automatically adjust the partition and filesystem for your
                    instance on boot. Automatic resize occurs if all of the
                    following are true:<itemizedlist>
                        <listitem>
                            <para><literal>auto_disk_config=True</literal> in
                                <filename>nova.conf</filename>.</para>
                        </listitem>
                        <listitem>
                            <para>The disk on the image has only one
                                partition.</para>
                        </listitem>
                        <listitem>
                            <para>The file system on the one partition is
                                ext3 or ext4.</para>
                        </listitem>
                    </itemizedlist></para>
            </note>
        </simplesect>
    </section>
</section>
<section xml:id="manually-creating-qcow2-images">
    <title>Creating raw or QCOW2 images</title>
    <para>The <link
        xlink:href="../openstack-image/admin/content/ch_creating_images_manually.html"
        >OpenStack Virtual Machine Image Guide</link> describes how to create
        a raw or QCOW2 image from a Linux installation ISO file. Raw images
        are the simplest image file format and are supported by all of the
        hypervisors. QCOW2 images have several advantages over
@@ -295,331 +90,7 @@
        <para>QCOW2 images are only supported with KVM and QEMU
            hypervisors.</para>
    </note></para>
<para>As an example, this section describes how to create a CentOS 6.2
    image. <link
    xlink:href="http://isoredirect.centos.org/centos/6/isos/x86_64/">64-bit
    ISO images of CentOS 6.2</link> can be downloaded from one of the CentOS
    mirrors. This example uses the CentOS netinstall ISO, which is a smaller
    ISO file that downloads packages from the Internet as needed.</para>
<simplesect>
    <title>Create an empty image (raw)</title>
    <para>Here we create a 5 GB raw image using the
        <command>kvm-img</command> command:
        <screen><prompt>$</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>$</prompt> <userinput>kvm-img create -f raw $IMAGE 5G</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Create an empty image (QCOW2)</title>
    <para>Here we create a 5 GB QCOW2 image using the
        <command>kvm-img</command> command:
        <screen><prompt>$</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>$</prompt> <userinput>kvm-img create -f qcow2 $IMAGE 5G</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Boot the ISO using the image</title>
    <para>First, find a spare VNC display. (Note that VNC display
        <literal>:N</literal> corresponds to TCP port 5900+N, so
        <literal>:0</literal> corresponds to port 5900.) Check which ones are
        currently in use with the <command>lsof</command> command, as
        root:<screen><prompt>#</prompt> <userinput>lsof -i | grep "TCP \*:590"</userinput>
<computeroutput>kvm  3437  libvirt-qemu  14u  IPv4  1629164  0t0  TCP *:5900 (LISTEN)
kvm  24966 libvirt-qemu  24u  IPv4  1915470  0t0  TCP *:5901 (LISTEN)</computeroutput></screen></para>
    <para>This shows that VNC displays <literal>:0</literal> and
        <literal>:1</literal> are in use. In this example, we will use VNC
        display <literal>:2</literal>.</para>
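The display-to-port mapping above is simple enough to check with a line of shell (display :2 follows the example):

```shell
# VNC display :N listens on TCP port 5900+N.
DISPLAY_NUM=2
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "display :${DISPLAY_NUM} -> TCP port ${VNC_PORT}"
# prints: display :2 -> TCP port 5902
```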
    <para>Also, we want a temporary file to send power signals to the VM
        instance. We default to <filename>/tmp/file.mon</filename>, but make
        sure it doesn't exist yet. If it does, use a different file name for
        the <literal>MONITOR</literal> variable defined
        below:<screen><prompt>$</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>$</prompt> <userinput>ISO=CentOS-6.2-x86_64-netinstall.iso</userinput>
<prompt>$</prompt> <userinput>VNCDISPLAY=:2</userinput>
<prompt>$</prompt> <userinput>MONITOR=/tmp/file.mon</userinput>
<prompt>$</prompt> <userinput>sudo kvm -m 1024 -cdrom $ISO -drive file=${IMAGE},if=virtio,index=0 \
 -boot d -net nic -net user -nographic -vnc ${VNCDISPLAY} \
 -monitor unix:${MONITOR},server,nowait</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Connect to the instance via VNC</title>
    <para>VNC is a remote desktop protocol that gives you full-screen display
        access to the virtual machine instance and lets you interact with
        keyboard and mouse. Use a VNC client (e.g., <link
        xlink:href="http://projects.gnome.org/vinagre/">Vinagre</link> on
        GNOME, <link xlink:href="http://userbase.kde.org/Krdc">Krdc</link> on
        KDE, xvnc4viewer from <link
        xlink:href="http://www.realvnc.com">RealVNC</link>, or xtightvncviewer
        from <link xlink:href="http://www.tightvnc.com">TightVNC</link>) to
        connect to the machine using the display you specified. You should now
        see a CentOS install screen.</para>
</simplesect>
<simplesect>
    <title>Point the installer to a CentOS web server</title>
    <para>The CentOS net installer requires that the user specify a web site
        and a CentOS directory that correspond to one of the CentOS
        mirrors.<itemizedlist>
            <listitem>
                <para>Web site name: <literal>mirror.umd.edu</literal>
                    (consider using other mirrors as an alternative)</para>
            </listitem>
            <listitem>
                <para>CentOS directory:
                    <literal>centos/6.2/os/x86_64</literal></para>
            </listitem>
        </itemizedlist></para>
    <para>See the <link
        xlink:href="http://www.centos.org/modules/tinycontent/index.php?id=30"
        >CentOS mirror page</link> for a full list of mirrors; click the
        "HTTP" link of a mirror to retrieve its web site name.</para>
</simplesect>
<simplesect>
    <title>Partition the disks</title>
    <para>There are different options for partitioning the disks. The default
        installation uses LVM partitions and creates three partitions
        (<filename>/boot</filename>, <filename>/</filename>, swap). The
        simplest approach is to create a single ext4 partition mounted at
        "<literal>/</literal>".</para>
</simplesect>
<simplesect>
    <title>Step through the install</title>
    <para>The simplest thing to do is to choose the "Server" install, which
        installs an SSH server.</para>
</simplesect>
<simplesect>
    <title>When the install completes, shut down the instance</title>
    <para>Power down the instance using the monitor socket file to send a
        power down signal, as
        root:<screen><prompt>#</prompt> <userinput>MONITOR=/tmp/file.mon</userinput>
<prompt>#</prompt> <userinput>echo 'system_powerdown' | socat - UNIX-CONNECT:$MONITOR</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Start the instance again without the ISO</title>
    <para>
        <screen><prompt>$</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>$</prompt> <userinput>VNCDISPLAY=:2</userinput>
<prompt>$</prompt> <userinput>MONITOR=/tmp/file.mon</userinput>
<prompt>$</prompt> <userinput>sudo kvm -m 1024 -drive file=${IMAGE},if=virtio,index=0 \
 -boot c -net nic -net user -nographic -vnc ${VNCDISPLAY} \
 -monitor unix:${MONITOR},server,nowait</userinput></screen>
    </para>
</simplesect>
<simplesect>
    <title>Connect to the instance via VNC</title>
    <para>When you boot the first time, the system asks about authentication
        tools; you can just choose 'Exit'. Then, log in as root using the root
        password you specified.</para>
</simplesect>
<simplesect>
    <title>Remove HWADDR from the eth0 config file</title>
    <para>The operating system records the MAC address of the virtual
        Ethernet card in
        <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> during
        the installation process. However, each time the image boots up, the
        virtual Ethernet card will have a different MAC address, so this
        information must be deleted from the configuration file.</para>
    <para>Edit
        <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> and
        remove the <literal>HWADDR=</literal> line.</para>
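The same edit can be scripted; a minimal sketch (the path is the one named above, and the existence guard makes the command a safe no-op elsewhere):

```shell
# Delete the recorded MAC address line so the Ethernet card is
# reconfigured on each boot; no-op if the file does not exist.
CFG=/etc/sysconfig/network-scripts/ifcfg-eth0
{ [ -f "$CFG" ] && sed -i '/^HWADDR=/d' "$CFG"; } || true
```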
</simplesect>
<simplesect>
    <title>Configure to fetch metadata</title>
    <para>An instance must perform several steps on startup by interacting
        with the metadata service (e.g., retrieve the SSH public key, execute
        the user data script). There are several ways to implement this
        functionality, including:<itemizedlist>
            <listitem>
                <para>Install a <link
                    xlink:href="http://koji.fedoraproject.org/koji/packageinfo?packageID=12620"
                    >cloud-init RPM</link>, which is a port of the Ubuntu
                    <link xlink:href="https://launchpad.net/cloud-init"
                    >cloud-init</link> package. This is the recommended
                    approach.</para>
            </listitem>
            <listitem>
                <para>Modify <filename>/etc/rc.local</filename> to fetch the
                    desired information from the metadata service, as
                    described below.</para>
            </listitem>
        </itemizedlist></para>
</simplesect>
<simplesect>
    <title>Using cloud-init to fetch the public key</title>
    <para>The cloud-init package automatically fetches the public key from
        the metadata server and places the key in an account. The account
        varies by distribution. On Ubuntu-based virtual machines, the account
        is called "ubuntu". On Fedora-based virtual machines, the account is
        called "ec2-user".</para>
    <para>You can change the name of the account used by cloud-init by
        editing the <filename>/etc/cloud/cloud.cfg</filename> file and adding
        a line with a different user. For example, to configure cloud-init to
        put the key in an account named "admin", edit the config file so it
        has the line:<programlisting>user: admin</programlisting></para>
</simplesect>
<simplesect>
    <title>Writing a script to fetch the public key</title>
    <para>To fetch the SSH public key and add it to the root account, edit
        the <filename>/etc/rc.local</filename> file and add the following
        lines before the line "touch /var/lock/subsys/local":</para>
    <programlisting>depmod -a
modprobe acpiphp

# simple attempt to get the user ssh key using the meta-data service
mkdir -p /root/.ssh
echo >> /root/.ssh/authorized_keys
curl -m 10 -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key | grep 'ssh-rsa' >> /root/.ssh/authorized_keys
echo "AUTHORIZED_KEYS:"
echo "************************"
cat /root/.ssh/authorized_keys
echo "************************"
</programlisting>
    <note>
        <para>Some VNC clients replace : (colon) with ; (semicolon) and _
            (underscore) with - (hyphen). Make sure to type
            <literal>http:</literal>, not <literal>http;</literal>, and
            <literal>authorized_keys</literal>, not
            <literal>authorized-keys</literal>.</para>
    </note>
    <note>
        <para>The above script only retrieves the SSH public key from the
            metadata server. It does not retrieve <emphasis
            role="italic">user data</emphasis>, which is optional data that
            can be passed by the user when requesting a new instance. User
            data is often used for running a custom script when an instance
            comes up.</para>
        <para>As the OpenStack metadata service is compatible with version
            2009-04-04 of the Amazon EC2 metadata service, consult the Amazon
            EC2 documentation on <link
            xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
            >Using Instance Metadata</link> for details on how to retrieve
            user data.</para>
    </note>
</simplesect>
<simplesect>
    <title>Shut down the instance</title>
    <para>From inside the instance, as
        root:<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Modifying the image (raw)</title>
    <para>You can make changes to the filesystem of an image without booting
        it, by mounting the image as a file system. To mount a raw image, you
        need to attach it to a loop device (e.g.,
        <filename>/dev/loop0</filename>, <filename>/dev/loop1</filename>). To
        identify the next unused loop device, as
        root:<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput></screen>In the example above,
        <filename>/dev/loop0</filename> is available for use. Associate it
        with the image using <command>losetup</command>, and expose the
        partitions as device files using <command>kpartx</command>, as
        root:</para>
    <para>
        <screen><prompt>#</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>#</prompt> <userinput>losetup /dev/loop0 $IMAGE</userinput>
<prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen>
    </para>
    <para>If the image has, say, three partitions (/boot, /, swap), there
        should be one new device created per
        partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/mapper/loop0p*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/mapper/loop0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/mapper/loop0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/mapper/loop0p3</computeroutput></screen></para>
    <para>To mount the second partition, as
        root:<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/mapper/loop0p2 /mnt/image</userinput></screen></para>
    <para>You can now modify the files in the image under
        <filename>/mnt/image</filename>. When done, unmount the image and
        release the loop device, as
        root:<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Modifying the image (qcow2)</title>
    <para>You can make changes to the filesystem of an image without booting
        it, by mounting the image as a file system. To mount a QEMU image,
        the nbd kernel module must be loaded. Load it, as
        root:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=8</userinput></screen><note>
            <para>If nbd has already been loaded with
                <literal>max_part=0</literal>, you will not be able to mount
                an image that has multiple partitions. In this case, you may
                need to unload the nbd kernel module first and then load it
                again with <literal>max_part</literal> set. To unload it, as
                root:<screen><prompt>#</prompt> <userinput>rmmod nbd</userinput></screen></para>
        </note></para>
    <para>Connect your image to one of the network block devices (e.g.,
        <filename>/dev/nbd0</filename>, <filename>/dev/nbd1</filename>). In
        this example, we use <filename>/dev/nbd3</filename>. As
        root:<screen><prompt>#</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd3 $IMAGE</userinput></screen></para>
    <para>If the image has, say, three partitions (/boot, /, swap), there
        should be one new device created per
        partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/nbd3*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 48 2012-03-05 15:32 /dev/nbd3
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/nbd3p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/nbd3p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/nbd3p3</computeroutput></screen><note>
            <para>If the network block device you selected was already in
                use, the initial <command>qemu-nbd</command> command will
                fail silently, and the
                <filename>/dev/nbd3p{1,2,3}</filename> device files will not
                be created.</para>
        </note></para>
    <para>To mount the second partition, as
        root:<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/nbd3p2 /mnt/image</userinput></screen></para>
    <para>You can now modify the files in the image under
        <filename>/mnt/image</filename>. When done, unmount the image and
        release the network block device, as
        root:<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd3</userinput></screen></para>
</simplesect>
<simplesect>
    <title>Upload the image to glance (raw)</title>
    <para>
        <screen><prompt>$</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>$</prompt> <userinput>NAME=centos-6.2</userinput>
<prompt>$</prompt> <userinput>glance image-create --name="${NAME}" --is-public=true --container-format=ovf --disk-format=raw &lt; ${IMAGE}</userinput></screen>
    </para>
</simplesect>
<simplesect>
    <title>Upload the image to glance (qcow2)</title>
    <para>
        <screen><prompt>$</prompt> <userinput>IMAGE=centos-6.2.img</userinput>
<prompt>$</prompt> <userinput>NAME=centos-6.2</userinput>
<prompt>$</prompt> <userinput>glance image-create --name="${NAME}" --is-public=true --container-format=ovf --disk-format=qcow2 &lt; ${IMAGE}</userinput></screen>
    </para>
</simplesect>
</section>
</section>
<section xml:id="booting-a-test-image">
    <title>Booting a test image</title>

doc/src/docbkx/openstack-image/bk-imageguide.xml (new file, 62 lines)
@@ -0,0 +1,62 @@
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:svg="http://www.w3.org/2000/svg"
    xmlns:html="http://www.w3.org/1999/xhtml"
    version="5.0"
    xml:id="openstack-image-manual-trunk">
    <?rax title.font.size="28px" subtitle.font.size="28px"?>
    <title>OpenStack Virtual Machine Image Guide</title>
    <info>
        <author>
            <personname>
                <firstname/>
                <surname/>
            </personname>
            <affiliation>
                <orgname>OpenStack</orgname>
            </affiliation>
        </author>
        <copyright>
            <year>2013</year>
            <holder>OpenStack Foundation</holder>
        </copyright>
        <releaseinfo>current</releaseinfo>
        <productname>OpenStack Compute</productname>
        <pubdate>2013-05-28</pubdate>
        <legalnotice role="cc-by">
            <annotation>
                <remark>Remaining licensing details are filled in by the
                    template.</remark>
            </annotation>
        </legalnotice>
        <abstract>
            <para>This manual describes how to obtain, create, and modify
                virtual machine images that are compatible with
                OpenStack.</para>
        </abstract>
        <revhistory>
            <!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
            <revision>
                <date>2013-05-28</date>
                <revdescription>
                    <itemizedlist>
                        <listitem>
                            <para>Initial release of this guide.</para>
                        </listitem>
                    </itemizedlist>
                </revdescription>
            </revision>
        </revhistory>
    </info>
    <!-- Chapters are referred to from the book file through these include statements. You can add additional chapters using these types of statements. -->
    <xi:include href="ch_introduction.xml"/>
    <xi:include href="ch_obtaining_images.xml"/>
    <xi:include href="ch_openstack_images.xml"/>
    <xi:include href="ch_modifying_images.xml"/>
    <xi:include href="ch_creating_images_manually.xml"/>
    <xi:include href="ch_creating_images_automatically.xml"/>
    <xi:include href="ch_converting.xml"/>
</book>
doc/src/docbkx/openstack-image/centos-example.xml (new file, 321 lines)
@@ -0,0 +1,321 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    version="5.0"
    xml:id="centos-image">
    <title>Example: CentOS image</title>
    <para>This section walks through installing a CentOS image, focusing on
        CentOS 6.4. Because the CentOS installation process may change across
        versions, the installer steps may differ if you are using a different
        version of CentOS.</para>
<simplesect>
|
||||
<title>Download a CentOS install ISO</title>
|
||||
<para>
|
||||
<orderedlist>
|
||||
<listitem>
|
||||
<para>Navigate to the <link
|
||||
xlink:href="http://www.centos.org/modules/tinycontent/index.php?id=30"
|
||||
>CentOS mirrors</link> page.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Click one of the <literal>HTTP</literal> links in the right-hand column
|
||||
next to one of the mirrors.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Click the folder link of the CentOS version you want to use (e.g.,
|
||||
<literal>6.4/</literal>).</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Click the <literal>isos/</literal> folder link.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Click the <literal>x86_64/</literal> folder link for 64-bit images.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Click the ISO image you want to download. The netinstall ISO (e.g.,
|
||||
<filename>CentOS-6.4-x86_64-netinstall.iso</filename>) is a good choice
|
||||
since it's a smaller image that will download missing packages from the
|
||||
Internet during the install process.</para>
|
||||
</listitem>
|
||||
</orderedlist>
|
||||
</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Start the install process</title>
|
||||
<para>Start the installation process using either <command>virt-manager</command> or
|
||||
<command>virt-install</command> as described in the previous section. If using
|
||||
<command>virt-install</command>, don't forget to connect your VNC client to the
|
||||
virtual machine.</para>
|
||||
<para>We will assume the name of your virtual machine image is
|
||||
<literal>centos-6.4</literal>, which we need to know when using <command>virsh</command>
|
||||
commands to manipulate the state of the image.</para>
|
||||
<para>If you're using <command>virt-install</command>, the commands should look something like
this:<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /tmp/centos-6.4.qcow2 10G</userinput>
|
||||
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name centos-6.4 --ram 1024 \
|
||||
--cdrom=/data/isos/CentOS-6.4-x86_64-netinstall.iso \
|
||||
--disk /tmp/centos-6.4.qcow2,format=qcow2 \
|
||||
--network network=default \
|
||||
--graphics vnc,listen=0.0.0.0 --noautoconsole \
|
||||
--os-type=linux --os-variant=rhel6</userinput></screen></para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Step through the install</title>
|
||||
<para>At the initial installer boot menu, choose the "Install or upgrade an existing system" option. Step through the
install prompts; the defaults should be fine.</para>
|
||||
<mediaobject>
|
||||
<imageobject role="fo">
|
||||
<imagedata fileref="figures/centos-install.png" format="PNG" scale="60"/>
|
||||
</imageobject>
|
||||
<imageobject role="html">
|
||||
<imagedata fileref="figures/centos-install.png" format="PNG"/>
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Configure TCP/IP</title>
|
||||
<para>The default TCP/IP settings are fine. In particular, ensure
|
||||
that Enable IPv4 support is enabled with DHCP, which is the default. </para>
|
||||
<mediaobject>
|
||||
<imageobject>
|
||||
<imagedata fileref="figures/centos-tcpip.png" format="PNG" contentwidth="6in"/>
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Point the installer to a CentOS web server</title>
|
||||
<para>Choose URL as the installation method.</para>
|
||||
<mediaobject>
|
||||
<imageobject>
|
||||
<imagedata fileref="figures/install-method.png" format="PNG" contentwidth="6in"/>
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
<para>Depending on the version of CentOS, the net installer requires that the user
|
||||
specify either a URL, or the web site and a CentOS directory that corresponds to one of
|
||||
the CentOS mirrors. If the installer asks for a single URL, an example of a valid URL
|
||||
would be: <literal>http://mirror.umd.edu/centos/6/os/x86_64</literal>.<note>
|
||||
<para>Consider using other mirrors as an alternative to mirror.umd.edu.</para>
|
||||
</note></para>
|
||||
<mediaobject>
|
||||
<imageobject>
|
||||
<imagedata fileref="figures/url-setup.png" format="PNG" contentwidth="6in"/>
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
|
||||
<para>If the installer asks for web site name and CentOS directory separately, an example
|
||||
would be:</para>
|
||||
<para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>Web site name: <literal>mirror.umd.edu</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>CentOS directory: <literal>centos/6/os/x86_64</literal></para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</para>
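The two separate fields are just the single-URL form split apart; a quick sketch (using the example mirror above):

```shell
# The "web site name" and "CentOS directory" fields combine into the same
# URL that the single-URL installer form expects.
site="mirror.umd.edu"
dir="centos/6/os/x86_64"
url="http://${site}/${dir}"
echo "$url"
```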
|
||||
<para>See the <link xlink:href="http://www.centos.org/modules/tinycontent/index.php?id=30"
>CentOS mirror page</link> for a full list of mirrors; click the "HTTP"
link of a mirror to retrieve its web site name.</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Storage devices</title>
|
||||
<para>If asked about what type of devices your installation involves, choose "Basic
|
||||
Storage Devices".</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Hostname</title>
|
||||
<para>The installer may ask you to choose a hostname. The default
|
||||
(<literal>localhost.localdomain</literal>) is fine. We will install the cloud-init
|
||||
package later, which will set the hostname on boot when a new instance is provisioned
|
||||
using this image.</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Partition the disks</title>
|
||||
<para>There are different options for partitioning the disks. The default installation
uses LVM partitions and creates three partitions (<filename>/boot</filename>,
<filename>/</filename>, swap), which works fine. Alternatively, creating a single
ext4 partition mounted to "<literal>/</literal>" also works fine.</para>
<para>If unsure, we recommend you use the installer's default partition scheme, since there
is no clear advantage to one scheme over another.</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Step through the install</title>
|
||||
<para>Step through the install, using the default options. The simplest thing to do is
|
||||
to choose the "Basic Server" install (may be called "Server" install on older versions
|
||||
of CentOS), which will install an SSH server.</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Detach the CD-ROM and reboot</title>
|
||||
<para>Once the install completes, you will see the screen "Congratulations, your CentOS
|
||||
installation is complete".</para>
|
||||
<mediaobject>
|
||||
<imageobject>
|
||||
<imagedata fileref="figures/centos-complete.png" format="PNG" contentwidth="6in"/>
|
||||
</imageobject>
|
||||
</mediaobject>
|
||||
|
||||
<para>To eject a disk using <command>virsh</command>, libvirt requires that you attach an
empty disk at the same target to which the CD-ROM was previously attached, which should be
<literal>hdc</literal>. You can confirm the appropriate target using the
<command>virsh dumpxml <replaceable>vm-image</replaceable></command> command.</para>
|
||||
<screen><prompt>#</prompt> <userinput>virsh dumpxml centos-6.4</userinput>
|
||||
<computeroutput><domain type='kvm'>
|
||||
<name>centos-6.4</name>
|
||||
...
|
||||
<disk type='block' device='cdrom'>
|
||||
<driver name='qemu' type='raw'/>
|
||||
<target dev='hdc' bus='ide'/>
|
||||
<readonly/>
|
||||
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
|
||||
</disk>
|
||||
...
|
||||
</domain>
|
||||
</computeroutput></screen>
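If you want to script the lookup rather than read the XML by eye, the target device name can be pulled out of saved dumpxml output with sed; a sketch, assuming the domain XML has been saved to a file (e.g. with `virsh dumpxml centos-6.4 > domain.xml`):

```shell
# Create a trimmed-down copy of the domain XML for illustration.
cat > domain.xml <<'EOF'
<domain type='kvm'>
  <disk type='block' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <target dev='hdc' bus='ide'/>
  </disk>
</domain>
EOF
# Within the cdrom <disk> element, keep only the target's dev= value.
target=$(sed -n "/device='cdrom'/,/<\/disk>/ s/.*target dev='\([^']*\)'.*/\1/p" domain.xml)
echo "$target"
```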
|
||||
<para>As root, run the following
commands from the host to eject the disk and reboot using <command>virsh</command>. If you are
using virt-manager, the commands below will also work, but you can instead use the GUI to
detach the CD-ROM and reboot the machine by manually stopping and
starting it.<screen><prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly centos-6.4 "" hdc</userinput>
|
||||
<prompt>#</prompt> <userinput>virsh destroy centos-6.4</userinput>
|
||||
<prompt>#</prompt> <userinput>virsh start centos-6.4</userinput></screen></para>
|
||||
<note><para>In theory, the <command>virsh reboot centos-6.4</command> command could be used instead of the
destroy and start commands. However, in our testing we were unable to reboot
|
||||
successfully using the <command>virsh reboot</command> command.</para></note>
|
||||
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Log in to newly created image</title>
|
||||
<para>When you boot for the first time after the install, you may be asked about authentication
tools; you can just choose "Exit". Then, log in as root using the root password you
specified.</para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Configure to fetch metadata</title>
|
||||
<para>An instance must perform several steps on startup by
interacting with the metadata service (e.g., retrieving the ssh public
key, executing the user data script). There are several ways to implement
this functionality, including:<itemizedlist>
|
||||
<listitem>
|
||||
<para>Install a cloud-init RPM, which is a port of the
|
||||
Ubuntu <link
|
||||
xlink:href="https://launchpad.net/cloud-init"
|
||||
>cloud-init</link> package. This is the recommended
|
||||
approach.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>Modify <filename>/etc/rc.local</filename> to fetch
|
||||
desired information from the metadata service, as
|
||||
described below.</para>
|
||||
</listitem>
|
||||
</itemizedlist></para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Using cloud-init to fetch the public key</title>
|
||||
<para>The cloud-init package will automatically fetch the public key from the metadata
|
||||
server and place the key in an account. You can install cloud-init inside the CentOS
|
||||
guest by adding the EPEL
|
||||
repo:<screen><prompt>#</prompt> <userinput>rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</userinput>
|
||||
<prompt>#</prompt> <userinput>yum install cloud-init</userinput></screen></para>
|
||||
<para>The account varies by distribution. On Ubuntu-based virtual machines, the
|
||||
account is called "ubuntu". On Fedora-based virtual machines, the account is called
|
||||
"ec2-user".</para>
|
||||
<para>You can change the name of the account used by cloud-init by editing the
|
||||
<filename>/etc/cloud/cloud.cfg</filename> file and adding a line with a
|
||||
different user. For example, to configure cloud-init to put the key in an account
|
||||
named "admin", edit the config file so it has the
|
||||
line:<programlisting>user: admin</programlisting></para>
|
||||
</simplesect>
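The edit itself is a one-line change; a sketch against a local copy of the file (the contents here are illustrative — on the image you would edit `/etc/cloud/cloud.cfg` itself):

```shell
# Sketch: point cloud-init at an "admin" account by rewriting the user: line.
printf 'user: ec2-user\nssh_pwauth: 0\n' > cloud.cfg
sed -i 's/^user: .*/user: admin/' cloud.cfg
grep '^user:' cloud.cfg
```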
|
||||
<simplesect>
|
||||
<title>Writing a script to fetch the public key (if no cloud-init)</title>
|
||||
<para>If you are not able to install the cloud-init package in your image, you can fetch the
ssh public key and add it to the root account by editing the
<filename>/etc/rc.local</filename> file and adding the following lines before the line
"<literal>touch /var/lock/subsys/local</literal>":</para>
|
||||
<programlisting>if [ ! -d /root/.ssh ]; then
|
||||
mkdir -p /root/.ssh
|
||||
chmod 700 /root/.ssh
|
||||
fi
|
||||
|
||||
# Fetch public key using HTTP
|
||||
ATTEMPTS=30
|
||||
FAILED=0
|
||||
while [ ! -f /root/.ssh/authorized_keys ]; do
|
||||
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
|
||||
if [ \$? -eq 0 ]; then
|
||||
cat /tmp/metadata-key >> /root/.ssh/authorized_keys
|
||||
chmod 0600 /root/.ssh/authorized_keys
|
||||
restorecon /root/.ssh/authorized_keys
|
||||
rm -f /tmp/metadata-key
|
||||
echo "Successfully retrieved public key from instance metadata"
|
||||
echo "*****************"
|
||||
echo "AUTHORIZED KEYS"
|
||||
echo "*****************"
|
||||
cat /root/.ssh/authorized_keys
|
||||
echo "*****************"
|
||||
done
|
||||
|
||||
</programlisting>
|
||||
<note>
|
||||
<para>Some VNC clients replace : (colon) with ; (semicolon) and _ (underscore) with
|
||||
- (hyphen). Make sure it's http: not http; and authorized_keys not
|
||||
authorized-keys.</para>
|
||||
</note>
|
||||
<note>
|
||||
<para>The above script only retrieves the ssh public key from the metadata server.
|
||||
It does not retrieve <emphasis role="italic">user data</emphasis>, which is
|
||||
optional data that can be passed by the user when requesting a new instance.
|
||||
User data is often used for running a custom script when an instance comes
|
||||
up.</para>
|
||||
<para>As the OpenStack metadata service is compatible with version 2009-04-04 of the
|
||||
Amazon EC2 metadata service, consult the Amazon EC2 documentation on <link
|
||||
xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
|
||||
>Using Instance Metadata</link> for details on how to retrieve user
|
||||
data.</para>
|
||||
</note>
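User data lives at its own fixed metadata URL; a hedged sketch of what a fetch would look like (here we only build the URL, since 169.254.169.254 is reachable only from inside a running instance):

```shell
# The metadata service serves user data at a fixed URL; from inside an
# instance you would fetch it with curl, e.g. curl "$userdata_url".
base="http://169.254.169.254/2009-04-04"
userdata_url="${base}/user-data"
echo "$userdata_url"
```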
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Configure console</title>
|
||||
<para>In order for <command>nova console-log</command> to work properly on CentOS 6.x
guests, you may need to add the following lines to
<filename>/boot/grub/menu.lst</filename>:<programlisting>serial --unit=0 --speed=115200
terminal --timeout=10 console serial
# Edit the kernel line to add the console entries
kernel <replaceable>...</replaceable> console=tty0 console=ttyS0,115200n8</programlisting></para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Shut down the instance</title>
|
||||
<para>From inside the instance, as
|
||||
root:<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen></para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Clean up (e.g., remove MAC address details)</title>
|
||||
<para>The operating system records the MAC address of the virtual ethernet card in locations
such as <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> and
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename> during the installation
process. However, each time the image boots up, the virtual ethernet card will have a
different MAC address, so this information must be deleted from these files.</para>
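A sketch of the manual equivalent of that cleanup, shown against local copies of the files (the real paths live inside the guest image):

```shell
# Set up illustrative copies of the two files that record the MAC address.
mkdir -p etc/sysconfig/network-scripts etc/udev/rules.d
printf 'DEVICE=eth0\nHWADDR=52:54:00:12:34:56\nONBOOT=yes\n' \
    > etc/sysconfig/network-scripts/ifcfg-eth0
echo 'SUBSYSTEM=="net", ATTR{address}=="52:54:00:12:34:56", NAME="eth0"' \
    > etc/udev/rules.d/70-persistent-net.rules
# Drop the recorded MAC address and truncate the persistent-net rules.
sed -i '/^HWADDR/d' etc/sysconfig/network-scripts/ifcfg-eth0
: > etc/udev/rules.d/70-persistent-net.rules
cat etc/sysconfig/network-scripts/ifcfg-eth0
```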
|
||||
<para>There is a utility called <command>virt-sysprep</command> that performs various
|
||||
cleanup tasks such as removing the MAC address references. It will clean up a virtual
|
||||
machine image in
|
||||
place:<screen><prompt>#</prompt> <userinput>virt-sysprep -d centos-6.4</userinput></screen></para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Undefine the libvirt domain</title>
|
||||
<para>Now that the image is ready to be uploaded to the Image service,
we no longer need to have this virtual machine image managed by
libvirt. Use the <command>virsh undefine
<replaceable>vm-image</replaceable></command> command to inform
libvirt.<screen><prompt>#</prompt> <userinput>virsh undefine centos-6.4</userinput></screen></para>
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>Image is complete</title>
|
||||
<para>The underlying image file you created with <command>qemu-img create</command> (e.g.
|
||||
<filename>/tmp/centos-6.4.qcow2</filename>) is now ready for uploading to the OpenStack
|
||||
Image service. </para>
|
||||
</simplesect>
|
||||
</section>
|
||||
doc/src/docbkx/openstack-image/ch_converting.xml
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_converting">
|
||||
<title>Converting between image formats</title>
|
||||
<para>Converting images from one format to another is generally straightforward.</para>
|
||||
<simplesect>
|
||||
<title>qemu-img convert: raw, qcow2, VDI, VMDK</title>
|
||||
<para>The <command>qemu-img convert</command> command can do conversion between multiple
|
||||
formats, including raw, qcow2, VDI (VirtualBox), VMDK (VMWare) and VHD (Hyper-V).<table
|
||||
frame="all">
|
||||
<title>qemu-img format strings</title>
|
||||
<tgroup cols="2">
|
||||
<colspec colname="c1" colnum="1" colwidth="1.0*"/>
|
||||
<colspec colname="c2" colnum="2" colwidth="1.0*"/>
|
||||
<thead>
|
||||
<row>
|
||||
<entry>Image format</entry>
|
||||
<entry>Argument to qemu-img</entry>
|
||||
</row>
|
||||
</thead>
|
||||
<tbody>
|
||||
<row>
|
||||
<entry>raw</entry>
|
||||
<entry><literal>raw</literal></entry>
|
||||
</row>
|
||||
<row>
|
||||
<entry>qcow2</entry>
|
||||
<entry><literal>qcow2</literal></entry>
|
||||
</row>
|
||||
<row>
|
||||
<entry>VDI (VirtualBox)</entry>
|
||||
<entry><literal>vdi</literal></entry>
|
||||
</row>
|
||||
<row>
|
||||
<entry>VMDK (VMWare)</entry>
|
||||
<entry><literal>vmdk</literal></entry>
|
||||
</row>
|
||||
<row>
|
||||
<entry>VHD (Hyper-V)</entry>
|
||||
<entry><literal>vpc</literal></entry>
|
||||
</row>
|
||||
</tbody>
|
||||
</tgroup>
|
||||
</table></para>
|
||||
<para>This example will convert a raw image file named centos64.dsk to a qcow2 image file.</para>
|
||||
<para>
|
||||
<screen><prompt>$</prompt> <userinput>qemu-img convert -f raw -O qcow2 centos64.dsk centos64.qcow2</userinput></screen>
|
||||
</para>
|
||||
<para>To convert from vmdk to raw, you would do:
|
||||
<screen><prompt>$</prompt><userinput> qemu-img convert -f vmdk -O raw centos64.vmdk centos64.img </userinput></screen></para>
|
||||
<para>
|
||||
<note>
|
||||
<para>The <literal>-f <replaceable>format</replaceable></literal> flag is optional.
|
||||
If omitted, qemu-img will try to infer the image format.</para>
|
||||
</note>
|
||||
</para>
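The format strings in the table above can be derived from common file extensions; a small sketch (the helper name `fmt_for` is ours, not part of qemu-img):

```shell
# Map a disk image file extension to the matching qemu-img format string
# from the table above (note that .vhd files use the "vpc" format name).
fmt_for() {
  case "${1##*.}" in
    img|raw|dsk) echo raw ;;
    qcow2)       echo qcow2 ;;
    vdi)         echo vdi ;;
    vmdk)        echo vmdk ;;
    vhd)         echo vpc ;;
    *)           echo unknown ;;
  esac
}
fmt_for centos64.vhd
fmt_for fedora18.vmdk
```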
|
||||
</simplesect>
|
||||
<simplesect>
|
||||
<title>VBoxManage: VDI (VirtualBox) to raw</title>
|
||||
<para>If you've created a VDI image using VirtualBox, you can convert it to raw format using
|
||||
the <command>VBoxManage</command> command-line tool that ships with VirtualBox. On Mac
|
||||
OS X, VirtualBox stores images by default in the <filename>~/VirtualBox VMs/</filename>
|
||||
directory. The following example creates a raw image in the current directory from a
|
||||
VirtualBox VDI
|
||||
image.<screen><prompt>$</prompt> <userinput>VBoxManage clonehd ~/VirtualBox\ VMs/fedora18.vdi fedora18.img --format raw</userinput></screen></para>
|
||||
</simplesect>
|
||||
</chapter>
|
||||
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_creating_images_automatically">
|
||||
<title>Tool support for image creation</title>
|
||||
<?dbhtml stop-chunking?>
|
||||
<para>There are several tools that are designed to automate image creation.</para>
|
||||
<section xml:id="oz">
|
||||
<title>Oz</title>
|
||||
<para><link xlink:href="https://github.com/clalancette/oz/wiki">Oz</link> is a command-line
tool that automates the process of creating a virtual machine image file. Oz is a Python
app that interacts with KVM to step through the process of installing a virtual machine.
It uses a predefined set of kickstart files (Red Hat-based systems) and preseed files
(Debian-based systems) for operating systems that it supports, and it can also be used
to create Microsoft Windows images. On Fedora, install Oz with yum:<screen><prompt>#</prompt> <userinput>yum install oz</userinput></screen><note>
|
||||
<para>As of this writing, there are no Oz packages for Ubuntu, so you will need to
|
||||
either install from source or build your own .deb file.</para>
|
||||
</note></para>
|
||||
<para>A full treatment of Oz is beyond the scope of this document, but we will provide an
|
||||
example. You can find additional examples of Oz template files on github at <link xlink:href="https://github.com/rackerjoe/oz-image-build/tree/master/templates">rackerjoe/oz-image-build/templates</link>. Here's how you would create a CentOS
|
||||
6.4 image with Oz.</para>
|
||||
<para>Create a template file (we'll call it <filename>centos64.tdl</filename>) with the
|
||||
following contents. The only entry you will need to change is the
|
||||
<literal><rootpw></literal>
|
||||
contents.<programlisting language="xml"><template>
|
||||
<name>centos64</name>
|
||||
<os>
|
||||
<name>CentOS-6</name>
|
||||
<version>4</version>
|
||||
<arch>x86_64</arch>
|
||||
<install type='iso'>
|
||||
<iso>http://mirror.rackspace.com/CentOS/6/isos/x86_64/CentOS-6.4-x86_64-bin-DVD1.iso</iso>
|
||||
</install>
|
||||
<rootpw>CHANGE THIS TO YOUR ROOT PASSWORD</rootpw>
|
||||
</os>
|
||||
<description>CentOS 6.4 x86_64</description>
|
||||
<repositories>
|
||||
<repository name='epel-6'>
|
||||
<url>http://download.fedoraproject.org/pub/epel/6/$basearch</url>
|
||||
<signed>no</signed>
|
||||
</repository>
|
||||
</repositories>
|
||||
<packages>
|
||||
<package name='epel-release'/>
|
||||
<package name='cloud-utils'/>
|
||||
<package name='cloud-init'/>
|
||||
</packages>
|
||||
<commands>
|
||||
<command name='update'>
|
||||
yum -y update
|
||||
yum clean all
|
||||
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
|
||||
echo -n > /etc/udev/rules.d/70-persistent-net.rules
|
||||
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
|
||||
</command>
|
||||
</commands>
|
||||
</template></programlisting>
|
||||
</para>
|
||||
<para>This Oz template specifies where to download the CentOS 6.4 install ISO. Oz will use
|
||||
the version information to identify which kickstart file to use. In this case, it will
|
||||
be <link
|
||||
xlink:href="https://github.com/clalancette/oz/blob/master/oz/auto/rhel-6-jeos.ks"
|
||||
>rhel-6-jeos.ks</link>. It adds EPEL as a repository and installs the
|
||||
<literal>epel-release</literal>, <literal>cloud-utils</literal>, and
|
||||
<literal>cloud-init</literal> packages, as specified in the
|
||||
<literal>packages</literal> section of the file. </para>
|
||||
<para>After Oz does the initial OS install using the kickstart file, it will customize the
|
||||
image by doing an update and removing any reference to the eth0 device that libvirt
created while Oz was doing the customizing, as specified in the
|
||||
<literal>command</literal> section of the XML file.</para>
|
||||
<para>To run this, do the following as root:</para>
|
||||
<para><screen><prompt>#</prompt> <userinput>oz-install -d3 -u centos64.tdl -w centos64-libvirt.xml</userinput></screen><itemizedlist>
|
||||
<listitem>
|
||||
<para>The <literal>-d3</literal> flag tells Oz to show status information as it
|
||||
runs.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>The <literal>-u</literal> tells Oz to do the customization (install extra
|
||||
packages, run the commands) once it does the initial install.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>The <literal>-w <filename></literal> flag tells Oz what filename to use
|
||||
to write out a libvirt XML file (otherwise it will default to something like
|
||||
<filename>centos64Apr_03_2013-12:39:42</filename>).</para>
|
||||
</listitem>
|
||||
</itemizedlist>If you leave out the <literal>-u</literal> flag, or you want to edit the
|
||||
file to do additional customizations, you can use the <command>oz-customize</command>
|
||||
command, using the libvirt XML file that <command>oz-install</command> creates. For
|
||||
example:
|
||||
<screen><prompt>#</prompt> <userinput>oz-customize -d3 centos64.tdl centos64-libvirt.xml</userinput></screen>
|
||||
Oz will invoke libvirt to boot the image inside of KVM, then Oz will ssh into the
|
||||
instance and perform the customizations. </para>
|
||||
</section>
|
||||
<section xml:id="vmbuilder">
|
||||
<title>vmbuilder</title>
|
||||
<para><link xlink:href="https://launchpad.net/vmbuilder">vmbuilder</link> (Virtual Machine
|
||||
Builder) is another command-line tool that can be used to create virtual machine images
|
||||
for different hypervisors. The version of vmbuilder that ships with Ubuntu can only
|
||||
create Ubuntu virtual machine guests. The version of vmbuilder that ships with Debian
|
||||
can create Ubuntu and Debian virtual machine guests.</para>
|
||||
<para>The <link
|
||||
xlink:href="https://help.ubuntu.com/12.04/serverguide/jeos-and-vmbuilder.html"
|
||||
>Ubuntu 12.04 server guide</link> has documentation on how to use vmbuilder to
|
||||
create an Ubuntu image.</para>
|
||||
</section>
|
||||
<section xml:id="boxgrinder">
|
||||
<title>BoxGrinder</title>
|
||||
<para>
|
||||
<link xlink:href="http://boxgrinder.org/">BoxGrinder</link> is another tool for
|
||||
creating virtual machine images, which it calls appliances. BoxGrinder can create
|
||||
Fedora, Red Hat Enterprise Linux, or CentOS images. BoxGrinder is currently only
|
||||
supported on Fedora. </para>
|
||||
</section>
|
||||
<section xml:id="veewee">
|
||||
<title>VeeWee</title>
|
||||
<para><link
|
||||
xlink:href="https://github.com/jedi4ever/veewee">
|
||||
VeeWee</link> is often used to build <link
|
||||
xlink:href="http://vagrantup.com">Vagrant</link>
|
||||
boxes, but it can also be used to build KVM
|
||||
images.</para>
|
||||
</section>
|
||||
<section xml:id="imagefactory">
|
||||
<title>imagefactory</title>
|
||||
<para><link xlink:href="http://imgfac.org/">imagefactory</link> is a newer tool designed to
|
||||
automate the building, converting, and uploading images to different cloud providers. It
|
||||
uses Oz as its back-end and includes support for OpenStack-based clouds.</para>
|
||||
</section>
|
||||
|
||||
|
||||
</chapter>
|
||||
doc/src/docbkx/openstack-image/ch_creating_images_manually.xml
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_creating_images_manually">
|
||||
<title>Creating images manually</title>
|
||||
<para>To create a new image, you will need the installation CD or DVD ISO file for the guest
|
||||
operating system. You'll also need access to a virtualization tool. You can use KVM for
|
||||
this. Or, if you have a GUI desktop virtualization tool (e.g., VMWare Fusion, VirtualBox),
|
||||
you can use that instead and just convert the file to raw once you're done.</para>
|
||||
<para>When you create a new virtual machine image, you will need to connect to the graphical
|
||||
console of the hypervisor, which acts as the virtual machine's display and allows you to
|
||||
interact with the guest operating system's installer using your keyboard and mouse. KVM can
|
||||
expose the graphical console using the <link
|
||||
xlink:href="https://en.wikipedia.org/wiki/Virtual_Network_Computing">VNC</link> (Virtual
|
||||
Network Computing) protocol or the newer <link xlink:href="http://spice-space.org"
|
||||
>SPICE</link> protocol. We'll use the VNC protocol here, since you're more likely to be
|
||||
able to find a VNC client that works on your local desktop.</para>
|
||||
<section xml:id="virt-manager">
|
||||
<title>Using the virt-manager X11 GUI</title>
|
||||
<para>If you plan to create a virtual machine image on a machine that can run X11 applications,
|
||||
the simplest way to do so is to use the <command>virt-manager</command> GUI, which is
|
||||
installable as the <literal>virt-manager</literal> package on both Fedora-based and
|
||||
Debian-based systems. This GUI has an embedded VNC client in it that will let you view and
|
||||
interact with the guest's graphical console.</para>
|
||||
<para> If you are building the image on a headless server, and you have an X server on your
|
||||
local machine, you can launch <command>virt-manager</command> using ssh X11 forwarding to
|
||||
access the GUI. Since virt-manager interacts directly with libvirt, you typically need to be
|
||||
root to access it. If you can ssh directly in as root (or with a user that has permissions
|
||||
to interact with libvirt),
|
||||
do:<screen><prompt>$</prompt> <userinput>ssh -X root@server virt-manager</userinput></screen></para>
|
||||
<para>If the account you use to ssh into your server does not have permissions to run libvirt,
|
||||
but has sudo privileges, do:<screen><prompt>$</prompt> <userinput>ssh -X root@server</userinput>
|
||||
<prompt>$</prompt> <userinput>sudo virt-manager</userinput> </screen><note>
|
||||
<para>The <literal>-X</literal> flag passed to ssh will enable X11 forwarding over ssh.
|
||||
If this does not work, try replacing it with the <literal>-Y</literal> flag.</para>
|
||||
</note></para>
|
||||
<para>Click the "New" button at the top-left and step through the instructions. <mediaobject>
|
||||
<imageobject>
|
||||
<imagedata fileref="figures/virt-manager-new.png" format="PNG" contentwidth="6in"/>
|
||||
</imageobject>
|
||||
</mediaobject>You will be shown a series of dialog boxes that will allow you to specify
|
||||
information about the virtual machine.</para>
|
||||
</section>
|
||||
<section xml:id="virt-install">
|
||||
<title>Using virt-install and connecting using a local VNC client</title>
|
||||
<para>If you do not wish to use virt-manager (e.g., you don't want to install the
dependencies on your server, you don't have an X server running locally, or X11
forwarding over SSH isn't working), you can use the <command>virt-install</command> tool
|
||||
to boot the virtual machine through libvirt and connect to the graphical console from a
|
||||
VNC client installed on your local machine.</para>
|
||||
<para>Since VNC is a standard protocol, there are multiple clients available that implement
|
||||
the VNC spec, including <link
|
||||
xlink:href="http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Welcome_to_TigerVNC"
|
||||
>TigerVNC</link> (multiple platforms), <link xlink:href="http://tightvnc.com/"
|
||||
>TightVNC</link> (multiple platforms), <link xlink:href="http://realvnc.com/"
|
||||
>RealVNC</link> (multiple platforms), <link
|
||||
xlink:href="http://sourceforge.net/projects/chicken/">Chicken</link> (Mac OS X),
|
||||
<link xlink:href="http://userbase.kde.org/Krdc">Krdc</link> (KDE), and <link
|
||||
xlink:href="http://projects.gnome.org/vinagre/">Vinagre</link> (GNOME).</para>
|
||||
<para>Here is an example of using the <command>qemu-img</command> command to create an empty
image file, and the <command>virt-install</command> command to start up a virtual machine using
that image file. As root:</para>
|
||||
<para>
|
||||
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /data/centos-6.4.qcow2 10G</userinput>
|
||||
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name centos-6.4 --ram 1024 \
|
||||
--cdrom=/data/CentOS-6.4-x86_64-netinstall.iso \
|
||||
--disk path=/data/centos-6.4.qcow2,size=10,format=qcow2 \
|
||||
--network network=default \
|
||||
--graphics vnc,listen=0.0.0.0 --noautoconsole \
|
||||
--os-type=linux --os-variant=rhel6</userinput>
|
||||
<computeroutput>
|
||||
Starting install...
|
||||
Creating domain... | 0 B 00:00
|
||||
Domain installation still in progress. You can reconnect to
|
||||
the console to complete the installation process.</computeroutput></screen>
|
||||
</para>
|
||||
<para>This uses the KVM hypervisor to start up a virtual machine with the libvirt name of
<literal>centos-6.4</literal>, with 1024 MB of RAM, with a virtual CD-ROM drive
associated with the <filename>/data/CentOS-6.4-x86_64-netinstall.iso</filename> file,
and a local hard disk which is stored in the host at
<filename>/data/centos-6.4.qcow2</filename>, which is 10 GB in size, in qcow2 format. It
configures networking to use libvirt's default network. There is a VNC server
listening on all interfaces, and libvirt will not attempt to launch a VNC client
automatically or display the text console (<literal>--noautoconsole</literal>).
Finally, libvirt will attempt to optimize the configuration for a Linux guest running a
RHEL 6.x distribution.<note>
<para>When using the libvirt <literal>default</literal> network, libvirt
connects the virtual machine's interface to a bridge called
<literal>virbr0</literal>. A dnsmasq process managed by libvirt
hands out an IP address on the 192.168.122.0/24 subnet, and libvirt has
iptables rules for doing NAT for IP addresses on this subnet.</para>
</note></para>
<para>Run <command>virt-install --os-variant list</command> to see the allowed
<literal>--os-variant</literal> options.</para>
<para>Use the <command>virsh vncdisplay <replaceable>vm-name</replaceable></command> command
to get the VNC port
number.<screen><prompt>#</prompt> <userinput>virsh vncdisplay centos-6.4</userinput>
<computeroutput>:1</computeroutput></screen></para>
<para>In the example above, the guest <literal>centos-6.4</literal> uses VNC display
<literal>:1</literal>, which corresponds to TCP port <literal>5901</literal>. You
can use a VNC client running on your local machine to connect to display :1 on
the remote machine and step through the installation process.</para>
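The display-to-port mapping above is simple arithmetic (port = 5900 + display number). A minimal sketch of the calculation; the remote host name in the comment is hypothetical, and <command>vncviewer</command> is just one common client name:

```shell
# VNC display :N listens on TCP port 5900 + N.
display=":1"                    # e.g., the output of `virsh vncdisplay centos-6.4`
port=$((5900 + ${display#:}))   # strip the leading colon, add the base port
echo "$port"                    # prints 5901
# With a client such as vncviewer (hypothetical host name):
# vncviewer remote-host.example.com${display}
```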
</section>

<xi:include href="centos-example.xml"/>

<xi:include href="ubuntu-example.xml"/>
</chapter>
doc/src/docbkx/openstack-image/ch_introduction.xml (new file)
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_introduction">
<title>Introduction</title>
<para>An OpenStack Compute cloud is not very useful unless you have virtual machine images
(which some people call "virtual appliances"). This guide describes how to obtain, create,
and modify virtual machine images that are compatible with OpenStack.</para>
<para>To keep things brief, we'll sometimes use the term "image" instead of "virtual machine
image".</para>
<simplesect>
<title>What is a virtual machine image?</title>
<para>A virtual machine image is a single file which contains a virtual disk that has a
bootable operating system installed on it.</para>
<para>Virtual machine images come in different formats, some
of which are described below. In a later chapter, we'll
describe how to convert between formats.</para>
</simplesect>
<simplesect>
<title>Raw</title>
<para>The "raw" image format is the simplest one, and is
natively supported by both the KVM and Xen hypervisors. You
can think of a raw image as being the bit-equivalent of a
block device file, created as if somebody had copied, say,
<filename>/dev/sda</filename> to a file using the
<command>dd</command> command. <note>
<para>We don't recommend creating raw images by dd'ing
block device files; we discuss how to create raw
images later.</para>
</note></para>
</simplesect>
<simplesect>
<title>qcow2</title>
<para>The <link xlink:href="http://en.wikibooks.org/wiki/QEMU/Images">qcow2</link> (QEMU
copy-on-write version 2) format is commonly used with the KVM hypervisor. It has some
additional features over the raw format, such as:<itemizedlist>
<listitem>
<para>Using sparse representation, so the image size is smaller</para>
</listitem>
<listitem>
<para>Support for snapshots</para>
</listitem>
</itemizedlist></para>
<para>Because qcow2 is sparse, it's often faster to convert a raw image to qcow2 and upload
it than to upload the raw file.</para>
<para>
<note>
<para>Because raw images don't support snapshots, OpenStack Compute will
automatically convert raw image files to qcow2 as needed.</para>
</note>
</para>
</simplesect>
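The raw-to-qcow2 conversion mentioned above is done with <command>qemu-img convert</command>. A minimal sketch with hypothetical file names; it only echoes the command unless <command>qemu-img</command> and the input file are actually present:

```shell
raw=centos-6.4.img       # hypothetical input file
qcow=centos-6.4.qcow2    # hypothetical output file
cmd="qemu-img convert -f raw -O qcow2 $raw $qcow"
echo "$cmd"              # show the conversion command
# Run it only when the tool and the input image exist:
if command -v qemu-img >/dev/null 2>&1 && [ -f "$raw" ]; then
    $cmd
fi
```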
<simplesect>
<title>AMI/AKI/ARI</title>
<para>The <link
xlink:href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html"
>AMI/AKI/ARI</link> format was the initial image
format supported by Amazon EC2. The image consists of
three files:<variablelist>
<varlistentry>
<term>AMI (Amazon Machine Image)</term>
<listitem>
<para>This is a virtual machine image in raw
format, as described above.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>AKI (Amazon Kernel Image)</term>
<listitem>
<para>A kernel file that the hypervisor will
load initially to boot the image. For a
Linux machine, this would be a
<emphasis>vmlinuz</emphasis> file.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>ARI (Amazon Ramdisk Image)</term>
<listitem>
<para>An optional ramdisk file mounted at boot
time. For a Linux machine, this would be
an <emphasis>initrd</emphasis>
file.</para>
</listitem>
</varlistentry>
</variablelist></para>
</simplesect>
<simplesect>
<title>UEC tarball</title>
<para>A UEC (Ubuntu Enterprise Cloud) tarball is a gzipped tarfile that contains an AMI
file, AKI file, and ARI file.<note>
<para>Ubuntu Enterprise Cloud refers to a discontinued Eucalyptus-based Ubuntu cloud
solution that has been replaced by the OpenStack-based Ubuntu Cloud
Infrastructure.</para>
</note></para>
</simplesect>
<simplesect>
<title>VMDK</title>
<para>VMware's ESXi hypervisor uses the <link
xlink:href="http://www.vmware.com/technical-resources/interfaces/vmdk.html"
>VMDK</link> (Virtual Machine Disk) format for images.</para>
</simplesect>
<simplesect>
<title>VDI</title>
<para>VirtualBox uses the <link
xlink:href="https://forums.virtualbox.org/viewtopic.php?t=8046">VDI</link> (Virtual
Disk Image) format for image files. None of the OpenStack Compute hypervisors support
VDI directly, so you will need to convert these files to a different format to use them
with OpenStack.</para>
</simplesect>
<simplesect>
<title>VHD</title>
<para>Microsoft Hyper-V uses the VHD (Virtual Hard Disk) format for images.</para>
</simplesect>
<simplesect>
<title>VHDX</title>
<para>The version of Hyper-V that ships with Microsoft Windows Server 2012 uses the newer <link
xlink:href="http://technet.microsoft.com/en-us/library/hh831446.aspx">VHDX</link>
format, which has some additional features over VHD such as support for larger disk
sizes and protection against data corruption during power failures.</para>
</simplesect>
<simplesect>
<title>OVF</title>
<para><link xlink:href="http://www.dmtf.org/standards/ovf">OVF</link> (Open Virtualization
Format) is a packaging format for virtual machines, defined by the Distributed
Management Task Force (DMTF) standards group. An OVF package contains one or more image
files, a .ovf XML metadata file that contains information about the virtual machine, and
possibly other files as well.</para>
<para>An OVF package can be distributed in different ways. For example, it could be
distributed as a set of discrete files, or as a tar archive file with an .ova (open
virtual appliance/application) extension.</para>
<para>OpenStack Compute does not currently have support for OVF packages, so you will need
to extract the image file(s) from an OVF package if you wish to use it with
OpenStack.</para>
<simplesect>
<title>ISO</title>
<para>The <link
xlink:href="http://www.ecma-international.org/publications/standards/Ecma-119.htm"
>ISO</link> format is a disk image formatted with the read-only ISO 9660 (also known
as ECMA-119) filesystem commonly used for CDs and DVDs. While we don't normally think of
ISO as a virtual machine image format, since ISOs contain bootable filesystems with an
installed operating system, you can treat them the same way you treat other virtual machine
image files.</para>
</simplesect>
</chapter>
doc/src/docbkx/openstack-image/ch_modifying_images.xml (new file)
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_modifying_images">
<title>Modifying images</title>
<?dbhtml stop-chunking?>
<para>Once you have obtained a virtual machine image, you may want to make some changes to it
before uploading it to the OpenStack Image service. Here we describe several tools available
that allow you to modify images.<warning>
<para>Do not attempt to use these tools to modify an image that is attached to a running
virtual machine. These tools are designed to modify only images that are not
currently running.</para>
</warning></para>
<section xml:id="guestfish">
<title>guestfish</title>
<para>The <command>guestfish</command> program is a tool from the <link
xlink:href="http://libguestfs.org/">libguestfs</link> project that allows you to
modify the files inside of a virtual machine image.</para>
<para>Note that guestfish doesn't mount the image directly into the local filesystem.
Instead, it provides you with a shell interface that allows you to view, edit, and
delete files. Many of the guestfish commands (e.g., <command>touch</command>,
<command>chmod</command>, <command>rm</command>) are similar to traditional bash
commands.</para>
<simplesect>
<title>Example guestfish session</title>
<para>We often need to modify a virtual machine image to remove any traces of the MAC
address that was assigned to the virtual network interface card when the image was
first created, since the MAC address will be different when it boots the next time.
In this example, we show how we can use guestfish to remove references to the old
MAC address by deleting the
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename> file and removing
the <literal>HWADDR</literal> line from the
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> file.</para>
<para>Assume we have a CentOS qcow2 image called
<filename>centos63_desktop.img</filename>. We would open the image in
read-write mode by doing, as root:
<screen><prompt>#</prompt> <userinput>guestfish --rw -a centos63_desktop.img</userinput>
<computeroutput>
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.

Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell

><fs></computeroutput></screen>This
starts a guestfish session. Note that the guestfish prompt looks like a fish:
<literal>> <fs></literal>.</para>
<para>We must first use the <command>run</command> command at the guestfish prompt
before we can do anything else. This will launch a virtual machine, which will be
used to perform all of the file
manipulations.<screen><prompt>><fs></prompt> <userinput>run</userinput></screen>
We can now view the filesystems in the image using the
<command>list-filesystems</command>
command:<screen><prompt>><fs></prompt> <userinput>list-filesystems</userinput>
<computeroutput>/dev/vda1: ext4
/dev/vg_centosbase/lv_root: ext4
/dev/vg_centosbase/lv_swap: swap</computeroutput></screen>We
need to mount the logical volume that contains the root partition:
<screen><prompt>><fs></prompt> <userinput>mount /dev/vg_centosbase/lv_root /</userinput></screen></para>
<para>Next, we want to delete a file. We can use the <command>rm</command> guestfish
command, which works the same way it does in a traditional shell.</para>
<para><screen><prompt>><fs></prompt> <userinput>rm /etc/udev/rules.d/70-persistent-net.rules</userinput></screen>We
want to edit the <filename>ifcfg-eth0</filename> file to remove the
<literal>HWADDR</literal> line. The <command>edit</command> command will copy
the file to the host, invoke your editor, and then copy the file back.
<screen><prompt>><fs></prompt> <userinput>edit /etc/sysconfig/network-scripts/ifcfg-eth0</userinput></screen></para>
<para>Let's say we want to modify this image to load the 8021q kernel module at boot time. We'll
need to create an executable script in the
<filename>/etc/sysconfig/modules/</filename> directory. We can use the
<command>touch</command> guestfish command to create an empty file, use the
<command>edit</command> command to edit it, and use the <command>chmod</command>
command to make it
executable.<screen><prompt>><fs></prompt> <userinput>touch /etc/sysconfig/modules/8021q.modules</userinput>
<prompt>><fs></prompt> <userinput>edit /etc/sysconfig/modules/8021q.modules</userinput></screen>
We add the following line to the file and save
it:<programlisting>modprobe 8021q</programlisting> Then we make it executable:
<screen><prompt>><fs></prompt> <userinput>chmod 0755 /etc/sysconfig/modules/8021q.modules</userinput></screen></para>
<para>We're done, so we can exit using the <command>exit</command>
command:<screen><prompt>><fs></prompt> <userinput>exit</userinput></screen></para>
</simplesect>
<simplesect>
<title>Going further with guestfish</title>
<para>There is an enormous amount of functionality in guestfish and a full treatment is
beyond the scope of this document. Instead, we recommend that you read the <link
xlink:href="http://libguestfs.org/guestfs-recipes.1.html">guestfs-recipes</link>
documentation page for a sense of what is possible with these tools.</para>
</simplesect>
</section>
<section xml:id="guestmount">
<title>guestmount</title>
<para>For some types of changes, you may find it easier to mount the image's filesystem
directly on the host. The <command>guestmount</command> program, also from the
libguestfs project, allows you to do so.</para>
<para>For example, to mount the root partition from our
<filename>centos63_desktop.qcow2</filename> image to <filename>/mnt</filename>, we
can do:</para>
<para>
<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -m /dev/vg_centosbase/lv_root --rw /mnt</userinput></screen>
</para>
<para>If we didn't know in advance what the mountpoint is in the guest, we could use the
<literal>-i</literal> (inspect) flag to tell guestmount to automatically determine
what mount point to
use:<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -i --rw /mnt</userinput></screen>Once
mounted, we could do things like list the installed packages using
rpm:<screen><prompt>#</prompt> <userinput>rpm -qa --dbpath /mnt/var/lib/rpm</userinput></screen>
Once done, we
unmount:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput></screen></para>
</section>
<section xml:id="virt-tools">
<title>virt-* tools</title>
<para>The <link xlink:href="http://libguestfs.org/">libguestfs</link> project has a number
of other useful tools, including:<itemizedlist>
<listitem>
<para><link xlink:href="http://libguestfs.org/virt-df.1.html">virt-df</link> for
displaying free space inside of an image.</para>
</listitem>
<listitem>
<para><link xlink:href="http://libguestfs.org/virt-resize.1.html"
>virt-resize</link> for resizing an image.</para>
</listitem>
<listitem>
<para><link xlink:href="http://libguestfs.org/virt-sysprep.1.html"
>virt-sysprep</link> for preparing an image for distribution (e.g.,
delete SSH host keys, remove MAC address info, remove user accounts).</para>
</listitem>
<listitem>
<para><link xlink:href="http://libguestfs.org/virt-sparsify.1.html"
>virt-sparsify</link> for making an image sparse.</para>
</listitem>
<listitem>
<para><link xlink:href="http://libguestfs.org/virt-v2v/">virt-p2v</link> for
converting a physical machine to an image that runs on KVM.</para>
</listitem>
<listitem>
<para><link xlink:href="http://libguestfs.org/virt-v2v/">virt-v2v</link> for
converting Xen and VMware images to KVM images.</para>
</listitem>
</itemizedlist></para>
</section>
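As a sketch of how these tools fit together: the cleanup the earlier guestfish session did by hand (removing the persistent-net udev rules and other per-instance state) can often be done in one step with <command>virt-sysprep</command>. The image name below is hypothetical, and the command is only echoed as a dry run unless the tool and the file are actually present:

```shell
image=centos63_desktop.img   # hypothetical image file
cmd="virt-sysprep -a $image"
echo "$cmd"                  # show what would be run
# Run it only when the tool and the image exist:
if command -v virt-sysprep >/dev/null 2>&1 && [ -f "$image" ]; then
    $cmd
fi
```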
<section xml:id="losetup-kpartx-nbd">
<title>Loop devices, kpartx, network block devices</title>
<para>If you don't have access to libguestfs, you can mount image file systems directly in
the host using loop devices, kpartx, and network block devices.<warning>
<para>Mounting untrusted guest images using the tools described in this section is a
security risk; always use libguestfs tools such as guestfish and guestmount if
you have access to them. See <link
xlink:href="https://www.berrange.com/posts/2013/02/20/a-reminder-why-you-should-never-mount-guest-disk-images-on-the-host-os/"
>A reminder why you should never mount guest disk images on the host
OS</link> by Daniel Berrangé for more details.</para>
</warning></para>
<simplesect>
<title>Mounting a raw image (without LVM)</title>
<para>If you have a raw virtual machine image that is not using LVM to manage its
partitions, first use the <command>losetup</command> command to find an unused loop
device.
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput></screen></para>
<para>In this example, <filename>/dev/loop0</filename> is free. Associate a loop device
with the raw
image:<screen><prompt>#</prompt> <userinput>losetup /dev/loop0 fedora17.img</userinput></screen></para>
<para>If the image only has a single partition, you can mount the loop device
directly:<screen><prompt>#</prompt> <userinput>mount /dev/loop0 /mnt</userinput></screen></para>
<para>If the image has multiple partitions, use <command>kpartx</command> to expose the
partitions as separate devices (e.g., <filename>/dev/mapper/loop0p1</filename>),
then mount the partition that corresponds to the root file
system:<screen><prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen></para>
<para>If the image has, say, three partitions (/boot, /, swap), there should be one new
device created per
partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/mapper/loop0p*</userinput><computeroutput>
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/mapper/loop0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/mapper/loop0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/mapper/loop0p3</computeroutput></screen>To
mount the second partition, as
root:<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/mapper/loop0p2 /mnt/image</userinput></screen>Once
you're done, to clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen></para>
</simplesect>
<simplesect>
<title>Mounting a raw image (with LVM)</title>
<para>If your partitions are managed with LVM, use losetup and kpartx as in the previous
example to expose the partitions to the
host:<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput>
<prompt>#</prompt> <userinput>losetup /dev/loop0 rhel62.img</userinput>
<prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen></para>
<para>Next, you need to use the <command>vgscan</command> command to identify the LVM
volume groups and then <command>vgchange</command> to expose the volumes as
devices:<screen><prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput> Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen></para>
<para>Clean up when you're
done:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen></para>
</simplesect>
<simplesect>
<title>Mounting a qcow2 image (without LVM)</title>
<para>You need the <literal>nbd</literal> (network block device) kernel module loaded to
mount qcow2 images. The following command loads it with support for 16 block devices,
which is fine for our purposes. As
root:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput></screen></para>
<para>Assuming the first block device (<filename>/dev/nbd0</filename>) is not currently
in use, we can expose the disk partitions using the <command>qemu-nbd</command> and
<command>partprobe</command> commands. As
root:<screen><prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput></screen></para>
<para>If the image has, say, three partitions (/boot, /, swap), there should be one new
device created per partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/nbd0*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 48 2012-03-05 15:32 /dev/nbd0
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/nbd0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/nbd0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/nbd0p3</computeroutput></screen><note>
<para>If the network block device you selected was already in use, the initial
<command>qemu-nbd</command> command will fail silently, and the
<filename>/dev/nbd0p{1,2,3}</filename> device files will not be
created.</para>
</note></para>
<para>If the image partitions are not managed with LVM, they can be mounted
directly:<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/nbd0p2 /mnt/image</userinput></screen></para>
<para>When you're done, clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen></para>
</simplesect>
<simplesect>
<title>Mounting a qcow2 image (with LVM)</title>
<para>If the image partitions are managed with LVM, after you use
<command>qemu-nbd</command> and <command>partprobe</command>, you must use
<command>vgscan</command> and <command>vgchange -ay</command> in order to expose
the LVM partitions as devices that can be
mounted:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput>
<prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput> Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen></para>
<para>When you're done, clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen></para>
</simplesect>
</section>
</chapter>
doc/src/docbkx/openstack-image/ch_obtaining_images.xml (new file)
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_obtaining_images">
<title>Obtaining images</title>
<?dbhtml stop-chunking?>
<para>The simplest way to obtain a virtual machine image that works with OpenStack is to
download one that someone else has already created.</para>
<section xml:id="cirros-images">
<title>CirrOS (test) images</title>
<para>CirrOS is a minimal Linux distribution that was designed for use as a test image on
clouds such as OpenStack Compute. You can download a CirrOS image in various formats
from the <link xlink:href="https://launchpad.net/cirros/+download">CirrOS Launchpad
download page</link>.</para>
<para>If your deployment uses QEMU or KVM, we recommend using the images in qcow2
format. The most recent 64-bit qcow2 image as of this writing is <link
xlink:href="https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img"
>cirros-0.3.0-x86_64-disk.img</link>.
<note>
<para>In a CirrOS image, the login account is <literal>cirros</literal>. The
password is <literal>cubswin:)</literal>.</para>
</note></para>
</section>
<section xml:id="ubuntu-images">
<title>Official Ubuntu images</title>
<para>Canonical maintains an <link xlink:href="http://cloud-images.ubuntu.com/">official
set of Ubuntu-based images</link>.</para>
<para>Images are arranged by Ubuntu release, and by image release date, with "current" being
the most recent. For example, the page that contains the most recently built image for
Ubuntu 12.04 "Precise Pangolin" is <link
xlink:href="http://cloud-images.ubuntu.com/precise/current/"
>http://cloud-images.ubuntu.com/precise/current/</link>. Scroll to the bottom of the
page for links to images that can be downloaded directly.</para>
<para>If your deployment uses QEMU or KVM, we recommend using the images in qcow2
format. The most recent version of the 64-bit qcow2 image for Ubuntu 12.04 is <link
xlink:href="http://uec-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img"
>precise-server-cloudimg-amd64-disk1.img</link>.<note>
<para>In an Ubuntu cloud image, the login account is
<literal>ubuntu</literal>.</para>
</note></para>
</section>
<section xml:id="suse-sles-images">
<title>Official openSUSE and SLES images</title>
<para>SUSE does not provide openSUSE or SUSE Linux Enterprise
Server (SLES) images for direct download. Instead, they
provide a web-based tool called <link
xlink:href="http://susestudio.com">SUSE Studio</link>
that you can use to build openSUSE and SLES images.</para>

<para>For example, Christian Berendt used SUSE Studio to create <link
xlink:href="http://susestudio.com/a/YRUrwO/testing-instance-for-openstack-opensuse-121"
>a test openSUSE 12.1 image</link>.</para>
</section>
<section xml:id="other-distros">
<title>Official images from other Linux distributions</title>
<para>As of this writing, we are not aware of any distributions other than Ubuntu and
openSUSE/SLES that provide images for download.</para>
</section>
<section xml:id="fedora-images">
<title>Unofficial Fedora images</title>
<para>Daniel Berrangé from Red Hat hosts some downloadable qcow2 Fedora images at <link
xlink:href="http://berrange.fedorapeople.org/images"
>http://berrange.fedorapeople.org/images</link>.</para>
</section>
<section xml:id="rcb-images">
<title>Rackspace Cloud Builders (multiple distros) images</title>
<para>Rackspace Cloud Builders maintains a list of prebuilt images from various
distributions (Red Hat, CentOS, Fedora, Ubuntu). Links to these images can be found at
<link xlink:href="https://github.com/rackerjoe/oz-image-build"
>rackerjoe/oz-image-build on GitHub</link>.</para>
</section>
<section xml:id="windows-images">
<title>Microsoft Windows images</title>
<para>Cloudbase Solutions hosts an <link xlink:href="http://www.cloudbase.it/ws2012/"
>OpenStack Windows Server 2012 Standard Evaluation image</link> that runs on
Hyper-V, KVM, and XenServer/XCP.</para>
</section>
</chapter>
doc/src/docbkx/openstack-image/ch_openstack_images.xml (new file)
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_openstack_images">
<title>OpenStack Linux image requirements</title>
<?dbhtml stop-chunking?>
<para>For a Linux-based image to have full functionality in an OpenStack Compute cloud, there
are a few requirements. For some of these, the requirement can be fulfilled by installing
the <link xlink:href="https://cloudinit.readthedocs.org/en/latest/">cloud-init</link>
package. You should read this section before creating your own image to be sure that the
image supports the OpenStack features you plan on using.<itemizedlist>
<listitem>
<para>Disk partitions and resize root partition on boot (cloud-init)</para>
</listitem>
<listitem>
<para>No hard-coded MAC address information</para>
</listitem>
<listitem>
<para>SSH server running</para>
</listitem>
<listitem>
<para>Firewall disabled</para>
</listitem>
<listitem>
<para>Access instance using SSH public key (cloud-init)</para>
</listitem>
<listitem>
<para>Process user data and other metadata (cloud-init)</para>
</listitem>
<listitem>
<para>Paravirtualized Xen support in Linux kernel (Xen hypervisor only with Linux
kernel version < 3.0)</para>
</listitem>
</itemizedlist></para>
    <section xml:id="support-resizing">
        <title>Disk partitions and resize root partition on boot (cloud-init)</title>
        <para>When you create a new Linux image, the first decision you will need to make is how to
            partition the disks. The choice of partition method can affect the resizing
            functionality, as described below.</para>
        <para>The size of the disk in a virtual machine image is determined when you initially
            create the image. However, OpenStack lets you launch instances with different-sized
            drives by specifying different flavors. For example, if your image was created with a
            5 GB disk, and you launch an instance with a flavor of <literal>m1.small</literal>, the
            resulting virtual machine instance will have (by default) a primary disk of 10 GB. When
            an instance's disk is resized up, zeros are added to the end.</para>
        <para>Your image needs to be able to resize its partitions on boot to match the size
            requested by the user. Otherwise, when the disk size associated with the flavor exceeds
            the disk size the image was created with, you will need to resize the partitions
            manually after the instance boots in order to access the additional storage.</para>
        <simplesect>
            <title>Xen: 1 ext3/ext4 partition (no LVM, no /boot, no swap)</title>
            <para>If you are using the OpenStack XenAPI driver, the Compute service will
                automatically adjust the partition and filesystem for your instance on boot.
                Automatic resize will occur if the following are all true:<itemizedlist>
                    <listitem>
                        <para><literal>auto_disk_config=True</literal> in
                            <filename>nova.conf</filename>.</para>
                    </listitem>
                    <listitem>
                        <para>The disk on the image has only one partition.</para>
                    </listitem>
                    <listitem>
                        <para>The file system on the one partition is ext3 or ext4.</para>
                    </listitem>
                </itemizedlist></para>
            <para>Therefore, if you are using Xen, we recommend that when you create your images,
                you create a single ext3 or ext4 partition (not managed by LVM). Otherwise, read
                on.</para>
        </simplesect>
        <simplesect>
            <title>Non-Xen with cloud-init/cloud-tools: 1 ext3/ext4 partition (no LVM, no /boot, no
                swap)</title>
            <para>Your image must be configured to deal with two issues:<itemizedlist>
                    <listitem>
                        <para>The image's partition table describes the original size of the
                            image</para>
                    </listitem>
                    <listitem>
                        <para>The image's filesystem fills the original size of the image</para>
                    </listitem>
                </itemizedlist></para>
            <para>Then, during the boot process:<itemizedlist>
                    <listitem>
                        <para>the partition table must be modified to be made aware of the
                            additional space<itemizedlist>
                                <listitem>
                                    <para>If you are not using LVM, you must modify the table to
                                        extend the existing root partition to encompass this
                                        additional space</para>
                                </listitem>
                                <listitem>
                                    <para>If you are using LVM, you can add a new LVM entry to the
                                        partition table, create a new LVM physical volume, add it
                                        to the volume group, and extend the logical volume that
                                        contains the root partition</para>
                                </listitem>
                            </itemizedlist></para>
                    </listitem>
                    <listitem>
                        <para>the root volume filesystem must be resized</para>
                    </listitem>
                </itemizedlist></para>
            <para>The simplest way to support this in your image is to install the <link
                    xlink:href="https://launchpad.net/cloud-utils">cloud-utils</link> package
                (which contains the <command>growpart</command> tool for extending partitions) and
                the <link xlink:href="https://launchpad.net/cloud-init">cloud-init</link> package
                into your image. With these installed, the image will perform the root partition
                resize on boot (e.g., from <filename>/etc/rc.local</filename>). These packages are
                in the Ubuntu package repository, as well as the EPEL repository (for
                Fedora/RHEL/CentOS/Scientific Linux guests).</para>
            <para>If you are able to install the cloud-utils and cloud-init packages, we recommend
                that when you create your images, you create a single ext3 or ext4 partition (not
                managed by LVM).</para>
        </simplesect>
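The resize that cloud-init and cloud-utils perform on boot boils down to two commands. The following is an illustrative sketch only: the `run`/`DRY_RUN` rehearsal wrapper and the device name `/dev/vda` are our own assumptions, not part of either package.

```shell
#!/bin/sh
# Sketch of the boot-time root resize that cloud-init + cloud-utils do:
# first grow the root partition, then grow the ext3/ext4 filesystem.
# run() is a rehearsal helper: with DRY_RUN=1 it only prints commands.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

grow_root() {
    disk=$1    # whole disk, e.g. /dev/vda (assumption; adjust to your image)
    part=$2    # partition number, e.g. 1
    run growpart "$disk" "$part"      # extend the partition to the end of the disk
    run resize2fs "${disk}${part}"    # grow the filesystem to fill the partition
}
```

On a real guest you would call `grow_root /dev/vda 1` early in boot; cloud-init's growpart handling does the equivalent automatically.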
        <simplesect>
            <title>Non-Xen without cloud-init/cloud-tools: LVM</title>
            <para>If you cannot install cloud-init and cloud-tools inside of your guest, and you
                want to support resize, you will need to write a script that your image runs on
                boot to modify the partition table. In this case, we recommend using LVM to manage
                your partitions. Due to a limitation in the Linux kernel (as of this writing), you
                cannot modify the partition table of a raw disk that has a partition currently
                mounted, but you can do this with LVM.</para>
            <para>Your script will need to do something like the following:<orderedlist>
                    <listitem>
                        <para>Detect if there is any additional space on the disk (e.g., by parsing
                            the output of <command>parted /dev/sda --script "print
                            free"</command>)</para>
                    </listitem>
                    <listitem>
                        <para>Create a new LVM partition with the additional space (e.g.,
                            <command>parted /dev/sda --script "mkpart lvm ..."</command>)</para>
                    </listitem>
                    <listitem>
                        <para>Create a new physical volume (e.g., <command>pvcreate
                            /dev/<replaceable>sda6</replaceable></command>)</para>
                    </listitem>
                    <listitem>
                        <para>Extend the volume group with this physical partition (e.g.,
                            <command>vgextend <replaceable>vg00</replaceable>
                            /dev/<replaceable>sda6</replaceable></command>)</para>
                    </listitem>
                    <listitem>
                        <para>Extend the logical volume containing the root partition by the amount
                            of additional space (e.g., <command>lvextend -l +100%FREE
                            /dev/mapper/<replaceable>node-root</replaceable>
                            /dev/<replaceable>sda6</replaceable></command>)</para>
                    </listitem>
                    <listitem>
                        <para>Resize the root file system (e.g., <command>resize2fs
                            /dev/mapper/<replaceable>node-root</replaceable></command>).</para>
                    </listitem>
                </orderedlist></para>
            <para>You do not need to have a <filename>/boot</filename> partition, unless your image
                is an older Linux distribution that requires that <filename>/boot</filename> is not
                managed by LVM. You may also elect to use a swap partition.</para>
        </simplesect>
    </section>
    <section xml:id="mac-adddress"><title>No hard-coded MAC address information</title>
        <para>You must remove the network persistence rules in the image because their presence
            will cause the network interface in the instance to come up as an interface other than
            eth0. This is because your image has a record of the MAC address of the network
            interface card when it was first installed, and this MAC address will be different each
            time the instance boots up. You should alter the following files:<itemizedlist>
                <listitem>
                    <para>Replace <filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
                        with an empty file (it contains network persistence rules, including the
                        MAC address)</para>
                </listitem>
                <listitem>
                    <para>Replace
                        <filename>/lib/udev/rules.d/75-persistent-net-generator.rules</filename>
                        with an empty file (this generates the file above)</para>
                </listitem>
                <listitem>
                    <para>Remove the HWADDR line from
                        <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> on
                        Fedora-based images</para>
                </listitem>
            </itemizedlist><note>
                <para>If you delete the network persistence rules files, you may get a udev kernel
                    warning at boot time, which is why we recommend replacing them with empty files
                    instead.</para>
            </note></para>
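These three edits can be scripted. The sketch below is our own (the function name and the PREFIX argument are assumptions): pass an empty prefix when running inside the image itself, or the mount point when editing an image offline. It blanks the rule files rather than deleting them, to avoid the udev warning noted above.

```shell
#!/bin/sh
# scrub_mac_persistence PREFIX
# Blank the udev network persistence rules under PREFIX and strip the
# hard-coded HWADDR line on Fedora-style images. The target directories
# are expected to exist (they do on an installed image).
scrub_mac_persistence() {
    prefix=$1
    # Truncate to empty files instead of deleting them.
    : > "$prefix/etc/udev/rules.d/70-persistent-net.rules"
    : > "$prefix/lib/udev/rules.d/75-persistent-net-generator.rules"
    ifcfg="$prefix/etc/sysconfig/network-scripts/ifcfg-eth0"
    if [ -f "$ifcfg" ]; then
        # Drop only the HWADDR line; keep the rest of the config.
        sed -i '/^HWADDR=/d' "$ifcfg"
    fi
}
```

Inside the guest you would run `scrub_mac_persistence ""` as root just before shutting the image down.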
    </section>
    <section xml:id="ensure-ssh-server">
        <title>Ensure ssh server runs</title>
        <para>You must install an ssh server into the image and ensure that it starts up on boot,
            or you will not be able to connect to your instance using ssh when it boots inside of
            OpenStack. This package is typically called <literal>openssh-server</literal>.</para>
    </section>
    <section xml:id="disable-firewall">
        <title>Disable firewall</title>
        <para>In general, we recommend that you disable any firewalls inside of your image and use
            OpenStack security groups to restrict access to instances. The reason is that having a
            firewall installed on your instance can make it more difficult to troubleshoot
            networking issues if you cannot connect to your instance.</para>
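The exact command is distribution-specific. A small helper (our own sketch; the distro-family names and the mapping are our assumptions, using the classic `ufw` and `chkconfig` commands of this era) can pick the right one:

```shell
#!/bin/sh
# firewall_disable_cmd FAMILY
# Print the command that disables the boot-time firewall for the given
# distro family; run the printed command inside the guest as root.
firewall_disable_cmd() {
    case "$1" in
        ubuntu|debian)
            echo "ufw disable" ;;           # Ubuntu/Debian firewall frontend
        fedora|rhel|centos)
            echo "chkconfig iptables off" ;; # stop iptables starting on boot
        *)
            echo "unknown distro family: $1" >&2
            return 1 ;;
    esac
}
```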
    </section>
    <section xml:id="ssh-public-key">
        <title>Access instance using ssh public key (cloud-init)</title>
        <para>The typical way that users access virtual machines running on OpenStack is to ssh in
            using public key authentication. For this to work, your virtual machine image must be
            configured to download the ssh public key from the OpenStack metadata service or config
            drive at boot time.</para>
        <simplesect>
            <title>Using cloud-init to fetch the public key</title>
            <para>The cloud-init package will automatically fetch the public key from the metadata
                server and place the key in an account. The account varies by distribution. On
                Ubuntu-based virtual machines, the account is called "ubuntu". On Fedora-based
                virtual machines, the account is called "ec2-user".</para>
            <para>You can change the name of the account used by cloud-init by editing the
                <filename>/etc/cloud/cloud.cfg</filename> file and adding a line with a different
                user. For example, to configure cloud-init to put the key in an account named
                "admin", edit the config file so it has the
                line:<programlisting>user: admin</programlisting></para>
        </simplesect>
        <simplesect>
            <title>Writing a custom script to fetch the public key</title>
            <para>If you are unable or unwilling to install cloud-init inside the guest, you can
                write a custom script to fetch the public key and add it to a user account.</para>
            <para>To fetch the ssh public key and add it to the root account, edit the
                <filename>/etc/rc.local</filename> file and add the following lines before the
                line "touch /var/lock/subsys/local". This code fragment is taken from the <link
                    xlink:href="https://github.com/rackerjoe/oz-image-build/blob/master/templates/centos60_x86_64.tdl"
                    >rackerjoe oz-image-build CentOS 6 template</link>.</para>
            <programlisting>if [ ! -d /root/.ssh ]; then
    mkdir -p /root/.ssh
    chmod 700 /root/.ssh
fi

# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
    curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
    if [ $? -eq 0 ]; then
        cat /tmp/metadata-key >> /root/.ssh/authorized_keys
        chmod 0600 /root/.ssh/authorized_keys
        restorecon /root/.ssh/authorized_keys
        rm -f /tmp/metadata-key
        echo "Successfully retrieved public key from instance metadata"
        echo "*****************"
        echo "AUTHORIZED KEYS"
        echo "*****************"
        cat /root/.ssh/authorized_keys
        echo "*****************"
    else
        FAILED=$((FAILED + 1))
        if [ $FAILED -ge $ATTEMPTS ]; then
            echo "Failed to retrieve public key after $FAILED attempts, giving up"
            break
        fi
        # The metadata service may not be reachable yet; wait and retry
        sleep 5
    fi
done
</programlisting>
            <note>
                <para>Some VNC clients replace the colon (:) with a semicolon (;) and the
                    underscore (_) with a hyphen (-). If you are editing a file over a VNC session,
                    make sure it is <literal>http:</literal>, not <literal>http;</literal>, and
                    <literal>authorized_keys</literal>, not
                    <literal>authorized-keys</literal>.</para>
            </note>
        </simplesect>
    </section>
    <section xml:id="metadata">
        <title>Process user data and other metadata (cloud-init)</title>
        <para>In addition to the ssh public key, an image may need to retrieve additional
            information from OpenStack, such as <link
                xlink:href="http://docs.openstack.org/trunk/openstack-compute/admin/content/user-data.html"
                >user data</link> that the user submitted when requesting the instance. For
            example, you may wish to set the host name of the instance to the name given to the
            instance when it is booted. Or, you may wish to configure your image so that it
            executes user data content as a script on boot.</para>
        <para>This information is accessible via the metadata service or the <link
                xlink:href="http://docs.openstack.org/folsom/openstack-compute/admin/content/config-drive.html"
                >config drive</link>. As the OpenStack metadata service is compatible with version
            2009-04-04 of the Amazon EC2 metadata service, consult the Amazon EC2 documentation on
            <link
                xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
                >Using Instance Metadata</link> for details on how to retrieve user data.</para>
        <para>The easiest way to support this type of functionality is to install the cloud-init
            package into your image, which is configured by default to treat user data as an
            executable script, and will set the host name.</para>
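For a quick look at what an instance can retrieve, the EC2-compatible endpoints can be composed like this (a sketch: the helper names are ours, but the 2009-04-04 version path matches the compatibility level of the metadata service):

```shell
#!/bin/sh
# Compose the EC2-compatible metadata URLs that a guest can query.
# 169.254.169.254 is the well-known metadata address inside the instance.
METADATA_BASE="http://169.254.169.254/2009-04-04"

metadata_url() {
    # URL for a given meta-data key, e.g. "hostname" or "instance-id"
    printf '%s/meta-data/%s\n' "$METADATA_BASE" "$1"
}

userdata_url() {
    # URL for the raw user data supplied at boot, if any
    printf '%s/user-data\n' "$METADATA_BASE"
}

# On a running instance you would then fetch, for example:
#   curl -s "$(metadata_url hostname)"
#   curl -s "$(userdata_url)"
```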
    </section>
    <section xml:id="image-xen-pv">
        <title>Paravirtualized Xen support in the kernel (Xen hypervisor only)</title>
        <para>Prior to Linux kernel version 3.0, the mainline branch of the Linux kernel did not
            have support for paravirtualized Xen virtual machine instances (what Xen calls DomU
            guests). If you are running the Xen hypervisor with paravirtualization, and you want
            to create an image for an older Linux distribution that has a pre-3.0 kernel, you will
            need to ensure that the image boots a kernel that has been compiled with Xen
            support.</para>
    </section>
</chapter>
BIN doc/src/docbkx/openstack-image/figures/centos-complete.png
BIN doc/src/docbkx/openstack-image/figures/centos-install.png
BIN doc/src/docbkx/openstack-image/figures/centos-tcpip.png
BIN doc/src/docbkx/openstack-image/figures/install-method.png
BIN doc/src/docbkx/openstack-image/figures/ubuntu-finished.png
BIN doc/src/docbkx/openstack-image/figures/ubuntu-grub.png
BIN doc/src/docbkx/openstack-image/figures/ubuntu-install.png
BIN doc/src/docbkx/openstack-image/figures/url-setup.png
BIN doc/src/docbkx/openstack-image/figures/virt-manager-new.png
doc/src/docbkx/openstack-image/pom.xml
@@ -0,0 +1,121 @@
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>org.openstack.docs</groupId>
    <artifactId>openstack-guide</artifactId>
    <version>1.0.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>OpenStack Guides</name>

    <properties>
        <!-- This is set by Jenkins according to the branch. -->
        <release.path.name>local</release.path.name>
        <comments.enabled>1</comments.enabled>
    </properties>
    <!-- ################################################ -->
    <!-- USE "mvn clean generate-sources" to run this POM -->
    <!-- ################################################ -->

    <build>
        <plugins>
            <plugin>
                <groupId>com.rackspace.cloud.api</groupId>
                <artifactId>clouddocs-maven-plugin</artifactId>
                <version>1.8.0</version>
                <executions>
                    <execution>
                        <id>generate-webhelp</id>
                        <goals>
                            <goal>generate-webhelp</goal>
                        </goals>
                        <phase>generate-sources</phase>
                        <configuration>
                            <!-- These parameters only apply to webhelp -->
                            <enableDisqus>${comments.enabled}</enableDisqus>
                            <disqusShortname>openstackdocs</disqusShortname>
                            <enableGoogleAnalytics>1</enableGoogleAnalytics>
                            <googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
                            <generateToc>
                                appendix toc,title
                                article/appendix nop
                                article toc,title
                                book title,figure,table,example,equation
                                chapter toc,title
                                part toc,title
                                preface toc,title
                                qandadiv toc
                                qandaset toc
                                reference toc,title
                                set toc,title
                            </generateToc>
                            <!-- The following elements set the autonumbering of sections in output for chapter numbers but no numbered sections -->
                            <sectionAutolabel>0</sectionAutolabel>
                            <sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
                            <targetDirectory>target/docbkx/webhelp/${release.path.name}</targetDirectory>
                            <webhelpDirname>openstack-image</webhelpDirname>
                            <pdfFilenameBase>bk-imageguide-${release.path.name}</pdfFilenameBase>
                        </configuration>
                    </execution>
                    <execution>
                        <id>cleanup</id>
                        <goals>
                            <goal>generate-webhelp</goal>
                        </goals>
                        <phase>generate-sources</phase>
                        <configuration>
                            <includes>dummy.xml</includes>
                            <postProcess>
                                <delete includeemptydirs="true">
                                    <fileset dir="${basedir}/target/docbkx/webhelp/${release.path.name}">
                                        <include name="**/*"/>
                                        <exclude name="openstack-image/**"/>
                                    </fileset>
                                </delete>
                            </postProcess>
                        </configuration>
                    </execution>
                </executions>
                <configuration>
                    <!-- These parameters apply to pdf and webhelp -->
                    <xincludeSupported>true</xincludeSupported>
                    <sourceDirectory>.</sourceDirectory>
                    <includes>
                        bk-imageguide.xml
                    </includes>
                    <profileSecurity>reviewer</profileSecurity>
                    <branding>openstack</branding>
                </configuration>
            </plugin>
        </plugins>
    </build>
    <profiles>
        <profile>
            <id>Rackspace Research Repositories</id>
            <activation>
                <activeByDefault>true</activeByDefault>
            </activation>
            <repositories>
                <repository>
                    <id>rackspace-research</id>
                    <name>Rackspace Research Repository</name>
                    <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
                </repository>
            </repositories>
            <pluginRepositories>
                <pluginRepository>
                    <id>rackspace-research</id>
                    <name>Rackspace Research Repository</name>
                    <url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
                </pluginRepository>
            </pluginRepositories>
        </profile>
    </profiles>
</project>
doc/src/docbkx/openstack-image/ubuntu-example.xml
@@ -0,0 +1,198 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    version="5.0"
    xml:id="ubuntu-image">
    <title>Example: Ubuntu image</title>
    <para>We'll run through an example of installing an Ubuntu image, focusing on Ubuntu 12.04
        (Precise Pangolin) server. Because the Ubuntu installation process may change across
        versions, the installer steps may differ if you are using a different version of
        Ubuntu.</para>
    <simplesect>
        <title>Download an Ubuntu install ISO</title>
        <para>In this example, we'll use the network installation ISO, since it's a smaller image.
            The 64-bit 12.04 network installer ISO is at <link
                xlink:href="http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/mini.iso"
                >http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/mini.iso</link>.</para>
    </simplesect>
    <simplesect>
        <title>Start the install process</title>
        <para>Start the installation process using either <command>virt-manager</command> or
            <command>virt-install</command> as described in the previous section. If using
            <command>virt-install</command>, don't forget to connect your VNC client to the
            virtual machine.</para>
        <para>We will assume the name of your virtual machine image is
            <literal>precise</literal>, which we need to know when using
            <command>virsh</command> commands to manipulate the state of the image.</para>
        <para>If you're using <command>virt-install</command>, the commands should look something
            like this:<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /tmp/precise.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name precise --ram 1024 \
    --cdrom=/data/isos/precise-64-mini.iso \
    --disk /tmp/precise.qcow2,format=qcow2 \
    --network network=default \
    --graphics vnc,listen=0.0.0.0 --noautoconsole \
    --os-type=linux --os-variant=ubuntuprecise</userinput></screen></para>
    </simplesect>
    <simplesect>
        <title>Step through the install</title>
        <para>At the initial Installer boot menu, choose the "Install" option. Step through the
            install prompts; the defaults should be fine.</para>
        <mediaobject>
            <imageobject>
                <imagedata fileref="figures/ubuntu-install.png" format="PNG" contentwidth="6in"/>
            </imageobject>
        </mediaobject>
    </simplesect>
    <simplesect>
        <title>Hostname</title>
        <para>The installer may ask you to choose a hostname. The default
            (<literal>ubuntu</literal>) is fine. We will install the cloud-init package later,
            which will set the hostname on boot when a new instance is provisioned using this
            image.</para>
    </simplesect>
    <simplesect>
        <title>Selecting a mirror</title>
        <para>The default mirror proposed by the installer should be fine.</para>
    </simplesect>
    <simplesect>
        <title>Step through the install</title>
        <para>Step through the install, using the default options. When prompted for a username,
            the default (<literal>ubuntu</literal>) is fine.</para>
    </simplesect>
    <simplesect>
        <title>Partition the disks</title>
        <para>There are different options for partitioning the disks. The default installation
            uses LVM partitions and creates three partitions (<filename>/boot</filename>,
            <filename>/</filename>, swap), and this will work fine. Alternatively, you may wish to
            create a single ext4 partition mounted at "<literal>/</literal>", which should also
            work fine.</para>
        <para>If unsure, we recommend you use the installer's default partition scheme, since
            there is no clear advantage to one scheme over another.</para>
    </simplesect>
    <simplesect>
        <title>Automatic updates</title>
        <para>The Ubuntu installer will ask how you want to manage upgrades on your system. This
            option depends upon your specific use case. If your virtual machine instances will be
            able to connect to the internet, we recommend "Install security updates
            automatically".</para>
    </simplesect>
    <simplesect>
        <title>Software selection: OpenSSH server</title>
        <para>Choose "OpenSSH server" so that you will be able to SSH into the virtual machine
            when it launches inside of an OpenStack cloud.</para>
        <mediaobject>
            <imageobject>
                <imagedata fileref="figures/ubuntu-software-selection.png" format="PNG" contentwidth="6in"/>
            </imageobject>
        </mediaobject>
    </simplesect>
    <simplesect>
        <title>Install GRUB boot loader</title>
        <para>Select "Yes" when asked about installing the GRUB boot loader to the master boot
            record.</para>
        <mediaobject>
            <imageobject>
                <imagedata fileref="figures/ubuntu-grub.png" format="PNG" contentwidth="6in"/>
            </imageobject>
        </mediaobject>
    </simplesect>
    <simplesect>
        <title>Detach the CD-ROM and reboot</title>
        <para>Select the defaults for all of the remaining options. When the installation is
            complete, you will be prompted to remove the CD-ROM.</para>
        <mediaobject>
            <imageobject>
                <imagedata fileref="figures/ubuntu-finished.png" format="PNG" contentwidth="6in"/>
            </imageobject>
        </mediaobject>
        <para>
            <note>
                <para>When you hit "Continue", the virtual machine will shut down, even though it
                    says it will reboot.</para>
            </note>
        </para>
        <para>To eject a disk using <command>virsh</command>, libvirt requires that you attach an
            empty disk at the same target to which the CD-ROM was previously attached, which
            should be <literal>hdc</literal>. You can confirm the appropriate target using the
            <command>virsh dumpxml <replaceable>vm-image</replaceable></command> command.</para>
        <screen><prompt>#</prompt> <userinput>virsh dumpxml precise</userinput>
<computeroutput>&lt;domain type='kvm'>
  &lt;name>precise&lt;/name>
...
  &lt;disk type='block' device='cdrom'>
    &lt;driver name='qemu' type='raw'/>
    &lt;target dev='hdc' bus='ide'/>
    &lt;readonly/>
    &lt;address type='drive' controller='0' bus='1' target='0' unit='0'/>
  &lt;/disk>
...
&lt;/domain>
</computeroutput></screen>
        <para>Run the following commands in the host as root to start up the machine again as
            paused, eject the disk, and resume. If you are using virt-manager, you may instead use
            the
            GUI.<screen><prompt>#</prompt> <userinput>virsh start precise --paused</userinput>
<prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly precise "" hdc</userinput>
<prompt>#</prompt> <userinput>virsh resume precise</userinput></screen></para>
        <note><para>In the example above, we start the instance paused, eject the disk, and then
            unpause. In theory, we could have ejected the disk at the "Installation complete"
            screen. However, our testing indicates that the Ubuntu installer locks the drive so
            that it cannot be ejected at that point.</para></note>
    </simplesect>
    <simplesect>
        <title>Log in to newly created image</title>
        <para>When you boot for the first time after the install, you may be asked about
            authentication tools; you can just choose "Exit". Then, log in as root using the root
            password you specified.</para>
    </simplesect>
    <simplesect>
        <title>Install cloud-init</title>
        <para>The cloud-init package will automatically fetch the public key from the metadata
            server and place the key in an
            account.<screen><prompt>#</prompt> <userinput>apt-get install cloud-init</userinput></screen></para>
        <para>The account varies by distribution. On Ubuntu-based virtual machines, the account is
            called "ubuntu". On Fedora-based virtual machines, the account is called
            "ec2-user".</para>
        <para>You can change the name of the account used by cloud-init by editing the
            <filename>/etc/cloud/cloud.cfg</filename> file and adding a line with a different
            user. For example, to configure cloud-init to put the key in an account named "admin",
            edit the config file so it has the
            line:<programlisting>user: admin</programlisting></para>
    </simplesect>
    <simplesect>
        <title>Shut down the instance</title>
        <para>From inside the instance, as
            root:<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen></para>
    </simplesect>
    <simplesect>
        <title>Clean up (e.g., remove MAC address details)</title>
        <para>The operating system records the MAC address of the virtual ethernet card in
            locations such as <filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
            during the installation process. However, each time the image boots up, the virtual
            ethernet card will have a different MAC address, so this information must be deleted
            from the configuration file.</para>
        <para>There is a utility called <command>virt-sysprep</command> that performs various
            cleanup tasks such as removing the MAC address references. It will clean up a virtual
            machine image in
            place:<screen><prompt>#</prompt> <userinput>virt-sysprep -d precise</userinput></screen></para>
    </simplesect>
    <simplesect>
        <title>Undefine the libvirt domain</title>
        <para>Now that the image is ready to be uploaded to the Image service, we no longer need
            to have this virtual machine image managed by libvirt. Use the <command>virsh undefine
            <replaceable>vm-image</replaceable></command> command to inform
            libvirt:<screen><prompt>#</prompt> <userinput>virsh undefine precise</userinput></screen></para>
    </simplesect>
    <simplesect>
        <title>Image is complete</title>
        <para>The underlying image file you created with <command>qemu-img create</command> (e.g.,
            <filename>/tmp/precise.qcow2</filename>) is now ready for uploading to the OpenStack
            Image service.</para>
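As a next step, the image can be registered with the Image service. A sketch, assuming python-glanceclient is installed and your OS_* credential variables are exported (the helper function is our own, not part of any tool):

```shell
#!/bin/sh
# glance_upload_cmd NAME FILE
# Print the glance command that registers a local QCOW2 file with the
# OpenStack Image service under the given image name.
glance_upload_cmd() {
    printf 'glance image-create --name %s --disk-format qcow2 --container-format bare --file %s\n' \
        "$1" "$2"
}

# With credentials exported, run it for the image built above:
#   eval "$(glance_upload_cmd precise /tmp/precise.qcow2)"
```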
    </simplesect>
</section>
doc/src/docbkx/openstack-image/windows-example.xml
@@ -0,0 +1,41 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    version="5.0"
    xml:id="windows-image">
    <title>Microsoft Windows image</title>
    <para>We do not yet have a fully documented example of how to create a Microsoft Windows
        image. You can use libvirt to install Windows from an installation DVD using the same
        approach as with the CentOS and Ubuntu examples. Once the initial install is done, you
        will need to perform some Windows-specific customizations.</para>
    <simplesect>
        <title>Install VirtIO drivers</title>
        <para>Installing the <link
                xlink:href="http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers"
                >VirtIO paravirtualization drivers for Windows</link> will improve virtual machine
            performance when using KVM as the hypervisor to run Windows.</para>
    </simplesect>
    <simplesect>
        <title>Sysprep</title>
        <para>Microsoft has a special tool called <link
                xlink:href="http://technet.microsoft.com/en-us/library/cc766049(v=ws.10).aspx"
                >Sysprep</link> that must be run inside of a Windows guest to prepare it for use
            as a virtual machine image.</para>
    </simplesect>
    <simplesect>
        <title>cloudbase-init</title>
        <para><link xlink:href="http://www.cloudbase.it/cloud-init-for-windows-instances/"
                >cloudbase-init</link> is a Windows port of cloud-init that should be installed
            inside of the guest. The <link xlink:href="https://github.com/cloudbase/cloudbase-init"
                >source code</link> is available on GitHub.</para>
    </simplesect>
    <simplesect>
        <title>Jordan Rinke's OpenStack Windows resources</title>
        <para>Jordan Rinke maintains <link xlink:href="https://github.com/jordanrinke/openstack"
                >a collection of resources</link> for managing OpenStack Windows virtual machine
            guests.</para>
    </simplesect>
</section>