[image-guide] Publish RST Virtual Machine Image Guide

Change-Id: I694301758f7f85290d4c9f9b01fbd1924b02b476
Implements: blueprint image-guide-rst
KATO Tomoyuki 7 years ago committed by Andreas Jaeger
parent b93354e5ba
commit 79aa72b0cb

@ -30,6 +30,11 @@ Operations Guide
* Shared File Systems chapter added.
Virtual Machine Image Guide
* RST conversion finished.

@ -1,20 +1,15 @@
# directories to be set up
declare -A DIRECTORIES=(
["fr"]="common glossary image-guide"
["ja"]="common glossary image-guide"
["zh_CN"]="common glossary arch-design image-guide"
["zh_CN"]="common glossary arch-design"
# books to be built
declare -A BOOKS=(
["ja"]="image-guide user-guide user-guide-admin install-guide networking-guide"
["zh_CN"]="arch-design image-guide"
["ja"]="user-guide user-guide-admin install-guide networking-guide"
# draft books
declare -A DRAFTS=(
["ja"]="install-guide networking-guide"
@ -30,6 +25,7 @@ DOC_DIR="doc/"
# project-config/jenkins/scripts/common_translation_update.sh
declare -A SPECIAL_BOOKS=(
@ -38,7 +34,6 @@ declare -A SPECIAL_BOOKS=(
# This needs special handling, handle it with the RST tools.

@ -1,52 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook"
<title>OpenStack Virtual Machine Image Guide</title>
<?rax title.font.size="28px" subtitle.font.size="28px"?>
<titleabbrev>VM Image Guide</titleabbrev>
<orgname>OpenStack Foundation</orgname>
<holder>OpenStack Foundation</holder>
<legalnotice role="cc-by">
<remark>Remaining licensing details are filled in by
the template.</remark>
<para>This guide describes how to obtain, create, and
modify virtual machine images that are compatible with
<!-- Chapters are referred from the book file through these include statements. You can add additional chapters using these types of statements. -->
<xi:include href="../common/ch_preface.xml"/>
<xi:include href="ch_introduction.xml"/>
<xi:include href="ch_obtaining_images.xml"/>
<xi:include href="ch_openstack_images.xml"/>
<xi:include href="ch_modifying_images.xml"/>
<xi:include href="ch_creating_images_manually.xml"/>
<xi:include href="ch_creating_images_automatically.xml"/>
<xi:include href="ch_converting.xml"/>
<xi:include href="../common/app_support.xml"/>

@ -1,82 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
<title>Converting between image formats</title>
<para>Converting images from one format to another is generally straightforward.</para>
<title>qemu-img convert: raw, qcow2, qed, vdi, vmdk, vhd</title>
<para>The <command>qemu-img convert</command> command can do conversion between multiple
formats, including qcow2, qed, raw, vdi, vhd, and vmdk.</para>
<caption>qemu-img format strings</caption>
<th>Image format</th>
<th>Argument to qemu-img</th>
<td>QCOW2 (KVM, Xen)</td>
<td>QED (KVM)</td>
<td>VDI (VirtualBox)</td>
<td>VHD (Hyper-V)</td>
<td>VMDK (VMware)</td>
<para>This example will convert a raw image file named <filename>centos7.img</filename> to a qcow2 image file.</para>
<screen><prompt>$</prompt> <userinput>qemu-img convert -f raw -O qcow2 centos7.img centos7.qcow2</userinput></screen>
<para>Run the following command to convert a vmdk image file to a raw image file.
<screen><prompt>$</prompt> <userinput>qemu-img convert -f vmdk -O raw centos7.vmdk centos7.img</userinput></screen>
<para>Run the following command to convert a vmdk image file to a qcow2 image file.
<screen><prompt>$</prompt> <userinput>qemu-img convert -f vmdk -O qcow2 centos7.vmdk centos7.qcow2</userinput></screen>
<para>The <literal>-f <replaceable>format</replaceable></literal> flag is optional.
If omitted, <command>qemu-img</command> will try to infer the image format.</para>
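When the <literal>-f</literal> flag is omitted, the format is inferred by probing the file's leading bytes. A minimal, simplified sketch of that idea in Python (not qemu-img's actual probing logic, which is more thorough and also scores raw as a weighted fallback):

```python
def sniff_image_format(header: bytes) -> str:
    """Guess a disk-image format from its leading bytes.

    A simplified illustration of format inference using well-known
    magic numbers for the formats listed in the table above.
    """
    if header.startswith(b"QFI\xfb"):      # qcow2 magic
        return "qcow2"
    if header.startswith(b"QED\x00"):      # QED magic
        return "qed"
    if header.startswith(b"KDMV"):         # VMDK sparse-extent magic
        return "vmdk"
    if header.startswith(b"conectix"):     # VHD cookie (copied to offset 0 on dynamic disks)
        return "vpc"
    if header.startswith(b"<<<"):          # VDI text signature
        return "vdi"
    return "raw"                           # raw has no magic; it is the fallback
```

This is why converting from raw is the one case where you should always pass <literal>-f raw</literal> explicitly: a raw file has no magic bytes to probe.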
<para>When converting an image file that contains a Windows OS, ensure the virtio
driver is installed. Otherwise, the image will fail with a blue screen of
death (BSOD) at launch because the virtio driver is missing. Alternatively,
you can avoid this issue by setting the image properties as below when you
update the image in glance, but doing so reduces performance significantly.</para>
<screen><prompt>$</prompt> <userinput>glance image-update --property hw_disk_bus='ide' image_id</userinput></screen>
<title>VBoxManage: VDI (VirtualBox) to raw</title>
<para>If you've created a VDI image using VirtualBox, you can convert it to raw format using
the <command>VBoxManage</command> command-line tool that ships with VirtualBox. On Mac
OS X and Linux, VirtualBox stores images by default in the <filename>~/VirtualBox VMs/</filename>
directory. The following example creates a raw image in the current directory from a
VirtualBox VDI image.</para>
<screen><prompt>$</prompt> <userinput>VBoxManage clonehd ~/VirtualBox\ VMs/fedora21.vdi fedora21.img --format raw</userinput></screen>

@ -1,184 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
<title>Tool support for image creation</title>
<?dbhtml stop-chunking?>
<para>There are several tools that are designed to automate image
<section xml:id="Diskimage-builder">
<para><link xlink:href="http://docs.openstack.org/developer/diskimage-builder/"
>Diskimage-builder</link> is an automated disk image creation
tool that supports a variety of distributions and architectures.
Diskimage-builder (DIB) can build images for Fedora,
Red Hat Enterprise Linux, Ubuntu, Debian, CentOS, and openSUSE.
DIB is organized in a series of elements that build on top of
each other to create specific images.</para>
<para>To build an image, call the following script:</para>
<screen><prompt>#</prompt> <userinput>disk-image-create ubuntu vm</userinput></screen>
<para>This example creates a generic, bootable Ubuntu image of the latest
<para>Further customization could be accomplished by setting
environment variables or adding elements to the command-line:</para>
<screen><prompt>#</prompt> <userinput>disk-image-create -a armhf ubuntu vm</userinput></screen>
<para>This example creates the image as before, but for the armhf architecture.
More elements are available in the
<link xlink:href="https://github.com/openstack/diskimage-builder/tree/master/elements"
>git source directory</link> and documented in the <link
>diskimage-builder elements documentation</link>.
<section xml:id="oz">
<para><link xlink:href="https://github.com/clalancette/oz/wiki"
>Oz</link> is a command-line tool that automates the process of
creating a virtual machine image file. Oz is a Python app that
interacts with KVM to step through the process of installing a
virtual machine. It uses a predefined set of kickstart files (Red
Hat-based systems) and preseed files (Debian-based systems) for
operating systems that it supports, and it can also be used to
create Microsoft Windows images. On Fedora, install Oz with yum:</para>
<screen><prompt>#</prompt> <userinput>yum install oz</userinput></screen>
<note><para>As of this writing, there are no Oz packages for Ubuntu,
so you will need to either install from source or build your
own .deb file.</para>
<para>A full treatment of Oz is beyond the scope of this document, but
we will provide an example. You can find additional examples of Oz
template files on GitHub at <link
>rackerjoe/oz-image-build/templates</link>. Here's how you would
create a CentOS 6.4 image with Oz.</para>
<para>Create a template file (we'll call it
<filename>centos64.tdl</filename>) with the following contents.
The only entry you will need to change is the
<programlisting language="xml">&lt;template>
&lt;install type='iso'>
&lt;rootpw>CHANGE THIS TO YOUR ROOT PASSWORD&lt;/rootpw>
&lt;description>CentOS 6.4 x86_64&lt;/description>
&lt;repository name='epel-6'>
&lt;package name='epel-release'/>
&lt;package name='cloud-utils'/>
&lt;package name='cloud-init'/>
&lt;command name='update'>
yum -y update
yum clean all
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
<para>This Oz template specifies where to download the CentOS 6.4
install ISO. Oz will use the version information to identify which
kickstart file to use. In this case, it will be <link
>RHEL6.auto</link>. It adds EPEL as a repository and installs the
<literal>epel-release</literal>, <literal>cloud-utils</literal>,
and <literal>cloud-init</literal> packages, as specified in the
<literal>packages</literal> section of the file.</para>
<para>After Oz completes the initial OS install using the kickstart file,
it customizes the image with an update. It also removes any reference to the eth0
device that libvirt creates while Oz does the customizing, as
specified in the <literal>command</literal> section of the XML
<para>To run this:</para>
<screen><prompt>#</prompt> <userinput>oz-install -d3 -u centos64.tdl -x centos64-libvirt.xml</userinput></screen>
<para>The <literal>-d3</literal> flag tells Oz to show
status information as it runs.</para>
<para>The <literal>-u</literal> flag tells Oz to do the
customization (install extra packages, run the commands)
after it completes the initial install.</para>
<para>The <literal>-x &lt;filename></literal> flag tells Oz
what filename to use to write out a libvirt XML file
(otherwise it will default to something like
<para>If you leave out the <literal>-u</literal> flag, or
you want to edit the file to do additional customizations, you can
use the <command>oz-customize</command> command, using the libvirt
XML file that <command>oz-install</command> creates. For example:</para>
<screen><prompt>#</prompt> <userinput>oz-customize -d3 centos64.tdl centos64-libvirt.xml</userinput></screen>
<para>Oz will invoke libvirt to boot the image inside of KVM, then Oz will
ssh into the instance and perform the customizations.</para>
<section xml:id="vmbuilder">
<para><link xlink:href="https://launchpad.net/vmbuilder"
>VMBuilder</link> (Virtual Machine Builder) is a
command-line tool that creates virtual machine images for
different hypervisors. The version of VMBuilder that ships
with Ubuntu can only create Ubuntu virtual machine guests.
The version of VMBuilder that ships with Debian can create
Ubuntu and Debian virtual machine guests.</para>
<para>The <link
><citetitle>Ubuntu Server Guide</citetitle></link>
has documentation on how to use VMBuilder to create an
Ubuntu image.</para>
<section xml:id="veewee">
<para><link xlink:href="https://github.com/jedi4ever/veewee">
VeeWee</link> is often used to build <link
boxes, but it can also be used to build KVM images.</para>
<section xml:id="packer">
<para><link xlink:href="https://packer.io">
Packer</link> is a tool for creating machine images for multiple platforms
from a single source configuration.
<section xml:id="imagefactory">
<para><link xlink:href="http://imgfac.org/"
>imagefactory</link> is a newer tool designed to
automate the building, converting, and uploading of images to
different cloud providers. It uses Oz as its back end and
includes support for OpenStack-based clouds.</para>
<section xml:id="susestudio">
<title>SUSE Studio</title>
<para><link xlink:href="http://susestudio.com">SUSE
Studio</link> is a web application for building and
testing software applications in a web browser. It
supports the creation of physical, virtual or cloud-based
applications and includes support for building images for
OpenStack based clouds using SUSE Linux Enterprise and
openSUSE as distributions.</para>

@ -1,185 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
<chapter xmlns="http://docbook.org/ns/docbook"
<title>Create images manually</title>
<para>Creating a new image is a step done outside of your
OpenStack installation. You create the new image manually on
your own system and then upload the image to your
<para>To create a new image, you will need the installation CD or
DVD ISO file for the guest operating system. You'll also need
access to a virtualization tool. You can use KVM for this. Or,
if you have a GUI desktop virtualization tool (such as VMware
Fusion or VirtualBox), you can use that instead and just
convert the file to raw once you are done.</para>
<para>When you create a new virtual machine image, you will need
to connect to the graphical console of the hypervisor, which
acts as the virtual machine's display and allows you to
interact with the guest operating system's installer using
your keyboard and mouse. KVM can expose the graphical console
using the <link
>VNC</link> (Virtual Network Computing) protocol or the
newer <link xlink:href="http://spice-space.org">SPICE</link>
protocol. We'll use the VNC protocol here, since you're more
likely to be able to find a VNC client that works on your
local desktop.</para>
<section xml:id="net-running">
<title>Verify the libvirt default network is running</title>
<para>Before starting a virtual machine with libvirt, verify
that the libvirt "default" network has been started. This
network must be active for your virtual machine to be able
to connect out to the network. Starting this network will
create a Linux bridge (usually called
<literal>virbr0</literal>), iptables rules, and a
dnsmasq process that will serve as a DHCP server.</para>
<para>To verify that the libvirt "default" network is enabled,
use the <command>virsh net-list</command> command and
verify that the "default" network is active:</para>
<screen><prompt>#</prompt> <userinput>virsh net-list</userinput>
<computeroutput>Name                 State      Autostart
default              active     yes</computeroutput></screen>
<para>If the network is not active, start it by doing:</para>
<screen><prompt>#</prompt> <userinput>virsh net-start default</userinput></screen>
<section xml:id="virt-manager">
<title>Use the virt-manager X11 GUI</title>
<para>If you plan to create a virtual machine image on a
machine that can run X11 applications, the simplest way to
do so is to use the <command>virt-manager</command> GUI,
which is installable as the
<literal>virt-manager</literal> package on both
Fedora-based and Debian-based systems. This GUI has an
embedded VNC client that will let you view and
interact with the guest's graphical console.</para>
<para>If you are building the image on a headless server, and
you have an X server on your local machine, you can launch
<command>virt-manager</command> using ssh X11
forwarding to access the GUI. Since virt-manager interacts
directly with libvirt, you typically need to be root to
access it. If you can ssh directly in as root (or with a
user that has permissions to interact with libvirt),
do:<screen><prompt>$</prompt> <userinput>ssh -X root@server virt-manager</userinput></screen></para>
<para>If the account you use to ssh into your server does not
have permissions to run libvirt, but has sudo privileges, do:<screen><prompt>$</prompt> <userinput>ssh -X user@server</userinput>
<prompt>$</prompt> <userinput>sudo virt-manager</userinput> </screen><note>
<para>The <literal>-X</literal> flag passed to ssh
will enable X11 forwarding over ssh. If this does
not work, try replacing it with the
<literal>-Y</literal> flag.</para>
<para>Click the "New" button at the top-left and step through
the instructions. <mediaobject>
<imagedata fileref="figures/virt-manager-new.png"
format="PNG" contentwidth="6in"/>
</mediaobject>You will be shown a series of dialog boxes
that will allow you to specify information about the
virtual machine.</para>
When using qcow2-format images, you should check the option
'customize before install', go to disk properties, and
explicitly select the qcow2 format. This ensures that the
virtual machine disk size is correct.
<section xml:id="virt-install">
<title>Use virt-install and connect by using a local VNC
<para>If you do not wish to use virt-manager (for example, you
do not want to install its dependencies on your server,
you do not have an X server running locally, or X11
forwarding over SSH is not working), you can use the
<command>virt-install</command> tool to boot the
virtual machine through libvirt and connect to the
graphical console from a VNC client installed on your
local machine.</para>
<para>Because VNC is a standard protocol, there are multiple
clients available that implement the VNC spec, including
>TigerVNC</link> (multiple platforms), <link
(multiple platforms), <link
(multiple platforms), <link
>Chicken</link> (Mac OS X), <link
(KDE), and <link
>Vinagre</link> (GNOME).</para>
<para>The following example shows how to use the
<command>qemu-img</command> command to create an empty
image file, and <command>virt-install</command> command to
start up a virtual machine using that image file. As
<screen><prompt>#</prompt> <command>qemu-img create -f qcow2 /data/centos-6.4.qcow2 10G</command>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name centos-6.4 --ram 1024 \
--cdrom=/data/CentOS-6.4-x86_64-netinstall.iso \
--disk path=/data/centos-6.4.qcow2,size=10,format=qcow2 \
--network network=default \
--graphics vnc,listen= --noautoconsole \
--os-type=linux --os-variant=rhel6</userinput>
Starting install...
Creating domain... | 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.</computeroutput></screen>
The KVM hypervisor starts the virtual machine with the
libvirt name <literal>centos-6.4</literal> and
1024&nbsp;MB of RAM. The virtual machine also has a virtual
CD-ROM drive associated with the
file and a local 10&nbsp;GB hard disk in qcow2 format that is
stored in the host at
It configures networking to
use libvirt's default network. There is a VNC server that
is listening on all interfaces, and libvirt will not
attempt to launch a VNC client automatically nor try to
display the text console
(<literal>--noautoconsole</literal>). Finally,
libvirt will attempt to optimize the configuration for a
Linux guest running a RHEL 6.x distribution.<note>
<para>When using the libvirt
<literal>default</literal> network, libvirt will
connect the virtual machine's interface to a bridge
called <literal>virbr0</literal>. There is a dnsmasq
process managed by libvirt that will hand out an IP
address on the subnet, and libvirt
has iptables rules for doing NAT for IP addresses on
this subnet.</para>
<para>Run the <command>virt-install --os-variant
list</command> command to see a range of allowed
<literal>--os-variant</literal> options.</para>
<para>Use the <command>virsh vncdisplay
command to get the VNC port number.</para>
<screen><prompt>#</prompt> <userinput>virsh vncdisplay centos-6.4</userinput>
<para>In the example above, the guest
<literal>centos-6.4</literal> uses VNC display
<literal>:1</literal>, which corresponds to TCP port
<literal>5901</literal>. You should be able to connect
a VNC client running on your local machine to display
:1 on the remote machine and step through the installation
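The display-to-port mapping is fixed: VNC display <literal>:N</literal> listens on TCP port 5900 + N. A quick sketch of the conversion, taking the string that <command>virsh vncdisplay</command> prints:

```python
def vnc_port(display: str) -> int:
    """Map a VNC display string such as ':1' to its TCP port.

    VNC reserves the TCP port range starting at 5900; display :N
    corresponds to port 5900 + N.
    """
    return 5900 + int(display.lstrip(":"))
```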
<xi:include href="section_centos-example.xml"/>
<xi:include href="section_ubuntu-example.xml"/>
<xi:include href="section_fedora-example.xml"/>
<xi:include href="section_windows-example.xml"/>
<xi:include href="section_freebsd-example.xml"/>

@ -1,140 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
<para>An OpenStack Compute cloud is not very useful unless you have virtual machine images
(which some people call "virtual appliances"). This guide describes how to obtain, create,
and modify virtual machine images that are compatible with OpenStack.</para>
<para>To keep things brief, we'll sometimes use the term "image" instead of "virtual machine
<para>What is a virtual machine image?</para>
<para>A virtual machine image is a single file which contains a virtual disk that has a
bootable operating system installed on it.</para>
<para>Virtual machine images come in different formats, some of which are described below.</para>
<listitem><para>The "raw" image format is the simplest one, and is
natively supported by both KVM and Xen hypervisors. You
can think of a raw image as being the bit-equivalent of a
block device file, created as if somebody had copied, say,
<filename>/dev/sda</filename> to a file using the
<command>dd</command> command. <note>
<para>We don't recommend creating raw images by dd'ing
block device files; we discuss how to create raw
images later.</para>
<listitem><para>The <link xlink:href="http://en.wikibooks.org/wiki/QEMU/Images">qcow2</link> (QEMU
copy-on-write version 2) format is commonly used with the KVM hypervisor. It has some
additional features over the raw format, such as:<itemizedlist>
<para>Use of sparse representation, so the image size is smaller.</para>
<para>Support for snapshots.</para>
<para>Because qcow2 is sparse, qcow2 images are typically smaller than raw images. Smaller images mean faster uploads, so it's often faster to convert a raw image to qcow2 for uploading instead of uploading the raw file directly.</para>
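The effect of sparseness can be demonstrated without qemu-img: a file can report a large apparent size while occupying almost no disk blocks, which is the same idea qcow2 uses to keep unwritten guest space off disk. A minimal sketch using a plain sparse file:

```python
import os
import tempfile

# Create a file with a 100 MiB apparent size without writing any data,
# so the filesystem allocates (almost) no blocks for it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(100 * 1024 * 1024)   # extend the file without writing
    path = f.name

st = os.stat(path)
apparent = st.st_size               # the size tools like `ls -l` report
actual = st.st_blocks * 512         # bytes actually allocated on disk
os.unlink(path)
```

On most Linux filesystems `actual` here is near zero, which is why a qcow2 copy of a mostly-empty raw image uploads so much faster than the raw file itself.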
<para>Because raw images don't support snapshots, OpenStack Compute will
automatically convert raw image files to qcow2 as needed.</para>
<listitem><para>The <link
>AMI/AKI/ARI</link> format was the initial image
format supported by Amazon EC2. The image consists of
three files:<itemizedlist>
<listitem><para>AMI (Amazon Machine Image):</para>
<para>This is a virtual machine image in raw
format, as described above.</para>
<para>AKI (Amazon Kernel Image)</para>
<para>A kernel file that the hypervisor will
load initially to boot the image. For a
Linux machine, this would be a
<emphasis>vmlinuz</emphasis> file.
<para>ARI (Amazon Ramdisk Image)</para>
<para>An optional ramdisk file mounted at boot
time. For a Linux machine, this would be
an <emphasis>initrd</emphasis>
<term>UEC tarball</term>
<listitem><para>A UEC (Ubuntu Enterprise Cloud) tarball is a gzipped tarfile that contains an AMI
file, AKI file, and ARI file.<note>
<para>Ubuntu Enterprise Cloud refers to a discontinued Eucalyptus-based Ubuntu cloud
solution that has been replaced by the OpenStack-based Ubuntu Cloud
<listitem><para>VMware's ESXi hypervisor uses the <link
>VMDK</link> (Virtual Machine Disk) format for images.</para></listitem>
<listitem><para>VirtualBox uses the <link
xlink:href="https://forums.virtualbox.org/viewtopic.php?t=8046">VDI</link> (Virtual
Disk Image) format for image files. None of the OpenStack Compute hypervisors support
VDI directly, so you will need to convert these files to a different format to use them
with OpenStack.</para></listitem>
<listitem><para>Microsoft Hyper-V uses the VHD (Virtual Hard Disk) format for images.</para></listitem>
<listitem><para>The version of Hyper-V that ships with Microsoft Server 2012 uses the newer <link
format, which has some additional features over VHD such as support for larger disk
sizes and protection against data corruption during power failures.</para></listitem>
<listitem><para><link xlink:href="http://dmtf.org/sites/default/files/OVF_Overview_Document_2010.pdf"
>OVF</link> (Open Virtualization Format) is a packaging format for virtual
machines, defined by the Distributed Management Task Force (DMTF) standards
group. An OVF package contains one or more image files, a .ovf XML metadata file
that contains information about the virtual machine, and possibly other files as
<para>An OVF package can be distributed in different ways. For example, it could be
distributed as a set of discrete files, or as a tar archive file with an .ova (open
virtual appliance/application) extension.</para>
<para>OpenStack Compute does not currently have support for OVF packages, so you will need
to extract the image file(s) from an OVF package if you wish to use it with
<listitem><para>The <link
>ISO</link> format is a disk image formatted with the read-only ISO 9660 (also known
as ECMA-119) filesystem commonly used for CDs and DVDs. While we don't normally think of
ISO as a virtual machine image format, since ISOs contain bootable filesystems with an
installed operating system, you can treat them the same as you treat other virtual machine
image files.</para></listitem>
<xi:include href="section_glance_image-formats.xml"/>
<xi:include href="section_glance-image-metadata.xml"/>

@ -1,394 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
<chapter xmlns="http://docbook.org/ns/docbook"
<title>Modify images</title>
<?dbhtml stop-chunking?>
<para>Once you have obtained a virtual machine image, you may want
to make some changes to it before uploading it to the
OpenStack Image service. Here we describe several tools
available that allow you to modify images.<warning>
<para>Do not attempt to use these tools to modify an image
that is attached to a running virtual machine. These
tools are designed to only modify images that are not
currently running.</para>
<section xml:id="guestfish">
<para>The <command>guestfish</command> program is a tool from
the <link xlink:href="http://libguestfs.org/"
>libguestfs</link> project that allows you to modify
the files inside of a virtual machine image.</para>
<para><command>guestfish</command> does not mount the
image directly into the local file system. Instead, it
provides you with a shell interface that enables you
to view, edit, and delete files. Many
<command>guestfish</command> commands, such as
<command>chmod</command> and <command>rm</command>,
resemble traditional bash commands.</para>
<title>Example guestfish session</title>
<para>Sometimes, you must modify a virtual machine image
to remove any traces of the MAC address that was
assigned to the virtual network interface card when
the image was first created, because the MAC address
will be different when it boots the next time. This
example shows how to use guestfish to remove
references to the old MAC address by deleting the
file and removing the <literal>HWADDR</literal> line
from the
<para>Assume that you have a CentOS qcow2 image called
<filename>centos63_desktop.img</filename>. Mount
the image in read-write mode as root, as follows:
<screen><prompt>#</prompt> <userinput>guestfish --rw -a centos63_desktop.img</userinput>
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
<para>This starts a guestfish session. Note that the
guestfish prompt looks like a fish: <literal>>
<para>We must first use the <command>run</command> command
at the guestfish prompt before we can do anything
else. This will launch a virtual machine, which will
be used to perform all of the file
manipulations.<screen><prompt>>&lt;fs></prompt> <userinput>run</userinput></screen>
We can now view the file systems in the image using the
command:<screen><prompt>>&lt;fs></prompt> <userinput>list-filesystems</userinput>
<computeroutput>/dev/vda1: ext4
/dev/vg_centosbase/lv_root: ext4
/dev/vg_centosbase/lv_swap: swap</computeroutput></screen>We
need to mount the logical volume that contains the
root partition:
<screen><prompt>>&lt;fs></prompt> <userinput>mount /dev/vg_centosbase/lv_root /</userinput></screen></para>
<para>Next, we want to delete a file. We can use the
<command>rm</command> guestfish command, which
works the same way it does in a traditional
<para><screen><prompt>>&lt;fs></prompt> <userinput>rm /etc/udev/rules.d/70-persistent-net.rules</userinput></screen>We
want to edit the <filename>ifcfg-eth0</filename> file
to remove the <literal>HWADDR</literal> line. The
<command>edit</command> command will copy the file
to the host, invoke your editor, and then copy the
file back.
<screen><prompt>>&lt;fs></prompt> <userinput>edit /etc/sysconfig/network-scripts/ifcfg-eth0</userinput></screen></para>
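Stripping the <literal>HWADDR</literal> line does not require an interactive editor; the edit can be expressed as a small filter. A sketch of the transformation itself (a hypothetical helper mirroring the `sed -i '/^HWADDR/d'` idiom used in the Oz template earlier, not a libguestfs call):

```python
def strip_hwaddr(ifcfg_text: str) -> str:
    """Remove HWADDR lines from ifcfg-eth0 style file contents,
    the same edit as: sed -i '/^HWADDR/d' ifcfg-eth0"""
    return "".join(
        line for line in ifcfg_text.splitlines(keepends=True)
        if not line.startswith("HWADDR")
    )
```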
<para>If you want to modify this image to load the 8021q
kernel module at boot time, you must create an executable
script in the
directory. You can use the <command>touch</command>
guestfish command to create an empty file, the
<command>edit</command> command to edit it, and
the <command>chmod</command> command to make it
executable.<screen><prompt>>&lt;fs></prompt> <userinput>touch /etc/sysconfig/modules/8021q.modules</userinput>
<prompt>>&lt;fs></prompt> <userinput>edit /etc/sysconfig/modules/8021q.modules</userinput></screen>
We add the following line to the file and save
it:<programlisting>modprobe 8021q</programlisting>Then
we make the file executable:
<screen>>&lt;fs> <userinput>chmod 0755 /etc/sysconfig/modules/8021q.modules</userinput></screen></para>
<para>We're done, so we can exit using the
command:<screen><prompt>>&lt;fs></prompt> <userinput>exit</userinput></screen></para>
<title>Go further with guestfish</title>
<para>There is an enormous amount of functionality in
guestfish and a full treatment is beyond the scope of
this document. Instead, we recommend that you read the
>guestfs-recipes</link> documentation page for a
sense of what is possible with these tools.</para>
<section xml:id="guestmount">
<para>For some types of changes, you may find it easier to
mount the image's file system directly in the guest. The
<command>guestmount</command> program, also from the
libguestfs project, allows you to do so.</para>
<para>For example, to mount the root partition from our
<filename>centos63_desktop.qcow2</filename> image to
<filename>/mnt</filename>, we can do:</para>
<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -m /dev/vg_centosbase/lv_root --rw /mnt</userinput></screen>
<para>If we didn't know in advance what the mount point is in
the guest, we could use the <literal>-i</literal> (inspect)
flag to tell guestmount to automatically determine what
mount point to
use:<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -i --rw /mnt</userinput></screen>Once
mounted, we could do things like list the installed
packages using
rpm:<screen><prompt>#</prompt> <userinput>rpm -qa --dbpath /mnt/var/lib/rpm</userinput></screen>
Once done, we
unmount:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput></screen></para>
<section xml:id="virt-tools">
<title>virt-* tools</title>
<para>The <link xlink:href="http://libguestfs.org/"
>libguestfs</link> project has a number of other
useful tools, including:<itemizedlist>
>virt-edit</link> for editing a file
inside of an image.</para>
>virt-df</link> for displaying free space
inside of an image.</para>
>virt-resize</link> for resizing an
>virt-sysprep</link> for preparing an
image for distribution (for example, delete
SSH host keys, remove MAC address info, or
remove user accounts).</para>
>virt-sparsify</link> for making an image
>virt-p2v</link> for converting a physical
machine to an image that runs on KVM.</para>
>virt-v2v</link> for converting Xen and
VMware images to KVM images.</para>
<title>Modify a single file inside of an image</title>
<para>This example shows how to use
<command>virt-edit</command> to modify a file. The
command can take either a filename as an argument with
the <literal>-a</literal> flag, or a domain name as an
argument with the <literal>-d</literal> flag. The
following example shows how to use this to modify the
<filename>/etc/shadow</filename> file in the instance
with libvirt domain name
<literal>instance-000000e1</literal> that is
currently running:</para>
<screen><prompt>#</prompt> <userinput>virsh shutdown instance-000000e1</userinput>
<prompt>#</prompt> <userinput>virt-edit -d instance-000000e1 /etc/shadow</userinput>
<prompt>#</prompt> <userinput>virsh start instance-000000e1</userinput></screen>
<title>Resize an image</title>
<para>Here is an example of how to use
<command>virt-resize</command> to resize an image.
Assume we have a 16&nbsp;GB Windows image in qcow2 format
that we want to resize to 50&nbsp;GB. First, we use
<command>virt-filesystems</command> to identify
partitions:<screen><prompt>#</prompt> <userinput>virt-filesystems --long --parts --blkdevs -h -a /data/images/win2012.qcow2</userinput>
<computeroutput>Name       Type       MBR  Size  Parent
/dev/sda1  partition  07   350M  /dev/sda
/dev/sda2  partition  07   16G   /dev/sda
/dev/sda   device     -    16G   -
<para>In this case, it's the
<filename>/dev/sda2</filename> partition that we
want to resize. We create a new qcow2 image and use
the <command>virt-resize</command> command to write a
resized copy of the original into the new
image:
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /data/images/win2012-50gb.qcow2 50G</userinput>
<prompt>#</prompt> <userinput>virt-resize --expand /dev/sda2 /data/images/win2012.qcow2 \
/data/images/win2012-50gb.qcow2</userinput>
<computeroutput>Examining /data/images/win2012.qcow2 ...
Summary of changes:
/dev/sda1: This partition will be left alone.
/dev/sda2: This partition will be resized from 15.7G to 49.7G. The
filesystem ntfs on /dev/sda2 will be expanded using the
'ntfsresize' method.
Setting up initial partition table on /data/images/win2012-50gb.qcow2 ...
Copying /dev/sda1 ...
100% ⟦▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓⟧ 00:00
Copying /dev/sda2 ...
100% ⟦▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓⟧ 00:00
Expanding /dev/sda2 using the 'ntfsresize' method ...
Resize operation completed with no errors. Before deleting the old
disk, carefully check that the resized disk boots and works correctly.</computeroutput></screen></para>
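The create-then-resize pair above generalizes to any image. A minimal dry-run sketch (the `run` wrapper only prints each command; the paths, partition, and size are the example's):

```shell
#!/bin/sh
# Dry-run sketch of the image-resize recipe above: create a larger
# target image, then let virt-resize copy and expand the partition.
run() { echo "$@"; }

resize_image() {
    src=$1 dst=$2 part=$3 newsize=$4
    run qemu-img create -f qcow2 "$dst" "$newsize"  # new, larger image
    run virt-resize --expand "$part" "$src" "$dst"  # copy and grow $part
}

resize_image /data/images/win2012.qcow2 /data/images/win2012-50gb.qcow2 /dev/sda2 50G
```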
<section xml:id="losetup-kpartx-nbd">
<title>Loop devices, kpartx, network block devices</title>
<para>If you don't have access to libguestfs, you can mount
image file systems directly in the host using loop
devices, kpartx, and network block devices.<warning>
<para>Mounting untrusted guest images using the tools
described in this section is a security risk;
always use libguestfs tools, such as guestfish and
guestmount, if you have access to them. See <link
>A reminder why you should never mount guest
disk images on the host OS</link> by Daniel
Berrangé for more details.</para>
</warning></para>
<title>Mount a raw image (without LVM)</title>
<para>If you have a raw virtual machine image that is not
using LVM to manage its partitions, use the
<command>losetup</command> command to find an
unused loop device.
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput></screen></para>
<para>In this example, <filename>/dev/loop0</filename> is
free. Associate a loop device with the raw
image:<screen><prompt>#</prompt> <userinput>losetup /dev/loop0 fedora17.img</userinput></screen></para>
<para>If the image only has a single partition, you can
mount the loop device
directly:<screen><prompt>#</prompt> <userinput>mount /dev/loop0 /mnt</userinput></screen></para>
<para>If the image has multiple partitions, use
<command>kpartx</command> to expose the partitions
as separate devices (for example,
<filename>/dev/mapper/loop0p1</filename>), then
mount the partition that corresponds to the root file
system:<screen><prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen></para>
<para>If the image has, say, three partitions (/boot, /,
swap), there should be one new device created per
partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/mapper/loop0p*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/mapper/loop0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/mapper/loop0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/mapper/loop0p3</computeroutput></screen>To
mount the second partition, as
root:<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/mapper/loop0p2 /mnt/image</userinput></screen>Once
you're done, clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen></para>
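kpartx names the exposed partitions after the loop device, so partition N of `/dev/loopX` appears as `/dev/mapper/loopXpN`. A tiny helper that builds this path (pure string handling, no root required; the naming is the usual kpartx convention):

```shell
# Build the /dev/mapper path that kpartx creates for a partition of a
# loop device, e.g. partition 2 of /dev/loop0 -> /dev/mapper/loop0p2.
mapper_path() {
    loopdev=$1   # e.g. /dev/loop0
    partnum=$2   # e.g. 2
    echo "/dev/mapper/$(basename "$loopdev")p${partnum}"
}

mapper_path /dev/loop0 2
```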
<title>Mount a raw image (with LVM)</title>
<para>If your partitions are managed with LVM, use losetup
and kpartx as in the previous example to expose the
partitions to the host.</para>
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<prompt>#</prompt> <userinput>losetup /dev/loop0 rhel62.img</userinput>
<prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen>
<para>Next, you need to use the <command>vgscan</command>
command to identify the LVM volume groups and then
<command>vgchange</command> to expose the volumes
as devices:</para>
<screen><prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput>Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen>
<para>Clean up when you're done:</para>
<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen>
<title>Mount a qcow2 image (without LVM)</title>
<para>You need the <literal>nbd</literal> (network block
device) kernel module loaded to mount qcow2 images.
The following command loads it with support for up to 16
partitions per device, which is fine for our purposes. As
root:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput></screen></para>
<para>Assuming the first block device
(<filename>/dev/nbd0</filename>) is not currently
in use, we can expose the disk partitions using the
<command>qemu-nbd</command> and
<command>partprobe</command> commands. As
root:<screen><prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput></screen></para>
<para>If the image has, say, three partitions (/boot, /,
swap), there should be one new device created for
each partition:</para>
<screen><prompt>$</prompt> <userinput>ls -l /dev/nbd0*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 48 2012-03-05 15:32 /dev/nbd0
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/nbd0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/nbd0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/nbd0p3</computeroutput></screen>
<para>If the network block device you selected was
already in use, the initial
<command>qemu-nbd</command> command will fail
silently, and the
<filename>/dev/nbd0p{1,2,3}</filename> device
files will not be created.</para>
<para>If the image partitions are not managed with LVM,
they can be mounted directly:</para>
<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/nbd0p2 /mnt/image</userinput></screen>
<para>When you're done, clean up:</para>
<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen>
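Before running `qemu-nbd -c`, it helps to confirm the device really is free. One way, assuming the nbd driver's sysfs layout (a connected `nbdN` exposes a `pid` attribute), is to scan `/sys/block`; the directory is a parameter here purely so the logic can be exercised against a fake tree:

```shell
#!/bin/sh
# Sketch: pick the first unconnected nbd device. A connected nbdN
# exposes /sys/block/nbdN/pid; a device without that file is free.
find_free_nbd() {
    sys=${1:-/sys/block}
    for dev in "$sys"/nbd*; do
        [ -d "$dev" ] || continue
        [ -e "$dev/pid" ] && continue   # in use by some qemu-nbd
        echo "/dev/${dev##*/}"
        return 0
    done
    return 1   # none free (or the nbd module is not loaded)
}
```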
<title>Mount a qcow2 image (with LVM)</title>
<para>If the image partitions are managed with LVM, after
you use <command>qemu-nbd</command> and
<command>partprobe</command>, you must use
<command>vgscan</command> and <command>vgchange
-ay</command> in order to expose the LVM
partitions as devices that can be
mounted:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput>
<prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput> Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen></para>
<para>When you're done, clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen></para>

@ -1,178 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
<title>Get images</title>
<?dbhtml stop-chunking?>
<para>The simplest way to obtain a virtual machine image that works with
OpenStack is to download one that someone else has already created.
Most of the images contain the
<systemitem class="process">cloud-init</systemitem> package to
support SSH key pair and user data injection. Because many of the
images disable SSH password authentication by default, boot the
image with an injected key pair. You can SSH into the instance with
the private key and default login account. See the
<link xlink:href="http://docs.openstack.org/user-guide"
>OpenStack End User Guide</link> for more information on how to
create and inject key pairs with OpenStack.</para>
<section xml:id="centos-images">
<title>CentOS images</title>
<para>The CentOS project maintains official images for direct
download:</para>
<itemizedlist>
<listitem><para><link xlink:href="http://cloud.centos.org/centos/6/images/"
>CentOS 6 images</link></para></listitem>
<listitem><para><link xlink:href="http://cloud.centos.org/centos/7/images/"
>CentOS 7 images</link></para></listitem>
</itemizedlist>
<para>In a CentOS cloud image, the login account is
<section xml:id="cirros-images">
<title>CirrOS (test) images</title>
<para>CirrOS is a minimal Linux distribution that was designed for use as a test image on
clouds such as OpenStack Compute. You can download a CirrOS image in various formats
from the <link xlink:href="https://download.cirros-cloud.net">CirrOS
download page</link>.</para>
<para>If your deployment uses QEMU or KVM, we recommend using the images in qcow2
format. The most recent 64-bit qcow2 image as of this writing is <link
<para>In a CirrOS image, the login account is <literal>cirros</literal>. The
password is <literal>cubswin:)</literal>.</para>
<section xml:id="ubuntu-images">
<title>Official Ubuntu images</title>
<para>Canonical maintains an <link xlink:href="http://cloud-images.ubuntu.com/">official
set of Ubuntu-based images</link>.</para>
<para>Images are arranged by Ubuntu release, and by image release date, with "current" being
the most recent. For example, the page that contains the most recently built image for
Ubuntu 14.04 "Trusty Tahr" is <link
>http://cloud-images.ubuntu.com/trusty/current/</link>. Scroll to the bottom of the
page for links to images that can be downloaded directly.</para>
<para>If your deployment uses QEMU or KVM, we recommend using the images in qcow2
format. The most recent version of the 64-bit QCOW2 image for Ubuntu 14.04 is <link
<para>In an Ubuntu cloud image, the login account is
<section xml:id="redhat-images">
<title>Official Red Hat Enterprise Linux images</title>
<para>Red Hat maintains official Red Hat Enterprise Linux cloud
images. A valid Red Hat Enterprise Linux subscription is required
to download these images:</para>
<itemizedlist>
<listitem><para><link xlink:href="https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.0/x86_64/product-downloads"
>Red Hat Enterprise Linux 7 KVM Guest Image</link></para></listitem>
<listitem><para><link xlink:href="https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952"
>Red Hat Enterprise Linux 6 KVM Guest Image</link></para></listitem>
</itemizedlist>
<para>In a RHEL cloud image, the login account is
<section xml:id="fedora-images">
<title>Official Fedora images</title>
<para>The Fedora project maintains a list of official cloud images at
<link xlink:href="https://getfedora.org/en/cloud/download/" />.
<para>In a Fedora cloud image, the login account is
<section xml:id="suse-sles-images">
<title>Official openSUSE and SLES images</title>
<para>SUSE provides images for <link xlink:href="http://download.opensuse.org/repositories/Cloud:/Images:/">openSUSE</link>.
For SUSE Linux Enterprise Server (SLES), custom images can be built with
a web-based tool called <link xlink:href="http://susestudio.com">SUSE Studio</link>.
SUSE Studio can also be used to build custom openSUSE images.</para>
<section xml:id="debian-images">
<title>Official Debian images</title>
<para>Since January 2015,
<link xlink:href="http://cdimage.debian.org/cdimage/openstack/">Debian
provides images for direct download</link>. They are now made at the
same time as the CD and DVD images of Debian. However, until Debian 8.0
(aka Jessie) is released, these images are weekly builds of the
testing distribution.</para>
<para>If you wish to build your own images of Debian 7.0 (aka Wheezy, the
current stable release of Debian), you can use the package which is
used to build the official Debian images. It is named
<package>openstack-debian-images</package>, and it
provides a simple script for building them. This package is available
in Debian Unstable, Debian Jessie, and through the wheezy-backports
repositories. To produce a Wheezy image, simply run:
<screen><prompt>#</prompt> <userinput>build-openstack-debian-image -r wheezy</userinput></screen></para>
<para>If building the image for Wheezy, packages like
<package>cloud-init</package>, <package>cloud-utils</package> or
<package>cloud-initramfs-growroot</package> will be pulled from
wheezy-backports. Also, the current version of
<package>bootlogd</package> in Wheezy doesn't support logging to
multiple consoles, which is needed so that both the OpenStack
Dashboard console and the <command>nova console-log</command>
console work. However, a <link
fixed version is available from the non-official GPLHost
repository</link>. To install it on top of the image, it is possible
to use the <option>--hook-script</option> option of the
<command>build-openstack-debian-image</command> script, with this
kind of script as parameter:
<programlisting language="bash">#!/bin/sh
cp bootlogd_2.88dsf-41+deb7u2_amd64.deb ${BODI_CHROOT_PATH}
chroot ${BODI_CHROOT_PATH} dpkg -i bootlogd_2.88dsf-41+deb7u2_amd64.deb
rm ${BODI_CHROOT_PATH}/bootlogd_2.88dsf-41+deb7u2_amd64.deb</programlisting></para>
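With a hook script like the one above saved locally, the build invocation would look something like the following. This is a hypothetical example: the hook-script filename is illustrative, and the command is built into a variable and printed here rather than executed (it must run as root on a Debian system):

```shell
# Hypothetical invocation (the script name is illustrative): build a
# Wheezy image and run the hook script above during the build.
cmd="build-openstack-debian-image -r wheezy --hook-script ./install-bootlogd.sh"
echo "$cmd"   # dry run; execute the command itself as root on Debian
```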
<para>In a Debian image, the login account is <literal>admin</literal>.</para>
<section xml:id="other-distros">
<title>Official images from other Linux distributions</title>
<para>As of this writing, we are not aware of other distributions that provide images for download.</para>
<section xml:id="rcb-images">
<title>Rackspace Cloud Builders (multiple distros)</title>
<para>Rackspace Cloud Builders maintains a list of pre-built images from various
distributions (Red Hat, CentOS, Fedora, Ubuntu). Links to these images can be found at
<link xlink:href="https://github.com/rackerjoe/oz-image-build"
>rackerjoe/oz-image-build on GitHub</link>.</para>
<section xml:id="windows-images">
<title>Microsoft Windows images</title>
<para>Cloudbase Solutions hosts an <link xlink:href="http://www.cloudbase.it/ws2012r2/"
>OpenStack Windows Server 2012 Standard Evaluation image</link> that runs on
Hyper-V, KVM, and XenServer/XCP.</para>

@ -1,549 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
<chapter xmlns="http://docbook.org/ns/docbook"
<title>OpenStack Linux image requirements</title>
<?dbhtml stop-chunking?>
<para>For a Linux-based image to have full functionality in an
OpenStack Compute cloud, there are a few requirements. For
some of these, you can fulfill the requirements by installing
the <link
><package>cloud-init</package></link> package. Read
this section before you create your own image to be sure that
the image supports the OpenStack features that you plan to use.</para>
<itemizedlist>
<listitem><para>Disk partitions and resize root partition on boot</para></listitem>
<listitem><para>No hard-coded MAC address information</para></listitem>
<listitem><para>SSH server running</para></listitem>
<listitem><para>Disable firewall</para></listitem>
<listitem><para>Access instance using ssh public key</para></listitem>
<listitem><para>Process user data and other metadata</para></listitem>
<listitem><para>Paravirtualized Xen support in Linux kernel (Xen
hypervisor only with Linux kernel version &lt;
3.0)</para></listitem>
</itemizedlist>
<section xml:id="support-resizing">
<title>Disk partitions and resize root partition on boot
<para>When you create a Linux image, you must decide how to
partition the disks. The choice of partition method can
affect the resizing functionality, as described in the
following sections.</para>
<para>The size of the disk in a virtual machine image is
determined when you initially create the image. However,
OpenStack lets you launch instances with different size
drives by specifying different flavors. For example, if
your image was created with a 5&nbsp;GB disk and you
launch an instance with a flavor of
<literal>m1.small</literal>, the resulting virtual
machine instance has, by default, a primary disk size of
10&nbsp;GB. When the disk for an instance is resized up,
zeros are just added to the end.</para>
<para>Your image must be able to resize its partitions on boot
to match the size requested by the user. Otherwise, after
the instance boots, you must manually resize the
partitions to access the additional storage that becomes
available when the disk size associated with the flavor
exceeds the disk size with which your image was
created.</para>
<title>Xen: 1 ext3/ext4 partition (no LVM, no /boot, no swap)</title>
<para>If you use the OpenStack XenAPI driver, the Compute
service automatically adjusts the partition and file
system for your instance on boot. Automatic resize
occurs if the following conditions are all
true:<itemizedlist>
<listitem><para><literal>auto_disk_config=True</literal> is
set as a property on the image in the image
registry.</para></listitem>
<listitem><para>The disk on the image has only one
partition.</para></listitem>
<listitem><para>The file system on the one partition is ext3
or ext4.</para></listitem>
</itemizedlist></para>
<para>Therefore, if you use Xen, we recommend that when
you create your images, you create a single ext3 or
ext4 partition (not managed by LVM). Otherwise, read on.</para>
<title>Non-Xen with cloud-init/cloud-tools: One ext3/ext4
partition (no LVM, no /boot, no swap)</title>
<para>You must configure these items for your
image:<itemizedlist>
<listitem><para>The partition table for the image describes
the original size of the image.</para></listitem>
<listitem><para>The file system for the image fills the
original size of the image.</para></listitem>
</itemizedlist></para>
<para>Then, during the boot process, you must:</para>
<orderedlist>
<listitem><para>Modify the partition table to make it aware
of the additional space:</para>
<itemizedlist>
<listitem><para>If you do not use LVM, you must
modify the table to extend the
existing root partition to encompass
this additional space.</para></listitem>
<listitem><para>If you use LVM, you can add a new
LVM entry to the partition table,
create a new LVM physical volume, add
it to the volume group, and extend the
logical partition with the root
volume.</para></listitem>
</itemizedlist>
</listitem>
<listitem><para>Resize the root volume file system.</para></listitem>
</orderedlist>
<para>The simplest way to support this is to
install the following packages in your image:</para>
<para><link xlink:href="https://launchpad.net/cloud-utils">cloud-utils</link>
package, which contains the <command>growpart</command>
tool for extending partitions.</para>
<para><link xlink:href="https://launchpad.net/cloud-initramfs-tools">cloud-initramfs-growroot</link>
package for Ubuntu, Debian, and Fedora, which supports
resizing the root partition on the first boot.</para>
package for CentOS and RHEL.</para>
<para><link xlink:href="https://launchpad.net/cloud-init">cloud-init</link>
<para>With these packages installed, the image
performs the root partition resize on boot, for
example from the <filename>/etc/rc.local</filename>
file. These packages are in the Ubuntu and Debian
package repositories, as well as in the EPEL repository
(for Fedora/RHEL/CentOS/Scientific Linux
<para>If you cannot install
<literal>cloud-initramfs-tools</literal>, Robert
Plestenjak has a GitHub project called <link
>linux-rootfs-resize</link> that contains scripts
that update a ramdisk by using
<command>growpart</command> so that the image
resizes properly on boot.</para>
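On an image where cloud-utils is available, the boot-time resize amounts to growing the root partition and then the file system. A minimal dry-run sketch, not the packages' actual implementation: the device and partition number are assumptions for a typical single-partition guest, and `run` only prints each command:

```shell
#!/bin/sh
# Dry-run sketch of the boot-time resize that growpart enables,
# e.g. called from /etc/rc.local. /dev/vda with partition 1 is an
# illustrative layout; "run" only prints (drop the echo to execute).
run() { echo "$@"; }

grow_root() {
    disk=$1 part=$2
    run growpart "$disk" "$part"     # extend the partition table entry
    run resize2fs "${disk}${part}"   # grow the ext3/ext4 file system
}

grow_root /dev/vda 1
```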
<para>If you can install the cloud-utils and
<package>cloud-init</package> packages, we
recommend that when you create your images, you create
a single ext3 or ext4 partition (not managed by
LVM).</para>
<title>Non-Xen without cloud-init/cloud-tools: LVM</title>
<para>If you cannot install <package>cloud-init</package>
and <package>cloud-tools</package> inside of your
guest, and you want to support resize, you must write
a script that your image runs on boot to modify the
partition table. In this case, we recommend using LVM
to manage your partitions. Due to a limitation in the
Linux kernel (as of this writing), you cannot modify a
partition table of a raw disk that has partitions
currently mounted, but you can do this for LVM.</para>
<para>Your script must do something like the following:<orderedlist>
<listitem><para>Detect if any additional space is
available on the disk. For example, parse
the output of <command>parted /dev/sda
--script "print free"</command>.</para></listitem>
<listitem><para>Create a new LVM partition with the
additional space. For example,
<command>parted /dev/sda --script
"mkpart lvm ..."</command>.</para></listitem>
<listitem><para>Create a new physical volume. For
example, <command>pvcreate ...</command>.</para></listitem>
<listitem><para>Extend the volume group with this
physical partition. For example,
<command>vgextend ...</command>.</para></listitem>
<listitem><para>Extend the logical volume that contains the
root partition by the amount of new space. For
example, <command>lvextend ...</command>.</para></listitem>
<listitem><para>Resize the root file system. For
example, <command>resize2fs ...</command>.</para></listitem>
</orderedlist></para>
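The steps above can be sketched as one boot-time script. Everything here is illustrative: the disk, new partition, and volume-group names are assumptions, the mkpart bounds are elided as in the text, and `run` only prints each command (drop the `echo` to execute):

```shell
#!/bin/sh
# Dry-run sketch of the LVM grow sequence described above.
# /dev/sda, /dev/sda6, vg00 and lv_root are assumed names.
run() { echo "$@"; }

grow_root_lvm() {
    disk=$1 newpart=$2 vg=$3 lv=$4
    run parted "$disk" --script "print free"       # 1. find unused space
    run parted "$disk" --script "mkpart lvm ..."   # 2. new partition (bounds elided)
    run pvcreate "$newpart"                        # 3. physical volume on it
    run vgextend "$vg" "$newpart"                  # 4. grow the volume group
    run lvextend "$lv" "$newpart"                  # 5. grow the root logical volume
    run resize2fs "$lv"                            # 6. grow the file system
}

grow_root_lvm /dev/sda /dev/sda6 vg00 /dev/vg00/lv_root
```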
<para>You do not need a <filename>/boot</filename>
partition unless your image is an older Linux
distribution that requires that
<filename>/boot</filename> is not managed by
LVM.</para>
<section xml:id="mac-address">
<title>No hard-coded MAC address information</title>
<para>You must remove the network persistence rules in the
image because they cause the network interface in the
instance to come up as an interface other than eth0. This
is because your image has a record of the MAC address of
the network interface card when it was first installed,
and this MAC address is different each time the
instance boots. You should alter the following
files:<itemizedlist>
<listitem><para>Replace the udev rules file that contains the
network persistence rules, including the MAC address,
with an empty file.</para></listitem>
<listitem><para>Replace the udev rules generator (this generates the file
above) with an empty file.</para></listitem>
<listitem><para>Remove the HWADDR line from the network interface
configuration file
on Fedora-based images.</para></listitem>
</itemizedlist></para>
<para>If you delete the network persistence rules files,
you may get a udev kernel warning at boot time, which
is why we recommend replacing them with empty files
instead.</para>
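Blanking the rules can be scripted against a mounted image tree. The exact paths vary by distribution; the two used below are the typical udev locations and are an assumption here, not taken from this guide:

```shell
# Sketch: truncate the udev persistent-net rules inside an unpacked or
# mounted image tree. Paths are typical udev locations (an assumption;
# check your distribution). Empty files are kept, not deleted, so udev
# neither regenerates the rules nor warns at boot.
clear_persistent_net() {
    root=$1   # image root, e.g. /mnt/image
    for f in \
        "$root/etc/udev/rules.d/70-persistent-net.rules" \
        "$root/lib/udev/rules.d/75-persistent-net-generator.rules"
    do
        mkdir -p "${f%/*}"   # make sure the directory exists
        : > "$f"             # replace with an empty file
    done
}
```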
<section xml:id="ensure-ssh-server">
<title>Ensure ssh server runs</title>
<para>You must install an ssh server into the image and ensure
that it starts up on boot, or you cannot connect to your
instance by using ssh when it boots inside of OpenStack.
This package is typically called
<section xml:id="disable-firewall">
<title>Disable firewall</title>
<para>In general, we recommend that you disable any firewalls
inside of your image and use OpenStack security groups