[image-guide] Publish RST Virtual Machine Image Guide

Change-Id: I694301758f7f85290d4c9f9b01fbd1924b02b476
Implements: blueprint image-guide-rst
KATO Tomoyuki 2015-11-23 18:23:39 +09:00 committed by Andreas Jaeger
parent b93354e5ba
commit 79aa72b0cb
71 changed files with 12 additions and 15830 deletions


@@ -30,6 +30,11 @@ Operations Guide
* Shared File Systems chapter added.
Virtual Machine Image Guide
---------------------------
* RST conversion finished.
Translations
------------


@@ -1,20 +1,15 @@
# directories to be set up
declare -A DIRECTORIES=(
["fr"]="common glossary image-guide"
["ja"]="common glossary image-guide"
["zh_CN"]="common glossary arch-design image-guide"
["zh_CN"]="common glossary arch-design"
)
# books to be built
declare -A BOOKS=(
["fr"]="image-guide"
["ja"]="image-guide user-guide user-guide-admin install-guide networking-guide"
["zh_CN"]="arch-design image-guide"
["ja"]="user-guide user-guide-admin install-guide networking-guide"
)
# draft books
declare -A DRAFTS=(
["fr"]="image-guide"
["ja"]="install-guide networking-guide"
)
@@ -30,6 +25,7 @@ DOC_DIR="doc/"
# project-config/jenkins/scripts/common_translation_update.sh
declare -A SPECIAL_BOOKS=(
["admin-guide-cloud"]="RST"
["image-guide"]="RST"
["install-guide"]="RST"
["networking-guide"]="RST"
["user-guide"]="RST"
@@ -38,7 +34,6 @@ declare -A SPECIAL_BOOKS=(
["contributor-guide"]="skip"
["arch-design-rst"]="skip"
["config-ref-rst"]="skip"
["image-guide-rst"]="skip"
# This needs special handling, handle it with the RST tools.
["common-rst"]="RST"
)


@@ -1,52 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="openstack-image-manual">
<title>OpenStack Virtual Machine Image Guide</title>
<?rax title.font.size="28px" subtitle.font.size="28px"?>
<titleabbrev>VM Image Guide</titleabbrev>
<info>
<author>
<personname>
<firstname/>
<surname/>
</personname>
<affiliation>
<orgname>OpenStack Foundation</orgname>
</affiliation>
</author>
<copyright>
<year>2013</year>
<year>2014</year>
<year>2015</year>
<holder>OpenStack Foundation</holder>
</copyright>
<releaseinfo>current</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="cc-by">
<annotation>
<remark>Remaining licensing details are filled in by
the template.</remark>
</annotation>
</legalnotice>
<abstract>
<para>This guide describes how to obtain, create, and
modify virtual machine images that are compatible with
OpenStack.</para>
</abstract>
</info>
<!-- Chapters are referred from the book file through these include statements. You can add additional chapters using these types of statements. -->
<xi:include href="../common/ch_preface.xml"/>
<xi:include href="ch_introduction.xml"/>
<xi:include href="ch_obtaining_images.xml"/>
<xi:include href="ch_openstack_images.xml"/>
<xi:include href="ch_modifying_images.xml"/>
<xi:include href="ch_creating_images_manually.xml"/>
<xi:include href="ch_creating_images_automatically.xml"/>
<xi:include href="ch_converting.xml"/>
<xi:include href="../common/app_support.xml"/>
</book>


@@ -1,82 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_converting">
<title>Converting between image formats</title>
<para>Converting images from one format to another is generally straightforward.</para>
<simplesect>
<title>qemu-img convert: raw, qcow2, qed, vdi, vmdk, vhd</title>
<para>The <command>qemu-img convert</command> command can do conversion between multiple
formats, including qcow2, qed, raw, vdi, vhd, and vmdk.</para>
<table
rules="all">
<caption>qemu-img format strings</caption>
<thead>
<tr>
<th>Image format</th>
<th>Argument to qemu-img</th>
</tr>
</thead>
<tbody>
<tr>
<td>QCOW2 (KVM, Xen)</td>
<td><literal>qcow2</literal></td>
</tr>
<tr>
<td>QED (KVM)</td>
<td><literal>qed</literal></td>
</tr>
<tr>
<td>raw</td>
<td><literal>raw</literal></td>
</tr>
<tr>
<td>VDI (VirtualBox)</td>
<td><literal>vdi</literal></td>
</tr>
<tr>
<td>VHD (Hyper-V)</td>
<td><literal>vpc</literal></td>
</tr>
<tr>
<td>VMDK (VMware)</td>
<td><literal>vmdk</literal></td>
</tr>
</tbody>
</table>
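The format strings in the table above can be captured as a small helper for scripting conversions. This is a sketch; the `qemu_fmt` function name is ours, but the name-to-argument mapping comes straight from the table.

```shell
#!/bin/sh
# Map a human-readable image format name to the format argument that
# qemu-img expects. The mapping mirrors the table above; the helper
# itself is illustrative, not part of qemu.
qemu_fmt() {
    case "$1" in
        QCOW2) echo qcow2 ;;
        QED)   echo qed ;;
        raw)   echo raw ;;
        VDI)   echo vdi ;;
        VHD)   echo vpc ;;   # qemu-img calls the Hyper-V VHD format "vpc"
        VMDK)  echo vmdk ;;
        *)     echo "unknown format: $1" >&2; return 1 ;;
    esac
}

qemu_fmt VHD    # prints "vpc"
```

The VHD row is the one that most often trips people up: qemu-img refers to Microsoft's VHD format as `vpc`, not `vhd`.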
<para>The following example converts a raw image file named <filename>centos7.img</filename> to a qcow2 image file.</para>
<para>
<screen><prompt>$</prompt> <userinput>qemu-img convert -f raw -O qcow2 centos7.img centos7.qcow2</userinput></screen>
</para>
<para>Run the following command to convert a vmdk image file to a raw image file.
<screen><prompt>$</prompt> <userinput>qemu-img convert -f vmdk -O raw centos7.vmdk centos7.img</userinput></screen>
</para>
<para>Run the following command to convert a vmdk image file to a qcow2 image file.
<screen><prompt>$</prompt> <userinput>qemu-img convert -f vmdk -O qcow2 centos7.vmdk centos7.qcow2</userinput></screen>
</para>
<para>
<note>
<para>The <literal>-f <replaceable>format</replaceable></literal> flag is optional.
If omitted, <command>qemu-img</command> will try to infer the image format.</para>
<para>When converting an image file with Windows OS, ensure the virtio driver is
installed. Otherwise, you will get a blue screen of death or BSOD when
launching the image due to lack of the virtio driver. Another option is to
set the image properties as below when you update the image in glance to
avoid this issue, but it will reduce performance significantly.</para>
<screen><prompt>$</prompt> <userinput>glance image-update --property hw_disk_bus='ide' image_id</userinput></screen>
</note>
</para>
</simplesect>
<simplesect>
<title>VBoxManage: VDI (VirtualBox) to raw</title>
<para>If you've created a VDI image using VirtualBox, you can convert it to raw format using
the <command>VBoxManage</command> command-line tool that ships with VirtualBox. On Mac
OS X and Linux, VirtualBox stores images by default in the <filename>~/VirtualBox VMs/</filename>
directory. The following example creates a raw image in the current directory from a
VirtualBox VDI image.</para>
<screen><prompt>$</prompt> <userinput>VBoxManage clonehd ~/VirtualBox\ VMs/fedora21.vdi fedora21.img --format raw</userinput></screen>
</simplesect>
</chapter>


@@ -1,184 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_creating_images_automatically">
<title>Tool support for image creation</title>
<?dbhtml stop-chunking?>
<para>There are several tools that are designed to automate image
creation.</para>
<section xml:id="Diskimage-builder">
<title>Diskimage-builder</title>
<para><link xlink:href="http://docs.openstack.org/developer/diskimage-builder/"
>Diskimage-builder</link> is an automated disk image creation
tool that supports a variety of distributions and architectures.
Diskimage-builder (DIB) can build images for Fedora,
Red Hat Enterprise Linux, Ubuntu, Debian, CentOS, and openSUSE.
DIB is organized in a series of elements that build on top of
each other to create specific images.</para>
<para>To build an image, call the following script:</para>
<screen><prompt>#</prompt> <userinput>disk-image-create ubuntu vm</userinput></screen>
<para>This example creates a generic, bootable Ubuntu image of the latest
release.</para>
<para>Further customization could be accomplished by setting
environment variables or adding elements to the command-line:</para>
<screen><prompt>#</prompt> <userinput>disk-image-create -a armhf ubuntu vm</userinput></screen>
<para>This example creates the image as before, but for the ARM architecture.
More elements are available in the
<link xlink:href="https://github.com/openstack/diskimage-builder/tree/master/elements"
>git source directory</link> and documented in the <link
xlink:href="http://docs.openstack.org/developer/diskimage-builder/elements.html"
>diskimage-builder elements documentation</link>.
</para>
</section>
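Diskimage-builder picks up much of its customization from environment variables. A minimal sketch follows, using the standard `DIB_RELEASE` variable; the release value shown is only an example, and the build command is commented out so the sketch does not require diskimage-builder to be installed.

```shell
#!/bin/sh
# Sketch: customizing a diskimage-builder run through environment
# variables. DIB_RELEASE selects the distribution release to build;
# "trusty" here is an example value, not a recommendation.
export DIB_RELEASE=trusty

# On a host with diskimage-builder installed, the following would build
# an armhf Ubuntu image of the selected release:
# disk-image-create -a armhf ubuntu vm

echo "would build ubuntu $DIB_RELEASE"
```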
<section xml:id="oz">
<title>Oz</title>
<para><link xlink:href="https://github.com/clalancette/oz/wiki"
>Oz</link> is a command-line tool that automates the process of
creating a virtual machine image file. Oz is a Python app that
interacts with KVM to step through the process of installing a
virtual machine. It uses a predefined set of kickstart (Red
Hat-based systems) and preseed files (Debian-based systems) for
operating systems that it supports, and it can also be used to
create Microsoft Windows images. On Fedora, install Oz with yum:</para>
<screen><prompt>#</prompt> <userinput>yum install oz</userinput></screen>
<note><para>As of this writing, there are no Oz packages for Ubuntu,
so you will need to either install from source or build your
own .deb file.</para>
</note>
<para>A full treatment of Oz is beyond the scope of this document, but
we will provide an example. You can find additional examples of Oz
template files on GitHub at <link
xlink:href="https://github.com/rackerjoe/oz-image-build/tree/master/templates"
>rackerjoe/oz-image-build/templates</link>. Here's how you would
create a CentOS 6.4 image with Oz.</para>
<para>Create a template file (we'll call it
<filename>centos64.tdl</filename>) with the following contents.
The only entry you will need to change is the
<literal>&lt;rootpw></literal>
contents.</para>
<programlisting language="xml">&lt;template>
&lt;name>centos64&lt;/name>
&lt;os>
&lt;name>CentOS-6&lt;/name>
&lt;version>4&lt;/version>
&lt;arch>x86_64&lt;/arch>
&lt;install type='iso'>
&lt;iso>http://mirror.rackspace.com/CentOS/6/isos/x86_64/CentOS-6.4-x86_64-bin-DVD1.iso&lt;/iso>
&lt;/install>
&lt;rootpw>CHANGE THIS TO YOUR ROOT PASSWORD&lt;/rootpw>
&lt;/os>
&lt;description>CentOS 6.4 x86_64&lt;/description>
&lt;repositories>
&lt;repository name='epel-6'>
&lt;url>http://download.fedoraproject.org/pub/epel/6/$basearch&lt;/url>
&lt;signed>no&lt;/signed>
&lt;/repository>
&lt;/repositories>
&lt;packages>
&lt;package name='epel-release'/>
&lt;package name='cloud-utils'/>
&lt;package name='cloud-init'/>
&lt;/packages>
&lt;commands>
&lt;command name='update'>
yum -y update
yum clean all
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules
&lt;/command>
&lt;/commands>
&lt;/template></programlisting>
<para>This Oz template specifies where to download the CentOS 6.4
install ISO. Oz will use the version information to identify which
kickstart file to use. In this case, it will be <link
xlink:href="https://github.com/clalancette/oz/blob/master/oz/auto/RHEL6.auto"
>RHEL6.auto</link>. It adds EPEL as a repository and installs the
<literal>epel-release</literal>, <literal>cloud-utils</literal>,
and <literal>cloud-init</literal> packages, as specified in the
<literal>packages</literal> section of the file.</para>
<para>After Oz completes the initial OS install using the kickstart file,
it customizes the image with an update. It also removes any reference to the eth0
device that libvirt creates while Oz does the customizing, as
specified in the <literal>command</literal> section of the XML
file.</para>
<para>To run this:</para>
<screen><prompt>#</prompt> <userinput>oz-install -d3 -u centos64.tdl -x centos64-libvirt.xml</userinput></screen>
<itemizedlist>
<listitem>
<para>The <literal>-d3</literal> flag tells Oz to show
status information as it runs.</para>
</listitem>
<listitem>
<para>The <literal>-u</literal> flag tells Oz to do the
customization (install extra packages, run the commands)
after it does the initial install.</para>
</listitem>
<listitem>
<para>The <literal>-x &lt;filename></literal> flag tells Oz
what filename to use to write out a libvirt XML file
(otherwise it will default to something like
<filename>centos64Apr_03_2013-12:39:42</filename>).</para>
</listitem>
</itemizedlist>
<para>If you leave out the <literal>-u</literal> flag, or
you want to edit the file to do additional customizations, you can
use the <command>oz-customize</command> command, using the libvirt
XML file that <command>oz-install</command> creates. For example:</para>
<screen><prompt>#</prompt> <userinput>oz-customize -d3 centos64.tdl centos64-libvirt.xml</userinput></screen>
<para>Oz will invoke libvirt to boot the image inside KVM, and will then
ssh into the instance to perform the customizations.</para>
</section>
<section xml:id="vmbuilder">
<title>VMBuilder</title>
<para><link xlink:href="https://launchpad.net/vmbuilder"
>VMBuilder</link> (Virtual Machine Builder) is a
command-line tool that creates virtual machine images for
different hypervisors. The version of VMBuilder that ships
with Ubuntu can only create Ubuntu virtual machine guests.
The version of VMBuilder that ships with Debian can create
Ubuntu and Debian virtual machine guests.</para>
<para>The <link
xlink:href="https://help.ubuntu.com/12.04/serverguide/jeos-and-vmbuilder.html"
><citetitle>Ubuntu Server Guide</citetitle></link>
has documentation on how to use VMBuilder to create an
Ubuntu image.</para>
</section>
<section xml:id="veewee">
<title>VeeWee</title>
<para><link xlink:href="https://github.com/jedi4ever/veewee">
VeeWee</link> is often used to build <link
xlink:href="http://vagrantup.com">Vagrant</link>
boxes, but it can also be used to build KVM images.</para>
</section>
<section xml:id="packer">
<title>Packer</title>
<para><link xlink:href="https://packer.io">
Packer</link> is a tool for creating machine images for multiple platforms
from a single source configuration.
</para>
</section>
<section xml:id="imagefactory">
<title>imagefactory</title>
<para><link xlink:href="http://imgfac.org/"
>imagefactory</link> is a newer tool designed to
automate building, converting, and uploading images to
different cloud providers. It uses Oz as its back-end and
includes support for OpenStack-based clouds.</para>
</section>
<section xml:id="susestudio">
<title>SUSE Studio</title>
<para><link xlink:href="http://susestudio.com">SUSE
Studio</link> is a web application for building and
testing software applications in a web browser. It
supports the creation of physical, virtual, or cloud-based
applications and includes support for building images for
OpenStack based clouds using SUSE Linux Enterprise and
openSUSE as distributions.</para>
</section>
</chapter>


@@ -1,185 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_creating_images_manually">
<title>Create images manually</title>
<para>Creating a new image is a step done outside of your
OpenStack installation. You create the new image manually on
your own system and then upload the image to your
cloud.</para>
<para>To create a new image, you will need the installation CD or
DVD ISO file for the guest operating system. You'll also need
access to a virtualization tool. You can use KVM for this. Or,
if you have a GUI desktop virtualization tool (such as VMware
Fusion or VirtualBox), you can use that instead and just
convert the file to raw once you are done.</para>
<para>When you create a new virtual machine image, you will need
to connect to the graphical console of the hypervisor, which
acts as the virtual machine's display and allows you to
interact with the guest operating system's installer using
your keyboard and mouse. KVM can expose the graphical console
using the <link
xlink:href="https://en.wikipedia.org/wiki/Virtual_Network_Computing"
>VNC</link> (Virtual Network Computing) protocol or the
newer <link xlink:href="http://spice-space.org">SPICE</link>
protocol. We'll use the VNC protocol here, since you're more
likely to be able to find a VNC client that works on your
local desktop.</para>
<section xml:id="net-running">
<title>Verify the libvirt default network is running</title>
<para>Before starting a virtual machine with libvirt, verify
that the libvirt "default" network has been started. This
network must be active for your virtual machine to be able
to connect out to the network. Starting this network will
create a Linux bridge (usually called
<literal>virbr0</literal>), iptables rules, and a
dnsmasq process that will serve as a DHCP server.</para>
<para>To verify that the libvirt "default" network is enabled,
use the <command>virsh net-list</command> command and
verify that the "default" network is active:</para>
<screen><prompt>#</prompt> <userinput>virsh net-list</userinput>
<computeroutput>Name State Autostart
-----------------------------------------
default active yes</computeroutput></screen>
<para>If the network is not active, start it by doing:</para>
<screen><prompt>#</prompt> <userinput>virsh net-start default</userinput></screen>
</section>
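The check-and-start sequence above is easy to script. The sketch below parses canned `virsh net-list` output so it can run without libvirt installed; on a real host you would pipe in the output of `virsh net-list` instead, and uncomment the `virsh net-start` call.

```shell
#!/bin/sh
# Succeeds (exit 0) if the "default" network appears as active in
# `virsh net-list` output read from stdin.
default_net_active() {
    awk '$1 == "default" && $2 == "active" { found = 1 } END { exit !found }'
}

# Canned output standing in for a real `virsh net-list` call.
sample='Name                 State      Autostart
-----------------------------------------
default              active     yes'

if printf '%s\n' "$sample" | default_net_active; then
    echo "default network is active"
else
    echo "default network is inactive; run: virsh net-start default"
    # virsh net-start default   # uncomment on a real host
fi
```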
<section xml:id="virt-manager">
<title>Use the virt-manager X11 GUI</title>
<para>If you plan to create a virtual machine image on a
machine that can run X11 applications, the simplest way to
do so is to use the <command>virt-manager</command> GUI,
which is installable as the
<literal>virt-manager</literal> package on both
Fedora-based and Debian-based systems. This GUI has an
embedded VNC client that will let you view and
interact with the guest's graphical console.</para>
<para>If you are building the image on a headless server, and
you have an X server on your local machine, you can launch
<command>virt-manager</command> using ssh X11
forwarding to access the GUI. Since virt-manager interacts
directly with libvirt, you typically need to be root to
access it. If you can ssh directly in as root (or with a
user that has permissions to interact with libvirt),
do:<screen><prompt>$</prompt> <userinput>ssh -X root@server virt-manager</userinput></screen></para>
<para>If the account you use to ssh into your server does not
have permissions to run libvirt, but has sudo privileges, do:<screen><prompt>$</prompt> <userinput>ssh -X user@server</userinput>
<prompt>$</prompt> <userinput>sudo virt-manager</userinput></screen><note>
<para>The <literal>-X</literal> flag passed to ssh
will enable X11 forwarding over ssh. If this does
not work, try replacing it with the
<literal>-Y</literal> flag.</para>
</note></para>
<para>Click the "New" button at the top-left and step through
the instructions. <mediaobject>
<imageobject>
<imagedata fileref="figures/virt-manager-new.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>You will be shown a series of dialog boxes
that will allow you to specify information about the
virtual machine.</para>
<note><para>
When using qcow2 format images, select the
'customize before install' option, go to the disk properties, and
explicitly select the qcow2 format. This ensures that the virtual
machine disk size is correct.
</para></note>
</section>
<section xml:id="virt-install">
<title>Use virt-install and connect by using a local VNC
client</title>
<para>If you do not wish to use virt-manager (for example, because you
do not want to install the dependencies on your server,
you do not have an X server running locally, or X11
forwarding over SSH is not working), you can use the
<command>virt-install</command> tool to boot the
virtual machine through libvirt and connect to the
graphical console from a VNC client installed on your
local machine.</para>
<para>Because VNC is a standard protocol, there are multiple
clients available that implement the VNC spec, including
<link
xlink:href="http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Welcome_to_TigerVNC"
>TigerVNC</link> (multiple platforms), <link
xlink:href="http://tightvnc.com/">TightVNC</link>
(multiple platforms), <link
xlink:href="http://realvnc.com/">RealVNC</link>
(multiple platforms), <link
xlink:href="http://sourceforge.net/projects/chicken/"
>Chicken</link> (Mac OS X), <link
xlink:href="http://userbase.kde.org/Krdc">Krdc</link>
(KDE), and <link
xlink:href="http://projects.gnome.org/vinagre/"
>Vinagre</link> (GNOME).</para>
<para>The following example shows how to use the
<command>qemu-img</command> command to create an empty
image file, and <command>virt-install</command> command to
start up a virtual machine using that image file. As
root:</para>
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /data/centos-6.4.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name centos-6.4 --ram 1024 \
--cdrom=/data/CentOS-6.4-x86_64-netinstall.iso \
--disk path=/data/centos-6.4.qcow2,size=10,format=qcow2 \
--network network=default \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--os-type=linux --os-variant=rhel6</userinput>
<computeroutput>
Starting install...
Creating domain... | 0 B 00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.</computeroutput></screen>
<para>
The KVM hypervisor starts the virtual machine with the
libvirt name, <literal>centos-6.4</literal>, with
1024&nbsp;MB of RAM. The virtual machine also has a virtual
CD-ROM drive associated with the
<filename>/data/CentOS-6.4-x86_64-netinstall.iso</filename>
file and a local 10&nbsp;GB hard disk in qcow2 format that is
stored in the host at
<filename>/data/centos-6.4.qcow2</filename>.
It configures networking to
use libvirt's default network. There is a VNC server that
is listening on all interfaces, and libvirt will not
attempt to launch a VNC client automatically nor try to
display the text console
(<literal>--noautoconsole</literal>). Finally,
libvirt will attempt to optimize the configuration for a
Linux guest running a RHEL 6.x distribution.<note>
<para>When using the libvirt
<literal>default</literal> network, libvirt will
connect the virtual machine's interface to a bridge
called <literal>virbr0</literal>. There is a dnsmasq
process managed by libvirt that will hand out an IP
address on the 192.168.122.0/24 subnet, and libvirt
has iptables rules for doing NAT for IP addresses on
this subnet.</para>
</note></para>
<para>Run the <command>virt-install --os-variant
list</command> command to see a range of allowed
<literal>--os-variant</literal> options.</para>
<para>Use the <command>virsh vncdisplay
<replaceable>vm-name</replaceable></command>
command to get the VNC port number.</para>
<screen><prompt>#</prompt> <userinput>virsh vncdisplay centos-6.4</userinput>
<computeroutput>:1</computeroutput></screen>
<para>In the example above, the guest
<literal>centos-6.4</literal> uses VNC display
<literal>:1</literal>, which corresponds to TCP port
<literal>5901</literal>. You should be able to connect
a VNC client running on your local machine to display
:1 on the remote machine and step through the installation
process.</para>
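The display-to-port mapping used above follows the VNC convention: display :N listens on TCP port 5900 + N. A one-line helper makes the arithmetic explicit (the `vnc_port` name is ours):

```shell
#!/bin/sh
# Convert a VNC display number, as printed by `virsh vncdisplay`,
# to the TCP port a VNC client should connect to: port = 5900 + N.
vnc_port() {
    display=${1#:}              # strip the leading ":" from e.g. ":1"
    echo $((5900 + display))
}

vnc_port :1    # prints 5901, matching the example above
```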
</section>
<xi:include href="section_centos-example.xml"/>
<xi:include href="section_ubuntu-example.xml"/>
<xi:include href="section_fedora-example.xml"/>
<xi:include href="section_windows-example.xml"/>
<xi:include href="section_freebsd-example.xml"/>
</chapter>


@@ -1,140 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_introduction">
<title>Introduction</title>
<para>An OpenStack Compute cloud is not very useful unless you have virtual machine images
(which some people call "virtual appliances"). This guide describes how to obtain, create,
and modify virtual machine images that are compatible with OpenStack.</para>
<para>To keep things brief, we'll sometimes use the term "image" instead of "virtual machine
image".</para>
<para>What is a virtual machine image?</para>
<para>A virtual machine image is a single file which contains a virtual disk that has a
bootable operating system installed on it.</para>
<para>Virtual machine images come in different formats, some of which are described below.</para>
<variablelist>
<varlistentry>
<term>Raw</term>
<listitem><para>The "raw" image format is the simplest one, and is
natively supported by both KVM and Xen hypervisors. You
can think of a raw image as being the bit-equivalent of a
block device file, created as if somebody had copied, say,
<filename>/dev/sda</filename> to a file using the
<command>dd</command> command. <note>
<para>We do not recommend creating raw images by dd'ing
block device files; we discuss how to create raw
images later.</para>
</note></para></listitem>
</varlistentry>
<varlistentry>
<term>qcow2</term>
<listitem><para>The <link xlink:href="http://en.wikibooks.org/wiki/QEMU/Images">qcow2</link> (QEMU
copy-on-write version 2) format is commonly used with the KVM hypervisor. It has some
additional features over the raw format, such as:<itemizedlist>
<listitem>
<para>Using sparse representation, so the image size is smaller.</para>
</listitem>
<listitem>
<para>Support for snapshots.</para>
</listitem>
</itemizedlist></para>
<para>Because qcow2 is sparse, qcow2 images are typically smaller than raw images. Smaller images mean faster uploads, so it's often faster to convert a raw image to qcow2 for uploading instead of uploading the raw file directly.</para>
<para>
<note>
<para>Because raw images don't support snapshots, OpenStack Compute will
automatically convert raw image files to qcow2 as needed.</para>
</note>
</para></listitem>
</varlistentry>
<varlistentry>
<term>AMI/AKI/ARI</term>
<listitem><para>The <link
xlink:href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html"
>AMI/AKI/ARI </link>format was the initial image
format supported by Amazon EC2. The image consists of
three files:<itemizedlist>
<listitem><para>AMI (Amazon Machine Image):</para>
<para>This is a virtual machine image in raw
format, as described above.</para>
</listitem>
<listitem>
<para>AKI (Amazon Kernel Image)</para>
<para>A kernel file that the hypervisor will
load initially to boot the image. For a
Linux machine, this would be a
<emphasis>vmlinuz</emphasis> file.
</para>
</listitem>
<listitem>
<para>ARI (Amazon Ramdisk Image)</para>
<para>An optional ramdisk file mounted at boot
time. For a Linux machine, this would be
an <emphasis>initrd</emphasis>
file.</para>
</listitem>
</itemizedlist></para></listitem>
</varlistentry>
<varlistentry>
<term>UEC tarball</term>
<listitem><para>A UEC (Ubuntu Enterprise Cloud) tarball is a gzipped tarfile that contains an AMI
file, AKI file, and ARI file.<note>
<para>Ubuntu Enterprise Cloud refers to a discontinued Eucalyptus-based Ubuntu cloud
solution that has been replaced by the OpenStack-based Ubuntu Cloud
Infrastructure.</para>
</note></para></listitem>
</varlistentry>
<varlistentry>
<term>VMDK</term>
<listitem><para>VMware's ESXi hypervisor uses the <link
xlink:href="http://www.vmware.com/technical-resources/interfaces/vmdk.html"
>VMDK</link> (Virtual Machine Disk) format for images.</para></listitem>
</varlistentry>
<varlistentry>
<term>VDI</term>
<listitem><para>VirtualBox uses the <link
xlink:href="https://forums.virtualbox.org/viewtopic.php?t=8046">VDI</link> (Virtual
Disk Image) format for image files. None of the OpenStack Compute hypervisors support
VDI directly, so you will need to convert these files to a different format to use them
with OpenStack.</para></listitem>
</varlistentry>
<varlistentry>
<term>VHD</term>
<listitem><para>Microsoft Hyper-V uses the VHD (Virtual Hard Disk) format for images.</para></listitem>
</varlistentry>
<varlistentry>
<term>VHDX</term>
<listitem><para>The version of Hyper-V that ships with Microsoft Windows Server 2012 uses the newer <link
xlink:href="http://technet.microsoft.com/en-us/library/hh831446.aspx">VHDX</link>
format, which has some additional features over VHD such as support for larger disk
sizes and protection against data corruption during power failures.</para></listitem>
</varlistentry>
<varlistentry>
<term>OVF</term>
<listitem><para><link xlink:href="http://dmtf.org/sites/default/files/OVF_Overview_Document_2010.pdf"
>OVF</link> (Open Virtualization Format) is a packaging format for virtual
machines, defined by the Distributed Management Task Force (DMTF) standards
group. An OVF package contains one or more image files, a .ovf XML metadata file
that contains information about the virtual machine, and possibly other files as
well.</para>
<para>An OVF package can be distributed in different ways. For example, it could be
distributed as a set of discrete files, or as a tar archive file with an .ova (open
virtual appliance/application) extension.</para>
<para>OpenStack Compute does not currently have support for OVF packages, so you will need
to extract the image file(s) from an OVF package if you wish to use it with
OpenStack.</para></listitem>
</varlistentry>
<varlistentry>
<term>ISO</term>
<listitem><para>The <link
xlink:href="http://www.ecma-international.org/publications/standards/Ecma-119.htm"
>ISO</link> format is a disk image formatted with the read-only ISO 9660 (also known
as ECMA-119) filesystem commonly used for CDs and DVDs. While we don't normally think of
ISO as a virtual machine image format, since ISOs contain bootable filesystems with an
installed operating system, you can treat them the same as you treat other virtual machine
image files.</para></listitem>
</varlistentry></variablelist>
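The qcow2 size advantage described above comes from sparse representation: blocks the guest never wrote take no space on the host. The same effect can be demonstrated with a plain sparse file, which keeps the sketch free of any qemu dependency; this illustrates the mechanism only, not the qcow2 format itself.

```shell
#!/bin/sh
# Create a sparse 1 GiB file and compare its apparent size with the
# space it actually occupies on disk. qcow2 images benefit from the
# same mechanism: unwritten guest blocks consume no host space.
f=$(mktemp)
truncate -s 1G "$f"

apparent=$(du --apparent-size -k "$f" | cut -f1)   # logical size in KiB
actual=$(du -k "$f" | cut -f1)                     # blocks on disk in KiB

echo "apparent: ${apparent} KiB, on disk: ${actual} KiB"
rm -f "$f"
```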
<xi:include href="section_glance_image-formats.xml"/>
<xi:include href="section_glance-image-metadata.xml"/>
</chapter>


@@ -1,394 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_modifying_images">
<title>Modify images</title>
<?dbhtml stop-chunking?>
<para>Once you have obtained a virtual machine image, you may want
to make some changes to it before uploading it to the
OpenStack Image service. Here we describe several tools
available that allow you to modify images.<warning>
<para>Do not attempt to use these tools to modify an image
that is attached to a running virtual machine. These
tools are designed to only modify images that are not
currently running.</para>
</warning></para>
<section xml:id="guestfish">
<title>guestfish</title>
<para>The <command>guestfish</command> program is a tool from
the <link xlink:href="http://libguestfs.org/"
>libguestfs</link> project that allows you to modify
the files inside of a virtual machine image.</para>
<note>
<para><command>guestfish</command> does not mount the
image directly into the local file system. Instead, it
provides you with a shell interface that enables you
to view, edit, and delete files. Many
<command>guestfish</command> commands, such as
<command>touch</command>,
<command>chmod</command>, and <command>rm</command>,
resemble traditional bash commands.</para>
</note>
<simplesect>
<title>Example guestfish session</title>
<para>Sometimes, you must modify a virtual machine image
to remove any traces of the MAC address that was
assigned to the virtual network interface card when
the image was first created, because the MAC address
will be different when it boots the next time. This
example shows how to use guestfish to remove
references to the old MAC address by deleting the
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
file and removing the <literal>HWADDR</literal> line
from the
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
file.</para>
<para>Assume that you have a CentOS qcow2 image called
<filename>centos63_desktop.img</filename>. Mount
the image in read-write mode as root, as
follows:</para>
<screen><prompt>#</prompt> <userinput>guestfish --rw -a centos63_desktop.img</userinput>
<computeroutput>
Welcome to guestfish, the libguestfs filesystem interactive shell for
editing virtual machine filesystems.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
>&lt;fs></computeroutput></screen>
<para>This starts a guestfish session. Note that the
guestfish prompt looks like a fish: <literal>>
&lt;fs></literal>.</para>
            <para>Before you can do anything else at the guestfish
                prompt, you must issue the <command>run</command>
                command. It launches a helper virtual machine, which
                performs all of the file
                manipulations.<screen><prompt>>&lt;fs></prompt> <userinput>run</userinput></screen>
We can now view the file systems in the image using the
<command>list-filesystems</command>
command:<screen><prompt>>&lt;fs></prompt> <userinput>list-filesystems</userinput>
<computeroutput>/dev/vda1: ext4
/dev/vg_centosbase/lv_root: ext4
/dev/vg_centosbase/lv_swap: swap</computeroutput></screen>We
need to mount the logical volume that contains the
root partition:
<screen><prompt>>&lt;fs></prompt> <userinput>mount /dev/vg_centosbase/lv_root /</userinput></screen></para>
<para>Next, we want to delete a file. We can use the
<command>rm</command> guestfish command, which
works the same way it does in a traditional
shell.</para>
<para><screen><prompt>>&lt;fs></prompt> <userinput>rm /etc/udev/rules.d/70-persistent-net.rules</userinput></screen>We
want to edit the <filename>ifcfg-eth0</filename> file
to remove the <literal>HWADDR</literal> line. The
<command>edit</command> command will copy the file
to the host, invoke your editor, and then copy the
file back.
<screen><prompt>>&lt;fs></prompt> <userinput>edit /etc/sysconfig/network-scripts/ifcfg-eth0</userinput></screen></para>
<para>If you want to modify this image to load the 8021q
kernel at boot time, you must create an executable
script in the
<filename>/etc/sysconfig/modules/</filename>
directory. You can use the <command>touch</command>
guestfish command to create an empty file, the
<command>edit</command> command to edit it, and
the <command>chmod</command> command to make it
executable.<screen><prompt>>&lt;fs></prompt> <userinput>touch /etc/sysconfig/modules/8021q.modules</userinput>
<prompt>>&lt;fs></prompt> <userinput>edit /etc/sysconfig/modules/8021q.modules</userinput></screen>
                We add the following line to the file and save
                it:<programlisting>modprobe 8021q</programlisting>Then
                we make the file executable:
                <screen><prompt>>&lt;fs></prompt> <userinput>chmod 0755 /etc/sysconfig/modules/8021q.modules</userinput></screen></para>
<para>We're done, so we can exit using the
<command>exit</command>
command:<screen><prompt>>&lt;fs></prompt> <userinput>exit</userinput></screen></para>
</simplesect>
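For repeated clean-ups, the same session can be driven non-interactively. The sketch below writes the guestfish commands from the session above to a file that guestfish can replay with its <literal>-f</literal> option. The volume name is the same assumption as in the interactive example, and the guestfish invocation itself is shown as a comment because it needs the actual image:

```shell
# Record the guestfish commands from the session above in a file.
# The logical volume name is an assumption carried over from the
# interactive example; adjust it for your guest.
cat > cleanup-mac.gfs <<'EOF'
run
mount /dev/vg_centosbase/lv_root /
rm-f /etc/udev/rules.d/70-persistent-net.rules
EOF
# rm-f ignores a missing file, unlike rm, so the script can be re-run.
# Replay the file against your image (needs the real image and root):
#   guestfish --rw -a YOUR_IMAGE -f cleanup-mac.gfs
```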
<simplesect>
<title>Go further with guestfish</title>
<para>There is an enormous amount of functionality in
guestfish and a full treatment is beyond the scope of
this document. Instead, we recommend that you read the
<link
xlink:href="http://libguestfs.org/guestfs-recipes.1.html"
>guestfs-recipes</link> documentation page for a
sense of what is possible with these tools.</para>
</simplesect>
</section>
<section xml:id="guestmount">
<title>guestmount</title>
        <para>For some types of changes, you may find it easier to
            mount the image's file system directly on the host. The
                <command>guestmount</command> program, also from the
            libguestfs project, allows you to do so.</para>
<para>For example, to mount the root partition from our
<filename>centos63_desktop.qcow2</filename> image to
<filename>/mnt</filename>, we can do:</para>
<para>
<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -m /dev/vg_centosbase/lv_root --rw /mnt</userinput></screen>
</para>
        <para>If we didn't know in advance what the mount point is in
            the guest, we could use the <literal>-i</literal> (inspect)
            flag to tell guestmount to automatically determine what
mount point to
use:<screen><prompt>#</prompt> <userinput>guestmount -a centos63_desktop.qcow2 -i --rw /mnt</userinput></screen>Once
mounted, we could do things like list the installed
packages using
rpm:<screen><prompt>#</prompt> <userinput>rpm -qa --dbpath /mnt/var/lib/rpm</userinput></screen>
Once done, we
unmount:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput></screen></para>
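When scripting guestmount, it helps to guarantee the unmount even if an intermediate command fails. Below is a minimal sketch of that pattern; the guestmount and rpm lines are commented out because they need a real image, and the mktemp/trap scaffolding is the point being illustrated:

```shell
# Create a private mount point and guarantee cleanup on exit,
# even if a later command fails.
MNT=$(mktemp -d)
trap 'umount "$MNT" 2>/dev/null; rmdir "$MNT" 2>/dev/null' EXIT
# With a real image available, you would then run:
#   guestmount -a centos63_desktop.qcow2 -i --rw "$MNT"
#   rpm -qa --dbpath "$MNT/var/lib/rpm" > installed-packages.txt
echo "scratch mount point: $MNT"
```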
</section>
<section xml:id="virt-tools">
<title>virt-* tools</title>
<para>The <link xlink:href="http://libguestfs.org/"
>libguestfs</link> project has a number of other
useful tools, including:<itemizedlist>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-edit.1.html"
>virt-edit</link> for editing a file
inside of an image.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-df.1.html"
>virt-df</link> for displaying free space
inside of an image.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-resize.1.html"
>virt-resize</link> for resizing an
image.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-sysprep.1.html"
>virt-sysprep</link> for preparing an
image for distribution (for example, delete
SSH host keys, remove MAC address info, or
remove user accounts).</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-sparsify.1.html"
>virt-sparsify</link> for making an image
sparse.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-v2v/"
>virt-p2v</link> for converting a physical
machine to an image that runs on KVM.</para>
</listitem>
<listitem>
<para><link
xlink:href="http://libguestfs.org/virt-v2v/"
>virt-v2v</link> for converting Xen and
VMware images to KVM images.</para>
</listitem>
</itemizedlist></para>
<simplesect>
<title>Modify a single file inside of an image</title>
<para>This example shows how to use
<command>virt-edit</command> to modify a file. The
command can take either a filename as an argument with
the <literal>-a</literal> flag, or a domain name as an
                argument with the <literal>-d</literal> flag. The
                following example shows how to modify the
                    <filename>/etc/shadow</filename> file in the instance
                with the libvirt domain name
                    <literal>instance-000000e1</literal>. Shut the
                instance down before editing, then start it again
                afterward:</para>
<para>
<screen><prompt>#</prompt> <userinput>virsh shutdown instance-000000e1</userinput>
<prompt>#</prompt> <userinput>virt-edit -d instance-000000e1 /etc/shadow</userinput>
<prompt>#</prompt> <userinput>virsh start instance-000000e1</userinput></screen>
</para>
</simplesect>
<simplesect>
<title>Resize an image</title>
<para>Here is an example of how to use
<command>virt-resize</command> to resize an image.
Assume we have a 16&nbsp;GB Windows image in qcow2 format
that we want to resize to 50&nbsp;GB. First, we use
<command>virt-filesystems</command> to identify
the
partitions:<screen><prompt>#</prompt> <userinput>virt-filesystems --long --parts --blkdevs -h -a /data/images/win2012.qcow2</userinput>
<computeroutput>Name Type MBR Size Parent
/dev/sda1 partition 07 350M /dev/sda
/dev/sda2 partition 07 16G /dev/sda
/dev/sda device - 16G -
</computeroutput></screen></para>
<para>In this case, it's the
<filename>/dev/sda2</filename> partition that we
want to resize. We create a new qcow2 image and use
the <command>virt-resize</command> command to write a
resized copy of the original into the new
image:
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /data/images/win2012-50gb.qcow2 50G</userinput>
<prompt>#</prompt> <userinput>virt-resize --expand /dev/sda2 /data/images/win2012.qcow2 \
/data/images/win2012-50gb.qcow2</userinput>
<computeroutput>Examining /data/images/win2012.qcow2 ...
**********
Summary of changes:
/dev/sda1: This partition will be left alone.
/dev/sda2: This partition will be resized from 15.7G to 49.7G. The
filesystem ntfs on /dev/sda2 will be expanded using the
'ntfsresize' method.
**********
Setting up initial partition table on /data/images/win2012-50gb.qcow2 ...
Copying /dev/sda1 ...
100% ⟦▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓⟧ 00:00
Copying /dev/sda2 ...
100% ⟦▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓⟧ 00:00
Expanding /dev/sda2 using the 'ntfsresize' method ...
Resize operation completed with no errors. Before deleting the old
disk, carefully check that the resized disk boots and works correctly.
</computeroutput></screen></para>
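The partition to pass to <literal>--expand</literal> can be picked out of the <command>virt-filesystems</command> output with a little shell. This sketch feeds in the sample output from above so the parsing can be followed without the image; in practice you would pipe the real command's output instead:

```shell
# Canned `virt-filesystems --long --parts --blkdevs -h` output, as above:
vfs_output='Name       Type       MBR  Size  Parent
/dev/sda1  partition  07   350M  /dev/sda
/dev/sda2  partition  07   16G   /dev/sda
/dev/sda   device     -    16G   -'
# Keep the last partition row; on simple layouts that is the one to grow:
target=$(printf '%s\n' "$vfs_output" | awk '$2 == "partition" {p = $1} END {print p}')
echo "partition to expand: $target"
# Then, with the real image files:
#   qemu-img create -f qcow2 win2012-50gb.qcow2 50G
#   virt-resize --expand "$target" win2012.qcow2 win2012-50gb.qcow2
```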
</simplesect>
</section>
<section xml:id="losetup-kpartx-nbd">
<title>Loop devices, kpartx, network block devices</title>
<para>If you don't have access to libguestfs, you can mount
image file systems directly in the host using loop
devices, kpartx, and network block devices.<warning>
<para>Mounting untrusted guest images using the tools
                described in this section is a security risk;
                always use libguestfs tools such as guestfish and
                guestmount if you have access to them. See <link
xlink:href="https://www.berrange.com/posts/2013/02/20/a-reminder-why-you-should-never-mount-guest-disk-images-on-the-host-os/"
>A reminder why you should never mount guest
disk images on the host OS</link> by Daniel
Berrangé for more details.</para>
</warning></para>
<simplesect>
<title>Mount a raw image (without LVM)</title>
<para>If you have a raw virtual machine image that is not
using LVM to manage its partitions, use the
<command>losetup</command> command to find an
unused loop device.
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput></screen></para>
<para>In this example, <filename>/dev/loop0</filename> is
free. Associate a loop device with the raw
image:<screen><prompt>#</prompt> <userinput>losetup /dev/loop0 fedora17.img</userinput></screen></para>
<para>If the image only has a single partition, you can
mount the loop device
directly:<screen><prompt>#</prompt> <userinput>mount /dev/loop0 /mnt</userinput></screen></para>
<para>If the image has multiple partitions, use
<command>kpartx</command> to expose the partitions
as separate devices (for example,
<filename>/dev/mapper/loop0p1</filename>), then
mount the partition that corresponds to the root file
system:<screen><prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen></para>
            <para>If the image has, say, three partitions (/boot, /,
swap), there should be one new device created per
partition:<screen><prompt>$</prompt> <userinput>ls -l /dev/mapper/loop0p*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/mapper/loop0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/mapper/loop0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/mapper/loop0p3</computeroutput></screen>To
mount the second partition, as
                root:<screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/mapper/loop0p2 /mnt/image</userinput></screen>Once
                you're done, to clean
                up:<screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen></para>
</simplesect>
<simplesect>
<title>Mount a raw image (with LVM)</title>
<para>If your partitions are managed with LVM, use losetup
and kpartx as in the previous example to expose the
partitions to the host.</para>
<screen><prompt>#</prompt> <userinput>losetup -f</userinput>
<computeroutput>/dev/loop0</computeroutput>
<prompt>#</prompt> <userinput>losetup /dev/loop0 rhel62.img</userinput>
<prompt>#</prompt> <userinput>kpartx -av /dev/loop0</userinput></screen>
<para>Next, you need to use the <command>vgscan</command>
command to identify the LVM volume groups and then
<command>vgchange</command> to expose the volumes
as devices:</para>
<screen><prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput>Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen>
<para>Clean up when you're done:</para>
<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>kpartx -d /dev/loop0</userinput>
<prompt>#</prompt> <userinput>losetup -d /dev/loop0</userinput></screen>
</simplesect>
<simplesect>
<title>Mount a qcow2 image (without LVM)</title>
            <para>You need the <literal>nbd</literal> (network block
                device) kernel module loaded to mount qcow2 images.
                The following command loads it with support for 16 block
                devices, which is plenty for our purposes. As
                root:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput></screen></para>
<para>Assuming the first block device
(<filename>/dev/nbd0</filename>) is not currently
in use, we can expose the disk partitions using the
<command>qemu-nbd</command> and
<command>partprobe</command> commands. As
root:<screen><prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput></screen></para>
<para>If the image has, say three partitions (/boot, /,
swap), there should be one new device created for
each partition:</para>
            <screen><prompt>$</prompt> <userinput>ls -l /dev/nbd0*</userinput>
<computeroutput>brw-rw---- 1 root disk 43, 48 2012-03-05 15:32 /dev/nbd0
brw-rw---- 1 root disk 43, 49 2012-03-05 15:32 /dev/nbd0p1
brw-rw---- 1 root disk 43, 50 2012-03-05 15:32 /dev/nbd0p2
brw-rw---- 1 root disk 43, 51 2012-03-05 15:32 /dev/nbd0p3</computeroutput></screen>
<note>
<para>If the network block device you selected was
already in use, the initial
<command>qemu-nbd</command> command will fail
                    silently, and the
                        <filename>/dev/nbd0p{1,2,3}</filename> device
                    files will not be created.</para>
</note>
<para>If the image partitions are not managed with LVM,
they can be mounted directly:</para>
            <screen><prompt>#</prompt> <userinput>mkdir /mnt/image</userinput>
<prompt>#</prompt> <userinput>mount /dev/nbd0p2 /mnt/image</userinput></screen>
            <para>When you're done, clean up:</para>
            <screen><prompt>#</prompt> <userinput>umount /mnt/image</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen>
</simplesect>
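Because connecting qemu-nbd to a busy device fails silently, it is worth checking for a free device first. While attached, qemu-nbd keeps a pid file under <filename>/sys/block/nbdN/</filename>, which the sketch below relies on (on a machine without the nbd module loaded it simply reports <literal>none</literal>):

```shell
# Find the first nbd device with no qemu-nbd attached.
free_nbd=none
for dev in /sys/block/nbd*; do
    [ -e "$dev" ] || continue      # glob unmatched: nbd module not loaded
    if [ ! -e "$dev/pid" ]; then   # no pid file means nothing is attached
        free_nbd=/dev/${dev##*/}
        break
    fi
done
echo "first free nbd device: $free_nbd"
# With a device found, connect as usual:
#   qemu-nbd -c "$free_nbd" image.qcow2
```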
<simplesect>
<title>Mount a qcow2 image (with LVM)</title>
<para>If the image partitions are managed with LVM, after
you use <command>qemu-nbd</command> and
<command>partprobe</command>, you must use
<command>vgscan</command> and <command>vgchange
-ay</command> in order to expose the LVM
partitions as devices that can be
mounted:<screen><prompt>#</prompt> <userinput>modprobe nbd max_part=16</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -c /dev/nbd0 image.qcow2</userinput>
<prompt>#</prompt> <userinput>partprobe /dev/nbd0</userinput>
<prompt>#</prompt> <userinput>vgscan</userinput>
<computeroutput> Reading all physical volumes. This may take a while...
Found volume group "vg_rhel62x8664" using metadata type lvm2</computeroutput>
<prompt>#</prompt> <userinput>vgchange -ay</userinput>
<computeroutput> 2 logical volume(s) in volume group "vg_rhel62x8664" now active</computeroutput>
<prompt>#</prompt> <userinput>mount /dev/vg_rhel62x8664/lv_root /mnt</userinput></screen></para>
<para>When you're done, clean
up:<screen><prompt>#</prompt> <userinput>umount /mnt</userinput>
<prompt>#</prompt> <userinput>vgchange -an vg_rhel62x8664</userinput>
<prompt>#</prompt> <userinput>qemu-nbd -d /dev/nbd0</userinput></screen></para>
</simplesect>
</section>
</chapter>
@ -1,178 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_obtaining_images">
<title>Get images</title>
<?dbhtml stop-chunking?>
<para>The simplest way to obtain a virtual machine image that works with
OpenStack is to download one that someone else has already created.
Most of the images contain the
<systemitem class="process">cloud-init</systemitem> package to
support SSH key pair and user data injection. Because many of the
images disable SSH password authentication by default, boot the
image with an injected key pair. You can SSH into the instance with
the private key and default login account. See the
<link xlink:href="http://docs.openstack.org/user-guide"
>OpenStack End User Guide</link> for more information on how to
create and inject key pairs with OpenStack.</para>
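The key pair workflow mentioned above looks roughly like this with the nova client of the era. The image, flavor, and key names are examples, and a placeholder file stands in for the private key so the permissions step can be shown:

```shell
# Create a key pair and keep the private key readable only by you.
# A placeholder stands in for the real key here:
#   nova keypair-add mykey > mykey.pem
printf 'placeholder for the private key\n' > mykey.pem
chmod 600 mykey.pem
# Boot with the key injected, then log in with the image's default account:
#   nova boot --image cirros-0.3.4-x86_64 --flavor m1.tiny --key-name mykey test-vm
#   ssh -i mykey.pem cirros@INSTANCE_IP
```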
<section xml:id="centos-images">
<title>CentOS images</title>
<para>The CentOS project maintains official images for direct
download.</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="http://cloud.centos.org/centos/6/images/"
>CentOS 6 images</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="http://cloud.centos.org/centos/7/images/"
>CentOS 7 images</link>
</para>
</listitem>
</itemizedlist>
<note>
<para>In a CentOS cloud image, the login account is
<literal>centos</literal>.</para>
</note>
</section>
<section xml:id="cirros-images">
<title>CirrOS (test) images</title>
<para>CirrOS is a minimal Linux distribution that was designed for use as a test image on
clouds such as OpenStack Compute. You can download a CirrOS image in various formats
from the <link xlink:href="https://download.cirros-cloud.net">CirrOS
download page</link>.</para>
<para>If your deployment uses QEMU or KVM, we recommend using the images in qcow2
format. The most recent 64-bit qcow2 image as of this writing is <link
xlink:href="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
>cirros-0.3.4-x86_64-disk.img</link>.
<note>
<para>In a CirrOS image, the login account is <literal>cirros</literal>. The
                password is <literal>cubswin:)</literal>.</para>
</note></para>
</section>
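Whichever distribution you download, it is worth verifying the image against the checksum file published alongside it before uploading. The sketch below fabricates a tiny file and a matching MD5SUMS entry purely to demonstrate the verification step; with a real download you would fetch both files from the mirror instead:

```shell
IMG=cirros-0.3.4-x86_64-disk.img
# Stand-ins for the real downloads (wget the image and MD5SUMS instead):
printf 'stand-in image contents\n' > "$IMG"
md5sum "$IMG" > MD5SUMS
# The actual check: compare the image against its published entry.
grep "$IMG" MD5SUMS | md5sum -c -
```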
<section xml:id="ubuntu-images">
<title>Official Ubuntu images</title>
<para>Canonical maintains an <link xlink:href="http://cloud-images.ubuntu.com/">official
set of Ubuntu-based images</link>.</para>
<para>Images are arranged by Ubuntu release, and by image release date, with "current" being
the most recent. For example, the page that contains the most recently built image for
Ubuntu 14.04 "Trusty Tahr" is <link
xlink:href="http://cloud-images.ubuntu.com/trusty/current/"
>http://cloud-images.ubuntu.com/trusty/current/</link>. Scroll to the bottom of the
page for links to images that can be downloaded directly.</para>
<para>If your deployment uses QEMU or KVM, we recommend using the images in qcow2
            format. The most recent version of the 64-bit qcow2 image for Ubuntu 14.04 is <link
xlink:href="http://uec-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img"
>trusty-server-cloudimg-amd64-disk1.img</link>.<note>
<para>In an Ubuntu cloud image, the login account is
<literal>ubuntu</literal>.</para>
</note></para>
</section>
<section xml:id="redhat-images">
<title>Official Red Hat Enterprise Linux images</title>
<para>
Red Hat maintains official Red Hat Enterprise Linux cloud
images. A valid Red Hat Enterprise Linux subscription is required
to download these images.
</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.0/x86_64/product-downloads"
>Red Hat Enterprise Linux 7 KVM Guest Image</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://rhn.redhat.com/rhn/software/channel/downloads/Download.do?cid=16952"
>Red Hat Enterprise Linux 6 KVM Guest Image</link>
</para>
</listitem>
</itemizedlist>
<note>
<para>
In a RHEL cloud image, the login account is
<literal>cloud-user</literal>.
</para>
</note>
</section>
<section xml:id="fedora-images">
<title>Official Fedora images</title>
<para>The Fedora project maintains a list of official cloud images at
<link xlink:href="https://getfedora.org/en/cloud/download/" />.
<note>
<para>In a Fedora cloud image, the login account is
<literal>fedora</literal>.</para>
</note></para>
</section>
<section xml:id="suse-sles-images">
<title>Official openSUSE and SLES images</title>
<para>SUSE provides images for <link xlink:href="http://download.opensuse.org/repositories/Cloud:/Images:/">openSUSE</link>.
For SUSE Linux Enterprise Server (SLES), custom images can be built with
a web-based tool called <link xlink:href="http://susestudio.com">SUSE Studio</link>.
SUSE Studio can also be used to build custom openSUSE images.</para>
</section>
<section xml:id="debian-images">
<title>Official Debian images</title>
<para>Since January 2015,
<link xlink:href="http://cdimage.debian.org/cdimage/openstack/">Debian
provides images for direct download</link>. They are now made at the
same time as the CD and DVD images of Debian. However, until Debian 8.0
(aka Jessie) is out, these images are the weekly built images of the
testing distribution.</para>
<para>If you wish to build your own images of Debian 7.0 (aka Wheezy, the
current stable release of Debian), you can use the package which is
used to build the official Debian images. It is named
<package>openstack-debian-images</package>, and it
provides a simple script for building them. This package is available
in Debian Unstable, Debian Jessie, and through the wheezy-backports
repositories. To produce a Wheezy image, simply run:
<screen><prompt>#</prompt> <userinput>build-openstack-debian-image -r wheezy</userinput></screen></para>
<para>If building the image for Wheezy, packages like
<package>cloud-init</package>, <package>cloud-utils</package> or
<package>cloud-initramfs-growroot</package> will be pulled from
wheezy-backports. Also, the current version of
<package>bootlogd</package> in Wheezy doesn't support logging to
multiple consoles, which is needed so that both the OpenStack
Dashboard console and the <command>nova console-log</command>
    console work. However, a <link
xlink:href="http://archive.gplhost.com/debian/pool/juno-backports/main/s/sysvinit/bootlogd_2.88dsf-41+deb7u2_amd64.deb">
fixed version is available from the non-official GPLHost
repository</link>. To install it on top of the image, it is possible
to use the <option>--hook-script</option> option of the
<command>build-openstack-debian-image</command> script, with this
kind of script as parameter:
<programlisting language="bash">#!/bin/sh
cp bootlogd_2.88dsf-41+deb7u2_amd64.deb ${BODI_CHROOT_PATH}
chroot ${BODI_CHROOT_PATH} dpkg -i bootlogd_2.88dsf-41+deb7u2_amd64.deb
rm ${BODI_CHROOT_PATH}/bootlogd_2.88dsf-41+deb7u2_amd64.deb</programlisting></para>
<note>
<para>In a Debian image, the login account is <literal>admin</literal>.</para>
</note>
</section>
<section xml:id="other-distros">
<title>Official images from other Linux distributions</title>
<para>As of this writing, we are not aware of other distributions that provide images for download.</para>
</section>
<section xml:id="rcb-images">
<title>Rackspace Cloud Builders (multiple distros)
images</title>
<para>Rackspace Cloud Builders maintains a list of pre-built images from various
distributions (Red Hat, CentOS, Fedora, Ubuntu). Links to these images can be found at
<link xlink:href="https://github.com/rackerjoe/oz-image-build"
>rackerjoe/oz-image-build on GitHub</link>.</para>
</section>
<section xml:id="windows-images">
<title>Microsoft Windows images</title>
<para>Cloudbase Solutions hosts an <link xlink:href="http://www.cloudbase.it/ws2012r2/"
>OpenStack Windows Server 2012 Standard Evaluation image</link> that runs on
Hyper-V, KVM, and XenServer/XCP.</para>
</section>
</chapter>
@ -1,549 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_openstack_images">
<title>OpenStack Linux image requirements</title>
<?dbhtml stop-chunking?>
<para>For a Linux-based image to have full functionality in an
OpenStack Compute cloud, there are a few requirements. For
some of these, you can fulfill the requirements by installing
the <link
xlink:href="https://cloudinit.readthedocs.org/en/latest/"
><package>cloud-init</package></link> package. Read
this section before you create your own image to be sure that
the image supports the OpenStack features that you plan to use.</para>
<itemizedlist>
<listitem>
<para>Disk partitions and resize root partition on boot
(<package>cloud-init</package>)</para>
</listitem>
<listitem>
<para>No hard-coded MAC address information</para>
</listitem>
<listitem>
<para>SSH server running</para>
</listitem>
<listitem>
<para>Disable firewall</para>
</listitem>
<listitem>
<para>Access instance using ssh public key
(<package>cloud-init</package>)</para>
</listitem>
<listitem>
<para>Process user data and other metadata
(<package>cloud-init</package>)</para>
</listitem>
<listitem>
<para>Paravirtualized Xen support in Linux kernel (Xen
hypervisor only with Linux kernel version &lt;
3.0)</para>
</listitem>
</itemizedlist>
<section xml:id="support-resizing">
<title>Disk partitions and resize root partition on boot
(cloud-init)</title>
<para>When you create a Linux image, you must decide how to
partition the disks. The choice of partition method can
affect the resizing functionality, as described in the
following sections.</para>
<para>The size of the disk in a virtual machine image is
determined when you initially create the image. However,
            OpenStack lets you launch instances with different disk
            sizes by specifying different flavors. For example, if
            your image was created with a 5&nbsp;GB disk and you
            launch an instance with the <literal>m1.small</literal>
            flavor, the resulting virtual machine instance has, by
            default, a primary disk size of 10&nbsp;GB. When the disk
            for an instance is resized up, zeros are just added to
            the end.</para>
        <para>Your image must be able to resize its partitions on boot
            to match the size requested by the user. Otherwise, when
            the disk size associated with the flavor exceeds the disk
            size with which your image was created, you must manually
            resize the partitions after the instance boots to access
            the additional storage.</para>
<simplesect>
<title>Xen: 1 ext3/ext4 partition (no LVM, no /boot, no
swap)</title>
<para>If you use the OpenStack XenAPI driver, the Compute
service automatically adjusts the partition and file
system for your instance on boot. Automatic resize
occurs if the following conditions are all
true:</para>
<itemizedlist>
<listitem>
<para><literal>auto_disk_config=True</literal> is
set as a property on the image in the image
registry.</para>
</listitem>
<listitem>
<para>The disk on the image has only one
partition.</para>
</listitem>
<listitem>
<para>The file system on the one partition is ext3
or ext4.</para>
</listitem>
</itemizedlist>
<para>Therefore, if you use Xen, we recommend that when
you create your images, you create a single ext3 or
ext4 partition (not managed by LVM). Otherwise, read
on.</para>
</simplesect>
<simplesect>
<title>Non-Xen with cloud-init/cloud-tools: One ext3/ext4
partition (no LVM, no /boot, no swap)</title>
<para>You must configure these items for your
image:</para>
<itemizedlist>
<listitem>
<para>The partition table for the image describes
the original size of the image.</para>
</listitem>
<listitem>
<para>The file system for the image fills the
original size of the image.</para>
</listitem>
</itemizedlist>
<para>Then, during the boot process, you must:</para>
<itemizedlist>
<listitem>
<para>Modify the partition table to make it aware
of the additional space:</para>
<itemizedlist>
<listitem>
<para>If you do not use LVM, you must
modify the table to extend the
existing root partition to encompass
this additional space.</para>
</listitem>
<listitem>
<para>If you use LVM, you can add a new
LVM entry to the partition table,
create a new LVM physical volume, add
it to the volume group, and extend the
logical partition with the root
volume.</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Resize the root volume file system.</para>
</listitem>
</itemizedlist>
            <para>The simplest way to support this is to
                install the following packages in your image:</para>
<itemizedlist>
<listitem>
<para><link xlink:href="https://launchpad.net/cloud-utils">cloud-utils</link>
package, which contains the <command>growpart</command>
tool for extending partitions.</para>
</listitem>
<listitem>
<para><link xlink:href="https://launchpad.net/cloud-initramfs-tools">cloud-initramfs-growroot</link>
package for Ubuntu, Debian and Fedora, which supports
resizing root partition on the first boot.</para>
</listitem>
<listitem>
<para><package>cloud-initramfs-growroot</package>
                        package for CentOS and RHEL.</para>
</listitem>
<listitem>
<para><link xlink:href="https://launchpad.net/cloud-init">cloud-init</link>
package.</para>
</listitem>
</itemizedlist>
            <para>With these packages installed, the image
                performs the root partition resize on boot, for
                example, from the <filename>/etc/rc.local</filename>
                file. These packages are in the Ubuntu and Debian
                package repositories, as well as the EPEL repository
                (for Fedora/RHEL/CentOS/Scientific Linux
                guests).</para>
<para>If you cannot install
<literal>cloud-initramfs-tools</literal>, Robert
Plestenjak has a GitHub project called <link
xlink:href="https://github.com/flegmatik/linux-rootfs-resize"
>linux-rootfs-resize</link> that contains scripts
that update a ramdisk by using
<command>growpart</command> so that the image
resizes properly on boot.</para>
<para>If you can install the cloud-utils and
<package>cloud-init</package> packages, we
recommend that when you create your images, you create
a single ext3 or ext4 partition (not managed by
LVM).</para>
</simplesect>
<simplesect>
<title>Non-Xen without
<package>cloud-init</package>/<package>cloud-tools</package>:
LVM</title>
<para>If you cannot install <package>cloud-init</package>
and <package>cloud-tools</package> inside of your
guest, and you want to support resize, you must write
a script that your image runs on boot to modify the
partition table. In this case, we recommend using LVM
to manage your partitions. Due to a limitation in the
Linux kernel (as of this writing), you cannot modify a
partition table of a raw disk that has partitions
currently mounted, but you can do this for LVM.</para>
<para>Your script must do something like the following:<orderedlist>
<listitem>
<para>Detect if any additional space is
available on the disk. For example, parse
the output of <command>parted /dev/sda
--script "print
free"</command>.</para>
</listitem>
<listitem>
<para>Create a new LVM partition with the
additional space. For example,
<command>parted /dev/sda --script
"mkpart lvm ..."</command>.</para>
</listitem>
<listitem>
<para>Create a new physical volume. For
example, <command>pvcreate
/dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Extend the volume group with this
physical partition. For example,
<command>vgextend
<replaceable>vg00</replaceable>
/dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
                        <para>Extend the logical volume containing the
                            root partition by the amount of additional space. For
example, <command>lvextend
/dev/mapper/<replaceable>node-root</replaceable>
/dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Resize the root file system. For
example, <command>resize2fs
/dev/mapper/<replaceable>node-root</replaceable></command>.</para>
</listitem>
</orderedlist></para>
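A script along those lines might begin by detecting the free space like this. The parted output here is a canned sample so the parsing is visible, and the privileged commands are shown but commented; device and volume names such as <literal>/dev/sda</literal> and <literal>vg00</literal> are examples:

```shell
# Canned `parted /dev/sda --script "print free"` output for illustration:
parted_free='Number  Start   End     Size    Type     File system  Flags
 1      1049kB  525MB   524MB   primary  ext4         boot
 2      525MB   5369MB  4844MB  primary               lvm
        5369MB  10.7GB  5369MB           Free Space'
# Pull out the start/end of the free region, if any:
read -r free_start free_end <<EOF
$(printf '%s\n' "$parted_free" | awk '/Free Space/ {print $1, $2}')
EOF
echo "free region: $free_start -> $free_end"
# With free space found, the remaining steps from the list above are:
#   parted /dev/sda --script "mkpart lvm $free_start $free_end"
#   pvcreate /dev/sda3
#   vgextend vg00 /dev/sda3
#   lvextend /dev/mapper/node-root /dev/sda3
#   resize2fs /dev/mapper/node-root
```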
<para>You do not need a <filename>/boot</filename>
partition unless your image is an older Linux
distribution that requires that
<filename>/boot</filename> is not managed by
LVM.</para>
</simplesect>
</section>
<section xml:id="mac-address">
<title>No hard-coded MAC address information</title>
        <para>You must remove the network persistence rules in the
            image because they cause the network interface in the
            instance to come up as an interface other than eth0. This
            is because your image has a record of the MAC address of
            the network interface card when it was first installed,
            and the MAC address is different each time the image
            boots as a new instance. You should alter the following
            files:</para>
<itemizedlist>
<listitem>
<para>Replace
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
with an empty file (contains network persistence
rules, including MAC address).</para>
</listitem>
<listitem>
<para>Replace
<filename>/lib/udev/rules.d/75-persistent-net-generator.rules</filename>
with an empty file (this generates the file
above).</para>
</listitem>
<listitem>
<para>Remove the HWADDR line from
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
on Fedora-based images.</para>
</listitem>
</itemizedlist>
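A minimal sketch of these edits, rehearsed against a scratch directory. Set `ROOT=/` when running inside the guest as root; the file paths are the ones listed above:

```shell
#!/bin/sh
# Sketch: blank the persistence rules rather than deleting them, so udev
# does not warn about missing files at boot. ROOT defaults to a scratch
# prefix for rehearsal; set ROOT=/ inside the guest.
ROOT="${ROOT:-./img-root}"
mkdir -p "$ROOT/etc/udev/rules.d" "$ROOT/lib/udev/rules.d" \
         "$ROOT/etc/sysconfig/network-scripts"

: > "$ROOT/etc/udev/rules.d/70-persistent-net.rules"
: > "$ROOT/lib/udev/rules.d/75-persistent-net-generator.rules"

# On Fedora-based guests, strip the recorded MAC address if present.
cfg="$ROOT/etc/sysconfig/network-scripts/ifcfg-eth0"
if [ -f "$cfg" ]; then
    sed -i '/^HWADDR=/d' "$cfg"
fi
```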
<note>
<para>If you delete the network persistent rules files,
you may get a udev kernel warning at boot time, which
is why we recommend replacing them with empty files
instead.</para>
</note>
</section>
<section xml:id="ensure-ssh-server">
<title>Ensure ssh server runs</title>
<para>You must install an ssh server into the image and ensure
that it starts up on boot, or you cannot connect to your
instance by using ssh when it boots inside of OpenStack.
This package is typically called
<literal>openssh-server</literal>.</para>
</section>
<section xml:id="disable-firewall">
<title>Disable firewall</title>
<para>In general, we recommend that you disable any firewalls
inside of your image and use OpenStack security groups to
restrict access to instances. The reason is that having a
firewall installed on your instance can make it more
difficult to troubleshoot networking issues if you cannot
connect to your instance.</para>
</section>
<section xml:id="ssh-public-key">
<title>Access instance by using ssh public key
(cloud-init)</title>
<para>The typical way that users access virtual machines
running on OpenStack is to ssh using public key
authentication. For this to work, your virtual machine
image must be configured to download the ssh public key
from the OpenStack metadata service or config drive, at
boot time.</para>
        <para>If both the XenAPI agent and <package>cloud-init</package> are
            present in an image, <package>cloud-init</package> handles ssh-key
            injection. The system assumes <package>cloud-init</package> is
            present when the image has the
            <literal>cloud_init_installed</literal> property.</para>
<simplesect>
<title>Use <package>cloud-init</package> to fetch the
public key</title>
<para>The <package>cloud-init</package> package
automatically fetches the public key from the metadata
server and places the key in an account. The account
varies by distribution. On Ubuntu-based virtual
machines, the account is called
<literal>ubuntu</literal>. On Fedora-based virtual
machines, the account is called
<literal>ec2-user</literal>.</para>
<para>You can change the name of the account used by
<package>cloud-init</package> by editing the
<filename>/etc/cloud/cloud.cfg</filename> file and
adding a line with a different user. For example, to
configure <package>cloud-init</package> to put the key
in an account named <literal>admin</literal>, edit the
configuration file so it has the line:</para>
<programlisting>user: admin</programlisting>
</simplesect>
<simplesect>
<title>Write a custom script to fetch the public
key</title>
<para>If you are unable or unwilling to install
<package>cloud-init</package> inside the guest,
you can write a custom script to fetch the public key
and add it to a user account.</para>
<para>To fetch the ssh public key and add it to the root
account, edit the <filename>/etc/rc.local</filename>
file and add the following lines before the line
"touch /var/lock/subsys/local". This code fragment is
taken from the <link
xlink:href="https://github.com/rackerjoe/oz-image-build/blob/master/templates/centos60_x86_64.tdl"
>rackerjoe oz-image-build CentOS 6
template</link>.</para>
<programlisting language="bash">if [ ! -d /root/.ssh ]; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
fi
# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
if [ $? -eq 0 ]; then
cat /tmp/metadata-key >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
restorecon /root/.ssh/authorized_keys
rm -f /tmp/metadata-key
echo "Successfully retrieved public key from instance metadata"
echo "*****************"
echo "AUTHORIZED KEYS"
echo "*****************"
cat /root/.ssh/authorized_keys
echo "*****************"
else
FAILED=`expr $FAILED + 1`
if [ $FAILED -ge $ATTEMPTS ]; then
echo "Failed to retrieve public key from instance metadata after $FAILED attempts, quitting"
break
fi
echo "Could not retrieve public key from instance metadata (attempt #$FAILED/$ATTEMPTS), retrying in 5 seconds..."
sleep 5
fi
done</programlisting>
<note>
                <para>Some VNC clients replace the colon (<literal>:</literal>)
                    with a semicolon (<literal>;</literal>) and the underscore
                    (<literal>_</literal>) with a hyphen (<literal>-</literal>).
                    If you edit a file over a VNC session, make sure that you
                    type <literal>http:</literal> rather than
                    <literal>http;</literal>, and
                    <literal>authorized_keys</literal> rather than
                    <literal>authorized-keys</literal>.</para>
</note>
</simplesect>
</section>
<section xml:id="metadata">
<title>Process user data and other metadata
(cloud-init)</title>
        <para>In addition to the ssh public key, an image might need
            additional information from OpenStack, such as user data
            that the user submitted when requesting the instance (see
            <link
            xlink:href="http://docs.openstack.org/user-guide/cli_provide_user_data_to_instances.html"
            >Provide user data to instances</link>). For example, you might
            want to set the host name of the instance when it is booted,
            or you might wish to configure your image so that it executes
            user data content as a script on boot.</para>
        <para>You can access this information through the metadata
            service or through the configuration drive (see <link
            xlink:href="http://docs.openstack.org/user-guide/cli_config_drive.html"
            >Store metadata on the configuration drive</link>). As the OpenStack metadata
service is compatible with version 2009-04-04 of the
Amazon EC2 metadata service, consult the Amazon EC2
documentation on <link
xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
>Using Instance Metadata</link> for details on how to
retrieve user data.</para>
<para>The easiest way to support this type of functionality is
to install the <package>cloud-init</package> package into
your image, which is configured by default to treat user
data as an executable script, and sets the host
name.</para>
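For example, from inside a running instance you can retrieve the user data yourself through the EC2-compatible endpoint described above. This is a sketch: the endpoint address and API version come from the text, and the request only succeeds inside an instance:

```shell
#!/bin/sh
# Sketch: fetch user data from the EC2-compatible metadata service.
# The endpoint and API version (2009-04-04) are the ones cited in the
# text; the request only succeeds from inside a running instance.
md_url() { echo "http://169.254.169.254/2009-04-04/$1"; }

curl -sf "$(md_url user-data)" -o /tmp/user-data 2>/dev/null \
    || echo "metadata service not reachable (expected outside an instance)"
```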
</section>
<section xml:id="write-to-console">
<title>Ensure image writes boot log to console</title>
<para>You must configure the image so that the kernel writes
the boot log to the <literal>ttyS0</literal> device. In
particular, the <literal>console=ttyS0</literal> argument
must be passed to the kernel on boot.</para>
        <para>If your image uses grub2 as the boot loader, there
            should be a line in the grub configuration file, for
            example <filename>/boot/grub/grub.cfg</filename>, that
            looks something like this:</para>
<programlisting>linux /boot/vmlinuz-3.2.0-49-virtual root=UUID=6d2231e4-0975-4f35-a94f-56738c1a8150 ro console=ttyS0</programlisting>
<para>If <literal>console=ttyS0</literal> does not appear, you
must modify your grub configuration. In general, you
should not update the <filename>grub.cfg</filename>
directly, since it is automatically generated. Instead,
you should edit <filename>/etc/default/grub</filename> and
modify the value of the
<literal>GRUB_CMDLINE_LINUX_DEFAULT</literal>
variable:
<programlisting language="bash">GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"</programlisting></para>
<para>Next, update the grub configuration. On Debian-based
operating-systems such as Ubuntu, run this command:</para>
<screen><prompt>#</prompt> <userinput>update-grub</userinput></screen>
<para>On Fedora-based systems, such as RHEL and CentOS, and on
openSUSE, run this command:</para>
<screen><prompt>#</prompt> <userinput>grub2-mkconfig -o /boot/grub2/grub.cfg</userinput></screen>
</section>
<section xml:id="image-xen-pv">
<title>Paravirtualized Xen support in the kernel (Xen
hypervisor only)</title>
<para>Prior to Linux kernel version 3.0, the mainline branch
of the Linux kernel did not have support for paravirtualized
Xen virtual machine instances (what Xen calls DomU
guests). If you are running the Xen hypervisor with
paravirtualization, and you want to create an image for an
            older Linux distribution that has a pre-3.0 kernel, you
must ensure that the image boots a kernel that has been
compiled with Xen support.</para>
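One quick way to check a guest kernel for DomU support is to look for the relevant option in its build configuration. This is a sketch: the config file path is an assumption that varies by distribution, and some kernels expose <filename>/proc/config.gz</filename> instead:

```shell
#!/bin/sh
# Sketch: look for paravirtualized Xen (DomU) support in a kernel config.
# The config path is an assumption; adjust it for your distribution.
CFG="${CFG:-/boot/config-$(uname -r)}"
if [ -r "$CFG" ] && grep -q '^CONFIG_XEN=y' "$CFG"; then
    echo "Xen guest support: yes"
else
    echo "Xen guest support: not found in $CFG"
fi
```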
</section>
<section xml:id="image-cache-management">
<title>Manage the image cache</title>
<para>Use options in <filename>nova.conf</filename> to control
whether, and for how long, unused base images are stored
in <filename>/var/lib/nova/instances/_base/</filename>. If
you have configured live migration of instances, all your
compute nodes share one common
<filename>/var/lib/nova/instances/</filename>
directory.</para>
<para>For information about libvirt images in OpenStack, see
<link
xlink:href="http://www.pixelbeat.org/docs/openstack_libvirt_images/"
>The life of an OpenStack libvirt image from Pádraig
Brady</link>.</para>
<table rules="all">
<caption>Image cache management configuration
options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<td>Configuration option=Default value</td>
<td>(Type) Description</td>
</tr>
</thead>
<tbody>
<tr>
<td>preallocate_images=none</td>
<td><para>(StrOpt) VM image preallocation
mode:</para><itemizedlist>
<listitem>
<para><literal>none</literal>. No
storage provisioning occurs up
front.</para>
</listitem>
<listitem>
<para><literal>space</literal>.
Storage is fully allocated at
instance start. The
<literal>$instance_dir/</literal>
images are <link
xlink:href="http://www.kernel.org/doc/man-pages/online/pages/man2/fallocate.2.html"
                                    >fallocate</link>d to immediately
                                    determine if enough space is
                                    available, and possibly to improve
                                    VM I/O performance by avoiding
                                    ongoing allocation and improving
                                    the locality of block
                                    allocations.</para>
</listitem>
</itemizedlist></td>
</tr>
<tr>
<td>remove_unused_base_images=True</td>
                    <td>(BoolOpt) Should unused base images be
                        removed? When set to True, the interval at
                        which base images are removed is set with the
                        following two settings. If set to False, base
                        images are never removed by Compute.</td>
</tr>
<tr>
<td>remove_unused_original_minimum_age_seconds=86400</td>
<td>(IntOpt) Unused unresized base images younger
than this are not removed. Default is 86400
seconds, or 24 hours.</td>
</tr>
<tr>
<td>remove_unused_resized_minimum_age_seconds=3600</td>
<td>(IntOpt) Unused resized base images younger
than this are not removed. Default is 3600
seconds, or one hour.</td>
</tr>
</tbody>
</table>
<para>To see how the settings affect the deletion of a running
instance, check the directory where the images are
stored:</para>
<screen><prompt>#</prompt> <userinput>ls -lash /var/lib/nova/instances/_base/</userinput></screen>
<para>In the <filename>/var/log/compute/compute.log</filename>
file, look for the identifier:</para>
<screen><computeroutput>2012-02-18 04:24:17 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810
a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810
a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3</computeroutput></screen>
<para>Because 86400 seconds (24 hours) is the default time for
<literal>remove_unused_original_minimum_age_seconds</literal>,
you can either wait for that time interval to see the base
image removed, or set the value to a shorter time period
in <filename>nova.conf</filename>. Restart all nova
services after changing a setting in
<filename>nova.conf</filename>.</para>
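Putting the options together, a minimal <filename>nova.conf</filename> fragment that prunes unused base images more aggressively than the defaults. The values here are illustrative, not recommendations:

```ini
[DEFAULT]
# Illustrative values only; the defaults are 86400 and 3600 seconds.
preallocate_images = none
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 3600
remove_unused_resized_minimum_age_seconds = 1800
```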
</section>
</chapter>

Binary file not shown.
File diff suppressed because it is too large
View File

@@ -1,81 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<parent>
<groupId>org.openstack.docs</groupId>
<artifactId>parent-pom</artifactId>
<version>1.0.0-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>openstack-image-guide</artifactId>
<packaging>jar</packaging>
<name>OpenStack Virtual Machine Image Guide</name>
<properties>
<!-- This is set by Jenkins according to the branch. -->
<release.path.name>local</release.path.name>
<comments.enabled>1</comments.enabled>
</properties>
<!-- ################################################ -->
<!-- USE "mvn clean generate-sources" to run this POM -->
<!-- ################################################ -->
<build>
<plugins>
<plugin>
<groupId>com.rackspace.cloud.api</groupId>
<artifactId>clouddocs-maven-plugin</artifactId>
<!-- version is set in ../pom.xml file -->
<executions>
<execution>
<id>generate-webhelp</id>
<goals>
<goal>generate-webhelp</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<!-- These parameters only apply to webhelp -->
<enableDisqus>${comments.enabled}</enableDisqus>
<disqusShortname>os-image-guide</disqusShortname>
<enableGoogleAnalytics>1</enableGoogleAnalytics>
<googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
<generateToc>
appendix toc,title
article/appendix nop
article toc,title
book toc,title,figure,table,example,equation
chapter toc
section toc
part toc
preface toc
qandadiv toc
qandaset toc
reference toc,title
set toc,title
</generateToc>
<!-- The following elements sets the autonumbering of sections in output for chapter numbers but no numbered sections-->
<sectionAutolabel>0</sectionAutolabel>
<formalProcedures>0</formalProcedures>
<tocSectionDepth>1</tocSectionDepth>
<tocChapterDepth>1</tocChapterDepth>
<sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
<webhelpDirname>image-guide</webhelpDirname>
<pdfFilenameBase>image-guide</pdfFilenameBase>
</configuration>
</execution>
</executions>
<configuration>
<!-- These parameters apply to pdf and webhelp -->
<xincludeSupported>true</xincludeSupported>
<sourceDirectory>.</sourceDirectory>
<includes>
bk-imageguide.xml
</includes>
<canonicalUrlBase>http://docs.openstack.org/image-guide/content</canonicalUrlBase>
<glossaryCollection>${basedir}/../glossary/glossary-terms.xml</glossaryCollection>
<branding>openstack</branding>
</configuration>
</plugin>
</plugins>
</build>
</project>

View File

@@ -1,22 +0,0 @@
Roadmap for Virtual Machine Image Guide
---------------------------------------
This file is stored with the source to offer ideas for what to work on.
Put your name next to a task if you want to work on it and put a WIP
review up on review.openstack.org.
May 21, 2014
To do tasks:
- Add a chapter describing how to make an image for Database as a
Service (trove)
- Add audience information; who is this book intended for
Ongoing tasks:
- Ensure it meets conventions and standards
Wishlist tasks:
- Replace all individual client commands (like keystone, glance) with
openstack client commands

View File

@@ -1,332 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="centos-image">
<title>Example: CentOS image</title>
<para>This example shows you how to install a CentOS image and focuses mainly on CentOS 6.4.
Because the CentOS installation process might differ across versions, the installation steps
might differ if you use a different version of CentOS.</para>
<procedure>
<title>Download a CentOS install ISO</title>
<step>
<para>Navigate to the <link
xlink:href="http://www.centos.org/download/mirrors/">CentOS
mirrors</link> page.</para>
</step>
<step>
<para>Click one of the <literal>HTTP</literal> links in the right-hand column next to
one of the mirrors.</para>
</step>
<step>
<para>Click the folder link of the CentOS version that you want to use. For example,
<literal>6.4/</literal>.</para>
</step>
<step>
<para>Click the <literal>isos/</literal> folder link.</para>
</step>
<step>
<para>Click the <literal>x86_64/</literal> folder link for 64-bit images.</para>
</step>
<step>
<para>Click the netinstall ISO image that you want to download. For example,
<filename>CentOS-6.4-x86_64-netinstall.iso</filename> is a good choice because
it is a smaller image that downloads missing packages from the Internet during
installation.</para>
</step>
</procedure>
<simplesect>
<title>Start the installation process</title>
<para>Start the installation process using either <command>virt-manager</command> or
<command>virt-install</command> as described in the previous section. If you use
<command>virt-install</command>, do not forget to connect your VNC client to the
virtual machine.</para>
<para>Assume that:</para>
<itemizedlist>
<listitem>
<para>The name of your virtual machine image is <literal>centos-6.4</literal>;
you need this name when you use <command>virsh</command> commands to manipulate the
state of the image.</para>
</listitem>
<listitem>
<para>You saved the netinstall ISO image to the <filename>/data/isos</filename> directory.</para>
</listitem>
</itemizedlist>
<para>If you use <command>virt-install</command>, the commands should look something like
this:</para>
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /tmp/centos-6.4.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name centos-6.4 --ram 1024 \
--disk /tmp/centos-6.4.qcow2,format=qcow2 \
--network network=default \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--os-type=linux --os-variant=rhel6 \
--extra-args="console=tty0 console=ttyS0,115200n8 serial" \
--location=/data/isos/CentOS-6.4-x86_64-netinstall.iso</userinput></screen>
</simplesect>
<simplesect>
<title>Step through the installation</title>
<para>At the initial Installer boot menu, choose the <guilabel>Install or upgrade an
existing system</guilabel> option. Step through the installation prompts. Accept the
defaults.</para>
<mediaobject>
<imageobject role="fo">
<imagedata fileref="figures/centos-install.png" format="PNG" scale="60"/>
</imageobject>
<imageobject role="html">
<imagedata fileref="figures/centos-install.png" format="PNG"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Configure TCP/IP</title>
<para>The default TCP/IP settings are fine. In particular, ensure that Enable IPv4 support
is enabled with DHCP, which is the default.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/centos-tcpip.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Point the installer to a CentOS web server</title>
<para>Choose URL as the installation method.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/install-method.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Depending on the version of CentOS, the net installer requires the user to specify
either a URL or the web site and a CentOS directory that corresponds to one of the
CentOS mirrors. If the installer asks for a single URL, a valid URL might be
<literal>http://mirror.umd.edu/centos/6/os/x86_64</literal>.</para>
<note>
<para>Consider using other mirrors as an alternative to
<literal>mirror.umd.edu</literal>.</para>
</note>
<mediaobject>
<imageobject>
<imagedata fileref="figures/url-setup.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>If the installer asks for web site name and CentOS directory separately, you might
enter:</para>
<itemizedlist>
<listitem>
<para>Web site name: <literal>mirror.umd.edu</literal></para>
</listitem>
<listitem>
<para>CentOS directory: <literal>centos/6/os/x86_64</literal></para>
</listitem>
</itemizedlist>
        <para>See the <link xlink:href="http://www.centos.org/download/mirrors/"
            >CentOS mirror page</link> for a full list of mirrors; click the "HTTP" link
            of a mirror to retrieve its web site name.</para>
</simplesect>
<simplesect>
<title>Storage devices</title>
<para>If prompted about which type of devices your installation uses, choose <guilabel>Basic
Storage Devices</guilabel>.</para>
</simplesect>
<simplesect>
<title>Hostname</title>
<para>The installer may ask you to choose a host name. The default
(<literal>localhost.localdomain</literal>) is fine. You install the <systemitem
class="service">cloud-init</systemitem> package later, which sets the host name on
boot when a new instance is provisioned using this image.</para>
</simplesect>
<simplesect>
<title>Partition the disks</title>
<para>There are different options for partitioning the disks. The default installation uses
LVM partitions, and creates three partitions (<filename>/boot</filename>,
<filename>/</filename>, swap), which works fine. Alternatively, you might want to
create a single ext4 partition that is mounted to "<literal>/</literal>", which also
works fine.</para>
        <para>If you are unsure, use the installer's default partition scheme, because no
            scheme is clearly better than another for this purpose.</para>
</simplesect>
<simplesect>
<title>Step through the installation</title>
<para>Step through the installation, using the default options. The simplest thing to do is
to choose the "Basic Server" install (may be called "Server" install on older versions
of CentOS), which installs an SSH server.</para>
</simplesect>
<simplesect>
<title>Detach the CD-ROM and reboot</title>
<para>When the installation has completed, the <guilabel>Congratulations, your CentOS installation
is complete</guilabel> screen appears.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/centos-complete.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
        <para>To eject a disk by using the <command>virsh</command> command, libvirt requires that
            you attach an empty disk at the same target to which the CD-ROM was previously attached,
            which should be <literal>hdc</literal>. You can confirm the appropriate target using the
<command>virsh dumpxml <replaceable>vm-image</replaceable></command> command.</para>
<screen><prompt>#</prompt> <userinput>virsh dumpxml centos-6.4</userinput>
<computeroutput>&lt;domain type='kvm'>
&lt;name>centos-6.4&lt;/name>
...
&lt;disk type='block' device='cdrom'>
&lt;driver name='qemu' type='raw'/>
&lt;target dev='hdc' bus='ide'/>
&lt;readonly/>
&lt;address type='drive' controller='0' bus='1' target='0' unit='0'/>
&lt;/disk>
...
&lt;/domain>
</computeroutput></screen>
        <para>As root on the host, run the following commands to eject the disk and reboot using
            <command>virsh</command>. The same commands work if you are using virt-manager, but you
            can also use the GUI to detach the disk and reboot by manually stopping and starting
            the virtual machine.</para>
<screen><prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly centos-6.4 "" hdc</userinput>
<prompt>#</prompt> <userinput>virsh destroy centos-6.4</userinput>
<prompt>#</prompt> <userinput>virsh start centos-6.4</userinput></screen>
</simplesect>
<simplesect>
<title>Log in to newly created image</title>
<para>When you boot for the first time after installation, you might be prompted about
authentication tools. Select <guilabel>Exit</guilabel>. Then, log in as root.</para>
</simplesect>
<simplesect>
<title>Install the ACPI service</title>
<para>To enable the hypervisor to reboot or shutdown an instance, you
must install and run the <systemitem
class="service">acpid</systemitem> service on the guest
system.</para>
<para>Run the following commands inside the CentOS guest to install the
ACPI service and configure it to start when the system
boots:</para>
<screen><prompt>#</prompt> <userinput>yum install acpid</userinput>
<prompt>#</prompt> <userinput>chkconfig acpid on</userinput></screen>
</simplesect>
<simplesect>
<title>Configure to fetch metadata</title>
<para>An instance must interact with the metadata service to perform several tasks on start
up. For example, the instance must get the ssh public key and run the user data script.
To ensure that the instance performs these tasks, use one of these methods:</para>
<itemizedlist>
<listitem>
<para>Install a <systemitem class="service">cloud-init</systemitem> RPM, which is a
port of the Ubuntu <link xlink:href="https://launchpad.net/cloud-init"
>cloud-init</link> package. This is the recommended approach.</para>
</listitem>
<listitem>
<para>Modify <filename>/etc/rc.local</filename> to fetch desired information from
the metadata service, as described in the next section.</para>
</listitem>
</itemizedlist>
</simplesect>
<simplesect>
<title>Use cloud-init to fetch the public key</title>
<para>The <systemitem class="service">cloud-init</systemitem> package automatically fetches
the public key from the metadata server and places the key in an account. You can
install <systemitem class="service">cloud-init</systemitem> inside the CentOS guest by
adding the EPEL repo:</para>
<screen><prompt>#</prompt> <userinput>yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</userinput>
<prompt>#</prompt> <userinput>yum install cloud-init</userinput></screen>
<para>The account varies by distribution. On Ubuntu-based virtual machines, the account is
called <literal>ubuntu</literal>. On Fedora-based virtual machines, the account is
called <literal>ec2-user</literal>.</para>
<para>You can change the name of the account used by <systemitem class="service"
>cloud-init</systemitem> by editing the <filename>/etc/cloud/cloud.cfg</filename>
file and adding a line with a different user. For example, to configure <systemitem
class="service">cloud-init</systemitem> to put the key in an account named
<literal>admin</literal>, add this line to the configuration file:</para>
<programlisting>user: admin</programlisting>
</simplesect>
<simplesect>
<title>Write a script to fetch the public key (if no cloud-init)</title>
<para>If you are not able to install the <systemitem class="service">cloud-init</systemitem>
package in your image, to fetch the ssh public key and add it to the root account, edit
the <filename>/etc/rc.d/rc.local</filename> file and add the following lines before the line
<literal>touch /var/lock/subsys/local</literal>:</para>
<programlisting language="bash">if [ ! -d /root/.ssh ]; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
fi
# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
> /tmp/metadata-key 2>/dev/null
  if [ $? -eq 0 ]; then
cat /tmp/metadata-key >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
restorecon /root/.ssh/authorized_keys
rm -f /tmp/metadata-key
echo "Successfully retrieved public key from instance metadata"
echo "*****************"
echo "AUTHORIZED KEYS"
echo "*****************"
cat /root/.ssh/authorized_keys
echo "*****************"
fi
done</programlisting>
<note>
<para>Some VNC clients replace the colon (<literal>:</literal>) with a semicolon
(<literal>;</literal>) and the underscore (<literal>_</literal>) with a hyphen
(<literal>-</literal>). Make sure to specify <literal>http:</literal> and not
<literal>http;</literal>. Make sure to specify
<literal>authorized_keys</literal> and not
<literal>authorized-keys</literal>.</para>
</note>
<note>
<para>The previous script only gets the ssh public key from the metadata server. It does
not get user data, which is optional data that can be passed by the user when
requesting a new instance. User data is often used to run a custom script when an
instance boots.</para>
<para>As the OpenStack metadata service is compatible with version 2009-04-04 of the
Amazon EC2 metadata service, consult the Amazon EC2 documentation on <link
xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
>Using Instance Metadata</link> for details on how to get user data.</para>
</note>
</simplesect>
<simplesect>
<title>Disable the zeroconf route</title>
<para>For the instance to access the metadata service, you must disable the default zeroconf
route:</para>
<screen><prompt>#</prompt> <userinput>echo "NOZEROCONF=yes" &gt;&gt; /etc/sysconfig/network</userinput></screen>
</simplesect>
<simplesect>
<title>Configure console</title>
<para>For the <command>nova console-log</command> command to work properly on CentOS
6.<replaceable>x</replaceable>, you might need to add the following lines to the
<filename>/boot/grub/menu.lst</filename> file:</para>
<programlisting>serial --unit=0 --speed=115200
terminal --timeout=10 console serial
# Edit the kernel line to add the console entries
kernel <replaceable>...</replaceable> console=tty0 console=ttyS0,115200n8</programlisting>
</simplesect>
<simplesect>
<title>Shut down the instance</title>
<para>From inside the instance, as root:</para>
<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen>
</simplesect>
<simplesect>
<title>Clean up (remove MAC address details)</title>
<para>The operating system records the MAC address of the virtual Ethernet card in locations
such as <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> and
            <filename>/etc/udev/rules.d/70-persistent-net.rules</filename> during the
            installation process. However, each time the image boots, the virtual Ethernet card will have a
different MAC address, so this information must be deleted from the configuration
file.</para>
        <para>There is a utility called <command>virt-sysprep</command> that performs various
            cleanup tasks, such as removing MAC address references. It cleans up a virtual
            machine image in place:</para>
<screen><prompt>#</prompt> <userinput>virt-sysprep -d centos-6.4</userinput></screen>
</simplesect>
<simplesect>
<title>Undefine the libvirt domain</title>
<para>Now that you can upload the image to the Image service, you no longer need to have
this virtual machine image managed by libvirt. Use the <command>virsh undefine
<replaceable>vm-image</replaceable></command> command to inform libvirt:</para>
<screen><prompt>#</prompt> <userinput>virsh undefine centos-6.4</userinput></screen>
</simplesect>
<simplesect>
<title>Image is complete</title>
<para>The underlying image file that you created with <command>qemu-img create</command> is
ready to be uploaded. For example, you can upload the
<filename>/tmp/centos-6.4.qcow2</filename> image to the Image service.</para>
</simplesect>
</section>

View File

@@ -1,169 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="section_fedora-example">
<title>Example: Fedora image</title>
<para>Download a <link xlink:href="http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso">Fedora</link>
ISO image. This procedure lets you create a Fedora 20 image.</para>
<procedure>
<step><para>Start the installation using <command>virt-install</command> as shown below:</para>
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 fedora-20.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --connect=qemu:///system --network=bridge:virbr0 \
--extra-args="console=tty0 console=ttyS0,115200 serial rd_NO_PLYMOUTH" \
--name=fedora-20 --disk path=/var/lib/libvirt/images/fedora-20.qcow2,format=qcow2,size=10,cache=none \
--ram 2048 --vcpus=2 --check-cpu --accelerate --os-type linux --os-variant fedora19 \
--hvm --location=http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/ \
--nographics</userinput></screen>
<para>This will launch a VM and start the installation process.</para>
<screen><computeroutput>Starting install...
Retrieving file .treeinfo... | 2.2 kB 00:00:00 !!!
Retrieving file vmlinuz... | 9.8 MB 00:00:05 !!!
Retrieving file initrd.img... | 66 MB 00:00:37 !!!
Allocating 'fedora-20.qcow2' | 10 GB 00:00:00
Creating domain... | 0 B 00:00:00
Connected to domain fedora-20
Escape character is ^]
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Initializing cgroup subsys cpuacct
...
...
...
[ OK ] Reached target Local File Systems (Pre).
Starting installer, one moment...
anaconda 20.25.15-1 for Fedora 20 started.
================================================================================
===============================================================================</computeroutput></screen>
</step>
<step><para>Choose VNC or text mode to set the installation options.</para>
<screen><computeroutput>Text mode provides a limited set of installation
options. It does not offer custom partitioning for full control over the
disk layout. Would you like to use VNC mode instead?
1) Start VNC
2) Use text mode
Please make your choice from above ['q' to quit | 'c' to continue |
'r' to refresh]:</computeroutput></screen></step>
<step><para>Set the timezone, network configuration, installation
source, and the root password. Optionally,
you can choose to create a user.</para></step>
<step><para>Set up the installation destination as shown below:</para>
<screen><computeroutput>================================================================================
Probing storage...
Installation Destination
[x] 1) Virtio Block Device: 10.24 GB (vda)
1 disk selected; 10.24 GB capacity; 10.24 GB free ...
Please make your choice from above ['q' to quit | 'c' to continue |
'r' to refresh]: c
================================================================================
================================================================================
Autopartitioning Options
[ ] 1) Replace Existing Linux system(s)
[x] 2) Use All Space
[ ] 3) Use Free Space
Installation requires partitioning of your hard drive. Select what space to use
for the install target.
Please make your choice from above ['q' to quit | 'c' to continue |
'r' to refresh]: 2
================================================================================
================================================================================
Autopartitioning Options
[ ] 1) Replace Existing Linux system(s)
[x] 2) Use All Space
[ ] 3) Use Free Space
Installation requires partitioning of your hard drive. Select what space to use
for the install target.
Please make your choice from above ['q' to quit | 'c' to continue |
'r' to refresh]: c
================================================================================
================================================================================
Partition Scheme Options
[ ] 1) Standard Partition
[x] 2) LVM
[ ] 3) BTRFS
Select a partition scheme configuration.
Please make your choice from above ['q' to quit | 'c' to continue |
'r' to refresh]: c
Generating updated storage configuration
Checking storage configuration...
================================================================================</computeroutput></screen></step>
<step><para>As root, run the following commands from the host to eject the disk and
reboot by using virsh.</para>
<screen><prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly <replaceable>fedora-20</replaceable> "" hdc</userinput>
<prompt>#</prompt> <userinput>virsh destroy fedora-20</userinput>
<prompt>#</prompt> <userinput>virsh start fedora-20</userinput></screen>
<para>You can also use the GUI to detach and reboot it by manually
stopping and starting.</para></step>
<step><para>Log in as root user when you boot for the first time after
installation.</para></step>
<step><para>Install and run the <literal>acpid</literal> service on the guest
system to enable the virtual machine to reboot or shut down an instance.</para>
<para>Run the following commands inside the Fedora guest to install the
ACPI service and configure it to start when the system boots:</para>
<screen><prompt>#</prompt> <userinput>yum install acpid</userinput>
<prompt>#</prompt> <userinput>chkconfig acpid on</userinput></screen></step>
<step><para>Install the <literal>cloud-init</literal> package inside the Fedora
guest by adding the EPEL repo:</para>
<para>The <literal>cloud-init</literal> package automatically fetches
the public key from the metadata server and places the key in an account.</para>
<screen><prompt>#</prompt> <userinput>yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm</userinput>
<prompt>#</prompt> <userinput>yum install cloud-init</userinput></screen>
<para>You can change the name of the account used by <literal>cloud-init</literal>
by editing the <filename>/etc/cloud/cloud.cfg</filename> file and
adding a line with a different user. For example, to configure
<literal>cloud-init</literal> to put the key in an account named
admin, add this line to the configuration file:</para>
<screen><computeroutput>user: admin</computeroutput></screen></step>
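The <filename>/etc/cloud/cloud.cfg</filename> edit can also be scripted. A sketch against a scratch copy; the sample file contents and the <literal>admin</literal> account name are assumptions for illustration:

```shell
cfg=$(mktemp)                         # stand-in for /etc/cloud/cloud.cfg
cat > "$cfg" <<'EOF'
users:
 - default
user: ec2-user
EOF

# Point cloud-init at a different account for the injected SSH key.
sed -i 's/^user: .*/user: admin/' "$cfg"
```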
<step><para>Disable the default <literal>zeroconf</literal> route for the
instance to access the metadata service:</para>
<screen><prompt>#</prompt> <userinput>echo "NOZEROCONF=yes" >> /etc/sysconfig/network</userinput></screen></step>
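The append shown above can be made idempotent so that it is safe to run more than once. A small sketch against a scratch copy of <filename>/etc/sysconfig/network</filename> (the guard is an addition, not part of the original command):

```shell
netcfg=$(mktemp)                      # stand-in for /etc/sysconfig/network
printf 'NETWORKING=yes\n' > "$netcfg"

# Append NOZEROCONF=yes only if the line is not already present.
grep -q '^NOZEROCONF=yes$' "$netcfg" || echo 'NOZEROCONF=yes' >> "$netcfg"
# A second run is a no-op, so the file never accumulates duplicates.
grep -q '^NOZEROCONF=yes$' "$netcfg" || echo 'NOZEROCONF=yes' >> "$netcfg"
```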
<step><para>For the <command>nova console-log</command> command to work
properly on Fedora 20, you might need to add the following lines to
the <filename>/boot/grub/menu.lst</filename> file:</para>
<programlisting language='ini'>serial --unit=0 --speed=115200
terminal --timeout=10 console serial
# Edit the kernel line to add the console entries
kernel ... console=tty0 console=ttyS0,115200n8</programlisting></step>
<step><para>Shut down the instance from inside the instance as a root user:</para>
<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen></step>
<step><para>Clean up and remove MAC address details.</para>
<para>The operating system records the MAC address of the virtual
Ethernet card in locations such as <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
and <filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
during the instance process. However, each time the image boots up,
the virtual Ethernet card will have a different MAC address, so this
information must be deleted from the configuration file.</para>
<para>Use the <command>virt-sysprep</command> utility. It performs
various cleanup tasks, such as removing MAC address references,
and cleans up a virtual machine image in place:</para>
<screen><prompt>#</prompt> <userinput>virt-sysprep -d fedora-20</userinput></screen></step>
<step><para>Undefine the domain since you no longer need to have this
virtual machine image managed by libvirt:</para>
<screen><prompt>#</prompt> <userinput>virsh undefine fedora-20</userinput></screen></step>
</procedure>
<para>The underlying image file that you created with <command>qemu-img create</command> is
ready to be uploaded to the Image service.</para>
</section>

@@ -1,268 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="example-freebsd-image">
<title>Example: FreeBSD image</title>
<para>This example creates a minimal FreeBSD image that is
compatible with OpenStack and
<application>bsd-cloudinit</application>. The
<application>bsd-cloudinit</application> program is
independently maintained and in active development. The best
source of information on the current state of the project is at
<link xlink:href="http://pellaeon.github.io/bsd-cloudinit/"
>http://pellaeon.github.io/bsd-cloudinit/</link>.</para>
<para>KVM with virtio drivers is used as the virtualization platform
because that is the most widely used among OpenStack operators. If
you use a different platform for your cloud virtualization, use
that same platform in the image creation step.</para>
<para>This example shows how to create a FreeBSD 10 image. To create
a FreeBSD 9.2 image, follow these steps with the noted
differences.</para>
<procedure>
<title>To create a FreeBSD image</title>
<step>
<para>Make a virtual drive:</para>
<screen><prompt>$</prompt> <userinput>qemu-img create -f qcow2 freebsd.qcow2 1G</userinput></screen>
<para>The minimum supported disk size for FreeBSD is 1&nbsp;GB.
Because the goal is to make the smallest possible base image,
the example uses that minimum size. This size is sufficient to
include the optional <literal>doc</literal>,
<literal>games</literal>, and <literal>lib32</literal>
collections. To include the <literal>ports</literal>
collection, add another 1&nbsp;GB. To include
<literal>src</literal>, add 512&nbsp;MB.</para>
</step>
<step>
<para>Get the installer ISO:</para>
<screen><prompt>$</prompt> <userinput>curl ftp://ftp.freebsd.org/pub/FreeBSD/releases\
/amd64/amd64/ISO-IMAGES/10.1/FreeBSD-10.1-RELEASE-amd64-bootonly.iso &gt;\
FreeBSD-10.1-RELEASE-amd64-bootonly.iso</userinput></screen>
</step>
<step>
<para>Launch a VM on your local workstation. Use the same
hypervisor, virtual disk, and virtual network drivers as you
use in your production environment.</para>
<para>The following command uses the minimum amount of RAM,
which is 256&nbsp;MB:</para>
<screen><prompt>$</prompt> <userinput>kvm -smp 1 -m 256 -cdrom FreeBSD-10.1-RELEASE-amd64-bootonly.iso \
-drive if=virtio,file=freebsd.qcow2 \
-net nic,model=virtio -net user</userinput></screen>
<para>You can specify up to 1&nbsp;GB additional RAM to make the
installation process run faster.</para>
<para>This VM must also have Internet access to download
packages.</para>
<note>
<para>By using the same hypervisor, you can ensure that you
emulate the same devices that exist in production. However,
if you use full hardware virtualization instead of
paravirtualization, you do not need to use the same
hypervisor; you must use the same type of virtualized
hardware because FreeBSD device names are related to their
drivers. If the name of your root block device or primary
network interface in production differs from the names used
during image creation, errors can occur.</para>
</note>
<para>You now have a VM that boots from the downloaded install
ISO and is connected to the blank virtual disk that you
created previously.</para>
</step>
<step>
<para>To install the operating system, complete the following
steps inside the VM:</para>
<substeps>
<step>
<para>When prompted, choose to run the ISO in
<guibutton>Install</guibutton> mode.</para>
</step>
<step>
<para>Accept the default keymap or select an appropriate
mapping for your needs.</para>
</step>
<step>
<para>Provide a host name for your image. If you use
<application>bsd-cloudinit</application>, it overrides
this value with the name provided by OpenStack when an
instance boots from this image.</para>
</step>
<step>
<para>When prompted about the optional
<literal>doc</literal>, <literal>games</literal>,
<literal>lib32</literal>, <literal>ports</literal>, and
<literal>src</literal> system components, select only
those that you need. It is possible to have a fully
functional installation without selecting any additional
components. As noted previously, a minimal system
with a 1&nbsp;GB virtual disk supports
<literal>doc</literal>, <literal>games</literal>, and
<literal>lib32</literal> inclusive. The
<literal>ports</literal> collection requires at least
1&nbsp;GB additional space and possibly more if you plan
to install many ports. The <literal>src</literal>
collection requires an additional 512&nbsp;MB.</para>
</step>
<step>
<para>Configure the primary network interface to use DHCP.
In this example, which uses a virtio network device, this
interface is named <literal>vtnet0</literal>.</para>
</step>
<step>
<para>Accept the default network mirror.</para>
</step>
<step>
<para>Set up disk partitioning.</para>
<para>Disk partitioning is a critical element of the image
creation process and the auto-generated default
partitioning scheme does not work with
<application>bsd-cloudinit</application> at this
time.</para>
<para>Because the default does not work, you must select
manual partitioning. The partition editor should list only
one block device. If you use virtio for the disk device
driver, it is named <literal>vtbd0</literal>. Select this
device and run the <command>create</command> command three
times:</para>
<orderedlist>
<listitem>
<para>Select <guibutton>Create</guibutton> to create a
partition table. This action is the default when no
partition table exists. Then, select <guilabel>GPT
GUID Partition Table</guilabel> from the list. This
choice is the default.</para>
</listitem>
<listitem>
<para>Create two partitions:<itemizedlist>
<listitem>
<para>First partition: A 64&nbsp;kB
<literal>freebsd-boot</literal> partition with
no mount point.</para>
</listitem>
<listitem>
<para>Second partition: A
<literal>freebsd-ufs</literal> partition with
a mount point of <filename>/</filename> with all
remaining free space.</para>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist>
<para>The following figure shows a completed partition table
with a 1&nbsp;GB virtual disk:</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/freebsd-partitions.png"
format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>Select <guibutton>Finish</guibutton> and then
<guibutton>Commit</guibutton> to commit your
changes.</para>
<note>
<para>If you modify this example, the root partition,
which is mounted on <filename>/</filename>, must be the
last partition on the drive so that it can expand at run
time to the disk size that your instance type provides.
Also note that <application>bsd-cloudinit</application>
currently has a hard-coded assumption that this is the
second partition.</para>
</note>
</step>
</substeps>
</step>
<step>
<para>Select a root password.</para>
</step>
<step>
<para>Select the CMOS time zone.</para>
<para>The virtualized CMOS almost always stores its time in UTC,
so unless you know otherwise, select UTC.</para>
</step>
<step>
<para>Select the time zone appropriate to your
environment.</para>
</step>
<step>
<para>From the list of services to start on boot, you must
select <systemitem class="service">ssh</systemitem>.
Optionally, select other services.</para>
</step>
<step>
<para>Optionally, add users.</para>
<para>You do not need to add users at this time. The
<application>bsd-cloudinit</application> program adds a
<literal>freebsd</literal> user account if one does not
exist. The <systemitem class="service">ssh</systemitem> keys
for this user are associated with OpenStack. To customize this
user account, you can create it now. For example, you might
want to customize the shell for the user.</para>
</step>
<step>
<para>Final configuration</para>
<para>This menu enables you to update previous settings. Check
that the settings are correct, and click
<guibutton>exit</guibutton>.</para>
</step>
<step>
<para>After you exit, you can open a shell to complete manual
configuration steps. Select <guibutton>Yes</guibutton> to make
a few OpenStack-specific changes:</para>
<substeps>
<step>
<para>Set up the console:</para>
<screen><prompt>#</prompt> <userinput>echo 'console="comconsole,vidconsole"' >> /boot/loader.conf</userinput></screen>
<para>This sets console output to go to the serial console,
which is displayed by <command>nova console-log</command>,
and the video console for sites with VNC or Spice
configured.</para>
</step>
<step>
<para>Minimize boot delay:</para>
<screen><prompt>#</prompt> <userinput>echo 'autoboot_delay="1"' >> /boot/loader.conf</userinput></screen>
</step>
<step>
<para>Download the latest
<application>bsd-cloudinit-installer</application>. The
download commands differ between FreeBSD 10.1 and 9.2
because of differences in how the <command>fetch</command>
command handles HTTPS URLs.</para>
<para>In FreeBSD 10.1 the <command>fetch</command> command
verifies SSL peers by default, so you need to install the
<package>ca_root_nss</package> package that contains
certificate authority root certificates and tell
<command>fetch</command> where to find them. For FreeBSD
10.1 run these commands:</para>
<screen><prompt>#</prompt> <userinput>pkg install ca_root_nss</userinput>
<prompt>#</prompt> <userinput>fetch --ca-cert=/usr/local/share/certs/ca-root-nss.crt \
https://raw.github.com/pellaeon/bsd-cloudinit-installer/master/installer.sh</userinput></screen>
<para>FreeBSD 9.2 <command>fetch</command> does not support
peer-verification for https. For FreeBSD 9.2, run this
command:</para>
<screen><prompt>#</prompt> <userinput>fetch https://raw.github.com/pellaeon/bsd-cloudinit-installer/master/installer.sh</userinput></screen>
</step>
<step>
<para>Run the installer:</para>
<screen><prompt>#</prompt> <userinput>sh ./installer.sh</userinput></screen>
<para>Issue this command to download and install the latest
<package>bsd-cloudinit</package> package, and install the
necessary prerequisites.</para>
</step>
<step>
<para>Install <package>sudo</package> and configure the
<literal>freebsd</literal> user to have passwordless
access:</para>
<screen><prompt>#</prompt> <userinput>pkg install sudo</userinput>
<prompt>#</prompt> <userinput>echo 'freebsd ALL=(ALL) NOPASSWD: ALL' > /usr/local/etc/sudoers.d/10-cloudinit</userinput></screen>
</step>
</substeps>
</step>
<step>
<para>Power off the system:</para>
<screen><prompt>#</prompt> <userinput>shutdown -h now</userinput></screen>
</step>
</procedure>
</section>

@@ -1,80 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="image-metadata">
<title>Image metadata</title>
<?dbhtml stop-chunking?>
<para>Image metadata can help end users determine the nature of an image, and is used by
associated OpenStack components and drivers that interface with the Image service.</para>
<para>Metadata can also determine the scheduling of hosts. If the <option>property</option> option
is set on an image, and Compute is configured so that the
<systemitem>ImagePropertiesFilter</systemitem> scheduler filter is enabled (default), then the
scheduler only considers compute hosts that satisfy that property.</para>
<note><para>Compute's <systemitem>ImagePropertiesFilter</systemitem> value is specified in the
<option>scheduler_default_filters</option> value in the
<filename>/etc/nova/nova.conf</filename> file.</para></note>
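A sketch of the corresponding <filename>/etc/nova/nova.conf</filename> lines; the filter list shown is an illustrative default, not prescriptive. Keep whatever other filters your deployment uses and simply ensure <systemitem>ImagePropertiesFilter</systemitem> is present:

```ini
[DEFAULT]
# ImagePropertiesFilter must appear in the list for image properties
# (architecture, hypervisor_type, and so on) to constrain scheduling.
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
```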
<para>You can add metadata to Image service images by using the <parameter>--property
<replaceable>key</replaceable>=<replaceable>value</replaceable></parameter> parameter with the
<command>glance image-create</command> or <command>glance image-update</command> command. More
than one property can be specified. For example:</para>
<screen><prompt>$</prompt> <userinput>glance image-update <replaceable>img-uuid</replaceable> --property architecture=arm --property hypervisor_type=qemu</userinput></screen>
<para>Common image properties are also specified in the
<filename>/etc/glance/schema-image.json</filename> file. For a complete list of valid property
keys and values, refer to the <link
xlink:href="http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html"><citetitle>OpenStack
Command-Line Reference</citetitle></link>.</para>
<para>All associated properties for an image can be displayed using the <command>glance
image-show</command> command. For example:</para>
<screen><prompt>$</prompt> <userinput>glance image-show myCirrosImage</userinput>
<computeroutput>+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| Property 'base_image_ref' | 397e713c-b95b-4186-ad46-6126863ea0a9 |
| Property 'image_location' | snapshot |
| Property 'image_state' | available |
| Property 'image_type' | snapshot |
| Property 'instance_type_ephemeral_gb' | 0 |
| Property 'instance_type_flavorid' | 2 |
| Property 'instance_type_id' | 5 |
| Property 'instance_type_memory_mb' | 2048 |
| Property 'instance_type_name' | m1.small |
| Property 'instance_type_root_gb' | 20 |
| Property 'instance_type_rxtx_factor' | 1 |
| Property 'instance_type_swap' | 0 |
| Property 'instance_type_vcpu_weight' | None |
| Property 'instance_type_vcpus' | 1 |
| Property 'instance_uuid' | 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 |
| Property 'kernel_id' | df430cc2-3406-4061-b635-a51c16e488ac |
| Property 'owner_id' | 66265572db174a7aa66eba661f58eb9e |
| Property 'ramdisk_id' | 3cf852bd-2332-48f4-9ae4-7d926d50945e |
| Property 'user_id' | 376744b5910b4b4da7d8e6cb483b06a8 |
| checksum | 8e4838effa1969ad591655d6485c7ba8 |
| container_format | ami |
| created_at | 2013-07-22T19:45:58 |
| deleted | False |
| disk_format | ami |
| id | 7e5142af-1253-4634-bcc6-89482c5f2e8a |
| is_public | False |
| min_disk | 0 |
| min_ram | 0 |
| name | myCirrosImage |
| owner | 66265572db174a7aa66eba661f58eb9e |
| protected | False |
| size | 14221312 |
| status | active |
| updated_at | 2013-07-22T19:46:42 |
+---------------------------------------+--------------------------------------+</computeroutput></screen>
<note>
<title>Volume-from-Image properties</title>
<para>When creating Block Storage volumes from images, also consider your
configured image properties. If you alter the core image properties, you
should also update your Block Storage configuration. Amend
<option>glance_core_properties</option> in the
<filename>/etc/cinder/cinder.conf</filename> file on all controller
nodes to match the core properties you have set in the Image
service.</para>
</note>
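A sketch of the <filename>/etc/cinder/cinder.conf</filename> line to amend; the property list shown is the usual default and may differ in your deployment:

```ini
[DEFAULT]
# Keep this list in sync with the core properties set in the Image service.
glance_core_properties = checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size
```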
</section>

@@ -1,102 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="image-formats">
<title>Disk and container formats for images</title>
<?dbhtml stop-chunking?>
<para>When you add an image to the Image service, you can specify
its disk and container formats.</para>
<section xml:id="disk-format">
<title>Disk formats</title>
<para>The disk format of a virtual machine image is the format
of the underlying disk image. Virtual appliance vendors
have different formats for laying out the information
contained in a virtual machine disk image.</para>
<para>Set the disk format for your image to one of the
following values:</para>
<itemizedlist>
<listitem>
<para><literal>raw</literal>: An unstructured disk
image format; if you have a file without an
extension, it might be a raw image.</para>
</listitem>
<listitem>
<para><literal>vhd</literal>: The VHD disk format, a
common disk format used by virtual machine
monitors from VMware, Xen, Microsoft, VirtualBox,
and others.</para>
</listitem>
<listitem>
<para><literal>vmdk</literal>: Common disk format
supported by many common virtual machine
monitors.</para>
</listitem>
<listitem>
<para><literal>vdi</literal>: Supported by VirtualBox
virtual machine monitor and the QEMU
emulator.</para>
</listitem>
<listitem>
<para><literal>iso</literal>: An archive format for
the data contents of an optical disc, such as
CD-ROM.</para>
</listitem>
<listitem>
<para><literal>qcow2</literal>: A disk format
supported by the QEMU emulator that can expand
dynamically and supports copy-on-write.</para>
</listitem>
<listitem>
<para><literal>aki</literal>: An Amazon kernel
image.</para>
</listitem>
<listitem>
<para><literal>ari</literal>: An Amazon ramdisk
image.</para>
</listitem>
<listitem>
<para><literal>ami</literal>: An Amazon machine
image.</para>
</listitem>
</itemizedlist>
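One quick way to check which disk format a file uses, without relying on its extension, is to look at its magic bytes; qcow2 images start with the three bytes <literal>QFI</literal> followed by 0xfb. A sketch using a stand-in file (a real image would carry a full header):

```shell
img=$(mktemp)
printf 'QFI\373' > "$img"             # qcow2 magic: "QFI" followed by 0xfb

if [ "$(head -c 3 "$img")" = "QFI" ]; then
    fmt=qcow2
else
    fmt=unknown                       # could be raw, vmdk, vdi, ...
fi
echo "$fmt"
rm -f "$img"
```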
</section>
<section xml:id="container-format">
<title>Container formats</title>
<para>The container format indicates whether the virtual
machine image is in a file format that also contains
metadata about the actual virtual machine.</para>
<note>
<para>The Image service and other OpenStack projects do
not currently support the container format. It is safe
to specify <literal>bare</literal> as the container
format if you are unsure.</para>
</note>
<para>You can set the container format for your image to one
of the following values:</para>
<itemizedlist>
<listitem>
<para><literal>bare</literal>: The image does not have
a container or metadata envelope.</para>
</listitem>
<listitem>
<para><literal>ovf</literal>: The OVF container
format.</para>
</listitem>
<listitem>
<para><literal>aki</literal>: An Amazon kernel
image.</para>
</listitem>
<listitem>
<para><literal>ari</literal>: An Amazon ramdisk
image.</para>
</listitem>
<listitem>
<para><literal>ami</literal>: An Amazon machine
image.</para>
</listitem>
</itemizedlist>
</section>
</section>

@@ -1,208 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ubuntu-image">
<title>Example: Ubuntu image</title>
<para>This example installs an Ubuntu 14.04 (Trusty Tahr) image. To create an image for a
different version of Ubuntu, follow these steps with the noted differences.</para>
<simplesect>
<title>Download an Ubuntu install ISO</title>
<para>Because the goal is to make the smallest possible base image, this example uses the
network installation ISO. The Ubuntu 64-bit 14.04 network installer ISO is at <link
xlink:href="http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/mini.iso"
>http://archive.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/current/images/netboot/mini.iso</link>.</para>
</simplesect>
<simplesect>
<title>Start the install process</title>
<para>Start the installation process by using either <command>virt-manager</command> or
<command>virt-install</command> as described in the previous section. If you use
<command>virt-install</command>, do not forget to connect your VNC client to the
virtual machine.</para>
<para>Assume that the name of your virtual machine image is <literal>ubuntu-14.04</literal>,
which you need to know when you use <command>virsh</command> commands to manipulate the
state of the image.</para>
<para>If you use <command>virt-install</command>, the commands should look something
like this:</para>
<screen><prompt>#</prompt> <userinput>qemu-img create -f qcow2 /tmp/trusty.qcow2 10G</userinput>
<prompt>#</prompt> <userinput>virt-install --virt-type kvm --name trusty --ram 1024 \
--cdrom=/data/isos/trusty-64-mini.iso \
--disk /tmp/trusty.qcow2,format=qcow2 \
--network network=default \
--graphics vnc,listen=0.0.0.0 --noautoconsole \
--os-type=linux --os-variant=ubuntutrusty</userinput></screen>
</simplesect>
<simplesect>
<title>Step through the install</title>
<para>At the initial Installer boot menu, choose the <guilabel>Install</guilabel> option.
Step through the installation prompts; the defaults should be fine.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-install.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Hostname</title>
<para>The installer may ask you to choose a hostname. The default
(<literal>ubuntu</literal>) is fine. We will install the cloud-init package later, which
will set the hostname on boot when a new instance is provisioned using this
image.</para>
</simplesect>
<simplesect>
<title>Select a mirror</title>
<para>The default mirror proposed by the installer should be fine.</para>
</simplesect>
<simplesect>
<title>Step through the install</title>
<para>Step through the install, using the default options. When prompted for a user name,
the default (<systemitem class="username">ubuntu</systemitem>) is fine.</para>
</simplesect>
<simplesect>
<title>Partition the disks</title>
<para>There are different options for partitioning the disks. The default installation
uses LVM partitions and creates three partitions (<filename>/boot</filename>,
<filename>/</filename>, and swap), which works fine. Alternatively, you can create a
single ext4 partition mounted at "<literal>/</literal>", which also works
fine.</para>
<para>If unsure, we recommend you use the installer's default partition scheme, since there
is no clear advantage to one scheme or another.</para>
</simplesect>
<simplesect>
<title>Automatic updates</title>
<para>The Ubuntu installer will ask how you want to manage upgrades on your system. This
option depends on your specific use case. If your virtual machine instances will be
connected to the Internet, we recommend "Install security updates automatically".</para>
</simplesect>
<simplesect>
<title>Software selection: OpenSSH server</title>
<para>Choose "OpenSSH server" so that you will be able to SSH into the virtual machine when
it launches inside of an OpenStack cloud.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-software-selection.png" format="PNG"
contentwidth="6in"/>
</imageobject>
</mediaobject>
</simplesect>
<simplesect>
<title>Install GRUB boot loader</title>
<para>Select "Yes" when asked about installing the GRUB boot loader to the master boot
record.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-grub.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<para>For more information on configuring GRUB, see
<xref linkend="write-to-console"/>.</para>
</simplesect>
<simplesect>
<title>Detach the CD-ROM and reboot</title>
<para>Select the defaults for all of the remaining options. When the installation is
complete, you will be prompted to remove the CD-ROM.</para>
<mediaobject>
<imageobject>
<imagedata fileref="figures/ubuntu-finished.png" format="PNG" contentwidth="6in"/>
</imageobject>
</mediaobject>
<note>
<para>There is a known bug in Ubuntu 14.04; when you select
"Continue", the virtual machine will shut down, even though
it says it will reboot.</para>
</note>
<para>To eject a disk using <command>virsh</command>, libvirt requires that you attach an
empty disk at the same target to which the CD-ROM was previously attached, which should be
<literal>hdc</literal>. You can confirm the appropriate target using the
<command>virsh dumpxml <replaceable>vm-image</replaceable></command> command.</para>
<screen><prompt>#</prompt> <userinput>virsh dumpxml trusty</userinput>
<computeroutput>&lt;domain type='kvm'>
&lt;name>trusty&lt;/name>
...
&lt;disk type='block' device='cdrom'>
&lt;driver name='qemu' type='raw'/>
&lt;target dev='hdc' bus='ide'/>
&lt;readonly/>
&lt;address type='drive' controller='0' bus='1' target='0' unit='0'/>
&lt;/disk>
...
&lt;/domain>
</computeroutput></screen>
<para>As root, run the following commands on the host to start the machine again in a
paused state, eject the disk, and resume. If you are using virt-manager, you can use
the GUI instead.</para>
<screen><prompt>#</prompt> <userinput>virsh start trusty --paused</userinput>
<prompt>#</prompt> <userinput>virsh attach-disk --type cdrom --mode readonly trusty "" hdc</userinput>
<prompt>#</prompt> <userinput>virsh resume trusty</userinput></screen>
<note>
<para>In the previous example, you paused the instance, ejected the disk, and
unpaused the instance. In theory, you could have ejected the disk at the
<guilabel>Installation complete</guilabel> screen. However, our testing
indicates that the Ubuntu installer locks the drive so that it cannot be ejected at
that point.</para>
</note>
</simplesect>
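Rather than reading the target device out of the <command>virsh dumpxml</command> output by eye, it can be extracted with a small pipeline. A minimal sketch, assuming the dumpxml output format shown above (the here-document stands in for a real `virsh dumpxml trusty` call):

```shell
# Stand-in for: virsh dumpxml trusty
dumpxml_output=$(cat <<'EOF'
<domain type='kvm'>
  <name>trusty</name>
  <disk type='block' device='cdrom'>
    <driver name='qemu' type='raw'/>
    <target dev='hdc' bus='ide'/>
    <readonly/>
  </disk>
</domain>
EOF
)

# Narrow to the cdrom <disk> block, then pull the dev= attribute
# out of its <target> element.
cdrom_target=$(printf '%s\n' "$dumpxml_output" \
    | sed -n "/device='cdrom'/,/<\/disk>/p" \
    | sed -n "s/.*<target dev='\([^']*\)'.*/\1/p")

echo "$cdrom_target"   # hdc
```

The extracted name can then be passed as the target argument to `virsh attach-disk`.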
<simplesect>
<title>Log in to newly created image</title>
        <para>When you boot the image for the first time after installation, the system might
            prompt you about authentication tools; you can choose <guilabel>Exit</guilabel>. Then,
            log in as root by using the root password that you specified.</para>
</simplesect>
<simplesect>
<title>Install cloud-init</title>
        <para>The <command>cloud-init</command> script runs at instance boot and searches for a
            metadata provider from which to fetch a public key. The public key is placed in the
            default user account for the image.</para>
<para>Install the <package>cloud-init</package> package:</para>
<screen><prompt>#</prompt> <userinput>apt-get install cloud-init</userinput></screen>
        <para>When building Ubuntu images, <command>cloud-init</command> must be explicitly
            configured for the metadata source in use. The OpenStack metadata server emulates the
            EC2 metadata service used by images in Amazon EC2.</para>
<para>To set the metadata source to be used by the image run the
<command>dpkg-reconfigure</command> command against the
<package>cloud-init</package> package. When prompted select the
<literal>EC2</literal> data source:</para>
<screen><prompt>#</prompt> <userinput>dpkg-reconfigure cloud-init</userinput></screen>
<para>The account varies by distribution. On Ubuntu-based virtual machines, the account is
called "ubuntu". On Fedora-based virtual machines, the account is called
"ec2-user".</para>
<para>You can change the name of the account used by cloud-init by editing the
<filename>/etc/cloud/cloud.cfg</filename> file and adding a line with a different
user. For example, to configure cloud-init to put the key in an account named "admin",
edit the config file so it has the line:</para>
<programlisting>user: admin</programlisting>
</simplesect>
<simplesect>
<title>Shut down the instance</title>
<para>From inside the instance, as root:</para>
<screen><prompt>#</prompt> <userinput>/sbin/shutdown -h now</userinput></screen>
</simplesect>
<simplesect>
<title>Clean up (remove MAC address details)</title>
<para>The operating system records the MAC address of the virtual Ethernet card in locations
such as <filename>/etc/udev/rules.d/70-persistent-net.rules</filename> during the
installation process. However, each time the image boots up, the virtual Ethernet card will
have a different MAC address, so this information must be deleted from the configuration
file.</para>
        <para>The <command>virt-sysprep</command> utility performs various cleanup tasks, such as
            removing MAC address references. It cleans up a virtual machine image in
            place:</para>
<screen><prompt>#</prompt> <userinput>virt-sysprep -d trusty</userinput></screen>
</simplesect>
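If <command>virt-sysprep</command> is not available, the recorded MAC addresses can be forgotten by emptying the persistent-net rules file inside the guest (for example via `guestfish`). A sketch operating on a temporary stand-in for `/etc/udev/rules.d/70-persistent-net.rules`:

```shell
rules=$(mktemp)   # stand-in for /etc/udev/rules.d/70-persistent-net.rules
printf 'SUBSYSTEM=="net", ATTR{address}=="52:54:00:12:34:56", NAME="eth0"\n' > "$rules"

# Truncate rather than delete: udev recreates the file on the next boot,
# and an empty file is enough to discard the recorded MAC address.
: > "$rules"

size=$(wc -c < "$rules")
echo "$size"   # 0
rm -f "$rules"
```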
<simplesect>
<title>Undefine the libvirt domain</title>
<para>Now that the image is ready to be uploaded to the Image service, you no longer need to
have this virtual machine image managed by libvirt. Use the <command>virsh undefine
<replaceable>vm-image</replaceable></command> command to inform libvirt:</para>
<screen><prompt>#</prompt> <userinput>virsh undefine trusty</userinput></screen>
</simplesect>
<simplesect>
<title>Image is complete</title>
<para>The underlying image file that you created with <command>qemu-img create</command>,
such as <filename>/tmp/trusty.qcow2</filename>, is now ready for uploading to the
OpenStack Image service.</para>
</simplesect>
</section>
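Before uploading, it can be worth a quick sanity check that the file really is a qcow2 image: qcow2 files start with the magic bytes `QFI\xfb`. A minimal sketch, where the temporary file stands in for `/tmp/trusty.qcow2` (a real image would of course come from `qemu-img create`):

```shell
img=$(mktemp)   # stand-in for /tmp/trusty.qcow2
# Write only the qcow2 magic for demonstration (\373 is 0xfb in octal).
printf 'QFI\373' > "$img"

# Read the first three printable magic bytes and compare.
magic=$(head -c 3 "$img")
if [ "$magic" = "QFI" ]; then
    verdict="qcow2"
else
    verdict="not qcow2"
fi
echo "$verdict"   # qcow2
rm -f "$img"
```

On a system with QEMU installed, `qemu-img info` gives the same answer with more detail.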

View File

@ -1,105 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="windows-image">
<title>Example: Microsoft Windows image</title>
<para>This example creates a Windows Server 2012 qcow2 image, using
<command>virt-install</command> and the KVM hypervisor.</para>
<procedure>
<step>
<para>Follow these steps to prepare the installation:</para>
<substeps>
<step>
<para>Download a Windows Server 2012 installation ISO. Evaluation
images are available on <link
xlink:href="http://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2012">the
Microsoft website</link> (registration required).</para>
</step>
<step>
<para>Download the signed VirtIO drivers ISO from the <link
xlink:href="https://fedoraproject.org/wiki/Windows_Virtio_Drivers#Direct_download">Fedora
website</link>.</para>
</step>
<step>
<para>Create a 15&nbsp;GB qcow2 image:</para>
<screen><prompt>$</prompt> <userinput>qemu-img create -f qcow2 ws2012.qcow2 15G</userinput></screen>
</step>
</substeps>
</step>
<step>
<para>Start the Windows Server 2012 installation with the
<command>virt-install</command> command:</para>
<screen><prompt>#</prompt> <userinput>virt-install --connect qemu:///system \
--name ws2012 --ram 2048 --vcpus 2 \
--network network=default,model=virtio \
--disk path=ws2012.qcow2,format=qcow2,device=disk,bus=virtio \
--cdrom /path/to/en_windows_server_2012_x64_dvd.iso \
--disk path=/path/to/virtio-win-0.1-XX.iso,device=cdrom \
--vnc --os-type windows --os-variant win2k8</userinput></screen>
<para>Use <command>virt-manager</command> or
<command>virt-viewer</command> to connect to the VM and start the
Windows installation.</para>
</step>
<step>
<para>Enable the VirtIO drivers.</para>
<para>The disk is not detected by default by the Windows installer. When
requested to choose an installation target, click <guibutton>Load
driver</guibutton> and browse the file system to select the
<filename>E:\WIN8\AMD64</filename> folder. The Windows installer
displays a list of drivers to install. Select the VirtIO SCSI and
network drivers, and continue the installation.</para>
<para>Once the installation is completed, the VM restarts. Define a
password for the administrator when prompted.</para>
</step>
<step>
<para>Log in as administrator and start a command window.</para>
</step>
<step>
<para>Complete the VirtIO drivers installation by running the
following command:</para>
<screen><prompt>C:\</prompt><userinput>pnputil -i -a E:\WIN8\AMD64\*.INF</userinput></screen>
</step>
<step>
<para>To allow <glossterm>Cloudbase-Init</glossterm> to run scripts
during an instance boot, set the PowerShell execution policy to be
unrestricted:</para>
<screen><prompt>C:\</prompt><userinput>powershell</userinput>
<prompt>C:\</prompt><userinput>Set-ExecutionPolicy Unrestricted</userinput></screen>
</step>
<step>
<para>Download and install Cloudbase-Init:</para>
<screen><prompt>C:\</prompt><userinput>Invoke-WebRequest -UseBasicParsing http://www.cloudbase.it/downloads/CloudbaseInitSetup_Stable_x64.msi -OutFile cloudbaseinit.msi</userinput>
<prompt>C:\</prompt><userinput>.\cloudbaseinit.msi</userinput></screen>
<para>In the <guilabel>configuration options</guilabel> window, change the following settings:</para>
<itemizedlist>
<listitem>
<para>Username: <literal>Administrator</literal></para>
</listitem>
<listitem>
<para>Network adapter to configure:
<literal>Red Hat VirtIO Ethernet Adapter</literal></para>
</listitem>
<listitem>
<para>Serial port for logging: <literal>COM1</literal></para>
</listitem>
</itemizedlist>
<para>When the installation is done, in the <guilabel>Complete the
Cloudbase-Init Setup Wizard</guilabel> window, select the
<guilabel>Run Sysprep</guilabel> and <guilabel>Shutdown</guilabel>
check boxes and click <guibutton>Finish</guibutton>.</para>
            <para>Wait for the machine to shut down.</para>
        </step>
    </procedure>
<para>Your image is ready to upload to the Image service:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name WS2012 --disk-format qcow2 \
--container-format bare --is-public true \
--file ws2012.qcow2</userinput></screen>
</section>

View File

@ -8,8 +8,6 @@ Abstract
This guide describes how to obtain, create, and modify virtual
machine images that are compatible with OpenStack.
.. warning:: This guide is a work-in-progress.
Contents
~~~~~~~~

View File

@ -14,7 +14,6 @@
<module>cli-reference</module>
<module>config-reference</module>
<module>glossary</module>
<module>image-guide</module>
</modules>
<profiles>
<profile>

View File

@ -11,7 +11,8 @@ if [[ $# > 0 ]] ; then
fi
fi
for guide in user-guide user-guide-admin networking-guide admin-guide-cloud contributor-guide; do
for guide in user-guide user-guide-admin networking-guide admin-guide-cloud \
contributor-guide image-guide; do
tools/build-rst.sh doc/$guide $GLOSSARY --build build \
--target $guide $LINKCHECK
# Build it only the first time
@ -19,7 +20,7 @@ for guide in user-guide user-guide-admin networking-guide admin-guide-cloud cont
done
# Draft guides
for guide in arch-design-rst config-ref-rst image-guide-rst; do
for guide in arch-design-rst config-ref-rst; do
tools/build-rst.sh doc/$guide --build build \
--target "draft/$guide" $LINKCHECK
done

View File

@ -159,6 +159,7 @@ redirect 301 /admin-guide-cloud/content/customize-flavors.html /admin-guide-clou
redirectmatch 301 "^/user-guide/content/.*$" /user-guide/index.html
redirectmatch 301 "^/user-guide-admin/content/.*" /user-guide-admin/index.html
redirectmatch 301 "^/admin-guide-cloud/content/.*$" /admin-guide-cloud/index.html
redirectmatch 301 "^/image-guide/content/.*$" /image-guide/index.html
# Hot-guide has moved to heat repo
redirect 301 /user-guide/hot-guide/hot.html /developer/heat/template_guide/hot_guide.html
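The `redirectmatch` patterns above use extended regular expression syntax, so they can be smoke-tested against sample request paths with `grep -E` before deployment. A sketch using the image-guide pattern from this change:

```shell
pattern='^/image-guide/content/.*$'

# Return success if the given path matches the redirect pattern.
matches() { printf '%s\n' "$1" | grep -Eq "$pattern"; }

matches '/image-guide/content/ch_creating_images.html' && echo yes  # yes
matches '/image-guide/index.html' || echo no                        # no
```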