<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE chapter [
<!ENTITY % openstack SYSTEM "../common/entities/openstack.ent">
%openstack;
]>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="ch_openstack_images">
<title>OpenStack Linux image requirements</title>
<?dbhtml stop-chunking?>
<para>For a Linux-based image to have full functionality in an
OpenStack Compute cloud, there are a few requirements. For
some of these, you can fulfill the requirements by installing
the <link
xlink:href="https://cloudinit.readthedocs.org/en/latest/"
><package>cloud-init</package></link> package. Read
this section before you create your own image to be sure that
the image supports the OpenStack features that you plan to use.</para>
<itemizedlist>
<listitem>
<para>Disk partitions and resize root partition on boot
(<package>cloud-init</package>)</para>
</listitem>
<listitem>
<para>No hard-coded MAC address information</para>
</listitem>
<listitem>
<para>SSH server running</para>
</listitem>
<listitem>
<para>Disable firewall</para>
</listitem>
<listitem>
<para>Access instance using ssh public key
(<package>cloud-init</package>)</para>
</listitem>
<listitem>
<para>Process user data and other metadata
(<package>cloud-init</package>)</para>
</listitem>
<listitem>
<para>Paravirtualized Xen support in Linux kernel (Xen
hypervisor only with Linux kernel version &lt;
3.0)</para>
</listitem>
</itemizedlist>
<section xml:id="support-resizing">
<title>Disk partitions and resize root partition on boot
(cloud-init)</title>
<para>When you create a Linux image, you must decide how to
partition the disks. The choice of partition method can
affect the resizing functionality, as described in the
following sections.</para>
<para>The size of the disk in a virtual machine image is
determined when you initially create the image. However,
OpenStack lets you launch instances with different size
drives by specifying different flavors. For example, if
your image was created with a 5&nbsp;GB disk and you
launch an instance with the <literal>m1.small</literal>
flavor, the resulting virtual machine instance has, by
default, a primary disk size of 20&nbsp;GB. When the disk
for an instance is resized up, zeros are just added to
the end.</para>
<para>Your image must be able to resize its partitions on boot
to match the size requested by the user. Otherwise, when the
disk size associated with the flavor exceeds the disk size
with which your image was created, you must manually resize
the partitions after the instance boots to access the
additional storage.</para>
<simplesect>
<title>Xen: One ext3/ext4 partition (no LVM, no /boot, no
swap)</title>
<para>If you use the OpenStack XenAPI driver, the Compute
service automatically adjusts the partition and file
system for your instance on boot. Automatic resize
occurs if the following conditions are all
true:</para>
<itemizedlist>
<listitem>
<para><literal>auto_disk_config=True</literal> is
set as a property on the image in the image
registry.</para>
</listitem>
<listitem>
<para>The disk on the image has only one
partition.</para>
</listitem>
<listitem>
<para>The file system on the one partition is ext3
or ext4.</para>
</listitem>
</itemizedlist>
<para>Therefore, if you use Xen, we recommend that when
you create your images, you create a single ext3 or
ext4 partition (not managed by LVM). Otherwise, read
on.</para>
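<para>For example, you can set this property on an existing image
with the <command>glance</command> client (a sketch;
<replaceable>IMAGE_ID</replaceable> is a placeholder for your
image):</para>
<screen><prompt>$</prompt> <userinput>glance image-update --property auto_disk_config=True <replaceable>IMAGE_ID</replaceable></userinput></screen>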
</simplesect>
<simplesect>
<title>Non-Xen with cloud-init/cloud-tools: One ext3/ext4
partition (no LVM, no /boot, no swap)</title>
<para>You must configure these items for your
image:</para>
<itemizedlist>
<listitem>
<para>The partition table for the image describes
the original size of the image.</para>
</listitem>
<listitem>
<para>The file system for the image fills the
original size of the image.</para>
</listitem>
</itemizedlist>
<para>Then, during the boot process, you must:</para>
<itemizedlist>
<listitem>
<para>Modify the partition table to make it aware
of the additional space:</para>
<itemizedlist>
<listitem>
<para>If you do not use LVM, you must
modify the table to extend the
existing root partition to encompass
this additional space.</para>
</listitem>
<listitem>
<para>If you use LVM, you can add a new
LVM entry to the partition table,
create a new LVM physical volume, add
it to the volume group, and extend the
logical partition with the root
volume.</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Resize the root volume file system.</para>
</listitem>
</itemizedlist>
<para>The simplest way to support this is to install
the following packages in your image:</para>
<itemizedlist>
<listitem>
<para><link xlink:href="https://launchpad.net/cloud-utils">cloud-utils</link>
package, which contains the <command>growpart</command>
tool for extending partitions.</para>
</listitem>
<listitem>
<para><link xlink:href="https://launchpad.net/cloud-initramfs-tools">cloud-initramfs-growroot</link>
package for Ubuntu, Debian, Fedora, CentOS, and RHEL,
which supports resizing the root partition on the
first boot.</para>
</listitem>
<listitem>
<para><link xlink:href="https://launchpad.net/cloud-init">cloud-init</link>
package.</para>
</listitem>
</itemizedlist>
<para>With these packages installed, the image
performs the root partition resize on boot, for
example from the <filename>/etc/rc.local</filename>
file. These packages are available in the Ubuntu and
Debian package repositories, as well as the EPEL
repository (for Fedora/RHEL/CentOS/Scientific Linux
guests).</para>
<para>If you cannot install
<literal>cloud-initramfs-tools</literal>, Robert
Plestenjak has a GitHub project called <link
xlink:href="https://github.com/flegmatik/linux-rootfs-resize"
>linux-rootfs-resize</link> that contains scripts
that update a ramdisk by using
<command>growpart</command> so that the image
resizes properly on boot.</para>
<para>If you can install the cloud-utils and
<package>cloud-init</package> packages, we
recommend that when you create your images, you create
a single ext3 or ext4 partition (not managed by
LVM).</para>
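<para>To see what these packages do for you, the manual
equivalent on a running instance looks roughly like this (a
sketch assuming the root file system is the first partition on
<filename>/dev/vda</filename>):</para>
<screen><prompt>#</prompt> <userinput>growpart /dev/vda 1</userinput>
<prompt>#</prompt> <userinput>resize2fs /dev/vda1</userinput></screen>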
</simplesect>
<simplesect>
<title>Non-Xen without
<package>cloud-init</package>/<package>cloud-tools</package>:
LVM</title>
<para>If you cannot install <package>cloud-init</package>
and <package>cloud-tools</package> inside of your
guest, and you want to support resize, you must write
a script that your image runs on boot to modify the
partition table. In this case, we recommend using LVM
to manage your partitions. Due to a limitation in the
Linux kernel (as of this writing), you cannot modify a
partition table of a raw disk that has partitions
currently mounted, but you can do this for LVM.</para>
<para>Your script must do something like the following:<orderedlist>
<listitem>
<para>Detect if any additional space is
available on the disk. For example, parse
the output of <command>parted /dev/sda
--script "print
free"</command>.</para>
</listitem>
<listitem>
<para>Create a new LVM partition with the
additional space. For example,
<command>parted /dev/sda --script
"mkpart lvm ..."</command>.</para>
</listitem>
<listitem>
<para>Create a new physical volume. For
example, <command>pvcreate
/dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Extend the volume group with this
physical partition. For example,
<command>vgextend
<replaceable>vg00</replaceable>
/dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Extend the logical volume that contains the
root partition by the additional space. For
example, <command>lvextend
/dev/mapper/<replaceable>node-root</replaceable>
/dev/<replaceable>sda6</replaceable></command>.</para>
</listitem>
<listitem>
<para>Resize the root file system. For
example, <command>resize2fs
/dev/mapper/<replaceable>node-root</replaceable></command>.</para>
</listitem>
</orderedlist></para>
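<para>A minimal sketch of such a boot-time script follows. The
device name (<filename>/dev/sda</filename>), the volume group
name (<literal>vg00</literal>), and the root logical volume
(<filename>/dev/mapper/vg00-root</filename>) are assumptions that
you must adapt to your image; the sketch also assumes an msdos
partition table and an ext3/ext4 root file system:</para>
<programlisting language="bash">#!/bin/sh
# Boot-time LVM grow sketch, not a drop-in solution. Adjust the
# disk, volume group, and root logical volume names for your image.
DISK=/dev/sda
VG=vg00
ROOT_LV=/dev/mapper/vg00-root

# 1. Detect whether there is unallocated space on the disk.
if ! parted "$DISK" --script "print free" | grep -q "Free Space"; then
    exit 0
fi

# 2. Create a new partition spanning the trailing free space and
#    flag it for LVM use.
START=$(parted -m "$DISK" unit MiB print free | awk -F: '/free;/ {start=$2} END {print start}')
parted "$DISK" --script mkpart primary "$START" 100%
NEWNUM=$(parted -m "$DISK" print | tail -n 1 | cut -d: -f1)
parted "$DISK" --script set "$NEWNUM" lvm on
partprobe "$DISK"
NEWPART="${DISK}${NEWNUM}"

# 3. Turn the new partition into a physical volume, grow the volume
#    group and the root logical volume, then resize the file system.
pvcreate "$NEWPART"
vgextend "$VG" "$NEWPART"
lvextend -l +100%FREE "$ROOT_LV"
resize2fs "$ROOT_LV"</programlisting>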
<para>You do not need a <filename>/boot</filename>
partition unless your image is an older Linux
distribution that requires that
<filename>/boot</filename> is not managed by
LVM.</para>
</simplesect>
</section>
<section xml:id="mac-address">
<title>No hard-coded MAC address information</title>
<para>You must remove the network persistence rules in the
image because they cause the network interface in the
instance to come up as an interface other than eth0. This
is because your image has a record of the MAC address of
the network interface card when it was first installed,
and the instance is assigned a different MAC address each
time it boots. You should alter the following
files:</para>
<itemizedlist>
<listitem>
<para>Replace
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename>
with an empty file (contains network persistence
rules, including MAC address).</para>
</listitem>
<listitem>
<para>Replace
<filename>/lib/udev/rules.d/75-persistent-net-generator.rules</filename>
with an empty file (this generates the file
above).</para>
</listitem>
<listitem>
<para>Remove the HWADDR line from
<filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
on Fedora-based images.</para>
</listitem>
</itemizedlist>
<note>
<para>If you delete the network persistence rules files,
you may get a udev kernel warning at boot time, which
is why we recommend replacing them with empty files
instead.</para>
</note>
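<para>For example, while you prepare the image you can truncate
both files in place rather than deleting them (a sketch; the
exact paths vary slightly between distributions):</para>
<screen><prompt>#</prompt> <userinput>echo -n > /etc/udev/rules.d/70-persistent-net.rules</userinput>
<prompt>#</prompt> <userinput>echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules</userinput></screen>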
</section>
<section xml:id="ensure-ssh-server">
<title>Ensure ssh server runs</title>
<para>You must install an ssh server into the image and ensure
that it starts up on boot, or you cannot connect to your
instance by using ssh when it boots inside of OpenStack.
This package is typically called
<literal>openssh-server</literal>.</para>
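<para>For example, on a Debian or Ubuntu guest you might install
the server while you build the image; the package usually enables
the service automatically, but you can enable it explicitly on a
systemd-based guest (a sketch; the service is named
<literal>ssh</literal> on Debian-based systems and
<literal>sshd</literal> on Fedora-based systems):</para>
<screen><prompt>#</prompt> <userinput>apt-get install -y openssh-server</userinput>
<prompt>#</prompt> <userinput>systemctl enable ssh</userinput></screen>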
</section>
<section xml:id="disable-firewall">
<title>Disable firewall</title>
<para>In general, we recommend that you disable any firewalls
inside of your image and use OpenStack security groups to
restrict access to instances. The reason is that having a
firewall installed on your instance can make it more
difficult to troubleshoot networking issues if you cannot
connect to your instance.</para>
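<para>For example, rather than running a firewall in the guest,
you can allow ssh to your instances with a security group rule (a
sketch using the legacy <command>nova</command> client; the
<literal>default</literal> group and the open CIDR are
placeholders to adjust):</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>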
</section>
<section xml:id="ssh-public-key">
<title>Access instance by using ssh public key
(cloud-init)</title>
<para>The typical way that users access virtual machines
running on OpenStack is to ssh using public key
authentication. For this to work, your virtual machine
image must be configured to download the ssh public key
from the OpenStack metadata service or config drive, at
boot time.</para>
<para>If both the XenAPI agent and <package>cloud-init</package> are
present in an image, <package>cloud-init</package> handles ssh-key
injection. The system assumes <package>cloud-init</package> is
present when the image has the
<literal>cloud_init_installed</literal> property.</para>
<simplesect>
<title>Use <package>cloud-init</package> to fetch the
public key</title>
<para>The <package>cloud-init</package> package
automatically fetches the public key from the metadata
server and places the key in an account. The account
varies by distribution. On Ubuntu-based virtual
machines, the account is called
<literal>ubuntu</literal>. On Fedora-based virtual
machines, the account is called
<literal>ec2-user</literal>.</para>
<para>You can change the name of the account used by
<package>cloud-init</package> by editing the
<filename>/etc/cloud/cloud.cfg</filename> file and
adding a line with a different user. For example, to
configure <package>cloud-init</package> to put the key
in an account named <literal>admin</literal>, edit the
configuration file so it has the line:</para>
<programlisting>user: admin</programlisting>
</simplesect>
<simplesect>
<title>Write a custom script to fetch the public
key</title>
<para>If you are unable or unwilling to install
<package>cloud-init</package> inside the guest,
you can write a custom script to fetch the public key
and add it to a user account.</para>
<para>To fetch the ssh public key and add it to the root
account, edit the <filename>/etc/rc.local</filename>
file and add the following lines before the line
"touch /var/lock/subsys/local". This code fragment is
taken from the <link
xlink:href="https://github.com/rackerjoe/oz-image-build/blob/master/templates/centos60_x86_64.tdl"
>rackerjoe oz-image-build CentOS 6
template</link>.</para>
<programlisting language="bash">if [ ! -d /root/.ssh ]; then
mkdir -p /root/.ssh
chmod 700 /root/.ssh
fi
# Fetch public key using HTTP
ATTEMPTS=30
FAILED=0
while [ ! -f /root/.ssh/authorized_keys ]; do
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/metadata-key 2>/dev/null
if [ $? -eq 0 ]; then
cat /tmp/metadata-key >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
restorecon /root/.ssh/authorized_keys
rm -f /tmp/metadata-key
echo "Successfully retrieved public key from instance metadata"
echo "*****************"
echo "AUTHORIZED KEYS"
echo "*****************"
cat /root/.ssh/authorized_keys
echo "*****************"
else
FAILED=`expr $FAILED + 1`
if [ $FAILED -ge $ATTEMPTS ]; then
echo "Failed to retrieve public key from instance metadata after $FAILED attempts, quitting"
break
fi
echo "Could not retrieve public key from instance metadata (attempt #$FAILED/$ATTEMPTS), retrying in 5 seconds..."
sleep 5
fi
done</programlisting>
<note>
<para>Some VNC clients replace the colon (:) with a
semicolon (;) and the underscore (_) with a hyphen (-).
If you are editing a file over a VNC session, make sure
that it reads http: (not http;) and authorized_keys
(not authorized-keys).</para>
</note>
</simplesect>
</section>
<section xml:id="metadata">
<title>Process user data and other metadata
(cloud-init)</title>
<para>In addition to the ssh public key, an image might need
additional information from OpenStack, such as user data
(see <link
xlink:href="http://docs.openstack.org/user-guide/cli_provide_user_data_to_instances.html"
>Provide user data to instances</link>), that the user
submitted when requesting the instance. For example, you might
want to set the host name of the instance when it is booted.
Or, you might wish to configure your image so that it executes
user data content as a script on boot.</para>
<para>You can access this information through the metadata
service or through the configuration drive (see <link
xlink:href="http://docs.openstack.org/user-guide/cli_config_drive.html"
>Store metadata on the configuration drive</link>). As the OpenStack metadata
service is compatible with version 2009-04-04 of the
Amazon EC2 metadata service, consult the Amazon EC2
documentation on <link
xlink:href="http://docs.amazonwebservices.com/AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html"
>Using Instance Metadata</link> for details on how to
retrieve user data.</para>
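<para>For example, from inside a running instance you can retrieve
the user data and instance metadata directly, which is essentially
what <package>cloud-init</package> does for you (a sketch querying
the EC2-compatible and OpenStack metadata endpoints):</para>
<screen><prompt>$</prompt> <userinput>curl http://169.254.169.254/2009-04-04/user-data</userinput>
<prompt>$</prompt> <userinput>curl http://169.254.169.254/openstack/latest/meta_data.json</userinput></screen>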
<para>The easiest way to support this type of functionality is
to install the <package>cloud-init</package> package into
your image, which by default treats user data as an
executable script and sets the host name.</para>
</section>
<section xml:id="write-to-console">
<title>Ensure image writes boot log to console</title>
<para>You must configure the image so that the kernel writes
the boot log to the <literal>ttyS0</literal> device. In
particular, the <literal>console=ttyS0</literal> argument
must be passed to the kernel on boot.</para>
<para>If your image uses grub2 as the boot loader, there
should be a line in the grub configuration file, for
example <filename>/boot/grub/grub.cfg</filename>, that
looks something like this:</para>
<programlisting>linux /boot/vmlinuz-3.2.0-49-virtual root=UUID=6d2231e4-0975-4f35-a94f-56738c1a8150 ro console=ttyS0</programlisting>
<para>If <literal>console=ttyS0</literal> does not appear, you
must modify your grub configuration. In general, you
should not update the <filename>grub.cfg</filename>
directly, since it is automatically generated. Instead,
you should edit <filename>/etc/default/grub</filename> and
modify the value of the
<literal>GRUB_CMDLINE_LINUX_DEFAULT</literal>
variable:
<programlisting language="bash">GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0"</programlisting></para>
<para>Next, update the grub configuration. On Debian-based
operating systems, such as Ubuntu, run this command:</para>
<screen><prompt>#</prompt> <userinput>update-grub</userinput></screen>
<para>On Fedora-based systems, such as RHEL and CentOS, and on
openSUSE, run this command:</para>
<screen><prompt>#</prompt> <userinput>grub2-mkconfig -o /boot/grub2/grub.cfg</userinput></screen>
</section>
<section xml:id="image-xen-pv">
<title>Paravirtualized Xen support in the kernel (Xen
hypervisor only)</title>
<para>Prior to Linux kernel version 3.0, the mainline branch
of the Linux kernel did not have support for paravirtualized
Xen virtual machine instances (what Xen calls DomU
guests). If you are running the Xen hypervisor with
paravirtualization, and you want to create an image for an
older Linux distribution that has a pre-3.0 kernel, you
must ensure that the image boots a kernel that has been
compiled with Xen support.</para>
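<para>You can check whether a kernel was compiled with
paravirtualized Xen guest support by inspecting its build
configuration (a sketch; the location and name of the config
file vary by distribution):</para>
<screen><prompt>$</prompt> <userinput>grep CONFIG_XEN /boot/config-$(uname -r)</userinput></screen>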
</section>
<section xml:id="image-cache-management">
<title>Manage the image cache</title>
<para>Use options in <filename>nova.conf</filename> to control
whether, and for how long, unused base images are stored
in <filename>/var/lib/nova/instances/_base/</filename>. If
you have configured live migration of instances, all your
compute nodes share one common
<filename>/var/lib/nova/instances/</filename>
directory.</para>
<para>For information about libvirt images in OpenStack, see
<link
xlink:href="http://www.pixelbeat.org/docs/openstack_libvirt_images/"
>The life of an OpenStack libvirt image from Pádraig
Brady</link>.</para>
<table rules="all">
<caption>Image cache management configuration
options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<td>Configuration option=Default value</td>
<td>(Type) Description</td>
</tr>
</thead>
<tbody>
<tr>
<td>preallocate_images=none</td>
<td><para>(StrOpt) VM image preallocation
mode:</para><itemizedlist>
<listitem>
<para><literal>none</literal>. No
storage provisioning occurs up
front.</para>
</listitem>
<listitem>
<para><literal>space</literal>.
Storage is fully allocated at
instance start. The
<literal>$instance_dir/</literal>
images are <link
xlink:href="http://www.kernel.org/doc/man-pages/online/pages/man2/fallocate.2.html"
>fallocate</link>d to immediately
determine if enough space is
available, and to possibly improve
VM I/O performance due to ongoing
allocation avoidance, and better
locality of block
allocations.</para>
</listitem>
</itemizedlist></td>
</tr>
<tr>
<td>remove_unused_base_images=True</td>
<td>(BoolOpt) Should unused base images be
removed? When set to True, the interval at
which base images are removed is set with the
following two settings. If set to False, base
images are never removed by Compute.</td>
</tr>
<tr>
<td>remove_unused_original_minimum_age_seconds=86400</td>
<td>(IntOpt) Unused unresized base images younger
than this are not removed. Default is 86400
seconds, or 24 hours.</td>
</tr>
<tr>
<td>remove_unused_resized_minimum_age_seconds=3600</td>
<td>(IntOpt) Unused resized base images younger
than this are not removed. Default is 3600
seconds, or one hour.</td>
</tr>
</tbody>
</table>
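<para>For example, to have Compute remove unused, unresized base
images after one hour rather than the default 24 hours, you could
set the following in <filename>nova.conf</filename> (a sketch;
tune the values for your environment):</para>
<programlisting language="ini">[DEFAULT]
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 3600</programlisting>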
<para>To see how these settings affect the removal of unused
base images after their instances are deleted, check the
directory where the images are stored:</para>
<screen><prompt>#</prompt> <userinput>ls -lash /var/lib/nova/instances/_base/</userinput></screen>
<para>In the <filename>/var/log/compute/compute.log</filename>
file, look for the identifier:</para>
<screen><computeroutput>2012-02-18 04:24:17 41389 WARNING nova.virt.libvirt.imagecache [-] Unknown base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810
a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removable base files: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810
a0d1d5d3 /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3_20
2012-02-18 04:24:17 41389 INFO nova.virt.libvirt.imagecache [-] Removing base file: /var/lib/nova/instances/_base/06a057b9c7b0b27e3b496f53d1e88810a0d1d5d3</computeroutput></screen>
<para>Because 86400 seconds (24 hours) is the default time for
<literal>remove_unused_original_minimum_age_seconds</literal>,
you can either wait for that time interval to see the base
image removed, or set the value to a shorter time period
in <filename>nova.conf</filename>. Restart all nova
services after changing a setting in
<filename>nova.conf</filename>.</para>
</section>
</chapter>