<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_installing-openstack-compute">
<title>Installing OpenStack Compute</title>
<para>The OpenStack system has several key projects that are separate installations but can
work together depending on your cloud needs: OpenStack Compute, OpenStack Object Storage,
and OpenStack Image Service. You can install any of these projects separately and then
configure them either as standalone or connected entities.</para>
<section xml:id="compute-system-requirements">
<title>System Requirements</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack components are intended to
run on standard hardware. Recommended hardware configurations for a minimum production
deployment are as follows for the cloud controller nodes and compute nodes.</para>
<table rules="all">
<caption>Hardware Recommendations </caption>
<col width="20%"/>
<col width="23%"/>
<col width="57%"/>
<thead>
<tr>
<td>Server</td>
<td>Recommended Hardware</td>
<td>Notes</td>
</tr>
</thead>
<tbody>
<tr>
<td>Cloud Controller node (runs network, volume, API, scheduler and image
services) </td>
<td>
<para>Processor: 64-bit x86</para>
<para>Memory: 12 GB RAM </para>
<para>Disk space: 30 GB (SATA or SAS or SSD) </para>
<para>Volume storage: two disks with 2 TB (SATA) for volumes attached to the
compute nodes </para>
<para>Network: one 1 GB Network Interface Card (NIC)</para>
</td>
<td>
<para>Two NICs are recommended but not required. A quad core server with 12
GB RAM would be more than sufficient for a cloud controller node.</para>
<para>32-bit processors will work for the cloud controller node. </para>
</td>
</tr>
<tr>
<td>Compute nodes (runs virtual instances)</td>
<td>
<para>Processor: 64-bit x86</para>
<para>Memory: 32 GB RAM</para>
<para>Disk space: 30 GB (SATA)</para>
<para>Network: two 1 GB NICs</para>
</td>
<td>
<para>Note that you cannot run 64-bit VM instances on a 32-bit compute node.
A 64-bit compute node can run either 32- or 64-bit VMs, however.</para>
<para>With 2 GB RAM you can run one m1.small instance on a node or three
m1.tiny instances without memory swapping, so 2 GB RAM would be a
minimum for a test-environment compute node. As an example, Rackspace
Cloud Builders use 96 GB RAM for compute nodes in OpenStack
deployments.</para>
<para>Specifically for virtualization on certain hypervisors on the node or
nodes running nova-compute, you need an x86 machine with an AMD processor
with SVM extensions (also called AMD-V) or an Intel processor with VT
(virtualization technology) extensions. </para>
<para>For Xen-based hypervisors, the Xen wiki contains a list of compatible
processors on the <link
xlink:href="http://wiki.xensource.com/xenwiki/HVM_Compatible_Processors"
>HVM Compatible Processors</link> page. For XenServer-compatible
Intel processors, refer to the <link
xlink:href="http://ark.intel.com/VTList.aspx">Intel® Virtualization
Technology List</link>. </para>
<para>For LXC, the VT extensions are not required.</para>
</td>
</tr>
</tbody>
</table>
<para><emphasis role="bold">Operating System</emphasis>: OpenStack currently has packages
for the following distributions: Ubuntu, RHEL, SUSE, Debian, and Fedora. These packages
are maintained by community members; refer to <link
xlink:href="http://wiki.openstack.org/Packaging"
>http://wiki.openstack.org/Packaging</link> for additional links. </para>
<para><emphasis role="bold">Networking</emphasis>: 1000 Mbps are suggested. For
OpenStack Compute, networking is configured on multi-node installations between the
physical machines on a single subnet. For networking between virtual machine instances,
three network options are available: flat, DHCP, and VLAN. Two NICs (Network Interface
Cards) are recommended on the server running nova-network. </para>
<para><emphasis role="bold">Database</emphasis>: For OpenStack Compute, you need access
to either a PostgreSQL or MySQL database, or you can install it as part of the OpenStack
Compute installation process.</para>
<para><emphasis role="bold">Permissions</emphasis>: You can install OpenStack Compute
either as root or as a user with sudo permissions if you configure the sudoers file to
enable all the permissions. </para>
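<para>For example, a sudoers entry added with visudo along the following lines grants full sudo rights; the user name "stackuser" is only a placeholder for whatever account you install with:</para>
<literallayout class="monospaced"># run "visudo" and add a line such as (replace "stackuser" with your install account):
stackuser ALL=(ALL) NOPASSWD: ALL</literallayout>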
<para><emphasis role="bold">Network Time Protocol</emphasis>: You must install a time synchronization program such as NTP to keep your cloud
controller and compute nodes talking to the same time server to avoid problems scheduling VM launches on compute nodes.</para>
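<para>On Ubuntu, a minimal NTP setup could look like the following sketch; the server name shown is only the Ubuntu default and can be replaced by your own time source:</para>
<literallayout class="monospaced">sudo apt-get install -y ntp
# optionally point every node at the same time source by editing /etc/ntp.conf, for example:
# server ntp.ubuntu.com
sudo service ntp restart</literallayout>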
</section><section xml:id="example-installation-architecture">
<title>Example Installation Architectures</title>
<para>OpenStack Compute uses a shared-nothing, messaging-based architecture. While very
flexible, the fact that you can install each nova- service on an independent server
means there are many possible methods for installing OpenStack Compute. The only
co-dependency between possible multi-node installations is that the Dashboard must be
installed on the nova-api server. Here are the types of installation architectures:</para>
<itemizedlist>
<listitem>
<para xmlns="http://docbook.org/ns/docbook">Single node: Only one server
runs all nova- services and also drives all the virtual instances. Use this
configuration only for trying out OpenStack Compute, or for development
purposes.</para></listitem>
<listitem><para>Two nodes: A cloud controller node runs the nova- services except for nova-compute, and a
compute node runs nova-compute. A client computer is likely needed to bundle
images and interface with the servers, but a client is not required. Use this
configuration for proof of concept or development environments. </para></listitem>
<listitem><para xmlns="http://docbook.org/ns/docbook">Multiple nodes: You can add more compute nodes to the
two node installation by simply installing nova-compute on an additional server
and copying a nova.conf file to the added node. This would result in a multiple
node installation. You can also add a volume controller and a network controller
as additional nodes in a more complex multiple node installation. A minimum of
4 nodes is best for running multiple virtual instances that require a lot of
processing power.</para>
</listitem>
</itemizedlist>
<para>This is an illustration of one possible multiple server installation of OpenStack
Compute; virtual server networking in the cluster may vary.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata scale="80" fileref="../figures/NOVA_install_arch.png"/></imageobject>
</inlinemediaobject></para>
<para>An alternative architecture would be to add more messaging servers if you notice a lot
of backlog in the messaging queue causing performance problems. In that case you would
add an additional RabbitMQ server in addition to or instead of scaling up the database
server. Your installation can run any nova- service on any server as long as the
nova.conf is configured to point to the RabbitMQ server and the server can send messages
to it.</para>
<para>Multiple installation architectures are possible; here is another example
illustration. </para>
<para><inlinemediaobject>
<imageobject>
<imagedata scale="40" fileref="../figures/NOVA_compute_nodes.png"/></imageobject>
</inlinemediaobject></para>
</section>
<section xml:id="service-architecture"><title>Service Architecture</title>
<para>Because Compute has multiple services and many configurations are possible, here is a diagram showing the overall service architecture and communication systems between the services.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata scale="80" fileref="../figures/NOVA_ARCH.png"/></imageobject>
</inlinemediaobject></para></section>
<section xml:id="installing-openstack-compute-on-ubuntu">
<title>Installing OpenStack Compute on Ubuntu </title>
<para>How you go about installing OpenStack Compute depends on your goals for the
installation. You can use an ISO image, you can use a scripted installation, and you can
manually install with a step-by-step installation.</para>
<section xml:id="iso-ubuntu-installation">
<title>ISO Distribution Installation</title>
<para>You can download and use an ISO image that is based on an Ubuntu Linux Server 10.04
LTS distribution containing only the components needed to run OpenStack Compute. See
<link xlink:href="http://sourceforge.net/projects/stackops/files/"
>http://sourceforge.net/projects/stackops/files/</link> for download files and
information, license information, and a README file. For documentation on the
StackOps distro, see <link xlink:href="http://docs.stackops.org">http://docs.stackops.org</link>. For free support, go to
<link xlink:href="http://getsatisfaction.com/stackops">http://getsatisfaction.com/stackops</link>.</para></section>
<section xml:id="scripted-ubuntu-installation">
<title>Scripted Installation</title>
<para>You can download a script for a standalone install for proof-of-concept, learning, or for development purposes for Ubuntu 11.04 at <link
xlink:href="http://devstack.org"
>http://devstack.org</link>.</para>
<orderedlist>
<listitem><para>Install Ubuntu 11.04 (Natty):</para> <para>In order to correctly install all the dependencies, we assume a specific version of Ubuntu to
make it as easy as possible. OpenStack works on other flavors of Linux (and
some folks even run it on Windows!). We recommend using a minimal install of
Ubuntu server in a VM if this is your first time.</para></listitem>
<listitem><para>Download DevStack:</para>
<literallayout class="monospaced">git clone git://github.com/cloudbuilders/devstack.git</literallayout>
<para>The devstack repo contains a script that installs OpenStack Compute, the Image
Service and the Identity Service and offers templates for configuration
files plus data scripts. </para></listitem>
<listitem><para>Start the install:</para><literallayout class="monospaced">cd devstack; ./stack.sh</literallayout><para>It takes a few minutes; we recommend <link xlink:href="http://devstack.org/stack.sh.html"
>reading the well-documented script</link> while it is building to learn
more about what is going on. </para>
</listitem>
</orderedlist>
</section>
<section xml:id="manual-ubuntu-installation">
<title>Manual Installation</title>
<para>The manual installation involves installing from packages on Ubuntu 10.10 or 11.04
as a user with root permission. Depending on your environment, you may need to
prefix these commands with sudo.</para>
<para>This installation process walks through installing a cloud controller node and a
compute node. The cloud controller node contains all the nova- services including
the API server and the database server. The compute node needs to run only the
nova-compute service. You only need one nova-network service running in a multi-node
install. You cannot install nova-objectstore on a different machine from
nova-compute (production-style deployments will use a Glance server for virtual
images).</para>
<section xml:id="installing-the-cloud-controller">
<title>Installing the Cloud Controller</title>
<para>First, set up the prerequisites to use the Nova PPA (Personal Package Archive)
provided through https://launchpad.net/. The python-software-properties
package is a prerequisite for setting up the nova package repository. You can
also use the trunk package built daily by adding the ppa:nova-core/trunk
repository, but trunk changes rapidly and may not run on any given day.</para>
<literallayout class="monospaced">sudo apt-get install python-software-properties</literallayout>
<literallayout class="monospaced">sudo add-apt-repository ppa:openstack-release/2011.3</literallayout>
<para>Run update.</para>
<literallayout class="monospaced">sudo apt-get update</literallayout>
<para>Install the messaging queue server, RabbitMQ.</para>
<literallayout class="monospaced">sudo apt-get install -y rabbitmq-server</literallayout>
<para>Now, install the Python dependencies. </para>
<literallayout class="monospaced">sudo apt-get install -y python-greenlet python-mysqldb </literallayout>
<note><para>You can use either MySQL or PostgreSQL.</para></note>
<para>Install the required nova- packages; their dependencies are installed
automatically.</para>
<literallayout class="monospaced">
sudo apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy
sudo apt-get install nova-doc nova-scheduler nova-objectstore
sudo apt-get install nova-network nova-compute
sudo apt-get install glance
</literallayout>
<para>Install the supplemental tools such as euca2ools and unzip.</para>
<literallayout class="monospaced">sudo apt-get install -y euca2ools unzip</literallayout>
<para> Next set up the database, either MySQL or PostgreSQL.</para>
<section xml:id="setting-up-sql-database-mysql">
<title>Setting up the SQL Database (MySQL) on the Cloud Controller</title>
<para>You must use a SQLAlchemy-compatible database, such as MySQL or
PostgreSQL. This example shows MySQL. </para>
<para>First, you can set environment variables with a "pre-seed" line to bypass all
the installation prompts; run this as root: </para>
<para>
<literallayout class="monospaced">bash
MYSQL_PASS=nova
NOVA_PASS=notnova
cat &lt;&lt;MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED</literallayout>
</para>
<para>Next, install MySQL with: <literallayout class="monospaced">sudo apt-get install -y mysql-server</literallayout>
</para>
<para>Edit /etc/mysql/my.cnf to change "bind-address" from localhost
(127.0.0.1) to any (0.0.0.0) and restart the mysql service: </para>
<para>
<literallayout class="monospaced">sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
sudo service mysql restart</literallayout></para>
<para>To configure the MySQL database, create the nova database: </para>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e 'CREATE DATABASE nova;'</literallayout>
<para>Update the DB to give user nova@% full control of the nova
database:</para>
<para>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"</literallayout>
</para>
<para>Set the MySQL password for the user "nova@%":</para>
<para>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');"</literallayout>
</para>
</section>
<section xml:id="setting-up-sql-database-postgresql"><title>Setting Up PostgreSQL as the Database on the Cloud Controller</title>
<para>OpenStack can use PostgreSQL as an alternative database. This is a matter of substituting the MySQL steps with PostgreSQL equivalents, as outlined here.</para>
<para>First, install PostgreSQL on the controller node.</para>
<literallayout class="monospaced">apt-fast install postgresql postgresql-server-dev-8.4 python-dev python-psycopg2</literallayout>
<para>Edit /etc/postgresql/8.4/main/postgresql.conf and change listen_addresses to listen on all appropriate addresses; PostgreSQL listens only on localhost by default. For example:</para>
<para>To listen on a specific IP address:</para>
<literallayout class="monospaced"># - Connection Settings -
listen_addresses = '10.1.1.200,192.168.100.2'</literallayout>
<para>To listen on all addresses:</para>
<literallayout class="monospaced"># - Connection Settings -
listen_addresses = '*'</literallayout>
<para>Add the appropriate addresses and networks to /etc/postgresql/8.4/main/pg_hba.conf to allow remote access to PostgreSQL; this should include all servers hosting OpenStack (but not necessarily those hosted by OpenStack). As an example, append the following lines:</para>
<literallayout class="monospaced">host all all 192.168.0.0/16
host all all 10.1.0.0/16
</literallayout>
<para>Change the default PostgreSQL user's password:</para>
<literallayout class="monospaced">
sudo -u postgres psql template1
template1=#\password
Enter Password:
Enter again:
template1=#\q</literallayout>
<para>Restart PostgreSQL:</para>
<literallayout class="monospaced">service postgresql restart</literallayout>
<para>Create nova databases:</para>
<literallayout class="monospaced">sudo -u postgres createdb nova
sudo -u postgres createdb glance</literallayout>
<para>Create the nova database user, which will be used for all OpenStack services; note that the adduser and createuser steps will prompt for the user's password ($PG_PASS):</para>
<literallayout class="monospaced">
adduser nova
sudo -u postgres createuser -PSDR nova
sudo -u postgres psql template1
template1=#GRANT ALL PRIVILEGES ON DATABASE nova TO nova;
template1=#GRANT ALL PRIVILEGES ON DATABASE glance TO nova;
template1=#\q
</literallayout>
<para>For the Cactus version of Nova, the following fix is required for the PostgreSQL database schema. You don't need to do this for Diablo:</para>
<literallayout class="monospaced">
sudo -u postgres psql template1
template1=#alter table instances alter instance_type_id type integer using cast(instance_type_id as integer);
template1=#\q</literallayout>
<para>For Nova components that require access to this database, the required configuration in /etc/nova/nova.conf should be (replace $PG_PASS with the password):</para>
<literallayout class="monospaced">--sql_connection=postgresql://nova:$PG_PASS@control.example.com/nova</literallayout>
<para>At this stage the databases are still empty; they will be initialized when you run the nova-manage db sync command. </para>
</section></section>
<section xml:id="installing-the-compute-node">
<title>Installing the Compute Node</title>
<para>There are many different ways to perform a multinode install of Compute. In
this case, you can install all the nova- packages and dependencies as you did
for the Cloud Controller node, or just install nova-network and nova-compute.
Your installation can run any nova- services anywhere, so long as the service
can access nova.conf so it knows where the RabbitMQ server is installed.</para>
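<para>For a compute-only node, a minimal package set along the lines of the earlier controller install would be:</para>
<literallayout class="monospaced">sudo apt-get install -y nova-network nova-compute</literallayout>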
<para>The Compute Node is where you configure the Compute network, the networking
between your instances. There are three options: flat, flatDHCP, and VLAN. Read
more about specific configurations in the <link linkend="ch_networking">Networking chapter</link>. </para>
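<para>Whichever mode you choose is selected with the --network_manager flag in nova.conf. For example (shown only as an illustration; the Networking chapter covers the options in detail), a FlatDHCP setup would add something like:</para>
<programlisting>
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=br100
</programlisting>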
<para>Because you may need to query the database from the Compute node and learn
more information about instances, euca2ools and mysql-client packages should be
installed on any additional Compute nodes.</para>
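<para>On Ubuntu that amounts to something like:</para>
<literallayout class="monospaced">sudo apt-get install -y euca2ools mysql-client</literallayout>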
</section>
<section xml:id="restart-nova-services">
<title>Restart All Relevant Services on the Compute Node</title>
<para>On both nodes, restart all six services in total, just to cover the entire
spectrum: </para>
<para>
<literallayout class="monospaced">
restart libvirt-bin; restart nova-network; restart nova-compute;
restart nova-api; restart nova-objectstore; restart nova-scheduler
</literallayout>
</para>
<para>All nova services are now installed; the remaining steps cover configuration. Please refer to <link xlink:href="#configuring-openstack-compute-basics">Configuring Compute</link> for additional information. </para>
</section>
</section>
</section>
<section xml:id="installing-openstack-compute-on-rhel6">
<title>Installing OpenStack Compute on Red Hat Enterprise Linux 6 </title>
<para>This section documents a multi-node installation using RHEL 6. RPM repos for the Bexar
release, the Cactus release, milestone releases of Diablo, and also per-commit trunk
builds for OpenStack Nova are available at <link
xlink:href="http://yum.griddynamics.net">http://yum.griddynamics.net</link>. The
final release of Diablo is available at <link
xlink:href="http://yum.griddynamics.net/yum/diablo/"
>http://yum.griddynamics.net/yum/diablo/</link>, but is not yet tested completely
(as of Oct 4, 2011). Check this page for updates: <link
xlink:href="http://wiki.openstack.org/NovaInstall/RHEL6Notes"
>http://wiki.openstack.org/NovaInstall/RHEL6Notes</link>.</para>
<para>Known considerations for RHEL version 6 installations: </para>
<itemizedlist><listitem>
<para>iSCSI LUN not supported due to tgtadm versus ietadm differences</para>
</listitem>
<listitem>
<para>GuestFS is used for file injection</para>
</listitem>
<listitem>
<para>File injection works with libvirt</para>
</listitem>
<listitem>
<para>Static network configuration can detect OS type for RHEL and Ubuntu</para>
</listitem>
<listitem><para>Only the KVM hypervisor has been tested with this installation</para></listitem></itemizedlist>
<para>To install Nova on RHEL 6 you need access to two repositories: one available on the
yum.griddynamics.net website, and the RHEL DVD image connected as a repository. </para>
<para>First, install RHEL 6.0, preferably with a minimal set of packages.</para>
<para>Disable SELinux in /etc/sysconfig/selinux and then reboot. </para>
<para>Connect the RHEL 6.0 x86_64 DVD as a repository in YUM. </para>
<literallayout class="monospaced">
sudo mount /dev/cdrom /mnt/cdrom
cat /etc/yum.repos.d/rhel.repo
[rhel]
name=RHEL 6.0
baseurl=file:///mnt/cdrom/Server
enabled=1
gpgcheck=0
</literallayout>
<para>Download and install repo config and key.</para>
<literallayout class="monospaced">
wget http://yum.griddynamics.net/yum/diablo/openstack-repo-2011.3-0.3.noarch.rpm
sudo rpm -i openstack-repo-2011.3-0.3.noarch.rpm
</literallayout>
<para>Install the libvirt package (these instructions are tested only on KVM). </para>
<literallayout class="monospaced">
sudo yum install libvirt
sudo chkconfig libvirtd on
sudo service libvirtd start
</literallayout>
<para>Repeat the basic installation steps to put the prerequisites on all cloud controller and compute nodes. Nova has many different possible configurations. You can install Nova services on separate servers as needed, but these are the basic prerequisites.</para>
<para>These are the basic packages to install for a cloud controller node:</para>
<literallayout class="monospaced">sudo yum install euca2ools openstack-nova-node-full</literallayout>
<para>These are the basic packages to install on compute nodes. Repeat for each compute node (the node that runs the VMs) that you want to install.</para>
<literallayout class="monospaced">sudo yum install openstack-nova-compute </literallayout>
<para>On the cloud controller node, start the MySQL and RabbitMQ services, enable them at boot, and set the MySQL root password; you then create a MySQL database named nova. </para>
<literallayout class="monospaced">
sudo service mysqld start
sudo chkconfig mysqld on
sudo service rabbitmq-server start
sudo chkconfig rabbitmq-server on
mysqladmin -u root password nova
</literallayout>
<para>You can use this script to create the database. </para>
<programlisting>
#!/bin/bash
DB_NAME=nova
DB_USER=nova
DB_PASS=nova
ROOT_PASS=nova              # MySQL root password set with mysqladmin above
CC_HOST="A.B.C.D"           # IPv4 address of the cloud controller
HOSTS='node1 node2 node3'   # compute nodes list
mysqladmin -uroot -p$ROOT_PASS -f drop nova
mysqladmin -uroot -p$ROOT_PASS create nova
for h in $HOSTS localhost; do
    echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'$h' IDENTIFIED BY '$DB_PASS';" | mysql -u root -p$ROOT_PASS mysql
done
echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO $DB_USER IDENTIFIED BY '$DB_PASS';" | mysql -u root -p$ROOT_PASS mysql
echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO root IDENTIFIED BY '$DB_PASS';" | mysql -u root -p$ROOT_PASS mysql
</programlisting>
<para>Now, ensure the database version matches the version of nova that you are installing:</para>
<literallayout class="monospaced">nova-manage db sync</literallayout>
<para>For iptables configuration, update your firewall configuration to allow incoming
requests on ports 5672 (RabbitMQ), 3306 (MySQL DB), 9292 (Glance), 6080 (noVNC web
console), the API ports (8773, 8774), and DHCP traffic from instances. For non-production
environments the easiest way to fix any firewall problems is to remove the final REJECT rule in
the INPUT chain of the filter table. </para>
<literallayout class="monospaced">
sudo iptables -I INPUT 1 -p tcp --dport 5672 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 3306 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 9292 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 6080 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 8773 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 8774 -j ACCEPT
sudo iptables -I INPUT 1 -p udp --dport 67 -j ACCEPT
</literallayout>
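<para>If you instead take the shortcut of dropping the final REJECT rules mentioned above, a sketch that assumes the stock RHEL 6 firewall rules are still in place would be:</para>
<literallayout class="monospaced">sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D FORWARD -j REJECT --reject-with icmp-host-prohibited</literallayout>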
<para>On every node where nova-compute is running, ensure that unencrypted VNC access is allowed only from the cloud controller node:</para>
<literallayout class="monospaced">sudo iptables -I INPUT 1 -p tcp -s &lt;CLOUD_CONTROLLER_IP_ADDRESS&gt; --dport 5900:6400 -j ACCEPT
</literallayout><para>On each node, set up the configuration file in /etc/nova/nova.conf.</para>
<para>After configuring, start the Nova services and you are then running an OpenStack
cloud!</para>
<literallayout class="monospaced">
for n in api compute network objectstore scheduler vncproxy; do sudo service openstack-nova-$n start; done
sudo service openstack-glance-api start
sudo service openstack-glance-registry start
for n in node1 node2 node3; do ssh $n sudo service openstack-nova-compute start; done
</literallayout>
</section>
<section xml:id="configuring-openstack-compute-basics">
<title>Post-Installation Configuration for OpenStack Compute</title>
<para>Configuring your Compute installation involves nova-manage commands plus editing the
nova.conf file to ensure the correct flags are set. This section contains the basics for
a simple multi-node installation, but Compute can be configured many ways. You can find
networking options and hypervisor options described in separate chapters, and you will
read about additional configuration information in a separate chapter as well.</para>
<section xml:id="setting-flags-in-nova-conf-file">
<title>Setting Flags in the nova.conf File</title>
<para>The configuration file nova.conf is installed in /etc/nova by default. You only
need to do these steps when installing manually; the scripted installation above
does this configuration during the installation. A default set of options is
already configured in nova.conf when you install manually. The defaults are as
follows:</para>
<programlisting>
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
</programlisting>
<para>Starting with the default file, you must define the following required items in
/etc/nova/nova.conf. The flag variables are described below. You can place
comments in the nova.conf file by entering a new line with a # sign at the beginning of the line. To see a listing of all possible flag settings, see
the output of running /bin/nova-api --help.</para>
<table rules="all">
<caption>Description of nova.conf flags (not comprehensive)</caption>
<thead>
<tr>
<td>Flag</td>
<td>Description</td>
</tr>
</thead>
<tbody>
<tr>
<td>--sql_connection</td>
<td>SQLAlchemy connection string; location of the OpenStack Compute
SQL database</td>
</tr>
<tr>
<td>--s3_host</td>
<td>IP address; Location where OpenStack Compute is hosting the objectstore
service, which will contain the virtual machine images and buckets</td>
</tr>
<tr>
<td>--rabbit_host</td>
<td>IP address; Location of RabbitMQ server</td>
</tr>
<tr>
<td>--verbose</td>
<td>Set to 1 to turn on; Optional but helpful during initial setup</td>
</tr>
<tr>
<td>--network_manager</td>
<td>
<para>Configures how your controller will communicate with additional
OpenStack Compute nodes and virtual machines. Options: </para>
<itemizedlist>
<listitem>
<para>nova.network.manager.FlatManager</para>
<para>Simple, non-VLAN networking</para>
</listitem>
<listitem>
<para>nova.network.manager.FlatDHCPManager</para>
<para>Flat networking with DHCP</para>
</listitem>
<listitem>
<para>nova.network.manager.VlanManager</para>
<para>VLAN networking with DHCP; this is the default if no
network manager is defined in nova.conf. </para>
</listitem>
</itemizedlist>
</td>
</tr>
<tr>
<td>--fixed_range</td>
<td>IP address/range; Network prefix for the IP network that all the
projects for future VM guests reside on. Example: 192.168.0.0/12</td>
</tr>
<tr>
<td>--ec2_host</td>
<td>IP address; Indicates where the nova-api service is installed.</td>
</tr>
<tr>
<td>--osapi_host</td>
<td>IP address; Indicates where the nova-api service is installed.</td>
</tr>
<tr>
<td>--network_size</td>
<td>Number value; Number of addresses in each private subnet.</td>
</tr>
<tr>
<td>--glance_api_servers </td>
<td>IP and port; Address for Image Service.</td>
</tr>
<tr>
<td>--use_deprecated_auth </td>
<td>If this flag is present, the Cactus method of authentication is used with the novarc file containing credentials.</td>
</tr>
</tbody>
</table>
<para>Here is a simple example nova.conf file for a small private cloud, with all the
cloud controller services, database server, and messaging server on the same
server.</para>
<programlisting>
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--use_deprecated_auth
--ec2_host=http://184.106.239.134
--osapi_host=http://184.106.239.134
--s3_host=184.106.239.134
--rabbit_host=184.106.239.134
--fixed_range=192.168.0.0/16
--network_size=8
--glance_api_servers=184.106.239.134:9292
--routing_source_ip=184.106.239.134
--sql_connection=mysql://nova:notnova@184.106.239.134/nova
</programlisting>
<para>Create a “nova” group, so you can set permissions on the configuration file: </para>
<literallayout class="monospaced">sudo addgroup nova</literallayout>
<para>The nova.conf file should have its owner set to root:nova, and mode set to 0640,
since the file contains your MySQL server's username and password. You also want to
ensure that the nova user belongs to the nova group.</para>
<literallayout class="monospaced">
sudo usermod -g nova nova
chown -R root:nova /etc/nova
chmod 640 /etc/nova/nova.conf
</literallayout>
</section><section xml:id="setting-up-openstack-compute-environment-on-the-compute-node">
<title>Setting Up OpenStack Compute Environment on the Compute Node</title>
<para>
These are the commands you run to ensure the database schema is current and
then set up a user and project, if you are using the built-in auth with the
<literal>--use_deprecated_auth</literal> flag rather than the Identity Service:
</para>
<para>
<literallayout class="monospaced">
nova-manage db sync
nova-manage user admin &lt;user_name>
nova-manage project create &lt;project_name> &lt;user_name>
nova-manage network create &lt;network-label> &lt;project-network> &lt;number-of-networks-in-project> &lt;addresses-in-each-network>
</literallayout>
</para>
<para>Here is an example of what this looks like with real values entered: </para>
<literallayout class="monospaced">
nova-manage db sync
nova-manage user admin dub
nova-manage project create dubproject dub
nova-manage network create novanet 192.168.0.0/24 1 256
</literallayout>
<para>For this example, the network is a /24 since that falls inside the /16
range that was set in fixed_range in nova.conf. Currently, there can only be
one network, and this setup would use the max IPs available in a /24. You can
choose values that let you use any valid amount that you would like. </para>
<para>The nova-manage service assumes that the first IP address is your network
(like 192.168.0.0), that the 2nd IP is your gateway (192.168.0.1), and that the
broadcast is the very last IP in the range you defined (192.168.0.255). If this is
not the case you will need to manually edit the networks table in the SQL database. </para>
<para>When you run the nova-manage network create command, entries are made
in the networks and fixed_ips tables. However, one of the networks listed in the
networks table needs to be marked as bridged in order for the code to know that a
bridge exists. The network in the Nova networks table is marked as bridged
automatically for FlatManager.</para>
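<para>If you do have to adjust these values by hand, the change is a plain SQL update against the networks table. Here is a hedged sketch; the column names assume the Diablo schema and the label matches the example network created above, so verify both against your own database first:</para>
<literallayout class="monospaced">mysql -u root -p nova
mysql> -- adjust gateway/broadcast if your layout differs from the assumptions above
mysql> UPDATE networks SET gateway='192.168.0.1', broadcast='192.168.0.255' WHERE label='novanet';
mysql> -- mark the network as bridged if it is not already
mysql> UPDATE networks SET bridge='br100' WHERE label='novanet';</literallayout>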
</section>
<section xml:id="creating-certifications">
<title>Creating Credentials</title>
<para>Generate the credentials as a zip file. These are the certs you will use to
launch instances, bundle images, and all the other assorted API functions. </para>
<para>
<literallayout class="monospaced">
mkdir -p /root/creds
/usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip
</literallayout>
</para>
<para>If you are using one of the Flat modes for networking, you may see a Warning
message "No vpn data for project &lt;project_name>" which you can safely
ignore.</para>
<para>Unzip them in your home directory, and add them to your environment. </para>
<literallayout class="monospaced">
unzip /root/creds/novacreds.zip -d /root/creds/
cat /root/creds/novarc >> ~/.bashrc
source ~/.bashrc
</literallayout>
<para>
If you already have Nova credentials present in your environment, you can use a script included with Glance (the Image Service), tools/nova_to_os_env.sh, to create Glance-style credentials. This script adds OS_AUTH credentials to the environment; these are used by the Image Service to enable private images when the Identity Service is configured as the authentication system for Compute and the Image Service.</para>
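<para>A hedged usage sketch, assuming you run the script from the top of the Glance source tree and want the variables in your current shell:</para>
<literallayout class="monospaced"># source the helper so the OS_AUTH variables land in this shell (path assumes the Glance source tree)
source tools/nova_to_os_env.sh
env | grep OS_AUTH</literallayout>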
</section>
<section xml:id="enabling-access-to-vms-on-the-compute-node">
<title>Enabling Access to VMs on the Compute Node</title>
<para>One of the most commonly missed configuration steps is allowing proper
access to VMs. Use the euca-authorize command to enable access. Below, you
will find the commands to allow ping and ssh to your VMs: </para>
<literallayout class="monospaced">
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
</literallayout>
<para>Another
common issue is not being able to ping or SSH to your instances after issuing the
euca-authorize commands. Something to look at is the number of dnsmasq
processes that are running. If you have a running instance, check to see that
TWO dnsmasq processes are running. If not, perform the following:</para>
<literallayout class="monospaced">
killall dnsmasq
service nova-network restart
</literallayout>
</section>
<section xml:id="configuring-multiple-compute-nodes">
<title>Configuring Multiple Compute Nodes</title><para>If your goal is to split your VM load across more than one server, you can connect an
additional nova-compute node to a cloud controller node. This configuration can be
reproduced on multiple compute servers to start building a true multi-node OpenStack
Compute cluster. </para><para>To build out and scale the Compute platform, you spread out services amongst many servers.
While there are additional ways to accomplish the build-out, this section describes
adding compute nodes, and the service we are scaling out is called
'nova-compute.'</para>
<para>For a multi-node install you only make changes to nova.conf and copy it to
additional compute nodes. Ensure each nova.conf file points to the correct IP
addresses for the respective services. Customize the nova.conf example below to
match your environment. The CC_ADDR is the Cloud Controller IP Address. </para>
<programlisting>
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@CC_ADDR/nova
--s3_host=CC_ADDR
--rabbit_host=CC_ADDR
--ec2_api=CC_ADDR
--ec2_url=http://CC_ADDR:8773/services/Cloud
--network_manager=nova.network.manager.FlatManager
--fixed_range=network/CIDR
--network_size=number of addresses
</programlisting>
<para>
By default, Nova sets the bridge device based on the setting in --flat_network_bridge. Now you
can edit /etc/network/interfaces with the following template, updated with your IP
information.
</para>
<programlisting>
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.xxx
network xxx.xxx.xxx.xxx
broadcast xxx.xxx.xxx.xxx
gateway xxx.xxx.xxx.xxx
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers xxx.xxx.xxx.xxx
</programlisting>
<para>Restart networking:</para>
<literallayout class="monospaced">/etc/init.d/networking restart</literallayout>
<para>With nova.conf updated and networking set, configuration is nearly complete.
First, bounce the relevant services to take the latest updates:</para>
<literallayout class="monospaced">restart libvirt-bin; service nova-compute restart</literallayout>
<para>To avoid issues with KVM and permissions with Nova, run the following commands to ensure we have VMs that are running optimally:</para>
<literallayout class="monospaced">
chgrp kvm /dev/kvm
chmod g+rwx /dev/kvm
</literallayout>
<para>If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs this iptables entry so that UEC images can get metadata info. On compute nodes, configure the iptables with this next step:</para>
<literallayout class="monospaced"> # iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773</literallayout>
<para>Lastly, confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query:</para>
<literallayout class="monospaced">mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'</literallayout>
<para>In return, you should see something similar to this:</para>
<programlisting>
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at | updated_at | deleted_at | deleted | id | host | binary | topic | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL | 0 | 1 | osdemo02 | nova-network | network | 46064 | 0 | nova |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL | 0 | 2 | osdemo02 | nova-compute | compute | 46056 | 0 | nova |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL | 0 | 3 | osdemo02 | nova-scheduler | scheduler | 46065 | 0 | nova |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL | 0 | 4 | osdemo01 | nova-compute | compute | 37050 | 0 | nova |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL | 0 | 9 | osdemo04 | nova-compute | compute | 28484 | 0 | nova |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL | 0 | 8 | osdemo05 | nova-compute | compute | 29284 | 0 | nova |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
</programlisting>
<para>You can see that 'osdemo0{1,2,4,5}' are all running 'nova-compute.' When you start spinning up instances, they will allocate on any node that is running nova-compute from this list.</para>
</section>
<section xml:id="determining-version-of-compute">
<title>Determining the Version of Compute</title>
<para>You can find the version of the installation by using the nova-manage
command:</para>
<literallayout class="monospaced">nova-manage version list</literallayout>
</section>
<section xml:id="migrating-from-cactus-to-diablo"><title>Migrating from Cactus to Diablo</title>
<para>If you have an installation already running, it is possible to
upgrade smoothly from Cactus Stable (2011.2) to Diablo Stable (2011.3) without
losing any of your running instances, while keeping the current network, volumes,
and images available. </para>
<para>In order to update, we will start by updating the Image Service (<emphasis
role="bold">Glance</emphasis>), then update the Compute Service (<emphasis
role="bold">Nova</emphasis>). We will finally make sure the client tools
(euca2ools and novaclient) are properly integrated.</para>
<para>For Nova, Glance and euca2ools, we will use the PPA repositories, while we will
use the latest version of novaclient from GitHub, due to important updates.</para>
<note>
<para> This upgrade guide does not integrate Keystone. If you want to integrate
Keystone, please read the section "Installing the Identity Service". </para>
</note>
<para/>
<simplesect>
<title>A- Glance upgrade</title>
<para>In order to update Glance, we will start by stopping all running services:
<literallayout class="monospaced">glance-control all stop</literallayout>Make
sure the services are stopped; you can check by running ps:
<literallayout class="monospaced">ps axl |grep glance</literallayout>If the
command doesn't output any Glance process, you can continue;
otherwise, simply kill the PIDs.</para>
<para>While the Cactus release of Glance uses one glance.conf file (usually located
at "/etc/glance/glance.conf"), the Diablo release introduces new configuration
files. (Look into them; they are pretty self-explanatory.) </para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Update the repositories</emphasis></para>
<para> The first thing to do is to update the packages. Update your
"/etc/apt/sources.list", or create a
"/etc/apt/sources.list.d/openstack_diablo.list file :
<programlisting>
deb http://ppa.launchpad.net/openstack-release/2011.<emphasis role="bold">3</emphasis>/ubuntu maverick main
deb-src http://ppa.launchpad.net/openstack-release/2011.<emphasis role="bold">3</emphasis>/ubuntu maverick main
</programlisting>If
you are running Ubuntu Lucid, point to Lucid, otherwise to another
version (Maverick, or Natty). You can now update the repository:
<literallayout class="monospaced">aptitude update
aptitude upgrade</literallayout></para>
<para>If you encounter the message "<emphasis role="italic">The following
signatures couldn't be verified because the public key is not
available: NO_PUBKEY XXXXXXXXXXXX</emphasis>", simply run:
<programlisting>
gpg --keyserver pgpkeys.mit.edu --recv-key XXXXXXXXXXXX
gpg -a --export XXXXXXXXXXXX | sudo apt-key add -
(Where XXXXXXXXXXXX is the key)
</programlisting>Then
re-run the two steps, which should then proceed without error. The
package system should propose to upgrade your Glance installation to
the Diablo one; accept the upgrade, and you will have successfully
performed the package upgrade. In the next step, we will reconfigure the
service. </para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Update Glance configuration files</emphasis>
</para>
<para> You now need to update the configuration files. The main file you
will need to update is
<literallayout class="monospaced">/etc/glance/glance-registry.conf</literallayout>In
that file you will specify the database backend. If you used a MySQL
backend under Cactus, replace the <literallayout class="monospaced">sql_connection</literallayout> with the entry you
have in /etc/glance/glance.conf.</para>
<para>Here is how the configuration files should look: </para>
<literallayout class="monospaced">glance-api.conf</literallayout>
<programlisting>
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True
# Show debugging output in logs (sets DEBUG log level output)
debug = False
# Which backend store should Glance use by default is not specified
# in a request to add a new image to Glance? Default: &apos;file&apos;
# Available choices are &apos;file&apos;, &apos;swift&apos;, and &apos;s3&apos;
default_store = file
# Address to bind the API server
bind_host = 0.0.0.0
# Port the bind the API server to
bind_port = 9292
# Address to find the registry server
registry_host = 0.0.0.0
# Port the registry server is listening on
registry_port = 9191
# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/api.log
# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False
# ============ Notification System Options =====================
# Notifications can be sent when images are create, updated or deleted.
# There are three methods of sending notifications, logging (via the
# log_file directive), rabbit (via a rabbitmq queue) or noop (no
# notifications sent, the default)
notifier_strategy = noop
# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_topic = glance_notifications
# ============ Filesystem Store Options ========================
# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/
# ============ Swift Store Options =============================
# Address where the Swift authentication service lives
swift_store_auth_address = 127.0.0.1:8080/v1.0/
# User to authenticate against the Swift authentication service
swift_store_user = jdoe
# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = a86850deb2742ec3cb41518e26aa2d89
# Container within the account that the account should use
# for storing images in Swift
swift_store_container = glance
# Do we create the container if it does not exist?
swift_store_create_container_on_put = False
# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size = 5120
# When doing a large object manifest, what size, in MB, should
# Glance write chunks to Swift? This amount of data is written
# to a temporary disk buffer during the process of chunking
# the image file, and the default is 200MB
swift_store_large_object_chunk_size = 200
# Whether to use ServiceNET to communicate with the Swift storage servers.
# (If you aren&apos;t RACKSPACE, leave this False!)
#
# To use ServiceNET for authentication, prefix hostname of
# `swift_store_auth_address` with &apos;snet-&apos;.
# Ex. https://example.com/v1.0/ -&gt; https://snet-example.com/v1.0/
swift_enable_snet = False
# ============ S3 Store Options =============================
# Address where the S3 authentication service lives
s3_store_host = 127.0.0.1:8080/v1.0/
# User to authenticate against the S3 authentication service
s3_store_access_key = &lt;20-char AWS access key&gt;
# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = &lt;40-char AWS secret key&gt;
# Container within the account that the account should use
# for storing images in S3. Note that S3 has a flat namespace,
# so you need a unique bucket name for your glance images. An
# easy way to do this is append your AWS access key to &quot;glance&quot;.
# S3 buckets in AWS *must* be lowercased, so remember to lowercase
# your AWS access key if you use it in your bucket name below!
s3_store_bucket = &lt;lowercased 20-char aws access key&gt;glance
# Do we create the bucket if it does not exist?
s3_store_create_bucket_on_put = False
# ============ Image Cache Options ========================
image_cache_enabled = False
# Directory that the Image Cache writes data to
# Make sure this is also set in glance-pruner.conf
image_cache_datadir = /var/lib/glance/image-cache/
# Number of seconds after which we should consider an incomplete image to be
# stalled and eligible for reaping
image_cache_stall_timeout = 86400
# ============ Delayed Delete Options =============================
# Turn on/off delayed delete
delayed_delete = False
[pipeline:glance-api]
pipeline = versionnegotiation context apiv1app
# NOTE: use the following pipeline for keystone
# pipeline = versionnegotiation authtoken context apiv1app
# To enable Image Cache Management API replace pipeline with below:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
# pipeline = versionnegotiation authtoken context imagecache apiv1app
[pipeline:versions]
pipeline = versionsapp
[app:versionsapp]
paste.app_factory = glance.api.versions:app_factory
[app:apiv1app]
paste.app_factory = glance.api.v1:app_factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:filter_factory
[filter:imagecache]
paste.filter_factory = glance.api.middleware.image_cache:filter_factory
[filter:context]
paste.filter_factory = glance.common.context:filter_factory
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
</programlisting>
<literallayout class="monospaced">glance-registry.conf</literallayout>
<programlisting>
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True
# Show debugging output in logs (sets DEBUG log level output)
debug = False
# Address to bind the registry server
bind_host = 0.0.0.0
# Port the bind the registry server to
bind_port = 9191
# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/registry.log
# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False
# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
#sql_connection = sqlite:////var/lib/glance/glance.sqlite
sql_connection = mysql://glance_user:glance_pass@glance_host/glance
# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in &apos;MySQL Gone Away&apos; exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600
# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000
# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25
[pipeline:glance-registry]
pipeline = context registryapp
# NOTE: use the following pipeline for keystone
# pipeline = authtoken keystone_shim context registryapp
[app:registryapp]
paste.app_factory = glance.registry.server:app_factory
[filter:context]
context_class = glance.registry.context.RequestContext
paste.filter_factory = glance.common.context:filter_factory
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
[filter:keystone_shim]
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory
</programlisting>
</listitem>
<listitem>
<para><emphasis role="bold">Fire up Glance</emphasis></para>
<para> You should now be able to start Glance (glance-control runs both
the glance-api and glance-registry services):
<literallayout class="monospaced">glance-control all start</literallayout>
You can now make sure the new version of Glance is running:
<literallayout class="monospaced">
ps axl |grep glance
</literallayout>But
also make sure you are running the Diablo version:
<literallayout class="monospaced">glance --version</literallayout>
which should output:
<literallayout class="monospaced">glance 2011.3</literallayout>
If you do not see the two processes running, an error occurred somewhere.
You can check for errors by running: </para>
<para><literallayout class="monospaced">glance-api /etc/glance/glance-api.conf
glance-registry /etc/glance/glance-registry.conf</literallayout>
You are now ready to upgrade the database schema. </para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Update Glance database</emphasis></para>
<para>Before running any upgrade, make sure you back up the database. If you
have a MySQL backend:
<literallayout class="monospaced">
mysqldump -u $glance_user -p$glance_password glance &gt; glance_backup.sql
</literallayout>If
you use the default backend, SQLite, simply copy the database file.
You are now ready to update the database schema. In order to update the
Glance service, just run:
<literallayout class="monospaced"> glance-manage db_sync </literallayout></para>
</listitem>
<listitem>
<para><emphasis role="bold">Validation test</emphasis></para>
<para>
In order to make sure Glance has been properly updated, simply run:
<literallayout class="monospaced">glance index</literallayout>
which should display your registered images:
<programlisting>
ID Name Disk Format Container Format Size
---------------- ------------------------------ -------------------- -------------------- --------------
94 Debian 6.0.3 amd64 raw bare 1067778048
</programlisting>
</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>B- Nova upgrade</title>
<para>In order to successfully go through the upgrade process, it is advised to
follow the exact order of the steps. By doing so, you make sure you
don't miss any mandatory step.</para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Update the repositoiries</emphasis></para>
<para> Update your "/etc/apt/sources.list", or create a
"/etc/apt/sources.list.d/openstack_diablo.list file :
<programlisting>
deb http://ppa.launchpad.net/openstack-release/2011.<emphasis role="bold">3</emphasis>/ubuntu maverick main
deb-src http://ppa.launchpad.net/openstack-release/2011.<emphasis role="bold">3</emphasis>/ubuntu maverick main
</programlisting>If
you are running Ubuntu Lucid, point to Lucid, otherwise to another
version (Maverick, or Natty). You can now update the repository (do not
upgrade the packages at the moment):
<literallayout class="monospaced">aptitude update</literallayout></para>
</listitem>
<listitem>
<para><emphasis role="bold">Stop all nova services</emphasis></para>
<para>Stopping all nova services makes the instances unreachable (for instance,
stopping the nova-network service flushes all the routing rules),
but they will be neither terminated nor deleted. </para>
<itemizedlist>
<listitem>
<para> First, stop the nova services</para>
<para><literallayout>cd /etc/init.d &amp;&amp; for i in $(ls nova-*); do service $i stop; done</literallayout></para>
</listitem>
<listitem>
<para> Then stop RabbitMQ, used by nova-scheduler</para>
<para><literallayout class="monospaced">service rabbitmq-server stop</literallayout></para>
</listitem>
<listitem>
<para>Finally, kill dnsmasq, used by nova-network</para>
<para><literallayout class="monospaced">killall dnsmasq</literallayout></para>
</listitem>
</itemizedlist>
<para>You can make sure that no services used by nova are still running: </para>
<para><literallayout class="monospaced">ps axl | grep nova</literallayout>
This should not output any nova process; if it does, simply kill the PIDs.
</para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">MySQL pre-requisites</emphasis></para>
<para>
Before running the upgrade, make sure the following tables don't already exist (they could, if you ran tests or ran an upgrade by mistake):
<simplelist>
<member>block_device_mapping</member>
<member>snapshots</member>
<member>provider_fw_rules</member>
<member>instance_type_extra_specs</member>
<member>virtual_interfaces</member>
<member>volume_types</member>
<member>volume_type_extra_specs</member>
<member>volume_metadata</member>
<member>virtual_storage_arrays</member>
</simplelist>
If they do, you can safely remove them, since they are not used at all by Cactus (2011.2):
</para>
<para>
<programlisting>
drop table block_device_mapping;
drop table snapshots;
drop table provider_fw_rules;
drop table instance_type_extra_specs;
drop table virtual_interfaces;
drop table volume_types;
drop table volume_type_extra_specs;
drop table volume_metadata;
drop table virtual_storage_arrays;
</programlisting>
</para>
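                        <para>To check whether any of these tables are present before dropping
                            them, you can list the tables in the nova database. This sketch assumes
                            the database name and MySQL user from the sql_connection flag shown
                            later in nova.conf; adjust the credentials to your environment: </para>
                        <programlisting>
mysql -u nova -p nova -e "SHOW TABLES;"
                        </programlisting>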
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Upgrade nova packages</emphasis></para>
                        <para> You can now perform the upgrade:
                            <literallayout class="monospaced">aptitude upgrade</literallayout>
                            During the upgrade process, you will see:
<programlisting>
Configuration file '/etc/nova/nova.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** /etc/nova/nova.conf (Y/I/N/O/D/Z) [default=N] ?
</programlisting>
Type "N" or validate in order to keep your current configuration file.
We will manually update in order to use some of new Diablo settings. </para>
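                        <para>Once aptitude finishes, you can confirm that the nova packages were
                            upgraded to the Diablo (2011.3) versions, for example with: </para>
                        <programlisting>
dpkg -l | grep nova
                        </programlisting>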
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Update the configuration files</emphasis></para>
                        <para>Diablo introduces several new files: </para>
                        <para>api-paste.ini, which contains all the API-related settings</para>
                        <para>nova-compute.conf, a configuration file dedicated to the compute-node
                            settings.</para>
                        <para>Here are the settings you would add to nova.conf: </para>
<programlisting>
--multi_host=T
--api_paste_config=/etc/nova/api-paste.ini
</programlisting>
                        <para> and this one if you plan to integrate Keystone into your environment and use euca2ools: </para>
                        <programlisting>
--keystone_ec2_url=http://$NOVA-API-IP:5000/v2.0/ec2tokens
</programlisting>
                        <para>Here is how the files should look: </para>
<literallayout class="monospaced">nova.conf</literallayout>
<programlisting>
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--flagfile=/etc/nova/nova-compute.conf
--force_dhcp_release=True
--verbose
--daemonize=1
--s3_host=172.16.40.11
--rabbit_host=172.16.40.11
--cc_host=172.16.40.11
--keystone_ec2_url=http://172.16.40.11:5000/v2.0/ec2tokens
--ec2_url=http://172.16.40.11:8773/services/Cloud
--ec2_host=172.16.40.11
--ec2_dmz_host=172.16.40.11
--ec2_port=8773
--fixed_range=192.168.0.0/12
--FAKE_subdomain=ec2
--routing_source_ip=10.0.10.14
--sql_connection=mysql://nova:nova-pass@172.16.40.11/nova
--glance_api_servers=172.16.40.13:9292
--image_service=nova.image.glance.GlanceImageService
--image_decryption_dir=/var/lib/nova/tmp
--network_manager=nova.network.manager.VlanManager
--public_interface=eth0
--vlan_interface=eth0
--iscsi_ip_prefix=172.16.40.12
--vnc_enabled
--multi_host=T
--debug
--api_paste_config=/etc/nova/api-paste.ini
</programlisting>
<para><literallayout class="monospaced">api-paste.ini</literallayout></para>
<programlisting>
#######
# EC2 #
#######
[composite:ec2]
use = egg:Paste#urlmap
/: ec2versions
/services/Cloud: ec2cloud
/services/Admin: ec2admin
/latest: ec2metadata
/2007-01-19: ec2metadata
/2007-03-01: ec2metadata
/2007-08-29: ec2metadata
/2007-10-10: ec2metadata
/2007-12-15: ec2metadata
/2008-02-01: ec2metadata
/2008-09-01: ec2metadata
/2009-04-04: ec2metadata
/1.0: ec2metadata
[pipeline:ec2cloud]
# pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = logrequest authenticate cloudrequest authorizer ec2executor
[pipeline:ec2admin]
# pipeline = logrequest ec2noauth adminrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = logrequest authenticate adminrequest authorizer ec2executor
[pipeline:ec2metadata]
pipeline = logrequest ec2md
[pipeline:ec2versions]
pipeline = logrequest ec2ver
[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory
[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory
[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory
[filter:authenticate]
paste.filter_factory = nova.api.ec2:Authenticate.factory
[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory
[filter:adminrequest]
controller = nova.api.ec2.admin.AdminController
paste.filter_factory = nova.api.ec2:Requestify.factory
[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory
[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory
[app:ec2ver]
paste.app_factory = nova.api.ec2:Versions.factory
[app:ec2md]
paste.app_factory = nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory
#############
# Openstack #
#############
[composite:osapi]
use = egg:Paste#urlmap
/: osversions
/v1.0: openstackapi10
/v1.1: openstackapi11
[pipeline:openstackapi10]
# pipeline = faultwrap noauth ratelimit osapiapp10
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = faultwrap auth ratelimit osapiapp10
[pipeline:openstackapi11]
# pipeline = faultwrap noauth ratelimit extensions osapiapp11
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = faultwrap auth ratelimit extensions osapiapp11
[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory
[filter:auth]
paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory
[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.limits:RateLimitingMiddleware.factory
[filter:extensions]
paste.filter_factory = nova.api.openstack.extensions:ExtensionMiddleware.factory
[app:osapiapp10]
paste.app_factory = nova.api.openstack:APIRouterV10.factory
[app:osapiapp11]
paste.app_factory = nova.api.openstack:APIRouterV11.factory
[pipeline:osversions]
pipeline = faultwrap osversionapp
[app:osversionapp]
paste.app_factory = nova.api.openstack.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
</programlisting>
</listitem>
<listitem>
<para><emphasis role="bold">Database update</emphasis></para>
                    <para>You are now ready to upgrade the database by running: <literallayout class="monospaced">nova-manage db sync</literallayout></para>
</listitem>
<listitem><para><emphasis role="bold">Restart the services</emphasis></para>
                    <para>After the database upgrade, the services can be restarted: </para>
<itemizedlist>
<listitem>
<para>
                                The RabbitMQ server
<literallayout class="monospaced">service rabbitmq-server start</literallayout>
</para>
</listitem>
<listitem>
                            <para> Nova services
                                <literallayout class="monospaced">cd /etc/init.d &amp;&amp; for i in $(ls nova-*); do service $i start; done</literallayout>
                                You can check which version you are running:
                                <literallayout class="monospaced">nova-manage version</literallayout>which
                                should output:
                                <literallayout class="monospaced">2011.3 </literallayout>
</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para><emphasis role="bold">Validation test</emphasis></para>
                    <para>The first thing to check is that all the services are running: </para>
<literallayout class="monospaced">ps axl | grep nova</literallayout>
                    <para>should list all the running services. If some services are missing, check the corresponding log files (e.g. /var/log/nova/nova-api.log).
                        You can then check their state with nova-manage:
                        <literallayout class="monospaced">nova-manage service list</literallayout>
                    </para>
<para>
                        If all the services are up, you can now validate the migration (a sketch of
                        the corresponding commands follows this list) by:
                        <simplelist>
                            <member>Launching a new instance</member>
                            <member>Terminating a running instance</member>
                            <member>Attaching a floating IP to both an "old" and a "new" instance</member>
                        </simplelist>
</para>
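                    <para>With euca2ools, such a smoke test could look like the following minimal
                        sketch; the image ID, key name, instance ID, and address are placeholders
                        for values from your own environment: </para>
                    <literallayout class="monospaced">
euca-run-instances $image_id -k $key_name -t m1.tiny
euca-describe-instances
euca-allocate-address
euca-associate-address -i $instance_id $address
euca-terminate-instances $instance_id
                    </literallayout>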
</listitem>
</orderedlist>
</simplesect>
<simplesect>
<title>C- Client tools upgrade</title>
<para>
                In this part we will make sure that our management tools are correctly integrated with the new version of the environment:
<simplelist>
<member><link xlink:href="http://nova.openstack.org/2011.2/runnova/euca2ools.html?highlight=euca2ools">euca2ools</link></member>
<member><link xlink:href="https://github.com/rackspace/python-novaclient">novaclient</link></member>
</simplelist>
</para>
<orderedlist>
<listitem>
<para><emphasis role="bold">euca2ools</emphasis></para>
                    <para>The euca2ools settings do not change on the client side: </para>
<programlisting>
# Euca2ools
export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
</programlisting>
                    <para>
                        On the server side, there are no changes to make either, since we do not use Keystone.
                        Here are some commands you should be able to run:
<literallayout class="monospaced">
euca-describe-instances
euca-describe-addresses
euca-terminate-instances $instance_id
euca-create-volume -s 5 -z $zone
euca-attach-volume -i $instance_id -d $device $volume_name
euca-associate-address -i $instance_id $address
</literallayout>
                        If all these commands work flawlessly, the tool is properly integrated.
</para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">python-novaclient</emphasis></para>
                    <para> This tool requires a recent version in order to use all the services
                        the OSAPI offers (floating IP support, volume support, etc.). To
                        upgrade it:
<literallayout class="monospaced">git clone https://github.com/rackspace/python-novaclient.git &amp;&amp; cd python-novaclient
python setup.py build
python setup.py install
</literallayout>
                        Make sure you have the correct settings in your .bashrc (or any
                        source-able file):
<programlisting>
# Python-novaclient
export NOVA_API_KEY="SECRET_KEY"
export NOVA_PROJECT_ID="PROJECT-NAME"
export NOVA_USERNAME="USER"
export NOVA_URL="http://$NOVA-API-IP:8774/v1.1"
export NOVA_VERSION=1.1
</programlisting>
                        Here are some nova commands you should be able to run:
<literallayout class="monospaced">
nova list
nova image-show
nova boot $flavor_id --image $image_id --key_name $key_name $instance_name
nova volume-create --display_name $name $size
</literallayout>
                        Again, if the commands run without any error, the tool is properly
                        integrated.</para>
</listitem>
</orderedlist>
</simplesect>
<simplesect>
            <title>D- Why is Keystone not integrated?</title>
            <para>Keystone introduces a new identity management system: instead of being stored in
                nova's database, users are now fully delegated to Keystone. While nova deals with
                "users as IDs" (e.g. the project name is the project ID), Keystone makes a
                distinction between a name and an ID; thus, the integration breaks a running
                Cactus cloud. Since we were looking for a smooth upgrade of a running
                platform, Keystone has not been integrated. </para>
<para>
                If you want to integrate Keystone, here are the steps you would follow:
</para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Export the current project</emphasis></para>
                    <para>The first thing to do is export all the credential-related settings from nova: </para>
<literallayout class="monospaced">nova-manage shell export --filename=nova_export.txt</literallayout>
                    <para>The created file contains keystone commands (run via the keystone-manage tool); you can then simply import the settings with a loop:</para>
<literallayout class="monospaced">while read line; do $line; done &lt; nova_export.txt</literallayout>
</listitem>
<listitem>
<para><emphasis role="bold">Enable the pipelines</emphasis></para>
                    <para>
                        Pipelines act as "communication links" between components. In our case, we need to enable the pipelines from all the components to Keystone.
                    </para>
<itemizedlist>
<listitem>
<para>
<emphasis>Glance Pipeline</emphasis>
<literallayout class="monospaced">glance-api.conf</literallayout>
<programlisting>
[pipeline:glance-api]
pipeline = versionnegotiation authtoken context apiv1app
# To enable Image Cache Management API replace pipeline with below:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
pipeline = versionnegotiation authtoken context imagecache apiv1app
[...]
# Keystone
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
</programlisting>
<literallayout class="monospaced">glance-registry.conf</literallayout>
<programlisting>
[pipeline:glance-registry]
# pipeline = context registryapp
# NOTE: use the following pipeline for keystone
pipeline = authtoken keystone_shim context registryapp
[...]
# Keystone
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
[filter:keystone_shim]
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory
</programlisting>
</para>
</listitem>
<listitem>
<para>
<emphasis> Nova Pipeline </emphasis>
<literallayout class="monospaced">nova-api.conf</literallayout>
<programlisting>
--keystone_ec2_url=http://$NOVA-API-IP:5000/v2.0/ec2tokens
</programlisting>
</para>
<literallayout class="monospaced">api-paste.ini</literallayout>
<programlisting>
# EC2 API
[pipeline:ec2cloud]
pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = logrequest authenticate cloudrequest authorizer ec2executor
[pipeline:ec2admin]
pipeline = logrequest ec2noauth adminrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = logrequest authenticate adminrequest authorizer ec2executor
# OSAPI
[pipeline:openstackapi10]
pipeline = faultwrap noauth ratelimit osapiapp10
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit osapiapp10
[pipeline:openstackapi11]
pipeline = faultwrap noauth ratelimit extensions osapiapp11
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit extensions osapiapp11
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666
</programlisting>
</listitem>
<listitem>
<para>
<emphasis>euca2ools</emphasis>
<literallayout class="monospaced">.bashrc</literallayout>
<programlisting>
# Euca2ools
[...]
export EC2_URL="http://$KEYSTONE-IP:5000/services/Cloud"
[...]
</programlisting>
</para>
</listitem>
<listitem>
<para>
<emphasis>novaclient</emphasis>
<literallayout class="monospaced">novaclient</literallayout>
<programlisting>
# Novaclient
[...]
export NOVA_URL=http://$KEYSTONE-IP:5000/v2.0/
export NOVA_REGION_NAME="$REGION"
[...]
</programlisting>
</para>
</listitem>
</itemizedlist>
</listitem>
</orderedlist>
</simplesect>
</section>
</section>
</chapter>