Installing OpenStack Compute The OpenStack system has several key projects that are separate installations but can work together depending on your cloud needs: OpenStack Compute, OpenStack Object Storage, and OpenStack Image Service. You can install any of these projects separately and then configure them either as standalone or connected entities.
Example Installation Architectures

OpenStack Compute uses a shared-nothing, messaging-based architecture. While very flexible, the fact that you can install each nova- service on an independent server means there are many possible installation architectures:

Single node: One server runs all nova- services and also drives all the virtual instances. Use this configuration only for trying out OpenStack Compute or for development purposes.

Two nodes: A cloud controller node runs all nova- services except nova-compute, and a compute node runs nova-compute. A client computer is likely needed to bundle images and interface with the servers, but a client is not required. Use this configuration for proof-of-concept or development environments.

Multiple nodes: You can add compute nodes to the two-node installation by installing nova-compute on an additional server and copying a nova.conf file to the added node. You can also add a volume controller and a network controller as additional nodes in a more complex multiple-node installation. A minimum of four nodes is best for running multiple virtual instances that require a lot of processing power.

This is an illustration of one possible multiple-server installation of OpenStack Compute; virtual server networking in the cluster may vary. An alternative architecture would be to add more messaging servers if you notice a lot of backup in the messaging queue causing performance problems; in that case you would add an additional RabbitMQ server in addition to or instead of scaling up the database server. Your installation can run any nova- service on any server as long as nova.conf is configured to point to the RabbitMQ server and the server can send messages to it. Multiple installation architectures are possible; here is another example illustration.
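As a minimal sketch of what that shared configuration looks like, every node's nova.conf typically carries the same two flags so that all services reach the same messaging and database servers. The addresses below are placeholders, not values from this guide:

--rabbit_host=192.168.1.10
--sql_connection=mysql://nova:password@192.168.1.10/nova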
Service Architecture Because Compute has multiple services and many configurations are possible, here is a diagram showing the overall service architecture and communication systems between the services.
Installing on Ubuntu

How you go about installing OpenStack Compute depends on your goals for the installation. You can use an ISO image, a scripted installation, or a manual step-by-step installation.
ISO Distribution Installation

You can download and use an ISO image that is based on an Ubuntu Linux Server 10.04 LTS distribution containing only the components needed to run OpenStack Compute. See http://sourceforge.net/projects/stackops/files/ for download files, license information, and a README file. For documentation on the StackOps distro, see http://docs.stackops.org. For free support, go to http://getsatisfaction.com/stackops.
Scripted Installation

You can download a script for a standalone install for proof-of-concept, learning, or development purposes for Ubuntu 11.04 at https://devstack.org.

Install Ubuntu 11.04 (Natty): In order to correctly install all the dependencies, we assume a specific version of Ubuntu to make it as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). We recommend using a minimal install of Ubuntu Server in a VM if this is your first time.

Download DevStack:

$ git clone git://github.com/openstack-dev/devstack.git

The devstack repo contains a script that installs OpenStack Compute, the Image Service, and the Identity Service, and offers templates for configuration files plus data scripts.

Start the install:

$ cd devstack; ./stack.sh

It takes a few minutes; we recommend reading the well-documented script while it runs to learn more about what is going on.
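DevStack reads its settings from a localrc file in the devstack directory if one exists. The variable names below are an illustrative sketch of common DevStack settings of that era, and the values are placeholders you should replace with your own:

ADMIN_PASSWORD=nova
MYSQL_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=tokentoken

Without a localrc, stack.sh prompts for these values interactively.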
Manual Installation on Ubuntu The manual installation involves installing from packages on Ubuntu 11.10 or 11.04 as a user with root (or sudo) permission. The OpenStack Starter Guide provides instructions for a manual installation using the packages shipped with Ubuntu 11.10. The OpenStack Install and Deploy Manual provides instructions for installing using packages provided by OpenStack community members. Refer to those manuals for detailed instructions by going to http://docs.openstack.org and clicking the links next to the manual title.
Installing on Fedora or Red Hat Enterprise Linux 6

The Fedora project provides OpenStack packages in Fedora 16 and later. Fedora also provides packages for RHEL 6 via the EPEL (Extra Packages for Enterprise Linux) 6 repository. If you would like to install OpenStack on RHEL 6, see this page for more information on enabling the use of EPEL: http://fedoraproject.org/wiki/EPEL.

Detailed instructions for installing OpenStack Compute on Fedora or RHEL 6 can be found on the Fedora wiki. See these pages for more information:

Getting Started with OpenStack on Fedora 17: The Essex release is in Fedora 17. This page discusses the installation of Essex on Fedora 17. Once EPEL 6 has been updated to include Essex, these instructions should also be used when installing on RHEL 6. The main difference between the Fedora 17 instructions and what must be done on RHEL 6 is that RHEL 6 does not use systemd, so the systemctl commands will have to be substituted with their RHEL 6 equivalents.

Getting Started with OpenStack Nova: This page was originally written as instructions for getting started with OpenStack on Fedora 16, which includes the Diablo release. While EPEL 6 still includes Diablo, these are the instructions to use when installing on RHEL 6.
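As a rough sketch of the package-based path once the repositories above are enabled — the package and service names here are assumptions based on Fedora's OpenStack packaging of the time, so defer to the wiki pages for the authoritative steps:

$ sudo yum install openstack-nova openstack-glance
$ sudo systemctl start openstack-nova-compute.service    # Fedora (systemd)
$ sudo service openstack-nova-compute start              # RHEL 6 equivalent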
Post-Installation Configuration for OpenStack Compute Configuring your Compute installation involves nova-manage commands plus editing the nova.conf file to ensure the correct flags are set. This section contains the basics for a simple multi-node installation, but Compute can be configured many ways. You can find networking options and hypervisor options described in separate chapters, and you will read about additional configuration information in a separate chapter as well.
Setting Flags in the nova.conf File

The configuration file nova.conf is installed in /etc/nova by default. You only need to do these steps when installing manually; the scripted installation above does this configuration during the installation. A default set of options is already configured in nova.conf when you install manually. The defaults are as follows:

--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova

Starting with the default file, you must define the following required items in /etc/nova/nova.conf. The flag variables are described below. You can place comments in the nova.conf file by entering a new line with a # sign at the beginning of the line. To see a listing of all possible flag settings, see the output of running /bin/nova-api --help.
Description of nova.conf flags (not comprehensive)
Flag Description
--sql_connection SQLAlchemy connection string; location of the OpenStack Compute SQL database
--s3_host IP address; Location where OpenStack Compute is hosting the objectstore service, which will contain the virtual machine images and buckets
--rabbit_host IP address; Location of RabbitMQ server
--verbose Set to 1 to turn on; Optional but helpful during initial setup
--network_manager Configures how your controller will communicate with additional OpenStack Compute nodes and virtual machines. Options: nova.network.manager.FlatManager (simple, non-VLAN networking); nova.network.manager.FlatDHCPManager (flat networking with DHCP); nova.network.manager.VlanManager (VLAN networking with DHCP; this is the default if no network manager is defined in nova.conf).
--fixed_range IP address/range; Network prefix for the IP network that all the projects for future VM guests reside on. Example: 192.168.0.0/12
--ec2_host IP address; Indicates where the nova-api service is installed.
--ec2_url Url; Indicates the service for EC2 requests.
--osapi_host IP address; Indicates where the nova-api service is installed.
--network_size Number value; Number of addresses in each private subnet.
--glance_api_servers IP and port; Address for Image Service.
--use_deprecated_auth If this flag is present, the Cactus method of authentication is used with the novarc file containing credentials.
Here is a simple example nova.conf file for a small private cloud, with all the cloud controller services, database server, and messaging server on the same server.

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--use_deprecated_auth
--ec2_host=184.106.239.134
--ec2_url=http://184.106.239.134:8773/services/Cloud
--osapi_host=http://184.106.239.134
--s3_host=184.106.239.134
--rabbit_host=184.106.239.134
--fixed_range=192.168.0.0/16
--network_size=8
--glance_api_servers=184.106.239.134:9292
--routing_source_ip=184.106.239.134
--sql_connection=mysql://nova:notnova@184.106.239.134/nova

Create a "nova" group, so you can set permissions on the configuration file:

$ sudo addgroup nova

The nova.conf file should have its owner set to root:nova and its mode set to 0640, since the file contains your MySQL server's username and password. You also want to ensure that the nova user belongs to the nova group.

$ sudo usermod -g nova nova
$ sudo chown -R root:nova /etc/nova
$ sudo chmod 640 /etc/nova/nova.conf
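To confirm the ownership and mode, list the file; this check is a small addition to the guide, and the expected values follow directly from the chown and chmod commands above:

$ ls -l /etc/nova/nova.conf

The listing should show the owner root, the group nova, and the mode -rw-r----- (0640).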
Setting Up OpenStack Compute Environment on the Compute Node

These are the commands you run to ensure the database schema is current and then set up a user and project, if you are using built-in auth with the --use_deprecated_auth flag rather than the Identity Service:

$ nova-manage db sync
$ nova-manage user admin <user_name>
$ nova-manage project create <project_name> <user_name>
$ nova-manage network create <network-label> <project-network> <number-of-networks-in-project> <addresses-in-each-network>

Here is an example of what this looks like with real values entered:

$ nova-manage db sync
$ nova-manage user admin dub
$ nova-manage project create dubproject dub
$ nova-manage network create novanet 192.168.0.0/24 1 256

For this example, the network is a /24 since that falls inside the /16 range that was set in fixed_range in nova.conf. Currently, there can only be one network, and this setup would use the maximum number of IPs available in a /24. You can choose values that let you use any valid amount that you would like.

The nova-manage service assumes that the first IP address is your network (like 192.168.0.0), that the second IP is your gateway (192.168.0.1), and that the broadcast is the very last IP in the range you defined (192.168.0.255). If this is not the case, you will need to manually edit the networks table in the SQL database; a hypothetical example follows below. When you run the nova-manage network create command, entries are made in the networks and fixed_ips tables. However, one of the networks listed in the networks table needs to be marked as bridge in order for the code to know that a bridge exists. The network in the Nova networks table is marked as bridged automatically for Flat Manager.
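If your gateway or broadcast address differs from what nova-manage assumes, a manual edit of the networks table might look like the following sketch. The column names match the networks table shown later in this guide, but the label and addresses are placeholders for your own values:

mysql> USE nova;
mysql> UPDATE networks SET gateway = '192.168.0.254', broadcast = '192.168.0.255' WHERE label = 'novanet';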
Creating Credentials

Generate the credentials as a zip file. These are the certs you will use to launch instances, bundle images, and perform all the other assorted API functions.

# mkdir -p /root/creds
# nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip

If you are using one of the Flat modes for networking, you may see a warning message "No vpn data for project <project_name>", which you can safely ignore.

Unzip them in your home directory, and add them to your environment.

# unzip /root/creds/novacreds.zip -d /root/creds/
# cat /root/creds/novarc >> ~/.bashrc
# source ~/.bashrc

If you already have Nova credentials present in your environment, you can use a script included with the Glance Image Service, tools/nova_to_os_env.sh, to create Glance-style credentials. This script adds OS_AUTH credentials to the environment, which are used by the Image Service to enable private images when the Identity Service is configured as the authentication system for Compute and the Image Service.
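Once the novarc variables are in your environment, a quick way to confirm the credentials work is to ask the API for something harmless; this check is an addition to the guide, and any read-only euca2ools command will do:

# euca-describe-availability-zones verbose

If the credentials and the nova-api endpoint are correct, this lists the nova services and their state.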
Enabling Access to VMs on the Compute Node

One of the most commonly missed configuration areas is not allowing the proper access to VMs. Use the nova secgroup-add-rule (or euca-authorize) command to enable access. Below you will find the commands to allow ping and ssh to your VMs. These commands need to be run as root only if the credentials used to interact with nova-api have been put under /root/.bashrc. If the EC2 credentials have been put into another user's .bashrc file, then it is necessary to run these commands as that user.

$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

Another common issue is that you cannot ping or SSH to your instances after adding these rules. Something to look at is the number of dnsmasq processes that are running. If you have a running instance, check that two dnsmasq processes are running; if not, perform the following:

$ sudo killall dnsmasq
$ sudo service nova-network restart

If you get an "instance not found" message while performing the restart, that means the service was not previously running. You simply need to start it instead of restarting it:

$ sudo service nova-network start
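A quick way to count the dnsmasq processes mentioned above (this one-liner is an illustrative addition, not part of the original guide; the bracketed pattern keeps grep from matching itself):

$ ps aux | grep [d]nsmasq | wc -l

With a running instance you should see 2.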
Configuring Multiple Compute Nodes

If your goal is to split your VM load across more than one server, you can connect an additional nova-compute node to a cloud controller node. This configuration can be reproduced on multiple compute servers to start building a true multi-node OpenStack Compute cluster. To build out and scale the Compute platform, you spread out services amongst many servers. While there are additional ways to accomplish the build-out, this section describes adding compute nodes, and the service we are scaling out is called nova-compute.

For a multi-node install, you only make changes to nova.conf and copy it to additional compute nodes. Ensure each nova.conf file points to the correct IP addresses for the respective services. Customize the nova.conf example below to match your environment. CC_ADDR is the cloud controller IP address.

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@CC_ADDR/nova
--s3_host=CC_ADDR
--rabbit_host=CC_ADDR
--ec2_api=CC_ADDR
--ec2_url=http://CC_ADDR:8773/services/Cloud
--network_manager=nova.network.manager.FlatManager
--fixed_range=<network/CIDR>
--network_size=<number of addresses>

By default, Nova sets the bridge device based on the setting in --flat_network_bridge. Now you can edit /etc/network/interfaces with the following template, updated with your IP information.

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
    bridge_ports    eth0
    bridge_stp      off
    bridge_maxwait  0
    bridge_fd       0
    address         xxx.xxx.xxx.xxx
    netmask         xxx.xxx.xxx.xxx
    network         xxx.xxx.xxx.xxx
    broadcast       xxx.xxx.xxx.xxx
    gateway         xxx.xxx.xxx.xxx
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers xxx.xxx.xxx.xxx

Restart networking:

$ sudo service networking restart

With nova.conf updated and networking set, configuration is nearly complete. First, bounce the relevant services to take the latest updates:

$ sudo service libvirtd restart
$ sudo service nova-compute restart

To avoid issues with KVM and permissions with Nova, run the following commands to ensure your VMs run optimally:

# chgrp kvm /dev/kvm
# chmod g+rwx /dev/kvm

If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs this iptables entry so that UEC images can get metadata info. On compute nodes, configure iptables with this next step:

# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773

Lastly, confirm that your compute node is talking to your cloud controller.
From the cloud controller, run this database query:

$ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'

In return, you should see something similar to this:

+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       |       0 |  1 | osdemo02 | nova-network   | network   |        46064 |        0 | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       |       0 |  2 | osdemo02 | nova-compute   | compute   |        46056 |        0 | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       |       0 |  3 | osdemo02 | nova-scheduler | scheduler |        46065 |        0 | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       |       0 |  4 | osdemo01 | nova-compute   | compute   |        37050 |        0 | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       |       0 |  9 | osdemo04 | nova-compute   | compute   |        28484 |        0 | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       |       0 |  8 | osdemo05 | nova-compute   | compute   |        29284 |        0 | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+

You can see that osdemo0{1,2,4,5} are all running nova-compute. When you start spinning up instances, they will allocate on any node that is running nova-compute from this list.
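If you prefer not to query MySQL directly, nova-manage offers a friendlier view of the same information; this is a small addition to the guide, and the exact output format varies by release:

$ nova-manage service list

Each service is listed with its host and a state column (commonly rendered as :-) for services that have checked in recently and XXX for ones that have stopped reporting).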
Determining the Version of Compute You can find the version of the installation by using the nova-manage command: $ nova-manage version list
Migrating from Cactus to Diablo

If you have an installation already installed and running, it is possible to run a smooth upgrade from Cactus stable (2011.2) to Diablo stable (2011.3) without losing any of your running instances, while keeping the current network, volumes, and images available. In order to update, we will start by updating the Image Service (Glance), then update the Compute Service (Nova). We will finally make sure the client tools (euca2ools and novaclient) are properly integrated. For Nova, Glance, and euca2ools we will use the PPA repositories, while we will use the latest version of novaclient from GitHub, due to important updates. This upgrade guide does not integrate Keystone. If you want to integrate Keystone, please read the section "Installing the Identity Service".

A- Glance upgrade

In order to update Glance, we will start by stopping all running services:

glance-control all stop

Make sure the services are stopped; you can check by running ps:

ps axl | grep glance

If the command doesn't output any Glance process, you can continue; otherwise, simply kill the PIDs. While the Cactus release of Glance uses one glance.conf file (usually located at "/etc/glance/glance.conf"), the Diablo release brings new configuration files. (Look into them, they are pretty self-explanatory.)

Update the repositories

The first thing to do is to update the packages. Update your "/etc/apt/sources.list", or create a "/etc/apt/sources.list.d/openstack_diablo.list" file:

deb http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main
deb-src http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main

If you are running Ubuntu Lucid, point to Lucid; otherwise point to your version (Maverick or Natty). You can now update the repository:

aptitude update
aptitude upgrade

You could encounter the message "The following signatures couldn't be verified because the public key is not available: NO_PUBKEY XXXXXXXXXXXX"; if so, simply run:

gpg --keyserver pgpkeys.mit.edu --recv-key XXXXXXXXXXXX
gpg -a --export XXXXXXXXXXXX | sudo apt-key add -

(Where XXXXXXXXXXXX is the key.) Then re-run the two steps, which should now proceed without error. The package system should offer to upgrade your Glance installation to the Diablo one; accept the upgrade, and you will have successfully performed the package upgrade. In the next step, we will reconfigure the service.

Update Glance configuration files

You now need to update the configuration files. The main file you will need to update is /etc/glance/glance-registry.conf, in which you specify the database backend. If you used a MySQL backend under Cactus, replace the sql_connection with the entry you have in /etc/glance/glance.conf. Here is how the configuration files should look:

glance-api.conf

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Which backend store should Glance use by default if not specified
# in a request to add a new image to Glance? Default: 'file'
# Available choices are 'file', 'swift', and 's3'
default_store = file

# Address to bind the API server
bind_host = 0.0.0.0

# Port to bind the API server to
bind_port = 9292

# Address to find the registry server
registry_host = 0.0.0.0

# Port the registry server is listening on
registry_port = 9191

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/api.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# ============ Notification System Options =====================

# Notifications can be sent when images are created, updated or deleted.
# There are three methods of sending notifications, logging (via the
# log_file directive), rabbit (via a rabbitmq queue) or noop (no
# notifications sent, the default)
notifier_strategy = noop

# Configuration options if sending notifications via rabbitmq (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_topic = glance_notifications

# ============ Filesystem Store Options ========================

# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/

# ============ Swift Store Options =============================

# Address where the Swift authentication service lives
swift_store_auth_address = 127.0.0.1:8080/v1.0/

# User to authenticate against the Swift authentication service
swift_store_user = jdoe

# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = a86850deb2742ec3cb41518e26aa2d89

# Container within the account that the account should use
# for storing images in Swift
swift_store_container = glance

# Do we create the container if it does not exist?
swift_store_create_container_on_put = False

# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size = 5120

# When doing a large object manifest, what size, in MB, should
# Glance write chunks to Swift? This amount of data is written
# to a temporary disk buffer during the process of chunking
# the image file, and the default is 200MB
swift_store_large_object_chunk_size = 200

# Whether to use ServiceNET to communicate with the Swift storage servers.
# (If you aren't RACKSPACE, leave this False!)
#
# To use ServiceNET for authentication, prefix hostname of
# `swift_store_auth_address` with 'snet-'.
# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/
swift_enable_snet = False

# ============ S3 Store Options =============================

# Address where the S3 authentication service lives
s3_store_host = 127.0.0.1:8080/v1.0/

# User to authenticate against the S3 authentication service
s3_store_access_key = <20-char AWS access key>

# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = <40-char AWS secret key>

# Container within the account that the account should use
# for storing images in S3. Note that S3 has a flat namespace,
# so you need a unique bucket name for your glance images. An
# easy way to do this is append your AWS access key to "glance".
# S3 buckets in AWS *must* be lowercased, so remember to lowercase
# your AWS access key if you use it in your bucket name below!
s3_store_bucket = <lowercased 20-char aws access key>glance

# Do we create the bucket if it does not exist?
s3_store_create_bucket_on_put = False

# ============ Image Cache Options ========================

image_cache_enabled = False

# Directory that the Image Cache writes data to
# Make sure this is also set in glance-pruner.conf
image_cache_datadir = /var/lib/glance/image-cache/

# Number of seconds after which we should consider an incomplete image to be
# stalled and eligible for reaping
image_cache_stall_timeout = 86400

# ============ Delayed Delete Options =============================

# Turn on/off delayed delete
delayed_delete = False

[pipeline:glance-api]
pipeline = versionnegotiation context apiv1app
# NOTE: use the following pipeline for keystone
# pipeline = versionnegotiation authtoken context apiv1app
# To enable Image Cache Management API replace pipeline with below:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
# pipeline = versionnegotiation authtoken context imagecache apiv1app

[pipeline:versions]
pipeline = versionsapp

[app:versionsapp]
paste.app_factory = glance.api.versions:app_factory

[app:apiv1app]
paste.app_factory = glance.api.v1:app_factory

[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:filter_factory

[filter:imagecache]
paste.filter_factory = glance.api.middleware.image_cache:filter_factory

[filter:context]
paste.filter_factory = glance.common.context:filter_factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666

glance-registry.conf

[DEFAULT]
# Show more verbose log output (sets INFO log level output)
verbose = True

# Show debugging output in logs (sets DEBUG log level output)
debug = False

# Address to bind the registry server
bind_host = 0.0.0.0

# Port to bind the registry server to
bind_port = 9191

# Log to this file. Make sure you do not set the same log
# file for both the API and registry servers!
log_file = /var/log/glance/registry.log

# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
use_syslog = False

# SQLAlchemy connection string for the reference implementation
# registry server. Any valid SQLAlchemy connection string is fine.
# See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine
#sql_connection = sqlite:////var/lib/glance/glance.sqlite
sql_connection = mysql://glance_user:glance_pass@glance_host/glance

# Period in seconds after which SQLAlchemy should reestablish its connection
# to the database.
#
# MySQL uses a default `wait_timeout` of 8 hours, after which it will drop
# idle connections. This can result in 'MySQL Gone Away' exceptions. If you
# notice this, you can lower this value to ensure that SQLAlchemy reconnects
# before MySQL can drop the connection.
sql_idle_timeout = 3600

# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000

# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25

[pipeline:glance-registry]
pipeline = context registryapp
# NOTE: use the following pipeline for keystone
# pipeline = authtoken keystone_shim context registryapp

[app:registryapp]
paste.app_factory = glance.registry.server:app_factory

[filter:context]
context_class = glance.registry.context.RequestContext
paste.filter_factory = glance.common.context:filter_factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666

[filter:keystone_shim]
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory

Fire up Glance

You should now be able to start Glance (glance-control runs both the glance-api and glance-registry services):

glance-control all start

You can now make sure the new version of Glance is running:

ps axl | grep glance

But also make sure you are running the Diablo version:

glance --version

which should output:

glance 2011.3

If you do not see the two processes running, an error occurred somewhere. You can check for errors by running:

glance-api /etc/glance/glance-api.conf

and:

glance-registry /etc/glance/glance-registry.conf

You are now ready to upgrade the database schema.

Update Glance database

Before running any upgrade, make sure you back up the database. If you have a MySQL backend:

mysqldump -u $glance_user -p$glance_password glance > glance_backup.sql

If you use the default backend, SQLite, simply copy the database's file. You are now ready to update the database schema. In order to update the Glance service, just run:

glance-manage db_sync

Validation test

In order to make sure Glance has been properly updated, simply run:

glance index

which should display your registered images:

ID               Name                           Disk Format          Container Format     Size
---------------- ------------------------------ -------------------- -------------------- --------------
94               Debian 6.0.3 amd64             raw                  bare                 1067778048

B- Nova upgrade

In order to successfully go through the upgrade process, it is advised to follow the exact order of the steps below. By doing so, you make sure you don't miss any mandatory step.

Update the repositories

Update your "/etc/apt/sources.list", or create a "/etc/apt/sources.list.d/openstack_diablo.list" file:

deb http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main
deb-src http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main

If you are running Ubuntu Lucid, point to Lucid; otherwise point to your version (Maverick or Natty). You can now update the repository (do not upgrade the packages at the moment):

aptitude update

Stop all nova services

Stopping all nova services will make our instances unreachable (for instance, stopping the nova-network service flushes all the routing rules), but they will be neither terminated nor deleted.
We first stop the nova services:

cd /etc/init.d && for i in $(ls nova-*); do service $i stop; done

We stop rabbitmq, used by nova-scheduler:

service rabbitmq-server stop

We finally kill dnsmasq, used by nova-network:

killall dnsmasq

You can make sure no services used by nova are still running via:

ps axl | grep nova

which should not output any service; if it does, simply kill the PIDs.

MySQL pre-requisites

Before running the upgrade, make sure the following tables don't already exist (they could, if you ran tests, or ran an upgrade by mistake):

block_device_mapping
snapshots
provider_fw_rules
instance_type_extra_specs
virtual_interfaces
volume_types
volume_type_extra_specs
volume_metadata
virtual_storage_arrays

If so, you can safely remove them, since they are not used at all by Cactus (2011.2):

drop table block_device_mapping;
drop table snapshots;
drop table provider_fw_rules;
drop table instance_type_extra_specs;
drop table virtual_interfaces;
drop table volume_types;
drop table volume_type_extra_specs;
drop table volume_metadata;
drop table virtual_storage_arrays;

Upgrade nova packages

You can now perform the upgrade:

aptitude upgrade

During the upgrade process, you will see:

Configuration file '/etc/nova/nova.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** /etc/nova/nova.conf (Y/I/N/O/D/Z) [default=N] ?

Type "N" or accept the default in order to keep your current configuration file. We will update it manually in order to use some of the new Diablo settings.

Update the configuration files

Diablo introduces several new files:
api-paste.ini, which contains all api-related settings
nova-compute.conf, a configuration file dedicated to the compute-node settings.
Here are the settings you would add into nova.conf:

--multi_host=T
--api_paste_config=/etc/nova/api-paste.ini

and this one if you plan to integrate Keystone into your environment, with euca2ools:

--keystone_ec2_url=http://$NOVA-API-IP:5000/v2.0/ec2tokens

Here is how the files should look:

nova.conf

--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--flagfile=/etc/nova/nova-compute.conf
--force_dhcp_release=True
--verbose
--daemonize=1
--s3_host=172.16.40.11
--rabbit_host=172.16.40.11
--cc_host=172.16.40.11
--keystone_ec2_url=http://172.16.40.11:5000/v2.0/ec2tokens
--ec2_url=http://172.16.40.11:8773/services/Cloud
--ec2_host=172.16.40.11
--ec2_dmz_host=172.16.40.11
--ec2_port=8773
--fixed_range=192.168.0.0/12
--FAKE_subdomain=ec2
--routing_source_ip=10.0.10.14
--sql_connection=mysql://nova:nova-pass@172.16.40.11/nova
--glance_api_servers=172.16.40.13:9292
--image_service=nova.image.glance.GlanceImageService
--image_decryption_dir=/var/lib/nova/tmp
--network_manager=nova.network.manager.VlanManager
--public_interface=eth0
--vlan_interface=eth0
--iscsi_ip_prefix=172.16.40.12
--vnc_enabled
--multi_host=T
--debug
--api_paste_config=/etc/nova/api-paste.ini

api-paste.ini

#######
# EC2 #
#######

[composite:ec2]
use = egg:Paste#urlmap
/: ec2versions
/services/Cloud: ec2cloud
/services/Admin: ec2admin
/latest: ec2metadata
/2007-01-19: ec2metadata
/2007-03-01: ec2metadata
/2007-08-29: ec2metadata
/2007-10-10: ec2metadata
/2007-12-15: ec2metadata
/2008-02-01: ec2metadata
/2008-09-01: ec2metadata
/2009-04-04: ec2metadata
/1.0: ec2metadata

[pipeline:ec2cloud]
# pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = logrequest authenticate cloudrequest authorizer ec2executor

[pipeline:ec2admin]
# pipeline = logrequest ec2noauth adminrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = logrequest authenticate adminrequest authorizer ec2executor

[pipeline:ec2metadata]
pipeline = logrequest ec2md

[pipeline:ec2versions]
pipeline = logrequest ec2ver

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:authenticate]
paste.filter_factory = nova.api.ec2:Authenticate.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:adminrequest]
controller = nova.api.ec2.admin.AdminController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

[app:ec2ver]
paste.app_factory = nova.api.ec2:Versions.factory

[app:ec2md]
paste.app_factory = nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory

#############
# Openstack #
#############

[composite:osapi]
use = egg:Paste#urlmap
/: osversions
/v1.0: openstackapi10
/v1.1: openstackapi11

[pipeline:openstackapi10]
# pipeline = faultwrap noauth ratelimit osapiapp10
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = faultwrap auth ratelimit osapiapp10

[pipeline:openstackapi11]
# pipeline = faultwrap noauth ratelimit extensions osapiapp11
# NOTE(vish): use the following pipeline for deprecated auth
pipeline = faultwrap auth ratelimit extensions osapiapp11
[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:auth]
paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.limits:RateLimitingMiddleware.factory

[filter:extensions]
paste.filter_factory = nova.api.openstack.extensions:ExtensionMiddleware.factory

[app:osapiapp10]
paste.app_factory = nova.api.openstack:APIRouterV10.factory

[app:osapiapp11]
paste.app_factory = nova.api.openstack:APIRouterV11.factory

[pipeline:osversions]
pipeline = faultwrap osversionapp

[app:osversionapp]
paste.app_factory = nova.api.openstack.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666

Database update

You are now ready to upgrade the database by running:

nova-manage db sync

You will also need to update the field "bridge_interface" for your network in the database, and make sure that field contains the interface used for the bridge (in our case eth1):

created_at: 2011-06-08 07:45:23
updated_at: 2011-06-08 07:46:06
deleted_at: NULL
deleted: 0
id: 9
injected: NULL
cidr: 192.168.2.0/24
netmask: 255.255.255.0
bridge: br100
gateway: 192.168.2.1
broadcast: 192.168.2.255
dns1: NULL
vlan: 100
vpn_public_address: 172.16.40.11
vpn_public_port: 1000
vpn_private_address: 192.168.2.2
dhcp_start: 192.168.2.3
project_id: myproject
host: nova-cc1
cidr_v6: NULL
gateway_v6: NULL
label: NULL
netmask_v6: NULL
bridge_interface: eth1
multi_host: 0
dns2: NULL
uuid: 852fa14

Without the update of that field, nova-network won't start, since it won't be able to create the bridges for each network.

Restart the services

After the database upgrade, the services can be restarted.

Rabbitmq-server:

service rabbitmq-server start

Nova services:

cd /etc/init.d && for i in $(ls nova-*); do service $i start; done

You can check the version you are running:

nova-manage version

which should output: 2011.3

Validation test

The first thing to check is that all the services are running:

ps axl | grep nova

should output all the services running. If some services are missing, check their appropriate log files (e.g. /var/log/nova/nova-api.log).
You would then use nova-manage:

nova-manage service list

If all the services are up, you can now validate the migration by:
Launching a new instance
Terminating a running instance
Attaching a floating IP to an "old" and a "new" instance

C- Client tools upgrade

In this part we will see how to make sure our management tools are correctly integrated into the new environment's version:
euca2ools
novaclient

euca2ools

The euca2ools settings do not change from the client side:

# Euca2ools
export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"

On the server side there are also no changes to make, since we don't use Keystone. Here are some commands you should be able to run:

euca-describe-instances
euca-describe-addresses
euca-terminate-instances $instance_id
euca-create-volume -s 5 -z $zone
euca-attach-volume -i $instance_id -d $device $volume_name
euca-associate-address -i $instance_id $address

If all these commands work flawlessly, the tool is properly integrated.

python-novaclient

This tool requires a recent version in order to use all the services the OSAPI offers (floating-ip support, volumes support, etc.). In order to upgrade it:

git clone https://github.com/rackspace/python-novaclient.git && cd python-novaclient
python setup.py build
python setup.py install

Make sure you have the correct settings in your .bashrc (or any source-able file):

# Python-novaclient
export NOVA_API_KEY="SECRET_KEY"
export NOVA_PROJECT_ID="PROJECT-NAME"
export NOVA_USERNAME="USER"
export NOVA_URL="http://$NOVA-API-IP:8774/v1.1"
export NOVA_VERSION=1.1

Here are some nova commands you should be able to run:

nova list
nova image-show
nova boot --flavor $flavor_id --image $image_id --key_name $key_name $instance_name
nova volume-create --display_name $name $size

Again, if the commands run without any error, the tool is properly integrated.

D- Known issues

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 11: ordinal not in range(128)

This error can be found in nova-network.log. It is due to libvirt, which doesn't know how to deal with encoded characters, and it will happen if the locale of your system differs from "C" or "POSIX". In order to resolve the issue while the system is running, run:

sudo nova -c "export LANG C" && export LANG=C

That changes the locale for the running user and for the nova user. In order to make these changes permanent, you will need to edit the default locale file (/etc/default/locale):

LANG="C"

Note that you will need to reboot the server for the change in the file to take effect, hence the previous command (which directly fixes the locale issue).

E- Why is Keystone not integrated?

Keystone introduces a new identity management: instead of having users in nova's database, they are now fully relegated to Keystone.
While nova deals with "users as IDs" (e.g. the project name is the project id), Keystone makes a distinction between a name and an ID; thus, the integration breaks a running Cactus cloud. Since we were looking for a smooth integration on a running platform, Keystone has not been integrated. If you want to integrate Keystone, here are the steps you would follow:

Export the current project

The first thing to do is export all credentials-related settings from nova:

nova-manage shell export --filename=nova_export.txt

The created file contains keystone commands (via the keystone-manage tool); you can simply import the settings with a loop:

while read line; do $line; done < nova_export.txt

Enable the pipelines

Pipelines are like "communication links" between components. In our case we need to enable pipelines from all the components to Keystone.

Glance Pipeline

glance-api.conf

[pipeline:glance-api]
pipeline = versionnegotiation authtoken context apiv1app
# To enable Image Cache Management API replace pipeline with below:
# pipeline = versionnegotiation context imagecache apiv1app
# NOTE: use the following pipeline for keystone auth (with caching)
pipeline = versionnegotiation authtoken context imagecache apiv1app

[...]

# Keystone
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666

glance-registry.conf

[pipeline:glance-registry]
# pipeline = context registryapp
# NOTE: use the following pipeline for keystone
pipeline = authtoken keystone_shim context registryapp

[...]

# Keystone
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666

[filter:keystone_shim]
paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory

Nova Pipeline

nova.conf

--keystone_ec2_url=http://$KEYSTONE-IP:5000/v2.0/ec2tokens

api-paste.ini

# EC2 API
[pipeline:ec2cloud]
pipeline = logrequest totoken authtoken keystonecontext cloudrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = logrequest authenticate cloudrequest authorizer ec2executor

[pipeline:ec2admin]
pipeline = logrequest totoken authtoken keystonecontext adminrequest authorizer ec2executor
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = logrequest authenticate adminrequest authorizer ec2executor

# OSAPI
[pipeline:openstackapi10]
pipeline = faultwrap authtoken keystonecontext ratelimit osapiapp10
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit osapiapp10

[pipeline:openstackapi11]
pipeline = faultwrap authtoken keystonecontext ratelimit extensions osapiapp11
# NOTE(vish): use the following pipeline for deprecated auth
# pipeline = faultwrap auth ratelimit extensions osapiapp11

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 127.0.0.1
service_port = 5000
auth_host = 127.0.0.1
auth_port = 5001
auth_protocol = http
auth_uri = http://127.0.0.1:5000/
admin_token = 999888777666

euca2ools

.bashrc

# Euca2ools
[...]
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
[...]

novaclient

.bashrc

# Novaclient
[...]
export NOVA_URL=http://$KEYSTONE-IP:5000/v2.0/
export NOVA_REGION_NAME="$REGION"
[...]
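Once the endpoints point at Keystone, a quick sanity check (an addition to this guide) is to re-run a couple of read-only commands with the new settings in your environment:

nova list
euca-describe-instances

If both return without authentication errors, the client tools are talking to Keystone correctly.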