Installing OpenStack Compute The OpenStack system has several key projects that are separate installations but can work together depending on your cloud needs: OpenStack Compute, OpenStack Object Storage, and OpenStack Image Service. You can install any of these projects separately and then configure them either as standalone or connected entities.
System Requirements Hardware: OpenStack components are intended to run on standard hardware. Recommended hardware configurations for a minimum production deployment are as follows for the cloud controller nodes and compute nodes.
Hardware Recommendations
Server: Cloud Controller node (runs network, volume, API, scheduler, and image services)
Recommended hardware: Processor: 64-bit x86; Memory: 12 GB RAM; Disk space: 30 GB (SATA, SAS, or SSD); Volume storage: two disks with 2 TB (SATA) for volumes attached to the compute nodes; Network: one 1 GB Network Interface Card (NIC)
Notes: Two NICs are recommended but not required. A quad-core server with 12 GB RAM would be more than sufficient for a cloud controller node. 32-bit processors will work for the cloud controller node.
Server: Compute nodes (run virtual instances)
Recommended hardware: Processor: 64-bit x86; Memory: 32 GB RAM; Disk space: 30 GB (SATA); Network: two 1 GB NICs
Notes: Note that you cannot run 64-bit VM instances on a 32-bit compute node. A 64-bit compute node can run either 32- or 64-bit VMs, however. With 2 GB RAM you can run one m1.small instance on a node or three m1.tiny instances without memory swapping, so 2 GB RAM would be a minimum for a test-environment compute node. As an example, Rackspace Cloud Builders use 96 GB RAM for compute nodes in OpenStack deployments. Specifically, for virtualization on certain hypervisors on the node or nodes running nova-compute, you need an x86 machine with an AMD processor with SVM extensions (also called AMD-V) or an Intel processor with VT (Virtualization Technology) extensions. For Xen-based hypervisors, the Xen wiki contains a list of compatible processors on the HVM Compatible Processors page. For XenServer-compatible Intel processors, refer to the Intel® Virtualization Technology List. For LXC, the VT extensions are not required.
Operating System: OpenStack currently has packages for the following distributions: Ubuntu, RHEL, SUSE, Debian, and Fedora. These packages are maintained by community members; refer to http://wiki.openstack.org/Packaging for additional links. Networking: 1000 Mbps networking is suggested. For OpenStack Compute, networking is configured on multi-node installations between the physical machines on a single subnet. For networking between virtual machine instances, three network options are available: flat, DHCP, and VLAN. Two NICs (Network Interface Cards) are recommended on the server running nova-network. Database: For OpenStack Compute, you need access to either a PostgreSQL or MySQL database, or you can install one as part of the OpenStack Compute installation process. Permissions: You can install OpenStack Compute either as root or as a user with sudo permissions if you configure the sudoers file to enable all the permissions. Network Time Protocol: You must install a time synchronization program such as NTP to keep your cloud controller and compute nodes talking to the same time server and so avoid problems scheduling VM launches on compute nodes.
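As an example of the time synchronization requirement, one way to install NTP on Ubuntu and confirm it is talking to its time servers (assuming the standard ntp package) is:
sudo apt-get install -y ntp
ntpq -p
The ntpq -p output lists the servers the daemon is polling; adjust /etc/ntp.conf if you want all nodes to point at the same internal time server.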
Example Installation Architectures OpenStack Compute uses a shared-nothing, messaging-based architecture. While very flexible, the fact that you can install each nova- service on an independent server means there are many possible methods for installing OpenStack Compute. The only co-dependency between possible multi-node installations is that the Dashboard must be installed on the nova-api server. Here are the types of installation architectures: Single node: Only one server runs all nova- services and also drives all the virtual instances. Use this configuration only for trying out OpenStack Compute, or for development purposes. Two nodes: A cloud controller node runs the nova- services except for nova-compute, and a compute node runs nova-compute. A client computer is likely needed to bundle images and interface with the servers, but a client is not required. Use this configuration for proof of concepts or development environments. Multiple nodes: You can add more compute nodes to the two node installation by simply installing nova-compute on an additional server and copying a nova.conf file to the added node. This would result in a multiple node installation. You can also add a volume controller and a network controller as additional nodes in a more complex multiple node installation. A minimum of 4 nodes is best for running multiple virtual instances that require a lot of processing power. This is an illustration of one possible multiple server installation of OpenStack Compute; virtual server networking in the cluster may vary. An alternative architecture would be to add more messaging servers if you notice a lot of back up in the messaging queue causing performance problems. In that case you would add an additional RabbitMQ server in addition to or instead of scaling up the database server. Your installation can run any nova- service on any server as long as the nova.conf is configured to point to the RabbitMQ server and the server can send messages to the messaging server. Multiple installation architectures are possible; here is another example illustration.
Service Architecture Because Compute has multiple services and many configurations are possible, here is a diagram showing the overall service architecture and communication systems between the services.
Installing OpenStack Compute on Ubuntu How you go about installing OpenStack Compute depends on your goals for the installation. You can use an ISO image, a scripted installation, or a manual step-by-step installation.
ISO Distribution Installation You can download and use an ISO image that is based on an Ubuntu Linux Server 10.04 LTS distribution containing only the components needed to run OpenStack Compute. See http://sourceforge.net/projects/stackops/files/ for download files and information, license information, and a README file. For documentation on the StackOps distro, see http://docs.stackops.org. For free support, go to http://getsatisfaction.com/stackops.
Scripted Installation You can download a script for a standalone install for proof-of-concept, learning, or development purposes for Ubuntu 11.04 at https://devstack.org. Install Ubuntu 11.04 (Natty): in order to correctly install all the dependencies, we assume a specific version of Ubuntu to make it as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). We recommend using a minimal install of Ubuntu Server in a VM if this is your first time. Download DevStack:
git clone git://github.com/cloudbuilders/devstack.git
The devstack repo contains a script that installs OpenStack Compute, the Image Service, and the Identity Service and offers templates for configuration files plus data scripts. Start the install:
cd devstack; ./stack.sh
It takes a few minutes; we recommend reading the well-documented script while it is building to learn more about what is going on.
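As a hedged example, the DevStack script can read its settings from a localrc file in the devstack directory instead of prompting for them; the variable names below are assumptions drawn from common DevStack usage, so verify them against stack.sh and its comments before relying on them:
# localrc (hypothetical example; confirm variable names against stack.sh)
ADMIN_PASSWORD=secrete
MYSQL_PASSWORD=secrete
RABBIT_PASSWORD=secrete
SERVICE_TOKEN=token
With such a file in place, ./stack.sh should not need to prompt for those passwords.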
Manual Installation The manual installation involves installing from packages on Ubuntu 10.10 or 11.04 as a user with root permission. Depending on your environment, you may need to prefix these commands with sudo. This installation process walks through installing a cloud controller node and a compute node. The cloud controller node contains all the nova- services including the API server and the database server. The compute node needs to run only the nova-compute service. You only need one nova-network service running in a multi-node install. You cannot install nova-objectstore on a different machine from nova-compute (production-style deployments will use a Glance server for virtual images).
Installing the Cloud Controller First, set up pre-requisites to use the Nova PPA (Personal Packages Archive) provided through https://launchpad.net/. The ‘python-software-properties’ package is a pre-requisite for setting up the nova package repository. You can also use the trunk package built daily by adding the ppa:nova-core/trunk repository, but trunk changes rapidly and may not run any given day.
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:openstack-release/2011.3
Run update.
sudo apt-get update
Install the messaging queue server, RabbitMQ.
sudo apt-get install -y rabbitmq-server
Now, install the Python dependencies.
sudo apt-get install -y python-greenlet python-mysqldb
You can use either MySQL or PostgreSQL. Install the required nova- packages, and dependencies are automatically installed.
sudo apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy
sudo apt-get install nova-doc nova-scheduler nova-objectstore
sudo apt-get install nova-network nova-compute
sudo apt-get install glance
Install the supplemental tools such as euca2ools and unzip.
sudo apt-get install -y euca2ools unzip
Next, set up the database, either MySQL or PostgreSQL.
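Optionally, before moving on you can confirm that the messaging server is running; the rabbitmqctl utility ships with the rabbitmq-server package:
sudo rabbitmqctl status
If the command reports a running node, RabbitMQ is ready for the nova services to connect to it.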
Setting up the SQL Database (MySQL) on the Cloud Controller You must use a SQLAlchemy-compatible database, such as MySQL or PostgreSQL. This example shows MySQL. First you can set environment variables with a "pre-seed" line to bypass all the installation prompts, running this as root:
bash
MYSQL_PASS=nova
NOVA_PASS=notnova
cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED
Next, install MySQL with:
sudo apt-get install -y mysql-server
Edit /etc/mysql/my.cnf to change "bind-address" from localhost (127.0.0.1) to any (0.0.0.0) and restart the mysql service:
sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
sudo service mysql restart
To configure the MySQL database, create the nova database:
sudo mysql -u root -p$MYSQL_PASS -e 'CREATE DATABASE nova;'
Update the database to give the user ‘nova’@’%’ full control of the nova database:
sudo mysql -u root -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"
Set the MySQL password for the user "nova"@"%":
sudo mysql -u root -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');"
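As an optional sanity check, you can verify that the nova user can reach the database over the network, replacing <cloud_controller_ip> with your controller's address:
mysql -h <cloud_controller_ip> -u nova -p$NOVA_PASS -e 'SHOW DATABASES;'
The output should list the nova database created above; if the connection is refused, re-check the bind-address change and the GRANT statements.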
Setting Up PostgreSQL as the Database on the Cloud Controller OpenStack can use PostgreSQL as an alternative database. This is a matter of substituting the MySQL steps with PostgreSQL equivalents, as outlined here. First, install PostgreSQL on the controller node.
apt-fast install postgresql postgresql-server-dev-8.4 python-dev python-psycopg2
Edit /etc/postgresql/8.4/main/postgresql.conf and change listen_addresses to listen on all appropriate addresses; PostgreSQL listens only on localhost by default. For example:
To listen on a specific IP address:
# - Connection Settings -
listen_addresses = '10.1.1.200,192.168.100.2'
To listen on all addresses:
# - Connection Settings -
listen_addresses = '*'
Add the appropriate addresses and networks to /etc/postgresql/8.4/main/pg_hba.conf to allow remote access to PostgreSQL; this should include all servers hosting OpenStack (but not necessarily those hosted by OpenStack). As an example, append the following lines:
host all all 192.168.0.0/16 md5
host all all 10.1.0.0/16 md5
Change the default PostgreSQL user's password:
sudo -u postgres psql template1
template1=#\password
Enter Password:
Enter again:
template1=#\q
Restart PostgreSQL:
service postgresql restart
Create the nova databases:
sudo -u postgres createdb nova
sudo -u postgres createdb glance
Create the nova database user which will be used for all OpenStack services; note that the adduser and createuser steps will prompt for the user's password ($PG_PASS):
adduser nova
sudo -u postgres createuser -PSDR nova
sudo -u postgres psql template1
template1=#GRANT ALL PRIVILEGES ON DATABASE nova TO nova;
template1=#GRANT ALL PRIVILEGES ON DATABASE glance TO nova;
template1=#\q
For the Cactus version of Nova, the following fix is required for the PostgreSQL database schema. You don't need to do this for Diablo:
sudo -u postgres psql template1
template1=#alter table instances alter instance_type_id type integer using cast(instance_type_id as integer);
template1=#\q
For Nova components that require access to this database, the required configuration in /etc/nova/nova.conf should be (replace $PG_PASS with the password):
--sql_connection=postgresql://nova:$PG_PASS@control.example.com/nova
At this stage the databases are empty. They will be initialised when you run the nova-manage db sync command.
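Similarly, an optional check for the PostgreSQL setup is to connect as the nova user from another OpenStack host, replacing the host name with your controller's address:
psql -h control.example.com -U nova -d nova -c 'SELECT 1;'
You will be prompted for $PG_PASS; a failure here usually points back to listen_addresses or pg_hba.conf.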
Installing the Compute Node There are many different ways to perform a multinode install of Compute. In this case, you can install all the nova- packages and dependencies as you did for the Cloud Controller node, or just install nova-network and nova-compute. Your installation can run any nova- services anywhere, so long as the service can access nova.conf so it knows where the rabbitmq server is installed. The Compute Node is where you configure the Compute network, the networking between your instances. There are three options: flat, flatDHCP, and VLAN. Read more about specific configurations in the Networking chapter. Because you may need to query the database from the Compute node and learn more information about instances, euca2ools and mysql-client packages should be installed on any additional Compute nodes.
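For example, a minimal compute-only node along the lines described above could be installed with the package names used earlier in this chapter (adjust to your environment):
sudo apt-get install -y nova-compute nova-network
sudo apt-get install -y euca2ools mysql-client
You would then copy the controller's nova.conf to this node, as described in the multi-node configuration section later in this chapter.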
Restart All Relevant Services on the Compute Node On both nodes, restart all six services to cover the entire spectrum:
restart libvirt-bin; restart nova-network; restart nova-compute; restart nova-api; restart nova-objectstore; restart nova-scheduler
All nova services are now installed; the remaining steps involve specific configuration. Please refer to Configuring Compute for additional information.
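If a service fails to come back up, the log files are the first place to look; with the default flags the logs live under /var/log/nova, so a quick check is:
sudo tail /var/log/nova/nova-compute.log
Repeat for the other services (nova-network.log, nova-api.log, and so on) as needed.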
Installing OpenStack Compute on Red Hat Enterprise Linux 6 This section documents a multi-node installation using RHEL 6. RPM repos for the Bexar release, the Cactus release, milestone releases of Diablo, and also per-commit trunk builds of OpenStack Nova are available at http://yum.griddynamics.net. The final release of Diablo is available at http://yum.griddynamics.net/yum/diablo/, but is not yet tested completely (as of Oct 4, 2011). Check this page for updates: http://wiki.openstack.org/NovaInstall/RHEL6Notes. Known considerations for RHEL version 6 installations: iSCSI LUN not supported due to tgtadm versus ietadm differences; GuestFS is used for file injection; file injection works with libvirt; static network configuration can detect OS type for RHEL and Ubuntu; only the KVM hypervisor has been tested with this installation. To install Nova on RHEL 6 you need access to two repositories, one available on the yum.griddynamics.net website and the RHEL DVD image connected as a repo. First, install RHEL 6.0, preferably with a minimal set of packages. Disable SELinux in /etc/sysconfig/selinux and then reboot. Connect the RHEL 6.0 x86_64 DVD as a repository in YUM.
sudo mount /dev/cdrom /mnt/cdrom
cat /etc/yum.repos.d/rhel.repo
[rhel]
name=RHEL 6.0
baseurl=file:///mnt/cdrom/Server
enabled=1
gpgcheck=0
Download and install the repo config and key.
wget http://yum.griddynamics.net/yum/diablo/openstack-repo-2011.3-0.3.noarch.rpm
sudo rpm -i openstack-repo-2011.3-0.3.noarch.rpm
Install the libvirt package (these instructions are tested only on KVM).
sudo yum install libvirt
sudo chkconfig libvirtd on
sudo service libvirtd start
Repeat the basic installation steps to put the pre-requisites on all cloud controller and compute nodes. Nova has many different possible configurations. You can install Nova services on separate servers as needed, but these are the basic pre-reqs. These are the basic packages to install for a cloud controller node:
sudo yum install euca2ools openstack-nova-node-full
These are the basic packages to install on compute nodes. Repeat for each compute node (the node that runs the VMs) that you want to install.
sudo yum install openstack-nova-compute
On the cloud controller node, create a MySQL database named nova.
sudo service mysqld start
sudo chkconfig mysqld on
sudo service rabbitmq-server start
sudo chkconfig rabbitmq-server on
mysqladmin -u root password nova
You can use this script to create the database.
#!/bin/bash
DB_NAME=nova
DB_USER=nova
DB_PASS=nova
PWD=nova
CC_HOST="A.B.C.D" # IPv4 address
HOSTS='node1 node2 node3' # compute nodes list
mysqladmin -uroot -p$PWD -f drop nova
mysqladmin -uroot -p$PWD create nova
for h in $HOSTS localhost; do
echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO '$DB_USER'@'$h' IDENTIFIED BY '$DB_PASS';" | mysql -u root -p$DB_PASS mysql
done
echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO $DB_USER IDENTIFIED BY '$DB_PASS';" | mysql -u root -p$DB_PASS mysql
echo "GRANT ALL PRIVILEGES ON $DB_NAME.* TO root IDENTIFIED BY '$DB_PASS';" | mysql -u root -p$DB_PASS mysql
Now, ensure the database version matches the version of nova that you are installing:
nova-manage db sync
For iptables configuration, update your firewall configuration to allow incoming requests on ports 5672 (RabbitMQ), 3306 (MySQL DB), 9292 (Glance), 6080 (noVNC web console), the API ports (8773, 8774), and DHCP traffic from instances. For non-production environments, the easiest way to fix any firewall problems is to remove the final REJECT in the INPUT chain of the filter table.
sudo iptables -I INPUT 1 -p tcp --dport 5672 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 3306 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 9292 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 6080 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 8773 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 8774 -j ACCEPT
sudo iptables -I INPUT 1 -p udp --dport 67 -j ACCEPT
On every node where you have nova-compute running, ensure that unencrypted VNC access is allowed only from the Cloud Controller node:
sudo iptables -I INPUT 1 -p tcp -s <CLOUD_CONTROLLER_IP_ADDRESS> --dport 5900:6400 -j ACCEPT
On each node, set up the configuration file in /etc/nova/nova.conf. Start the Nova services after configuring them, and you are then running an OpenStack cloud!
for n in api compute network objectstore scheduler vncproxy; do sudo service openstack-nova-$n start; done
sudo service openstack-glance-api start
sudo service openstack-glance-registry start
for n in node1 node2 node3; do ssh $n sudo service openstack-nova-compute start; done
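Note that rules added with iptables -I are not persistent across a reboot. On RHEL 6 with the stock iptables service (an assumption about your firewall setup; adjust if you manage rules differently), one way to save the running rule set is:
sudo service iptables save
This writes the current rules to /etc/sysconfig/iptables so they are restored at boot.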
Post-Installation Configuration for OpenStack Compute Configuring your Compute installation involves nova-manage commands plus editing the nova.conf file to ensure the correct flags are set. This section contains the basics for a simple multi-node installation, but Compute can be configured many ways. You can find networking options and hypervisor options described in separate chapters, and you will read about additional configuration information in a separate chapter as well.
Setting Flags in the nova.conf File The configuration file nova.conf is installed in /etc/nova by default. You only need to do these steps when installing manually; the scripted installation above does this configuration during the installation. A default set of options is already configured in nova.conf when you install manually. The defaults are as follows:
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
Starting with the default file, you must define the following required items in /etc/nova/nova.conf. The flag variables are described below. You can place comments in the nova.conf file by entering a new line with a # sign at the beginning of the line. To see a listing of all possible flag settings, see the output of running /bin/nova-api --help.
Description of nova.conf flags (not comprehensive)
Flag Description
--sql_connection SQL Alchemy connect string (reference); Location of OpenStack Compute SQL database
--s3_host IP address; Location where OpenStack Compute is hosting the objectstore service, which will contain the virtual machine images and buckets
--rabbit_host IP address; Location of RabbitMQ server
--verbose Set to 1 to turn on; Optional but helpful during initial setup
--network_manager Configures how your controller will communicate with additional OpenStack Compute nodes and virtual machines. Options: nova.network.manager.FlatManager (simple, non-VLAN networking); nova.network.manager.FlatDHCPManager (flat networking with DHCP); nova.network.manager.VlanManager (VLAN networking with DHCP; this is the default if no network manager is defined in nova.conf).
--fixed_range IP address/range; Network prefix for the IP network that all the projects for future VM guests reside on. Example: 192.168.0.0/12
--ec2_host IP address; Indicates where the nova-api service is installed.
--osapi_host IP address; Indicates where the nova-api service is installed.
--network_size Number value; Number of addresses in each private subnet.
--glance_api_servers IP and port; Address for Image Service.
--use_deprecated_auth If this flag is present, the Cactus method of authentication is used with the novarc file containing credentials.
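To illustrate how these flags fit together, here is a minimal sketch of the networking-related portion of a nova.conf using the FlatDHCP manager; the addresses are placeholders, and --flat_network_bridge is the bridge name used in the multi-node example later in this chapter:
--network_manager=nova.network.manager.FlatDHCPManager
--fixed_range=192.168.0.0/16
--network_size=256
--flat_network_bridge=br100
If you omit --network_manager entirely, the VlanManager default described above applies instead.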
Here is a simple example nova.conf file for a small private cloud, with all the cloud controller services, database server, and messaging server on the same server.
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--use_deprecated_auth
--ec2_host=184.106.239.134
--osapi_host=184.106.239.134
--s3_host=184.106.239.134
--rabbit_host=184.106.239.134
--fixed_range=192.168.0.0/16
--network_size=8
--glance_api_servers=184.106.239.134:9292
--routing_source_ip=184.106.239.134
--sql_connection=mysql://nova:notnova@184.106.239.134/nova
Create a “nova” group, so you can set permissions on the configuration file:
sudo addgroup nova
The nova.conf file should have its owner set to root:nova and its mode set to 0640, since the file contains your MySQL server’s username and password. You also want to ensure that the nova user belongs to the nova group.
sudo usermod -g nova nova
chown -R root:nova /etc/nova
chmod 640 /etc/nova/nova.conf
Setting Up OpenStack Compute Environment on the Compute Node These are the commands you run to ensure the database schema is current and then set up a user and project, if you are using built-in auth with the --use_deprecated_auth flag rather than the Identity Service:
nova-manage db sync
nova-manage user admin <user_name>
nova-manage project create <project_name> <user_name>
nova-manage network create <network-label> <project-network> <number-of-networks-in-project> <addresses-in-each-network>
Here is an example of what this looks like with real values entered:
nova-manage db sync
nova-manage user admin dub
nova-manage project create dubproject dub
nova-manage network create novanet 192.168.0.0/24 1 256
For this example, the network is a /24 since that falls inside the /16 range that was set in ‘fixed_range’ in nova.conf. Currently, there can only be one network, and this setup would use the maximum IPs available in a /24. You can choose values that let you use any valid amount that you would like. The nova-manage service assumes that the first IP address is your network (like 192.168.0.0), that the 2nd IP is your gateway (192.168.0.1), and that the broadcast is the very last IP in the range you defined (192.168.0.255). If this is not the case, you will need to manually edit the ‘networks’ table in the SQL database. When you run the nova-manage network create command, entries are made in the ‘networks’ and ‘fixed_ips’ tables. However, one of the networks listed in the ‘networks’ table needs to be marked as bridge in order for the code to know that a bridge exists. The network in the Nova networks table is marked as bridged automatically for Flat Manager.
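If you do need to inspect or adjust the generated network, a query along these lines shows the relevant rows; the column names are taken from the Diablo-era schema and may differ in other releases, so treat this as a sketch:
mysql -u root -p$MYSQL_PASS nova -e 'SELECT id, label, cidr, gateway, broadcast, bridge FROM networks;'
Compare the gateway and broadcast values against your actual network layout before editing anything.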
Creating Credentials Generate the credentials as a zip file. These are the certs you will use to launch instances, bundle images, and perform all the other assorted API functions.
mkdir -p /root/creds
/usr/bin/python /usr/bin/nova-manage project zipfile $NOVA_PROJECT $NOVA_PROJECT_USER /root/creds/novacreds.zip
If you are using one of the Flat modes for networking, you may see a warning message "No vpn data for project <project_name>", which you can safely ignore. Unzip them in your home directory, and add them to your environment.
unzip /root/creds/novacreds.zip -d /root/creds/
cat /root/creds/novarc >> ~/.bashrc
source ~/.bashrc
If you already have Nova credentials present in your environment, you can use a script included with the Image Service (Glance), tools/nova_to_os_env.sh, to create Glance-style credentials. This script adds OS_AUTH credentials to the environment, which are used by the Image Service to enable private images when the Identity Service is configured as the authentication system for Compute and the Image Service.
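With the novarc sourced into your environment, a quick way to confirm the credentials work is to run one of the EC2 API commands used later in this guide, for example:
euca-describe-instances
An empty list (rather than an authentication error) means the credentials and the nova-api endpoint are wired up correctly.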
Enabling Access to VMs on the Compute Node One of the most commonly missed configuration areas is not allowing the proper access to VMs. Use the ‘euca-authorize’ command to enable access. Below are the commands to allow ‘ping’ and ‘ssh’ to your VMs:
euca-authorize -P icmp -t -1:-1 default
euca-authorize -P tcp -p 22 default
Another common issue is being unable to ping or SSH to your instances after issuing the ‘euca-authorize’ commands. Something to look at is the number of ‘dnsmasq’ processes that are running. If you have a running instance, check to see that TWO ‘dnsmasq’ processes are running. If not, perform the following:
killall dnsmasq
service nova-network restart
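To check the dnsmasq processes, a simple process listing in the same style used elsewhere in this guide works:
ps axl | grep dnsmasq
Count the dnsmasq lines in the output (ignoring the grep itself); if you do not see two, restart nova-network as shown above.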
Configuring Multiple Compute Nodes If your goal is to split your VM load across more than one server, you can connect an additional nova-compute node to a cloud controller node. This configuration can be reproduced on multiple compute servers to start building a true multi-node OpenStack Compute cluster. To build out and scale the Compute platform, you spread out services amongst many servers. While there are additional ways to accomplish the build-out, this section describes adding compute nodes, and the service we are scaling out is called 'nova-compute.' For a multi-node install you only make changes to nova.conf and copy it to additional compute nodes. Ensure each nova.conf file points to the correct IP addresses for the respective services. Customize the nova.conf example below to match your environment. CC_ADDR is the Cloud Controller IP address.
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--flat_network_bridge=br100
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--sql_connection=mysql://root:nova@CC_ADDR/nova
--s3_host=CC_ADDR
--rabbit_host=CC_ADDR
--ec2_api=CC_ADDR
--ec2_url=http://CC_ADDR:8773/services/Cloud
--network_manager=nova.network.manager.FlatManager
--fixed_range=network/CIDR
--network_size=number of addresses
By default, Nova sets the bridge device based on the setting in --flat_network_bridge. Now you can edit /etc/network/interfaces with the following template, updated with your IP information.
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto br100
iface br100 inet static
bridge_ports eth0
bridge_stp off
bridge_maxwait 0
bridge_fd 0
address xxx.xxx.xxx.xxx
netmask xxx.xxx.xxx.xxx
network xxx.xxx.xxx.xxx
broadcast xxx.xxx.xxx.xxx
gateway xxx.xxx.xxx.xxx
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers xxx.xxx.xxx.xxx
Restart networking:
/etc/init.d/networking restart
With nova.conf updated and networking set, configuration is nearly complete. First, bounce the relevant services to take the latest updates:
restart libvirt-bin; service nova-compute restart
To avoid issues with KVM and permissions with Nova, run the following commands to ensure the VMs run optimally:
chgrp kvm /dev/kvm
chmod g+rwx /dev/kvm
If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs this iptables entry so that UEC images can get metadata info. On compute nodes, configure iptables with this next step:
# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773
Lastly, confirm that your compute node is talking to your cloud controller.
From the cloud controller, run this database query:
mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'
In return, you should see something similar to this:
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       | 0       | 1  | osdemo02 | nova-network   | network   | 46064        | 0        | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       | 0       | 2  | osdemo02 | nova-compute   | compute   | 46056        | 0        | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       | 0       | 3  | osdemo02 | nova-scheduler | scheduler | 46065        | 0        | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       | 0       | 4  | osdemo01 | nova-compute   | compute   | 37050        | 0        | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       | 0       | 9  | osdemo04 | nova-compute   | compute   | 28484        | 0        | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       | 0       | 8  | osdemo05 | nova-compute   | compute   | 29284        | 0        | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
You can see that 'osdemo0{1,2,4,5}' are all running 'nova-compute.' When you start spinning up instances, they will allocate on any node that is running nova-compute from this list.
Determining the Version of Compute You can find the version of the installation by using the nova-manage command: nova-manage version list
Migrating from Cactus to Diablo If you have an installation already installed and running, it is possible to perform a smooth upgrade from Cactus Stable (2011.2) to Diablo Stable (2011.3) without losing any of your running instances, while also keeping the current network, volumes, and images available. In order to update, we will start by updating the Image Service (Glance), then update the Compute Service (Nova). We will finally make sure the client tools (euca2ools and novaclient) are properly integrated. For Nova, Glance and euca2ools, we will use the PPA repositories, while we will use the latest version of novaclient from GitHub, due to important updates. This upgrade guide does not integrate Keystone. If you want to integrate Keystone, please read the section "Installing the Identity Service". A- Glance upgrade In order to update Glance, we will start by stopping all running services:
glance-control all stop
Make sure the services are stopped; you can check by running ps:
ps axl | grep glance
If the command doesn't output any Glance process, you can continue; otherwise, simply kill the PIDs. While the Cactus release of Glance uses one glance.conf file (usually located at "/etc/glance/glance.conf"), the Diablo release brings new configuration files. (Look into them, they are pretty self-explanatory.) Update the repositories The first thing to do is to update the packages. Update your "/etc/apt/sources.list", or create a "/etc/apt/sources.list.d/openstack_diablo.list" file:
deb http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main
deb-src http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main
If you are running Ubuntu Lucid, point to Lucid, otherwise to another version (Maverick, or Natty). You can now update the repository:
aptitude update
aptitude upgrade
You could encounter the message "The following signatures couldn't be verified because the public key is not available: NO_PUBKEY XXXXXXXXXXXX"; simply run:
gpg --keyserver pgpkeys.mit.edu --recv-key XXXXXXXXXXXX
gpg -a --export XXXXXXXXXXXX | sudo apt-key add -
(Where XXXXXXXXXXXX is the key.) Then re-run the two steps, which should now proceed without error. The package system should propose upgrading your Glance installation to the Diablo one; accept the upgrade, and you will have successfully performed the package upgrade. In the next step, we will reconfigure the service. Update Glance configuration files You now need to update the configuration files. The main file you will need to update is /etc/glance/glance-registry.conf. In that one you will specify the database backend. If you used a MySQL backend under Cactus, replace the sql_connection with the entry you have in /etc/glance/glance.conf. Here is how the configuration files should look: glance-api.conf [DEFAULT] # Show more verbose log output (sets INFO log level output) verbose = True # Show debugging output in logs (sets DEBUG log level output) debug = False # Which backend store should Glance use by default is not specified # in a request to add a new image to Glance? Default: 'file' # Available choices are 'file', 'swift', and 's3' default_store = file # Address to bind the API server bind_host = 0.0.0.0 # Port the bind the API server to bind_port = 9292 # Address to find the registry server registry_host = 0.0.0.0 # Port the registry server is listening on registry_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers!
log_file = /var/log/glance/api.log # Send logs to syslog (/dev/log) instead of to file specified by `log_file` use_syslog = False # ============ Notification System Options ===================== # Notifications can be sent when images are create, updated or deleted. # There are three methods of sending notifications, logging (via the # log_file directive), rabbit (via a rabbitmq queue) or noop (no # notifications sent, the default) notifier_strategy = noop # Configuration options if sending notifications via rabbitmq (these are # the defaults) rabbit_host = localhost rabbit_port = 5672 rabbit_use_ssl = false rabbit_userid = guest rabbit_password = guest rabbit_virtual_host = / rabbit_notification_topic = glance_notifications # ============ Filesystem Store Options ======================== # Directory that the Filesystem backend store # writes image data to filesystem_store_datadir = /var/lib/glance/images/ # ============ Swift Store Options ============================= # Address where the Swift authentication service lives swift_store_auth_address = 127.0.0.1:8080/v1.0/ # User to authenticate against the Swift authentication service swift_store_user = jdoe # Auth key for the user authenticating against the # Swift authentication service swift_store_key = a86850deb2742ec3cb41518e26aa2d89 # Container within the account that the account should use # for storing images in Swift swift_store_container = glance # Do we create the container if it does not exist? swift_store_create_container_on_put = False # What size, in MB, should Glance start chunking image files # and do a large object manifest in Swift? By default, this is # the maximum object size in Swift, which is 5GB swift_store_large_object_size = 5120 # When doing a large object manifest, what size, in MB, should # Glance write chunks to Swift? This amount of data is written # to a temporary disk buffer during the process of chunking # the image file, and the default is 200MB swift_store_large_object_chunk_size = 200 # Whether to use ServiceNET to communicate with the Swift storage servers. # (If you aren't RACKSPACE, leave this False!) # # To use ServiceNET for authentication, prefix hostname of # `swift_store_auth_address` with 'snet-'. # Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/ swift_enable_snet = False # ============ S3 Store Options ============================= # Address where the S3 authentication service lives s3_store_host = 127.0.0.1:8080/v1.0/ # User to authenticate against the S3 authentication service s3_store_access_key = <20-char AWS access key> # Auth key for the user authenticating against the # S3 authentication service s3_store_secret_key = <40-char AWS secret key> # Container within the account that the account should use # for storing images in S3. Note that S3 has a flat namespace, # so you need a unique bucket name for your glance images. An # easy way to do this is append your AWS access key to "glance". # S3 buckets in AWS *must* be lowercased, so remember to lowercase # your AWS access key if you use it in your bucket name below! s3_store_bucket = <lowercased 20-char aws access key>glance # Do we create the bucket if it does not exist? 
s3_store_create_bucket_on_put = False # ============ Image Cache Options ======================== image_cache_enabled = False # Directory that the Image Cache writes data to # Make sure this is also set in glance-pruner.conf image_cache_datadir = /var/lib/glance/image-cache/ # Number of seconds after which we should consider an incomplete image to be # stalled and eligible for reaping image_cache_stall_timeout = 86400 # ============ Delayed Delete Options ============================= # Turn on/off delayed delete delayed_delete = False [pipeline:glance-api] pipeline = versionnegotiation context apiv1app # NOTE: use the following pipeline for keystone # pipeline = versionnegotiation authtoken context apiv1app # To enable Image Cache Management API replace pipeline with below: # pipeline = versionnegotiation context imagecache apiv1app # NOTE: use the following pipeline for keystone auth (with caching) # pipeline = versionnegotiation authtoken context imagecache apiv1app [pipeline:versions] pipeline = versionsapp [app:versionsapp] paste.app_factory = glance.api.versions:app_factory [app:apiv1app] paste.app_factory = glance.api.v1:app_factory [filter:versionnegotiation] paste.filter_factory = glance.api.middleware.version_negotiation:filter_factory [filter:imagecache] paste.filter_factory = glance.api.middleware.image_cache:filter_factory [filter:context] paste.filter_factory = glance.common.context:filter_factory [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 127.0.0.1 service_port = 5000 auth_host = 127.0.0.1 auth_port = 5001 auth_protocol = http auth_uri = http://127.0.0.1:5000/ admin_token = 999888777666 glance-registry.conf [DEFAULT] # Show more verbose log output (sets INFO log level output) verbose = True # Show debugging output in logs (sets DEBUG log level output) debug = False # Address to bind the registry server bind_host = 0.0.0.0 # Port the bind the registry server to bind_port = 9191 # Log to this file. Make sure you do not set the same log # file for both the API and registry servers! log_file = /var/log/glance/registry.log # Send logs to syslog (/dev/log) instead of to file specified by `log_file` use_syslog = False # SQLAlchemy connection string for the reference implementation # registry server. Any valid SQLAlchemy connection string is fine. # See: http://www.sqlalchemy.org/docs/05/reference/sqlalchemy/connections.html#sqlalchemy.create_engine #sql_connection = sqlite:////var/lib/glance/glance.sqlite sql_connection = mysql://glance_user:glance_pass@glance_host/glance # Period in seconds after which SQLAlchemy should reestablish its connection # to the database. # # MySQL uses a default `wait_timeout` of 8 hours, after which it will drop # idle connections. This can result in 'MySQL Gone Away' exceptions. If you # notice this, you can lower this value to ensure that SQLAlchemy reconnects # before MySQL can drop the connection. sql_idle_timeout = 3600 # Limit the api to return `param_limit_max` items in a call to a container. If # a larger `limit` query param is provided, it will be reduced to this value. 
api_limit_max = 1000 # If a `limit` query param is not provided in an api request, it will # default to `limit_param_default` limit_param_default = 25 [pipeline:glance-registry] pipeline = context registryapp # NOTE: use the following pipeline for keystone # pipeline = authtoken keystone_shim context registryapp [app:registryapp] paste.app_factory = glance.registry.server:app_factory [filter:context] context_class = glance.registry.context.RequestContext paste.filter_factory = glance.common.context:filter_factory [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 127.0.0.1 service_port = 5000 auth_host = 127.0.0.1 auth_port = 5001 auth_protocol = http auth_uri = http://127.0.0.1:5000/ admin_token = 999888777666 [filter:keystone_shim] paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory Fire up Glance You should now be able to start Glance (glance-control runs both the glance-api and glance-registry services):
glance-control all start
You can now make sure the new version of Glance is running:
ps axl | grep glance
Also make sure you are running the Diablo version:
glance --version
which should output: glance 2011.3 If you do not see the two processes running, an error occurred somewhere. You can check for errors by running:
glance-api /etc/glance/glance-api.conf
and:
glance-registry /etc/glance/glance-registry.conf
You are now ready to upgrade the database schema. Update Glance database Before running any upgrade, make sure you back up the database. If you have a MySQL backend:
mysqldump -u $glance_user -p$glance_password glance > glance_backup.sql
If you use the default backend, SQLite, simply copy the database's file. You are now ready to update the database schema. In order to update the Glance service, just run:
glance-manage db_sync
Validation test In order to make sure Glance has been properly updated, simply run:
glance index
which should display your registered images:
ID Name Disk Format Container Format Size
---------------- ------------------------------ -------------------- -------------------- --------------
94 Debian 6.0.3 amd64 raw bare 1067778048
B- Nova upgrade In order to successfully go through the upgrade process, it is advised to follow the exact order of the steps. By doing so, you make sure you don't miss any mandatory step. Update the repositories Update your "/etc/apt/sources.list", or create a "/etc/apt/sources.list.d/openstack_diablo.list" file:
deb http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main
deb-src http://ppa.launchpad.net/openstack-release/2011.3/ubuntu maverick main
If you are running Ubuntu Lucid, point to Lucid, otherwise to another version (Maverick, or Natty). You can now update the repository (do not upgrade the packages at the moment):
aptitude update
Stop all nova services Stopping all nova services makes our instances unreachable (for instance, stopping the nova-network service flushes all the routing rules), but they will be neither terminated nor deleted.
We first stop the nova services:
cd /etc/init.d && for i in $(ls nova-*); do service $i stop; done
We stop rabbitmq, used by nova-scheduler:
service rabbitmq-server stop
We finally kill dnsmasq, used by nova-network:
killall dnsmasq
You can make sure no services used by nova are still running via:
ps axl | grep nova
That should not output any service; if it does, simply kill the PIDs. MySQL pre-requisites Before running the upgrade, make sure the following tables don't already exist (they could, if you ran tests or ran an upgrade by mistake): block_device_mapping, snapshots, provider_fw_rules, instance_type_extra_specs, virtual_interfaces, volume_types, volume_type_extra_specs, volume_metadata, virtual_storage_arrays. If they do, you can safely remove them, since they are not used at all by Cactus (2011.2):
drop table block_device_mapping;
drop table snapshots;
drop table provider_fw_rules;
drop table instance_type_extra_specs;
drop table virtual_interfaces;
drop table volume_types;
drop table volume_type_extra_specs;
drop table volume_metadata;
drop table virtual_storage_arrays;
Upgrade nova packages You can now perform the upgrade:
aptitude upgrade
During the upgrade process, you will see:
Configuration file '/etc/nova/nova.conf'
==> Modified (by you or by a script) since installation.
==> Package distributor has shipped an updated version.
What would you like to do about it ? Your options are:
Y or I : install the package maintainer's version
N or O : keep your currently-installed version
D : show the differences between the versions
Z : start a shell to examine the situation
The default action is to keep your current version.
*** /etc/nova/nova.conf (Y/I/N/O/D/Z) [default=N] ?
Type "N" or press Enter in order to keep your current configuration file. We will update it manually in order to use some of the new Diablo settings. Update the configuration files Diablo introduces several new files: api-paste.ini, which contains all api-related settings, and nova-compute.conf, a configuration file dedicated to the compute-node settings.
Here are the settings you would add into nova.conf : --multi_host=T --api_paste_config=/etc/nova/api-paste.ini and that one if you plan to integrate Keystone to your environment, with euca2ools : --keystone_ec2_url=http://$NOVA-API-IP.11:5000/v2.0/ec2tokens Here is how the files should look like : nova.conf --dhcpbridge_flagfile=/etc/nova/nova.conf --dhcpbridge=/usr/bin/nova-dhcpbridge --logdir=/var/log/nova --state_path=/var/lib/nova --lock_path=/var/lock/nova --flagfile=/etc/nova/nova-compute.conf --force_dhcp_release=True --verbose --daemonize=1 --s3_host=172.16.40.11 --rabbit_host=172.16.40.11 --cc_host=172.16.40.11 --keystone_ec2_url=http://172.16.40.11:5000/v2.0/ec2tokens --ec2_url=http://172.16.40.11:8773/services/Cloud --ec2_host=172.16.40.11 --ec2_dmz_host=172.16.40.11 --ec2_port=8773 --fixed_range=192.168.0.0/12 --FAKE_subdomain=ec2 --routing_source_ip=10.0.10.14 --sql_connection=mysql://nova:nova-pass@172.16.40.11/nova --glance_api_servers=172.16.40.13:9292 --image_service=nova.image.glance.GlanceImageService --image_decryption_dir=/var/lib/nova/tmp --network_manager=nova.network.manager.VlanManager --public_interface=eth0 --vlan_interface=eth0 --iscsi_ip_prefix=172.16.40.12 --vnc_enabled --multi_host=T --debug --api_paste_config=/etc/nova/api-paste.ini api-paste.ini ####### # EC2 # ####### [composite:ec2] use = egg:Paste#urlmap /: ec2versions /services/Cloud: ec2cloud /services/Admin: ec2admin /latest: ec2metadata /2007-01-19: ec2metadata /2007-03-01: ec2metadata /2007-08-29: ec2metadata /2007-10-10: ec2metadata /2007-12-15: ec2metadata /2008-02-01: ec2metadata /2008-09-01: ec2metadata /2009-04-04: ec2metadata /1.0: ec2metadata [pipeline:ec2cloud] # pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor # NOTE(vish): use the following pipeline for deprecated auth pipeline = logrequest authenticate cloudrequest authorizer ec2executor [pipeline:ec2admin] # pipeline = logrequest ec2noauth adminrequest authorizer ec2executor # NOTE(vish): use the following pipeline for deprecated auth pipeline = logrequest authenticate adminrequest authorizer ec2executor [pipeline:ec2metadata] pipeline = logrequest ec2md [pipeline:ec2versions] pipeline = logrequest ec2ver [filter:logrequest] paste.filter_factory = nova.api.ec2:RequestLogging.factory [filter:ec2lockout] paste.filter_factory = nova.api.ec2:Lockout.factory [filter:ec2noauth] paste.filter_factory = nova.api.ec2:NoAuth.factory [filter:authenticate] paste.filter_factory = nova.api.ec2:Authenticate.factory [filter:cloudrequest] controller = nova.api.ec2.cloud.CloudController paste.filter_factory = nova.api.ec2:Requestify.factory [filter:adminrequest] controller = nova.api.ec2.admin.AdminController paste.filter_factory = nova.api.ec2:Requestify.factory [filter:authorizer] paste.filter_factory = nova.api.ec2:Authorizer.factory [app:ec2executor] paste.app_factory = nova.api.ec2:Executor.factory [app:ec2ver] paste.app_factory = nova.api.ec2:Versions.factory [app:ec2md] paste.app_factory = nova.api.ec2.metadatarequesthandler:MetadataRequestHandler.factory ############# # Openstack # ############# [composite:osapi] use = egg:Paste#urlmap /: osversions /v1.0: openstackapi10 /v1.1: openstackapi11 [pipeline:openstackapi10] # pipeline = faultwrap noauth ratelimit osapiapp10 # NOTE(vish): use the following pipeline for deprecated auth pipeline = faultwrap auth ratelimit osapiapp10 [pipeline:openstackapi11] # pipeline = faultwrap noauth ratelimit extensions osapiapp11 # NOTE(vish): use the following pipeline for deprecated auth pipeline 
= faultwrap auth ratelimit extensions osapiapp11 [filter:faultwrap] paste.filter_factory = nova.api.openstack:FaultWrapper.factory [filter:auth] paste.filter_factory = nova.api.openstack.auth:AuthMiddleware.factory [filter:noauth] paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory [filter:ratelimit] paste.filter_factory = nova.api.openstack.limits:RateLimitingMiddleware.factory [filter:extensions] paste.filter_factory = nova.api.openstack.extensions:ExtensionMiddleware.factory [app:osapiapp10] paste.app_factory = nova.api.openstack:APIRouterV10.factory [app:osapiapp11] paste.app_factory = nova.api.openstack:APIRouterV11.factory [pipeline:osversions] pipeline = faultwrap osversionapp [app:osversionapp] paste.app_factory = nova.api.openstack.versions:Versions.factory ########## # Shared # ########## [filter:keystonecontext] paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 127.0.0.1 service_port = 5000 auth_host = 127.0.0.1 auth_port = 5001 auth_protocol = http auth_uri = http://127.0.0.1:5000/ admin_token = 999888777666 Database update You are now ready to upgrade the database by running:
nova-manage db sync
Restart the services After the database upgrade, the services can be restarted. Rabbitmq-server:
service rabbitmq-server start
Nova services:
cd /etc/init.d && for i in $(ls nova-*); do service $i start; done
You can check the version you are running:
nova-manage version
Should output: 2011.3 Validation test The first thing to check is that all the services are running:
ps axl | grep nova
should output all the services running. If some services are missing, check their appropriate log files (e.g. /var/log/nova/nova-api.log). You would then use nova-manage:
nova-manage service list
If all the services are up, you can now validate the migration by: launching a new instance; terminating a running instance; attaching a floating IP to an "old" and a "new" instance. C- Client tools upgrade In this part we will see how to make sure our management tools are correctly integrated with the new environment's version: euca2ools and novaclient. euca2ools The euca2ools settings do not change from the client side:
# Euca2ools
export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
export S3_URL="http://$NOVA-API-IP:3333"
export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this set
alias ec2-bundle-image="ec2-bundle-image --cert ${EC2_CERT} --privatekey ${EC2_PRIVATE_KEY} --user 42 --ec2cert ${NOVA_CERT}"
alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_KEY} --url ${S3_URL} --ec2cert ${NOVA_CERT}"
On the server side, there are also no changes to make, since we don't use Keystone. Here are some commands you should be able to run:
euca-describe-instances
euca-describe-addresses
euca-terminate-instances $instance_id
euca-create-volume -s 5 -z $zone
euca-attach-volume -i $instance_id -d $device $volume_name
euca-associate-address -i $instance_id $address
If all these commands work flawlessly, it means the tools are properly integrated.
python-novaclient This tool requires a recent version in order to use all the services the OSAPI offers (floating-ip support, volumes support, etc.). In order to upgrade it:
git clone https://github.com/rackspace/python-novaclient.git && cd python-novaclient
python setup.py build
python setup.py install
Make sure you have the correct settings in your .bashrc (or any source-able file):
# Python-novaclient
export NOVA_API_KEY="SECRET_KEY"
export NOVA_PROJECT_ID="PROJECT-NAME"
export NOVA_USERNAME="USER"
export NOVA_URL="http://$NOVA-API-IP:8774/v1.1"
export NOVA_VERSION=1.1
Here are some nova commands you should be able to run:
nova list
nova image-show
nova boot $flavor_id --image $image_id --key_name $key_name $instance_name
nova volume-create --display_name $name $size
Again, if the commands run without any error, the tool is properly integrated. D- Why is Keystone not integrated ? Keystone introduces a new identity management: instead of having users in nova's database, they are now fully delegated to Keystone. While nova deals with "users as IDs" (e.g. the project name is the project id), Keystone makes a distinction between a name and an ID; thus, the integration breaks a running Cactus cloud. Since we were looking for a smooth integration on a running platform, Keystone has not been integrated. If you want to integrate Keystone, here are the steps you would follow: Export the current project The first thing to do is export all credentials-related settings from nova:
nova-manage shell export --filename=nova_export.txt
The created file contains keystone commands (via the keystone-manage tool); you can simply import the settings with a loop:
while read line; do $line; done < nova_export.txt
Enable the pipelines Pipelines are like "communication links" between components. In our case we need to enable pipelines from all the components to Keystone. Glance Pipeline glance-api.conf [pipeline:glance-api] pipeline = versionnegotiation authtoken context apiv1app # To enable Image Cache Management API replace pipeline with below: # pipeline = versionnegotiation context imagecache apiv1app # NOTE: use the following pipeline for keystone auth (with caching) pipeline = versionnegotiation authtoken context imagecache apiv1app [...] # Keystone [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 127.0.0.1 service_port = 5000 auth_host = 127.0.0.1 auth_port = 5001 auth_protocol = http auth_uri = http://127.0.0.1:5000/ admin_token = 999888777666 glance-registry.conf [pipeline:glance-registry] # pipeline = context registryapp # NOTE: use the following pipeline for keystone pipeline = authtoken keystone_shim context registryapp [...]
# Keystone [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 127.0.0.1 service_port = 5000 auth_host = 127.0.0.1 auth_port = 5001 auth_protocol = http auth_uri = http://127.0.0.1:5000/ admin_token = 999888777666 [filter:keystone_shim] paste.filter_factory = keystone.middleware.glance_auth_token:filter_factory Nova Pipeline nova-api.conf --keystone_ec2_url=http://$NOVA-API-IP:5000/v2.0/ec2tokens api-paste.ini # EC2 API [pipeline:ec2cloud] pipeline = logrequest ec2noauth cloudrequest authorizer ec2executor # NOTE(vish): use the following pipeline for deprecated auth # pipeline = logrequest authenticate cloudrequest authorizer ec2executor [pipeline:ec2admin] pipeline = logrequest ec2noauth adminrequest authorizer ec2executor # NOTE(vish): use the following pipeline for deprecated auth # pipeline = logrequest authenticate adminrequest authorizer ec2executor # OSAPI [pipeline:openstackapi10] pipeline = faultwrap noauth ratelimit osapiapp10 # NOTE(vish): use the following pipeline for deprecated auth # pipeline = faultwrap auth ratelimit osapiapp10 [pipeline:openstackapi11] pipeline = faultwrap noauth ratelimit extensions osapiapp11 # NOTE(vish): use the following pipeline for deprecated auth # pipeline = faultwrap auth ratelimit extensions osapiapp11 ########## # Shared # ########## [filter:keystonecontext] paste.filter_factory = keystone.middleware.nova_keystone_context:NovaKeystoneContext.factory [filter:authtoken] paste.filter_factory = keystone.middleware.auth_token:filter_factory service_protocol = http service_host = 127.0.0.1 service_port = 5000 auth_host = 127.0.0.1 auth_port = 5001 auth_protocol = http auth_uri = http://127.0.0.1:5000/ admin_token = 999888777666 euca2ools .bashrc # Euca2ools [...] export EC2_URL="http://$KEYSTONE-IP:5000/services/Cloud" [...] novaclient .bashrc # Novaclient [...] export NOVA_URL=http://$KEYSTONE-IP:5000/v2.0/ export NOVA_REGION_NAME="$REGION" [...]