Basic Operating System Configuration

This guide starts by creating two nodes: a controller node to host most services, and a compute node to run virtual machine instances. Later chapters create additional nodes to run more services. OpenStack offers a lot of flexibility in how and where you run each service, so this is not the only possible configuration. However, you do need to configure certain aspects of the operating system on each node.

This chapter details a sample configuration for both the controller node and any additional nodes. It is possible to configure the operating system in other ways, but the remainder of this guide assumes you have a configuration compatible with the one shown here.

All of the commands throughout this guide assume you have administrative privileges. Either run the commands as the root user, or prefix them with the sudo command.
Networking

For a production deployment of OpenStack, most nodes should have two network interface cards: one for external network traffic, and one to communicate only with other OpenStack nodes. For simple test cases, you can use machines with only a single network interface card.

This section sets up networking on two networks with static IP addresses and manually manages a list of hostnames on each machine. If you manage a large network, you probably already have systems in place to manage this. If so, you may skip this section, but note that the rest of this guide assumes that each node can reach the other nodes on the internal network using hostnames like controller and compute1.

Start by disabling the NetworkManager service and enabling the network service. The network service is more suitable for the static network configuration done in this guide.

# service NetworkManager stop
# service network start
# chkconfig NetworkManager off
# chkconfig network on

Since Fedora 19, firewalld has replaced iptables as the default firewall system. You can configure firewalld successfully, but this guide currently recommends and demonstrates the use of iptables. On Fedora 19 systems, run the following commands to disable firewalld and enable iptables:

# service firewalld stop
# service iptables start
# chkconfig firewalld off
# chkconfig iptables on

On openSUSE, use the traditional network scripts when you set up your system and do not use NetworkManager. You can also change the settings after installation with the YaST network module:

# yast2 network

Next, create the configuration for both eth0 and eth1. This guide uses 192.168.0.x addresses for the internal network and 10.0.0.x addresses for the external network. Make sure that the corresponding network devices are connected to the correct network. In this guide, the controller node uses the IP addresses 192.168.0.10 and 10.0.0.10. When creating the compute node, use 192.168.0.11 and 10.0.0.11 instead.
Additional nodes added in later chapters will follow this pattern.
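A quick way to confirm that the earlier service changes took effect on a Fedora or RHEL style system is to inspect the runlevel configuration (a sketch, shown with this guide's root prompt; exact output varies by release):

```shell
# chkconfig --list NetworkManager
# chkconfig --list network
# chkconfig --list iptables
```

NetworkManager and firewalld should show off for all runlevels, while network and iptables should be on.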
[Figure: Basic Architecture]
/etc/sysconfig/network-scripts/ifcfg-eth0:

# Internal Network
DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
DEFROUTE=yes
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-eth1:

# External Network
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=static
IPADDR=10.0.0.10
NETMASK=255.255.255.0
DEFROUTE=yes
ONBOOT=yes

To set up the two network interfaces on openSUSE, start the YaST network module:

# yast2 network

Use the following parameters to set up the first Ethernet card, eth0, for the internal network:

Statically assigned IP Address
IP Address: 192.168.0.10
Subnet Mask: 255.255.255.0

Use the following parameters to set up the second Ethernet card, eth1, for the external network:

Statically assigned IP Address
IP Address: 10.0.0.10
Subnet Mask: 255.255.255.0

Set up a default route on the external network.

On Debian and Ubuntu, configure both interfaces in /etc/network/interfaces:

# Internal Network
auto eth0
iface eth0 inet static
    address 192.168.0.10
    netmask 255.255.255.0

# External Network
auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0

Once you have configured the network, restart the daemon for the changes to take effect:

# service networking restart
# service network restart
# systemctl restart network.service

Set the hostname of each machine. Name the controller node controller and the first compute node compute1. These are the hostnames used in the examples throughout this guide. Use the hostname command to set the hostname:

# hostname controller

On openSUSE, use yast network to set the hostname with YaST.

To have the hostname change persist when the system reboots, you need to specify it in the proper configuration file. On Red Hat Enterprise Linux, CentOS, and older versions of Fedora, you set this in the file /etc/sysconfig/network. Change the line starting with HOSTNAME=:

HOSTNAME=controller

As of Fedora 18, Fedora uses the file /etc/hostname instead. This file contains a single line with just the hostname.
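Once the network has been restarted and the hostname set, it is worth verifying both before continuing. The commands below are a sketch, shown with this guide's root prompt; the expected addresses are those of the controller node, and on older systems you may have ifconfig instead of ip:

```shell
# ip addr show eth0    (expect 192.168.0.10/24 on the internal network)
# ip addr show eth1    (expect 10.0.0.10/24 on the external network)
# hostname             (expect: controller)
```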
Finally, ensure that each node can reach the other nodes using hostnames. In this guide, we will manually edit the /etc/hosts file on each system. For large-scale deployments, you should use DNS or a configuration management system like Puppet instead. Add the following entries to /etc/hosts on every node:

127.0.0.1    localhost
192.168.0.10 controller
192.168.0.11 compute1
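A minimal sketch of a sanity check for these entries follows. It is self-contained: HOSTS_FILE points at a temporary copy of the entries above; on a real node you would set HOSTS_FILE=/etc/hosts instead.

```shell
#!/bin/sh
# Confirm that the hostnames this guide relies on appear in a hosts file.
# HOSTS_FILE is a temporary copy here so the sketch can run anywhere;
# on a real node, point it at /etc/hosts.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
127.0.0.1    localhost
192.168.0.10 controller
192.168.0.11 compute1
EOF

missing=0
for name in controller compute1; do
    if grep -qw "$name" "$HOSTS_FILE"; then
        echo "$name: present"
    else
        echo "$name: MISSING"
        missing=1
    fi
done
rm -f "$HOSTS_FILE"
exit "$missing"
```

On a real deployment you could follow this with a reachability test from the controller, for example ping -c 1 compute1.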
Network Time Protocol (NTP)

To keep all the services in sync across multiple machines, you need to install NTP. In this guide, we configure the controller node to be the reference server, and configure all additional nodes to set their time from the controller node.

Install the ntp package on each system running OpenStack services:

# apt-get install ntp
# yum install ntp
# zypper install ntp

Set up the NTP server on your controller node by modifying the ntp.conf file and restarting the service:

# service ntpd start
# chkconfig ntpd on
# systemctl start ntp.service
# systemctl enable ntp.service

Set up all additional nodes to synchronize their time from the controller node. The simplest way to do this is to add a daily cron job. Create a file /etc/cron.daily/ntpdate that contains the following:

ntpdate controller
hwclock -w

Make sure to mark this file as executable:

# chmod a+x /etc/cron.daily/ntpdate
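The guide does not show the ntp.conf changes themselves. A minimal sketch of the relevant lines on the controller node follows; the pool.ntp.org servers are illustrative upstream time sources, and the restrict line (matching this guide's 192.168.0.0/24 internal network) allows the other nodes to query the controller:

```
# /etc/ntp.conf on the controller (sketch)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
# Allow hosts on the internal network to synchronize from this server
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap
```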
MySQL Database

Most OpenStack services require a database to store information. This guide uses a MySQL database running on the controller node. The controller node needs to have the MySQL database server installed; any additional nodes that access MySQL need to have only the MySQL client software installed.

On the controller node, install the MySQL client, the MySQL database server, and the MySQL Python library:

# apt-get install python-mysqldb mysql-server
# yum install mysql mysql-server MySQL-python
# zypper install mysql-community-server-client mysql-community-server python-mysql

When you install the server package, you are asked to enter a root password for the database. Be sure to choose a strong password and remember it; it will be needed later.

On any nodes besides the controller node, install only the MySQL client and the MySQL Python library. This is all you need to do on any system not hosting the MySQL database:

# apt-get install python-mysqldb
# yum install mysql MySQL-python
# zypper install mysql-community-server-client python-mysql

Start the MySQL database server and set it to start automatically when the system boots:

# service mysqld start
# chkconfig mysqld on
# systemctl enable mysql.service
# systemctl start mysql.service

Finally, it is a good idea to set a root password for your MySQL database. The OpenStack programs that set up databases and tables will prompt you for this password if it is set:

# mysqladmin password newPassword
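Before moving on, you can confirm that the server is running and that the root password works. The commands below are a sketch shown with this guide's root prompt; when prompted, enter the password you chose in place of newPassword:

```shell
# mysqladmin -u root -p status
# mysql -u root -p -e "SHOW DATABASES;"
```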
Messaging Server

On the controller node, install the messaging queue server. This guide uses RabbitMQ on Ubuntu and openSUSE, and Qpid on Red Hat based distributions; ZeroMQ (0MQ) is also available.

# apt-get install rabbitmq-server
# zypper install rabbitmq-server
# yum install qpid-cpp-server memcached

Disable Qpid authentication by setting the value of the auth configuration key to no in the /etc/qpidd.conf file:

# echo "auth=no" >> /etc/qpidd.conf

Start Qpid and set it to start automatically when the system boots:

# service qpidd start
# chkconfig qpidd on

Start the RabbitMQ messaging service and set it to start automatically when the system boots:

# systemctl start rabbitmq-server.service
# systemctl enable rabbitmq-server.service
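If you are using RabbitMQ, you can confirm that the broker is running, and optionally change the password of the default guest account, a common hardening step. This is a sketch with this guide's root prompt; RABBIT_PASS is a placeholder for a password of your choosing:

```shell
# rabbitmqctl status
# rabbitmqctl change_password guest RABBIT_PASS
```

If you change the guest password, the OpenStack services configured in later chapters must be given the same credentials.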
OpenStack Packages

Distribution releases and OpenStack releases are often independent of each other, so after installing the machine you might need some extra steps to access the latest OpenStack release before installing any OpenStack packages.

This guide uses the OpenStack packages from the RDO repository. These packages work on Red Hat Enterprise Linux 6 and compatible versions of CentOS, as well as Fedora 19. Enable the RDO repository by downloading and installing the rdo-release-havana package:

# yum install http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm

The EPEL package includes GPG keys for package signing and repository information. Install the latest epel-release package (see http://download.fedoraproject.org/pub/epel/6/x86_64/repoview/epel-release.html). For example:

# yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

The openstack-utils package contains utility programs that make installation and configuration easier. These programs are used throughout this guide. Install openstack-utils; this also verifies that you can access the RDO repository:

# yum install openstack-utils

On openSUSE, use the Open Build Service repositories for Havana based on your openSUSE version. For example, if you run openSUSE 12.3, use:

# zypper ar -f obs://Cloud:OpenStack:Havana/openSUSE_12.3 Havana

For openSUSE 13.1, nothing needs to be done because the OpenStack Havana packages are part of the distribution itself.

To use the Ubuntu Cloud Archive for Havana: the Ubuntu Cloud Archive is a special repository that allows you to install newer releases of OpenStack on the stable supported version of Ubuntu.
Install the keyring:

# apt-get install ubuntu-cloud-keyring

Create a new repository sources file, /etc/apt/sources.list.d/cloud-archive.list, containing:

deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main

Upgrade the system (and reboot if needed):

# apt-get update && apt-get dist-upgrade

Congratulations, you are now ready to start installing OpenStack services!
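As a final check, confirm that the new repository is visible to your package manager. The commands below are a sketch with this guide's root prompt, one per distribution family; look for the RDO, Open Build Service, or Cloud Archive entry in the output:

```shell
# yum repolist
# zypper repos
# apt-cache policy
```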