<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
    xml:id="configuring-multiple-compute-nodes">
    <title>Configure multiple Compute nodes</title>
    <para>To distribute your VM load across more than one server, you
        can connect an additional
        <systemitem class="service">nova-compute</systemitem> node to
        a cloud controller node. You can reproduce this configuration
        on multiple compute servers to build a true multi-node
        OpenStack Compute cluster.</para>
    <para>To build and scale the Compute platform, you distribute
        services across many servers. While you can accomplish this
        in other ways, this section describes how to add compute
        nodes and scale out the
        <systemitem class="service">nova-compute</systemitem>
        service.</para>
    <para>For a multi-node installation, you make changes only to the
        <filename>nova.conf</filename> file and copy it to additional
        compute nodes. Ensure that each <filename>nova.conf</filename>
        file points to the correct IP addresses for the respective
        services.</para>
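    <para>As a minimal sketch, assuming a single cloud controller that
        hosts the database, the message queue, and the Image Service,
        the per-node entries in <filename>nova.conf</filename> might
        look like the following. The exact option names depend on your
        release and deployment, so treat these values as placeholders
        rather than as a definitive configuration:</para>
    <programlisting language="ini"># Address of this compute node (placeholder value)
my_ip=<replaceable>COMPUTE_NODE_IP</replaceable>
# Services that run on the cloud controller (placeholder values)
sql_connection=mysql://nova:<replaceable>NOVA_DBPASS</replaceable>@<replaceable>CONTROLLER_IP</replaceable>/nova
rabbit_host=<replaceable>CONTROLLER_IP</replaceable>
glance_host=<replaceable>CONTROLLER_IP</replaceable></programlisting>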
    <procedure>
        <step>
            <para>By default,
                <systemitem class="service">nova-network</systemitem>
                sets the bridge device based on the setting in
                <literal>flat_network_bridge</literal>. Update your IP
                information in the
                <filename>/etc/network/interfaces</filename> file by
                using this template:</para>
            <programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_maxwait 0
    bridge_fd 0
    address <replaceable>xxx.xxx.xxx.xxx</replaceable>
    netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
    network <replaceable>xxx.xxx.xxx.xxx</replaceable>
    broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
    gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
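            <para>The bridge name in this template must match the
                bridge that
                <systemitem class="service">nova-network</systemitem>
                uses. As a sketch, assuming the default flat networking
                mode, the corresponding <filename>nova.conf</filename>
                setting is:</para>
            <programlisting language="ini"># Must match the bridge defined in /etc/network/interfaces
flat_network_bridge=br100</programlisting>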
        </step>
        <step>
            <para>Restart networking:</para>
            <screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
        </step>
        <step>
            <para>Restart the relevant services to pick up the latest
                updates:</para>
            <screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
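            <para>Optionally, confirm that the
                <systemitem class="service">nova-compute</systemitem>
                service came back up, for example:</para>
            <screen><prompt>$</prompt> <userinput>sudo service nova-compute status</userinput></screen>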
        </step>
        <step>
            <para>To avoid KVM permissions issues with the Compute
                service, run these commands to ensure that your VMs
                run optimally:</para>
            <screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
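            <para>To verify the change, list the device node and
                confirm that its group is <literal>kvm</literal> and
                that the group has read and write access:</para>
            <screen><prompt>$</prompt> <userinput>ls -l /dev/kvm</userinput></screen>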
        </step>
        <step>
            <para>Any server that does not have
                <systemitem class="service">nova-api</systemitem>
                running on it requires an iptables entry so that
                instances can reach the metadata service.</para>
            <para>On compute nodes, configure iptables with this
                command:</para>
            <screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
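            <para>You can confirm that the rule is in place by listing
                the NAT table. Note that rules added this way do not
                persist across reboots unless you save them, for
                example with the <package>iptables-persistent</package>
                package:</para>
            <screen><prompt>#</prompt> <userinput>iptables -t nat -L PREROUTING -n</userinput></screen>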
        </step>
        <step>
            <para>Confirm that your compute node can talk to your
                cloud controller.</para>
            <para>From the cloud controller, run this database
                query:</para>
            <screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
            <screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       |       0 |  1 | osdemo02 | nova-network   | network   |        46064 |        0 | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       |       0 |  2 | osdemo02 | nova-compute   | compute   |        46056 |        0 | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       |       0 |  3 | osdemo02 | nova-scheduler | scheduler |        46065 |        0 | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       |       0 |  4 | osdemo01 | nova-compute   | compute   |        37050 |        0 | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       |       0 |  9 | osdemo04 | nova-compute   | compute   |        28484 |        0 | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       |       0 |  8 | osdemo05 | nova-compute   | compute   |        29284 |        0 | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput></screen>
            <para>In this example, the <literal>osdemo</literal> hosts
                all run the
                <systemitem class="service">nova-compute</systemitem>
                service. When you launch instances, they can be
                scheduled to any node that runs
                <systemitem class="service">nova-compute</systemitem>
                in this list.</para>
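            <para>If you prefer not to query the database directly, a
                similar check (assuming the
                <command>nova-manage</command> tool is installed on the
                cloud controller) is:</para>
            <screen><prompt>$</prompt> <userinput>sudo nova-manage service list</userinput></screen>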
        </step>
    </procedure>
</section>