Merge "Moved console access and ServiceGroup config to Cloud Admin"
commit 03d235f62f
@ -0,0 +1,102 @@
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "–">
<!ENTITY mdash "—">
<!ENTITY hellip "…">
]><section xml:id="configuring-compute-service-groups"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:ns5="http://www.w3.org/1999/xhtml"
    xmlns:ns4="http://www.w3.org/2000/svg"
    xmlns:ns3="http://www.w3.org/1998/Math/MathML"
    xmlns:ns="http://docbook.org/ns/docbook"
    version="5.0">
  <title>Configure Compute service groups</title>
  <para>To effectively manage and utilize compute nodes, the Compute service must know their
    statuses. For example, when a user launches a new VM, the Compute scheduler sends the
    request to a live node; the Compute service queries the ServiceGroup API to get information
    about whether a node is alive.</para>
  <para>When a compute worker (running the <systemitem class="service">nova-compute</systemitem>
    daemon) starts, it calls the <systemitem>join</systemitem> API to join the compute group.
    Any interested service (for example, the scheduler) can query the group's membership and the
    status of its nodes. Internally, the <systemitem>ServiceGroup</systemitem> client driver
    automatically updates the compute worker status.</para>
  <para>The database, ZooKeeper, and Memcache drivers are available.</para>
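  <para>As a minimal sketch, the driver is selected with the
    <literal>servicegroup_driver</literal> option in <filename>/etc/nova/nova.conf</filename>
    (the <literal>"db"</literal> value for the default database driver is an assumption; the
    <literal>"zk"</literal> and <literal>"mc"</literal> values are shown in the sections
    below):</para>
  <programlisting language="ini"># Select the ServiceGroup driver: "db" (default), "zk", or "mc"
servicegroup_driver="db"</programlisting>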
  <section xml:id="database-servicegroup-driver">
    <title>Database ServiceGroup driver</title>
    <para>By default, Compute uses the database driver to track node liveness. In a compute worker,
      this driver periodically sends a <command>db update</command> command to the database,
      saying <quote>I'm OK</quote> with a timestamp. Compute uses a pre-defined timeout
      (<literal>service_down_time</literal>) to determine whether a node is dead.</para>
    <para>The driver has limitations, which can be an issue depending on your setup. The more compute
      worker nodes that you have, the more pressure you put on the database. By default, the
      timeout is 60 seconds, so it might take some time to detect node failures. You could
      reduce the timeout value, but you must also make the database update more frequently,
      which again increases the database workload.</para>
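    <para>As a sketch, the timing behavior described above is controlled by two options in
      <filename>/etc/nova/nova.conf</filename>; <literal>report_interval</literal> is an assumed
      name for the heartbeat period option, so verify it against the configuration reference for
      your release:</para>
    <programlisting language="ini"># Assumed option: seconds between a compute worker writing its "I'm OK" timestamp
report_interval=10

# Maximum time since the last check-in before a node is considered dead
service_down_time=60</programlisting>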
    <para>The database contains data that is both transient (whether the node is alive) and persistent
      (for example, entries for VM owners). With the ServiceGroup abstraction, Compute can treat
      each type separately.</para>
  </section>
  <section xml:id="zookeeper-servicegroup-driver">
    <title>ZooKeeper ServiceGroup driver</title>
    <para>The ZooKeeper ServiceGroup driver works by using ZooKeeper
      ephemeral nodes. ZooKeeper, in contrast to databases, is a
      distributed system. Its load is divided among several servers.
      At a compute worker node, after establishing a ZooKeeper session,
      the driver creates an ephemeral znode in the group directory. Ephemeral
      znodes have the same lifespan as the session. If the worker node
      or the <systemitem class="service">nova-compute</systemitem> daemon crashes, or a network
      partition occurs between the worker and the ZooKeeper server quorum,
      the ephemeral znodes are removed automatically. The driver
      gets the group membership by running the <command>ls</command> command in the group directory.</para>
    <para>To use the ZooKeeper driver, you must install ZooKeeper servers and client libraries.
      Setting up ZooKeeper servers is outside the scope of this guide (for more information,
      see <link xlink:href="http://zookeeper.apache.org/"
      >Apache ZooKeeper</link>).</para>
    <para>To use ZooKeeper, you must install client-side Python libraries on every nova node:
      <literal>python-zookeeper</literal>, the official ZooKeeper Python binding, and
      <literal>evzookeeper</literal>, the library that makes the binding work with the
      eventlet threading model.</para>
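    <para>For example, on an Ubuntu-based nova node you might install the two libraries as follows;
      the package and pip names are assumptions based on common packaging, so adjust them to your
      distribution:</para>
    <programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>apt-get install python-zookeeper</userinput>
<prompt>#</prompt> <userinput>pip install evzookeeper</userinput></programlisting>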
    <para>The following example assumes the ZooKeeper server addresses and ports are
      <literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>, and
      <literal>192.168.2.3:2181</literal>.</para>
    <para>The following values in the <filename>/etc/nova/nova.conf</filename> file (on every
      node) are required for the <systemitem>ZooKeeper</systemitem> driver:</para>
    <programlisting language="ini"># Driver for the ServiceGroup service
servicegroup_driver="zk"

[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"</programlisting>
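    <para>To verify that compute workers have joined the group, you can list the ephemeral znodes
      with the standard ZooKeeper CLI. The <literal>/servicegroups/compute</literal> path used
      here is an assumption (it depends on the prefix the driver is configured with), so adjust it
      to your deployment:</para>
    <programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>zkCli.sh -server 192.168.2.1:2181 ls /servicegroups/compute</userinput></programlisting>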
    <para>To customize the Compute service groups, use the following configuration option
      settings:</para>
    <xi:include href="../../common/tables/nova-zookeeper.xml"/>
  </section>
  <section xml:id="memcache-servicegroup-driver">
    <title>Memcache ServiceGroup driver</title>
    <para>The <systemitem>memcache</systemitem> ServiceGroup driver uses memcached, which is a
      distributed memory object caching system that is often used to increase site
      performance. For more details, see <link xlink:href="http://memcached.org/"
      >memcached.org</link>.</para>
    <para>To use the <systemitem>memcache</systemitem> driver, you must install
      <systemitem>memcached</systemitem>. However, because
      <systemitem>memcached</systemitem> is often used by both OpenStack Object Storage
      and the OpenStack dashboard, it might already be installed. If
      <systemitem>memcached</systemitem> is not installed, refer to the <link
      xlink:href="http://docs.openstack.org/havana/install-guide/contents"
      ><citetitle>OpenStack Installation Guide</citetitle></link> for more
      information.</para>
    <para>The following values in the <filename>/etc/nova/nova.conf</filename> file (on every
      node) are required for the <systemitem>memcache</systemitem> driver:</para>
    <programlisting language="ini"># Driver for the ServiceGroup service
servicegroup_driver="mc"

# Memcached servers. Use either a list of memcached servers to use for caching (list value),
# or "<None>" for in-process caching (default).
memcached_servers=<None>

# Timeout; maximum time since last check-in for up service (integer value).
# Helps to define whether a node is dead
service_down_time=60</programlisting>
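    <para>For example, to use a pool of memcached servers instead of in-process caching, you might
      set the following; the addresses are placeholders, and <literal>11211</literal> is the
      standard memcached port:</para>
    <programlisting language="ini">servicegroup_driver="mc"
memcached_servers="192.168.2.1:11211,192.168.2.2:11211"</programlisting>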
  </section>
</section>
@ -498,6 +498,8 @@ local0.error @@172.20.1.43:1024</programlisting>
 </step>
 </procedure>
 </section>
+<xi:include href="../../common/section_compute-configure-console.xml"/>
+<xi:include href="section_compute-configure-service-groups.xml"/>
 <section xml:id="section_nova-compute-node-down">
 <title>Recover from a failed compute node</title>
 <para>If you have deployed Compute with a shared file
@ -19,6 +19,6 @@
 <para>VNC must be explicitly disabled to get access to the SPICE console.
 Set the <option>vnc_enabled</option> option to <literal>False</literal> in
 the <literal>[DEFAULT]</literal> section to disable the VNC console.</para>
-<para><xref linkend="config_table_nova_spice"/> documents the options to
-configure SPICE as the console for OpenStack Compute.</para>
+<para>Use the following options to configure SPICE as the console for OpenStack Compute:</para>
+<xi:include href="../common/tables/nova-spice.xml"/>
 </section>
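A minimal nova.conf sketch of the configuration this hunk describes; the [spice] group and its enabled option are assumptions based on the nova-spice options table referenced above:

<programlisting language="ini">[DEFAULT]
# SPICE and VNC consoles are mutually exclusive; disable VNC explicitly
vnc_enabled=False

[spice]
# Assumed option name for enabling the SPICE console
enabled=True</programlisting>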
@ -40,13 +40,13 @@
 continues to proxy until the session ends.</para>
 </listitem>
 </orderedlist>
-<para>The proxy also tunnels the VNC protocol over WebSockets so
-that the noVNC client can talk VNC.</para>
-<para>In general, the VNC proxy:</para>
+<para>The proxy also tunnels the VNC protocol over WebSockets so that the
+<systemitem>noVNC</systemitem> client can talk to VNC servers. In general, the VNC
+proxy:</para>
 <itemizedlist>
 <listitem>
-<para>Bridges between the public network where the clients live
-and the private network where vncservers live.</para>
+<para>Bridges between the public network where the clients live and the private network where
+VNC servers live.</para>
 </listitem>
 <listitem>
 <para>Mediates token authentication.</para>
@ -118,8 +118,8 @@
 </section>
 <section xml:id="vnc-configuration-options">
 <title>VNC configuration options</title>
-<para>To customize the VNC console, use the configuration option settings
-documented in <xref linkend="config_table_nova_vnc"/>.</para>
+<para>To customize the VNC console, use the following configuration options:</para>
+<xi:include href="../common/tables/nova-vnc.xml"/>
 <note>
 <para>To support <link
 xlink:href="http://docs.openstack.org/trunk/config-reference/content/configuring-openstack-compute-basics.html#section_configuring-compute-migrations"
@ -128,29 +128,33 @@
 IP address does not exist on the destination host.</para>
 </note>
 <note>
-<para>The <literal>vncserver_proxyclient_address</literal>
-defaults to <literal>127.0.0.1</literal>, which is the address
-of the compute host that nova instructs proxies to use when
-connecting to instance servers.</para>
-<para>For all-in-one XenServer domU deployments, set this to
-169.254.0.1.</para>
-<para>For multi-host XenServer domU deployments, set to a dom0
-management IP on the same network as the proxies.</para>
-<para>For multi-host libvirt deployments, set to a host
-management IP on the same network as the proxies.</para>
+<para>
+<itemizedlist>
+<listitem>
+<para>The <literal>vncserver_proxyclient_address</literal> defaults to
+<literal>127.0.0.1</literal>, which is the address of the compute host that
+Compute instructs proxies to use when connecting to instance servers.
+</para>
+</listitem>
+<listitem><para>For all-in-one XenServer domU deployments, set this to 169.254.0.1.</para></listitem>
+<listitem><para>For multi-host XenServer domU deployments, set to a dom0 management IP on the
+same network as the proxies.</para></listitem>
+<listitem><para>For multi-host libvirt deployments, set to a host management IP on the same
+network as the proxies.</para></listitem>
+</itemizedlist>
+</para>
 </note>
 </section>
 <section xml:id="nova-vncproxy-replaced-with-nova-novncproxy">
 <info>
 <title>nova-novncproxy (noVNC)</title>
 </info>
-<para>You must install the noVNC package, which contains the
-<systemitem class="service">nova-novncproxy</systemitem>
-service.</para>
-<para>As root, run the following command:</para>
+<para>You must install the <package>noVNC</package> package, which contains the <systemitem
+class="service">nova-novncproxy</systemitem> service. As root, run the following
+command:</para>
 <programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>apt-get install novnc</userinput></programlisting>
 <para>The service starts automatically on installation.</para>
-<para>To restart it, run the following command:</para>
+<para>To restart the service, run:</para>
 <programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>service novnc restart</userinput></programlisting>
 <para>The configuration option parameter should point to your
 <filename>nova.conf</filename> file, which includes the
@ -158,9 +162,8 @@
 <para>By default, <systemitem class="service"
 >nova-novncproxy</systemitem> binds on
 <literal>0.0.0.0:6080</literal>.</para>
-<para>To connect the service to your nova deployment, add the
-following configuration options to your
-<filename>nova.conf</filename> file:</para>
+<para>To connect the service to your Compute deployment, add the following configuration options
+to your <filename>nova.conf</filename> file:</para>
 <itemizedlist>
 <listitem>
 <para>
@ -181,9 +184,8 @@
 <literal>vncserver_ proxyclient_ address
 </literal>=<replaceable>127.0.0.1</replaceable>
 </para>
-<para>The address of the compute host that nova instructs
-proxies to use when connecting to instance
-<literal>vncservers</literal>.</para>
+<para>The address of the compute host that Compute instructs proxies to use when connecting
+to instance <literal>vncservers</literal>.</para>
 </listitem>
 </itemizedlist>
 </section>
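An illustrative nova.conf sketch that pulls together the VNC options discussed in this and the preceding hunks; the addresses are placeholders, and novncproxy_base_url and vncserver_listen are assumed companion options from the nova VNC configuration table:

<programlisting language="ini">[DEFAULT]
vnc_enabled=True
# URL that clients (for example, the dashboard) use to reach the noVNC proxy
novncproxy_base_url=http://192.168.1.2:6080/vnc_auto.html
# Address on which the instance VNC servers listen
vncserver_listen=192.168.1.2
# Address of the compute host that proxies use when connecting to instance VNC servers
vncserver_proxyclient_address=192.168.1.2</programlisting>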
@ -198,24 +200,22 @@
 <literal>nova-xvpvncproxy</literal> and <systemitem
 class="service">nova-novncproxy</systemitem>?</emphasis>
 </para>
-<para>A: <literal>nova-xvpvncproxy</literal>, which ships with
-nova, is a proxy that supports a simple Java client.
-<systemitem class="service">nova-novncproxy</systemitem>
-uses noVNC to provide VNC support through a web
+<para>A: <literal>nova-xvpvncproxy</literal>, which ships with OpenStack Compute, is a proxy
+that supports a simple Java client. <systemitem class="service"
+>nova-novncproxy</systemitem> uses noVNC to provide VNC support through a web
 browser.</para>
 </listitem>
 <listitem>
-<para><emphasis role="bold">Q: I want VNC support in the
-Dashboard. What services do I need? </emphasis></para>
+<para><emphasis role="bold">Q: I want VNC support in the OpenStack dashboard. What services
+do I need? </emphasis></para>
 <para>A: You need <systemitem class="service"
 >nova-novncproxy</systemitem>, <systemitem class="service"
 >nova-consoleauth</systemitem>, and correctly configured
 compute hosts.</para>
 </listitem>
 <listitem>
-<para><emphasis role="bold">Q: When I use <command>nova
-get-vnc-console</command> or click on the VNC tab of the
-Dashboard, it hangs. Why? </emphasis></para>
+<para><emphasis role="bold">Q: When I use <command>nova get-vnc-console</command> or click
+on the VNC tab of the OpenStack dashboard, it hangs. Why? </emphasis></para>
 <para>A: Make sure you are running <systemitem class="service"
 >nova-consoleauth</systemitem> (in addition to <systemitem
 class="service">nova-novncproxy</systemitem>). The proxies
@ -87,9 +87,6 @@
 <xi:include href="../common/section_compute_config-api.xml"/>
 <xi:include href="../common/section_compute-configure-ec2.xml"/>
 <xi:include href="../common/section_compute-configure-quotas.xml"/>
-<xi:include href="../common/section_compute-configure-console.xml"/>
-<xi:include
-href="compute/section_compute-configure-service-groups.xml"/>
 <xi:include href="../common/section_fibrechannel.xml"/>
 <xi:include href="compute/section_compute-hypervisors.xml"/>
 <xi:include href="compute/section_compute-scheduler.xml"/>
@ -1,86 +0,0 @@
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "–">
<!ENTITY mdash "—">
<!ENTITY hellip "…">
]><section xml:id="configuring-compute-service-groups"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:ns5="http://www.w3.org/1999/xhtml"
    xmlns:ns4="http://www.w3.org/2000/svg"
    xmlns:ns3="http://www.w3.org/1998/Math/MathML"
    xmlns:ns="http://docbook.org/ns/docbook"
    version="5.0">
  <title>Configuring Compute service groups</title>
  <para>To effectively manage and utilize compute nodes, the Compute service must know their statuses. For example, when a user launches a
    new VM, the Compute scheduler should send the request to a live node
    (with enough capacity too, of course). From the Grizzly release
    and later, the Compute service queries the ServiceGroup API to get the node
    liveness information.</para>
  <para>When a compute worker (running the <systemitem class="service">nova-compute</systemitem> daemon) starts,
    it calls the join API to join the compute group, so that every
    service that is interested in the information (for example, the scheduler)
    can query the group membership or the status of a
    particular node. Internally, the ServiceGroup client driver
    automatically updates the compute worker status.</para>
  <para>The following drivers are implemented: database and
    ZooKeeper. Further drivers are in review or development, such as
    memcache.</para>
  <section xml:id="database-servicegroup-driver">
    <title>Database ServiceGroup driver</title>
    <para>Compute uses the database driver, which is the default driver, to track node
      liveness.
      In a compute worker, this driver periodically sends a <command>db update</command> command
      to the database, saying <quote>I'm OK</quote> with a timestamp. A pre-defined
      timeout (<literal>service_down_time</literal>)
      determines if a node is dead.</para>
    <para>The driver has limitations, which may or may not be an
      issue for you, depending on your setup. The more compute
      worker nodes that you have, the more pressure you put on the database.
      By default, the timeout is 60 seconds so it might take some time to detect node failures. You could reduce
      the timeout value, but you must also make the DB update
      more frequently, which again increases the DB workload.</para>
    <para>Fundamentally, the data that describes whether the
      node is alive is "transient" — After a
      few seconds, this data is obsolete. Other data in the database is persistent, such as the entries
      that describe who owns which VMs. However, because this data is stored in the same database,
      is treated the same way. The
      ServiceGroup abstraction aims to treat
      them separately.</para>
  </section>
  <section xml:id="zookeeper-servicegroup-driver">
    <title>ZooKeeper ServiceGroup driver</title>
    <para>The ZooKeeper ServiceGroup driver works by using ZooKeeper
      ephemeral nodes. ZooKeeper, in contrast to databases, is a
      distributed system. Its load is divided among several servers.
      At a compute worker node, after establishing a ZooKeeper session,
      it creates an ephemeral znode in the group directory. Ephemeral
      znodes have the same lifespan as the session. If the worker node
      or the <systemitem class="service">nova-compute</systemitem> daemon crashes, or a network
      partition is in place between the worker and the ZooKeeper server quorums,
      the ephemeral znodes are removed automatically. The driver
      gets the group membership by running the <command>ls</command> command in the group directory.</para>
    <para>To use the ZooKeeper driver, you must install
      ZooKeeper servers and client libraries. Setting
      up ZooKeeper servers is outside the scope of this article.
      For the rest of the article, assume these servers are installed,
      and their addresses and ports are <literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>,
      <literal>192.168.2.3:2181</literal>.
    </para>
    <para>To use ZooKeeper, you must install client-side Python
      libraries on every nova node: <literal>python-zookeeper</literal>
      – the official Zookeeper Python binding
      and <literal>evzookeeper</literal> – the library to make the
      binding work with the eventlet threading model.
    </para>
    <para>The relevant configuration snippet in the <filename>/etc/nova/nova.conf</filename> file on every node is:</para>
    <programlisting language="ini">servicegroup_driver="zk"

[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"</programlisting>
    <para>To customize the Compute Service groups, use the configuration option
      settings documented in <xref
      linkend="config_table_nova_zookeeper"/>.</para>
  </section>
</section>