Merge "Moved console access and ServiceGroup config to Cloud Admin"
This commit is contained in:
commit
03d235f62f
@ -0,0 +1,102 @@
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "–">
<!ENTITY mdash "—">
<!ENTITY hellip "…">
]><section xml:id="configuring-compute-service-groups"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>Configure Compute service groups</title>
<para>To effectively manage and utilize compute nodes, the Compute service must know their
statuses. For example, when a user launches a new VM, the Compute scheduler sends the
request to a live node; the Compute service queries the ServiceGroup API to get information
about whether a node is alive.</para>
<para>When a compute worker (running the <systemitem class="service">nova-compute</systemitem>
daemon) starts, it calls the <systemitem>join</systemitem> API to join the compute group.
Any interested service (for example, the scheduler) can query the group's membership and the
status of its nodes. Internally, the <systemitem>ServiceGroup</systemitem> client driver
automatically updates the compute worker status.</para>
<para>The database, ZooKeeper, and Memcache drivers are available.</para>
<section xml:id="database-servicegroup-driver">
<title>Database ServiceGroup driver</title>
<para>By default, Compute uses the database driver to track node liveness. In a compute worker,
this driver periodically sends a <command>db update</command> command to the database,
saying <quote>I'm OK</quote> with a timestamp. Compute uses a pre-defined timeout
(<literal>service_down_time</literal>) to determine whether a node is dead.</para>
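<para>No extra configuration is needed to use the database driver. To set it explicitly, or
to tune the check-in behavior, use the following options in the
<filename>/etc/nova/nova.conf</filename> file (the values shown here are the
defaults):</para>
<programlisting language="ini"># Driver for the ServiceGroup service (the database driver is the default)
servicegroup_driver="db"

# Seconds between compute node check-ins (integer value)
report_interval=10

# Maximum time since last check-in before a node is considered dead (integer value)
service_down_time=60</programlisting>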
<para>The driver has limitations, which can be an issue depending on your setup. The more compute
worker nodes that you have, the more pressure you put on the database. By default, the
timeout is 60 seconds so it might take some time to detect node failures. You could
reduce the timeout value, but you must also make the database update more frequently,
which again increases the database workload.</para>
<para>The database contains data that is both transient (whether the node is alive) and persistent
(for example, entries for VM owners). With the ServiceGroup abstraction, Compute can treat
each type separately.</para>
</section>
<section xml:id="zookeeper-servicegroup-driver">
<title>ZooKeeper ServiceGroup driver</title>
<para>The ZooKeeper ServiceGroup driver works by using ZooKeeper
ephemeral nodes. ZooKeeper, in contrast to databases, is a
distributed system. Its load is divided among several servers.
At a compute worker node, after establishing a ZooKeeper session,
the driver creates an ephemeral znode in the group directory. Ephemeral
znodes have the same lifespan as the session. If the worker node
or the <systemitem class="service">nova-compute</systemitem> daemon crashes, or a network
partition is in place between the worker and the ZooKeeper server quorums,
the ephemeral znodes are removed automatically. The driver
gets the group membership by running the <command>ls</command> command in the group directory.</para>
<para>To use the ZooKeeper driver, you must install ZooKeeper servers and client libraries.
Setting up ZooKeeper servers is outside the scope of this guide (for more information,
see <link xlink:href="http://zookeeper.apache.org/"
>Apache ZooKeeper</link>).</para>
<para>To use ZooKeeper, you must install client-side Python libraries on every nova node:
<literal>python-zookeeper</literal> – the official ZooKeeper Python binding
and <literal>evzookeeper</literal> – the library to make the binding work with the
eventlet threading model.</para>
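<para>For example, on Ubuntu or Debian systems, you might install the two libraries as
follows (package names can vary by distribution; <literal>evzookeeper</literal> is also
available from PyPI):</para>
<programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>apt-get install python-zookeeper</userinput>
<prompt>#</prompt> <userinput>pip install evzookeeper</userinput></programlisting>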
<para>The following example assumes the ZooKeeper server addresses and ports are
<literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>, and
<literal>192.168.2.3:2181</literal>.</para>
<para>The following values in the <filename>/etc/nova/nova.conf</filename> file (on every
node) are required for the <systemitem>ZooKeeper</systemitem> driver:</para>
<programlisting language="ini"># Driver for the ServiceGroup service
servicegroup_driver="zk"

[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"</programlisting>
<para>To customize the Compute service groups, use the following configuration option
settings:</para>
<xi:include href="../../common/tables/nova-zookeeper.xml"/>
</section>
<section xml:id="memcache-servicegroup-driver">
<title>Memcache ServiceGroup driver</title>
<para>The <systemitem>memcache</systemitem> ServiceGroup driver uses memcached, which is a
distributed memory object caching system that is often used to increase site
performance. For more details, see <link xlink:href="http://memcached.org/"
>memcached.org</link>.</para>
<para>To use the <systemitem>memcache</systemitem> driver, you must install
<systemitem>memcached</systemitem>. However, because
<systemitem>memcached</systemitem> is often used for both OpenStack Object Storage
and OpenStack dashboard, it might already be installed. If
<systemitem>memcached</systemitem> is not installed, refer to the <link
xlink:href="http://docs.openstack.org/havana/install-guide/contents"
><citetitle>OpenStack Installation Guide</citetitle></link> for more
information.</para>
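<para>For example, on Ubuntu you might install the daemon and a Python client as follows
(package names can vary by distribution):</para>
<programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>apt-get install memcached python-memcache</userinput></programlisting>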
<para>The following values in the <filename>/etc/nova/nova.conf</filename> file (on every
node) are required for the <systemitem>memcache</systemitem> driver:</para>
<programlisting language="ini"># Driver for the ServiceGroup service
servicegroup_driver="mc"

# Memcached servers. Use either a list of memcached servers to use for caching (list value),
# or "<None>" for in-process caching (default).
memcached_servers=<None>

# Timeout; maximum time since last check-in for an up service (integer value).
# Helps to determine whether a node is dead
service_down_time=60</programlisting>
</section>
</section>
@ -498,6 +498,8 @@ local0.error @@172.20.1.43:1024</programlisting>
</step>
</procedure>
</section>
<xi:include href="../../common/section_compute-configure-console.xml"/>
<xi:include href="section_compute-configure-service-groups.xml"/>
<section xml:id="section_nova-compute-node-down">
<title>Recover from a failed compute node</title>
<para>If you have deployed Compute with a shared file
@ -19,6 +19,6 @@
<para>VNC must be explicitly disabled to get access to the SPICE console.
Set the <option>vnc_enabled</option> option to <literal>False</literal> in
the <literal>[DEFAULT]</literal> section to disable the VNC console.</para>
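<para>For example, a minimal <filename>nova.conf</filename> fragment that disables the VNC
console and enables SPICE might look like this (adjust the proxy URL for your
deployment):</para>
<programlisting language="ini">[DEFAULT]
# Disable the VNC console
vnc_enabled=False

[spice]
# Enable SPICE related features
enabled=True
# URL at which clients reach the SPICE HTML5 proxy
html5proxy_base_url=http://127.0.0.1:6082/spice_auto.html</programlisting>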
<para><xref linkend="config_table_nova_spice"/> documents the options to
configure SPICE as the console for OpenStack Compute.</para>
<para>Use the following options to configure SPICE as the console for OpenStack Compute:</para>
<xi:include href="../common/tables/nova-spice.xml"/>
</section>
@ -40,13 +40,13 @@
continues to proxy until the session ends.</para>
</listitem>
</orderedlist>
<para>The proxy also tunnels the VNC protocol over WebSockets so
that the noVNC client can talk VNC.</para>
<para>In general, the VNC proxy:</para>
<para>The proxy also tunnels the VNC protocol over WebSockets so that the
<systemitem>noVNC</systemitem> client can talk to VNC servers. In general, the VNC
proxy:</para>
<itemizedlist>
<listitem>
<para>Bridges between the public network where the clients live
and the private network where vncservers live.</para>
<para>Bridges between the public network where the clients live and the private network where
VNC servers live.</para>
</listitem>
<listitem>
<para>Mediates token authentication.</para>
@ -118,8 +118,8 @@
</section>
<section xml:id="vnc-configuration-options">
<title>VNC configuration options</title>
<para>To customize the VNC console, use the configuration option settings
documented in <xref linkend="config_table_nova_vnc"/>.</para>
<para>To customize the VNC console, use the following configuration options:</para>
<xi:include href="../common/tables/nova-vnc.xml"/>
<note>
<para>To support <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/configuring-openstack-compute-basics.html#section_configuring-compute-migrations"
@ -128,29 +128,33 @@
IP address does not exist on the destination host.</para>
</note>
<note>
<para>The <literal>vncserver_proxyclient_address</literal>
defaults to <literal>127.0.0.1</literal>, which is the address
of the compute host that nova instructs proxies to use when
connecting to instance servers.</para>
<para>For all-in-one XenServer domU deployments, set this to
169.254.0.1.</para>
<para>For multi-host XenServer domU deployments, set to a dom0
management IP on the same network as the proxies.</para>
<para>For multi-host libvirt deployments, set to a host
management IP on the same network as the proxies.</para>
<para>
<itemizedlist>
<listitem>
<para>The <literal>vncserver_proxyclient_address</literal> defaults to
<literal>127.0.0.1</literal>, which is the address of the compute host that
Compute instructs proxies to use when connecting to instance servers.
</para>
</listitem>
<listitem><para>For all-in-one XenServer domU deployments, set this to 169.254.0.1.</para></listitem>
<listitem><para>For multi-host XenServer domU deployments, set to a dom0 management IP on the
same network as the proxies.</para></listitem>
<listitem><para>For multi-host libvirt deployments, set to a host management IP on the same
network as the proxies.</para></listitem>
</itemizedlist>
</para>
</note>
</section>
<section xml:id="nova-vncproxy-replaced-with-nova-novncproxy">
<info>
<title>nova-novncproxy (noVNC)</title>
</info>
<para>You must install the noVNC package, which contains the
<systemitem class="service">nova-novncproxy</systemitem>
service.</para>
<para>As root, run the following command:</para>
<para>You must install the <package>noVNC</package> package, which contains the <systemitem
class="service">nova-novncproxy</systemitem> service. As root, run the following
command:</para>
<programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>apt-get install novnc</userinput></programlisting>
<para>The service starts automatically on installation.</para>
<para>To restart it, run the following command:</para>
<para>To restart the service, run:</para>
<programlisting language="bash" role="gutter: false"><prompt>#</prompt> <userinput>service novnc restart</userinput></programlisting>
<para>The configuration option parameter should point to your
<filename>nova.conf</filename> file, which includes the
@ -158,9 +162,8 @@
<para>By default, <systemitem class="service"
>nova-novncproxy</systemitem> binds on
<literal>0.0.0.0:6080</literal>.</para>
<para>To connect the service to your nova deployment, add the
following configuration options to your
<filename>nova.conf</filename> file:</para>
<para>To connect the service to your Compute deployment, add the following configuration options
to your <filename>nova.conf</filename> file:</para>
<itemizedlist>
<listitem>
<para>
@ -181,9 +184,8 @@
<literal>vncserver_proxyclient_address
</literal>=<replaceable>127.0.0.1</replaceable>
</para>
<para>The address of the compute host that nova instructs
proxies to use when connecting to instance
<literal>vncservers</literal>.</para>
<para>The address of the compute host that Compute instructs proxies to use when connecting
to instance <literal>vncservers</literal>.</para>
</listitem>
</itemizedlist>
</section>
@ -198,24 +200,22 @@
<literal>nova-xvpvncproxy</literal> and <systemitem
class="service">nova-novncproxy</systemitem>?</emphasis>
</para>
<para>A: <literal>nova-xvpvncproxy</literal>, which ships with
nova, is a proxy that supports a simple Java client.
<systemitem class="service">nova-novncproxy</systemitem>
uses noVNC to provide VNC support through a web
<para>A: <literal>nova-xvpvncproxy</literal>, which ships with OpenStack Compute, is a proxy
that supports a simple Java client. <systemitem class="service"
>nova-novncproxy</systemitem> uses noVNC to provide VNC support through a web
browser.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Q: I want VNC support in the
Dashboard. What services do I need? </emphasis></para>
<para><emphasis role="bold">Q: I want VNC support in the OpenStack dashboard. What services
do I need? </emphasis></para>
<para>A: You need <systemitem class="service"
>nova-novncproxy</systemitem>, <systemitem class="service"
>nova-consoleauth</systemitem>, and correctly configured
compute hosts.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Q: When I use <command>nova
get-vnc-console</command> or click on the VNC tab of the
Dashboard, it hangs. Why? </emphasis></para>
<para><emphasis role="bold">Q: When I use <command>nova get-vnc-console</command> or click
on the VNC tab of the OpenStack dashboard, it hangs. Why? </emphasis></para>
<para>A: Make sure you are running <systemitem class="service"
>nova-consoleauth</systemitem> (in addition to <systemitem
class="service">nova-novncproxy</systemitem>). The proxies
@ -87,9 +87,6 @@
<xi:include href="../common/section_compute_config-api.xml"/>
<xi:include href="../common/section_compute-configure-ec2.xml"/>
<xi:include href="../common/section_compute-configure-quotas.xml"/>
<xi:include href="../common/section_compute-configure-console.xml"/>
<xi:include
href="compute/section_compute-configure-service-groups.xml"/>
<xi:include href="../common/section_fibrechannel.xml"/>
<xi:include href="compute/section_compute-hypervisors.xml"/>
<xi:include href="compute/section_compute-scheduler.xml"/>
@ -1,86 +0,0 @@
<!DOCTYPE section [
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "–">
<!ENTITY mdash "—">
<!ENTITY hellip "…">
]><section xml:id="configuring-compute-service-groups"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook"
version="5.0">
<title>Configuring Compute service groups</title>
<para>To effectively manage and utilize compute nodes, the Compute service must know their statuses. For example, when a user launches a
new VM, the Compute scheduler should send the request to a live node
(with enough capacity too, of course). From the Grizzly release
and later, the Compute service queries the ServiceGroup API to get the node
liveness information.</para>
<para>When a compute worker (running the <systemitem class="service">nova-compute</systemitem> daemon) starts,
it calls the join API to join the compute group, so that every
service that is interested in the information (for example, the scheduler)
can query the group membership or the status of a
particular node. Internally, the ServiceGroup client driver
automatically updates the compute worker status.</para>
<para>The following drivers are implemented: database and
ZooKeeper. Further drivers are in review or development, such as
memcache.</para>
<section xml:id="database-servicegroup-driver">
<title>Database ServiceGroup driver</title>
<para>Compute uses the database driver, which is the default driver, to track node
liveness.
In a compute worker, this driver periodically sends a <command>db update</command> command
to the database, saying <quote>I'm OK</quote> with a timestamp. A pre-defined
timeout (<literal>service_down_time</literal>)
determines if a node is dead.</para>
<para>The driver has limitations, which may or may not be an
issue for you, depending on your setup. The more compute
worker nodes that you have, the more pressure you put on the database.
By default, the timeout is 60 seconds so it might take some time to detect node failures. You could reduce
the timeout value, but you must also make the DB update
more frequently, which again increases the DB workload.</para>
<para>Fundamentally, the data that describes whether the
node is alive is "transient" — After a
few seconds, this data is obsolete. Other data in the database is persistent, such as the entries
that describe who owns which VMs. However, because this data is stored in the same database,
is treated the same way. The
ServiceGroup abstraction aims to treat
them separately.</para>
</section>
<section xml:id="zookeeper-servicegroup-driver">
<title>ZooKeeper ServiceGroup driver</title>
<para>The ZooKeeper ServiceGroup driver works by using ZooKeeper
ephemeral nodes. ZooKeeper, in contrast to databases, is a
distributed system. Its load is divided among several servers.
At a compute worker node, after establishing a ZooKeeper session,
it creates an ephemeral znode in the group directory. Ephemeral
znodes have the same lifespan as the session. If the worker node
or the <systemitem class="service">nova-compute</systemitem> daemon crashes, or a network
partition is in place between the worker and the ZooKeeper server quorums,
the ephemeral znodes are removed automatically. The driver
gets the group membership by running the <command>ls</command> command in the group directory.</para>
<para>To use the ZooKeeper driver, you must install
ZooKeeper servers and client libraries. Setting
up ZooKeeper servers is outside the scope of this article.
For the rest of the article, assume these servers are installed,
and their addresses and ports are <literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>,
<literal>192.168.2.3:2181</literal>.
</para>
<para>To use ZooKeeper, you must install client-side Python
libraries on every nova node: <literal>python-zookeeper</literal>
– the official Zookeeper Python binding
and <literal>evzookeeper</literal> – the library to make the
binding work with the eventlet threading model.
</para>
<para>The relevant configuration snippet in the <filename>/etc/nova/nova.conf</filename> file on every node is:</para>
<programlisting language="ini">servicegroup_driver="zk"

[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"</programlisting>
<para>To customize the Compute Service groups, use the configuration option
settings documented in <xref
linkend="config_table_nova_zookeeper"/>.</para>
</section>
</section>