From d6b4bb8176a5a9bd2a835167fbde0590c6481e27 Mon Sep 17 00:00:00 2001 From: Summer Long Date: Tue, 1 Apr 2014 12:36:28 +1000 Subject: [PATCH] Moved console access and ServiceGroup config to Cloud Admin Moved console access and ServiceGroup HowTo info from the CRG to the CAG. Added memcache driver to ServiceGroup section. Brief edits on all. Partial-Bug: #1290687 Change-Id: I43e895279d5650c0489154c31ce03096ab9e61cc --- ...ction_compute-configure-service-groups.xml | 102 ++++++++++++++++++ .../compute/section_compute-system-admin.xml | 2 + .../section_compute-configure-spice.xml | 4 +- doc/common/section_compute-configure-vnc.xml | 74 ++++++------- doc/config-reference/ch_computeconfigure.xml | 3 - ...ction_compute-configure-service-groups.xml | 86 --------------- 6 files changed, 143 insertions(+), 128 deletions(-) create mode 100644 doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml delete mode 100644 doc/config-reference/compute/section_compute-configure-service-groups.xml diff --git a/doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml b/doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml new file mode 100644 index 0000000000..c3ae17875e --- /dev/null +++ b/doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml @@ -0,0 +1,102 @@ + + + + +]>
+ Configure Compute service groups To effectively manage and utilize compute nodes, the Compute service must know their + statuses. For example, when a user launches a new VM, the Compute scheduler sends the + request to a live node; it queries the ServiceGroup API to determine + which nodes are alive. + When a compute worker (running the nova-compute + daemon) starts, it calls the join API to join the compute group. + Any interested service (for example, the scheduler) can query the group's membership and the + status of its nodes. Internally, the ServiceGroup client driver + automatically updates the compute worker status. + The database, ZooKeeper, and Memcache drivers are available. +
+ Database ServiceGroup driver + By default, Compute uses the database driver to track node liveness. In a compute worker, + this driver periodically sends a database update, saying "I'm OK" + with a timestamp. Compute uses a pre-defined timeout + (service_down_time) to determine whether a node is dead. + The driver has limitations, which can be an issue depending on your setup. The more compute + worker nodes that you have, the more pressure you put on the database. By default, the + timeout is 60 seconds, so it might take some time to detect node failures. You could + reduce the timeout value, but you must also update the database more frequently, + which again increases the database workload. + The database contains data that is both transient (whether the node is alive) and persistent + (for example, entries for VM owners). With the ServiceGroup abstraction, Compute can treat + each type separately. +
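As a quick illustration, the corresponding /etc/nova/nova.conf entries for the default database driver might look like the following sketch. The values shown are the documented defaults; verify the option names against the configuration reference for your release.
# Driver for the ServiceGroup service (the database driver is the default)
servicegroup_driver="db"

# How often (in seconds) each nova-compute worker reports its state
report_interval=10

# Consider a service dead if it has not checked in for this many seconds
service_down_time=60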
+
+ ZooKeeper ServiceGroup driver + The ZooKeeper ServiceGroup driver works by using ZooKeeper + ephemeral nodes. ZooKeeper, in contrast to databases, is a + distributed system. Its load is divided among several servers. + At a compute worker node, after establishing a ZooKeeper session, + the driver creates an ephemeral znode in the group directory. Ephemeral + znodes have the same lifespan as the session. If the worker node + or the nova-compute daemon crashes, or a network + partition occurs between the worker and the ZooKeeper server quorums, + the ephemeral znodes are removed automatically. The driver + gets the group membership by running the ls command in the group directory. + To use the ZooKeeper driver, you must install ZooKeeper servers and client libraries. + Setting up ZooKeeper servers is outside the scope of this guide (for more information, + see Apache ZooKeeper). + To use ZooKeeper, you must install client-side Python libraries on every nova node: + python-zookeeper – the official ZooKeeper Python binding + and evzookeeper – the library that makes the binding work with the + eventlet threading model. + The following example assumes the ZooKeeper server addresses and ports are + 192.168.2.1:2181, 192.168.2.2:2181, and + 192.168.2.3:2181. + The following values in the /etc/nova/nova.conf file (on every + node) are required for the ZooKeeper driver: +# Driver for the ServiceGroup service +servicegroup_driver="zk" + +[zookeeper] +address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181" + To customize the Compute service groups, use the following configuration option + settings: + +
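After the compute workers join the group, you can inspect the membership directly with the zkCli.sh client that ships with ZooKeeper. This is only a sketch: the /servicegroups/compute path reflects the driver's default group prefix, which is an assumption and might differ in your installation.
# List the group directory on any server in the ensemble; each running
# nova-compute worker appears as an ephemeral znode. The path below assumes
# the driver's default prefix; adjust it to match your configuration.
$ zkCli.sh -server 192.168.2.1:2181 ls /servicegroups/compute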
+
+ Memcache ServiceGroup driver + The memcache ServiceGroup driver uses memcached, which is a + distributed memory object caching system that is often used to increase site + performance. For more details, see memcached.org. + To use the memcache driver, you must install + memcached. However, because + memcached is often used for both OpenStack Object Storage + and the OpenStack dashboard, it might already be installed. If + memcached is not installed, refer to the OpenStack Installation Guide for more + information. + The following values in the /etc/nova/nova.conf file (on every + node) are required for the memcache driver: + # Driver for the ServiceGroup service +servicegroup_driver="mc" + +# Memcached servers. Use either a list of memcached servers to use for caching (list value), +# or "<None>" for in-process caching (default). +memcached_servers=<None> + +# Timeout: maximum time (in seconds) since the last check-in for a service to be +# considered up (integer value). Used to determine whether a node is dead. +service_down_time=60 +
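For example, to point the driver at two memcached instances instead of using in-process caching, the nova.conf entries might look like the following sketch. The addresses are placeholders for the hosts that run memcached in your deployment.
# Driver for the ServiceGroup service
servicegroup_driver="mc"

# Example memcached servers (placeholder addresses); list value
memcached_servers=192.168.2.10:11211,192.168.2.11:11211

# Consider a service dead if it has not checked in for 60 seconds
service_down_time=60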
+
diff --git a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml index f6adf7ff9e..0db3c11d63 100644 --- a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml +++ b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml @@ -498,6 +498,8 @@ local0.error @@172.20.1.43:1024 + +
Recover from a failed compute node If you have deployed Compute with a shared file diff --git a/doc/common/section_compute-configure-spice.xml b/doc/common/section_compute-configure-spice.xml index 93b9c2ae80..599a576464 100644 --- a/doc/common/section_compute-configure-spice.xml +++ b/doc/common/section_compute-configure-spice.xml @@ -19,6 +19,6 @@ VNC must be explicitly disabled to get access to the SPICE console. Set the option to False in the [DEFAULT] section to disable the VNC console. - documents the options to - configure SPICE as the console for OpenStack Compute. + Use the following options to configure SPICE as the console for OpenStack Compute: +
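For reference, a minimal nova.conf sketch that disables VNC and enables SPICE might look like the following. It assumes the vnc_enabled option in the [DEFAULT] section and the [spice] section options of this release; the proxy address is a placeholder.
[DEFAULT]
# Disable the VNC console so that SPICE is offered instead
vnc_enabled=False

[spice]
# Enable SPICE consoles and point clients at the HTML5 proxy (placeholder address)
enabled=True
html5proxy_base_url=http://192.168.1.2:6082/spice_auto.html
agent_enabled=True
keymap=en-us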
diff --git a/doc/common/section_compute-configure-vnc.xml b/doc/common/section_compute-configure-vnc.xml index 6df88b248c..72ad8dcda1 100644 --- a/doc/common/section_compute-configure-vnc.xml +++ b/doc/common/section_compute-configure-vnc.xml @@ -40,13 +40,13 @@ continues to proxy until the session ends. - The proxy also tunnels the VNC protocol over WebSockets so - that the noVNC client can talk VNC. - In general, the VNC proxy: + The proxy also tunnels the VNC protocol over WebSockets so that the + noVNC client can talk to VNC servers. In general, the VNC + proxy: - Bridges between the public network where the clients live - and the private network where vncservers live. + Bridges between the public network where the clients live and the private network where + VNC servers live. Mediates token authentication. @@ -118,8 +118,8 @@
VNC configuration options - To customize the VNC console, use the configuration option settings - documented in . + To customize the VNC console, use the following configuration options: + To support - The vncserver_proxyclient_address - defaults to 127.0.0.1, which is the address - of the compute host that nova instructs proxies to use when - connecting to instance servers. - For all-in-one XenServer domU deployments, set this to - 169.254.0.1. - For multi-host XenServer domU deployments, set to a dom0 - management IP on the same network as the proxies. - For multi-host libvirt deployments, set to a host - management IP on the same network as the proxies. + + + + The vncserver_proxyclient_address defaults to + 127.0.0.1, which is the address of the compute host that + Compute instructs proxies to use when connecting to instance servers. + + + For all-in-one XenServer domU deployments, set this to 169.254.0.1. + For multi-host XenServer domU deployments, set to a dom0 management IP on the + same network as the proxies. + For multi-host libvirt deployments, set to a host management IP on the same + network as the proxies. + +
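For instance, the compute-node side of a multi-host libvirt deployment might be configured as in the following sketch; the addresses are placeholders for your management network and proxy host.
[DEFAULT]
# Enable VNC consoles for instances on this compute host
vnc_enabled=True
# Listen address for the instance VNC servers on this host
vncserver_listen=0.0.0.0
# Management IP of this compute host, on the same network as the proxies (placeholder)
vncserver_proxyclient_address=192.168.1.31
# URL that clients (for example, the dashboard) use to reach the noVNC proxy (placeholder)
novncproxy_base_url=http://192.168.1.2:6080/vnc_auto.html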
nova-novncproxy (noVNC) - You must install the noVNC package, which contains the - nova-novncproxy - service. - As root, run the following command: + You must install the noVNC package, which contains the nova-novncproxy service. As root, run the following + command: # apt-get install novnc The service starts automatically on installation. - To restart it, run the following command: + To restart the service, run: # service novnc restart The configuration option parameter should point to your nova.conf file, which includes the @@ -158,9 +162,8 @@ By default, nova-novncproxy binds on 0.0.0.0:6080. - To connect the service to your nova deployment, add the - following configuration options to your - nova.conf file: + To connect the service to your Compute deployment, add the following configuration options + to your nova.conf file: @@ -181,9 +184,8 @@ vncserver_ proxyclient_ address =127.0.0.1 - The address of the compute host that nova instructs - proxies to use when connecting to instance - vncservers. + The address of the compute host that Compute instructs proxies to use when connecting + to instance vncservers.
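If the proxy needs to bind somewhere other than the default 0.0.0.0:6080, the corresponding nova.conf options are shown in the following sketch with their default values.
[DEFAULT]
# Address and port that nova-novncproxy binds to (defaults shown)
novncproxy_host=0.0.0.0
novncproxy_port=6080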
@@ -198,24 +200,22 @@ nova-xvpvncproxy and nova-novncproxy? - A: nova-xvpvncproxy, which ships with - nova, is a proxy that supports a simple Java client. - nova-novncproxy - uses noVNC to provide VNC support through a web + A: nova-xvpvncproxy, which ships with OpenStack Compute, is a proxy + that supports a simple Java client. nova-novncproxy uses noVNC to provide VNC support through a web browser.
- Q: I want VNC support in the - Dashboard. What services do I need? + Q: I want VNC support in the OpenStack dashboard. What services + do I need? A: You need nova-novncproxy, nova-consoleauth, and correctly configured compute hosts. - Q: When I use nova - get-vnc-console or click on the VNC tab of the - Dashboard, it hangs. Why? + Q: When I use nova get-vnc-console or click + on the VNC tab of the OpenStack dashboard, it hangs. Why? A: Make sure you are running nova-consoleauth (in addition to nova-novncproxy). The proxies diff --git a/doc/config-reference/ch_computeconfigure.xml b/doc/config-reference/ch_computeconfigure.xml index 9a77432ec6..7e64edc496 100644 --- a/doc/config-reference/ch_computeconfigure.xml +++ b/doc/config-reference/ch_computeconfigure.xml @@ -87,9 +87,6 @@ - - diff --git a/doc/config-reference/compute/section_compute-configure-service-groups.xml b/doc/config-reference/compute/section_compute-configure-service-groups.xml deleted file mode 100644 index 8b6ed07423..0000000000 --- a/doc/config-reference/compute/section_compute-configure-service-groups.xml +++ /dev/null @@ -1,86 +0,0 @@ - - - - -]>
- Configuring Compute service groups - To effectively manage and utilize compute nodes, the Compute service must know their statuses. For example, when a user launches a - new VM, the Compute scheduler should send the request to a live node - (with enough capacity too, of course). From the Grizzly release - and later, the Compute service queries the ServiceGroup API to get the node - liveness information. - When a compute worker (running the nova-compute daemon) starts, - it calls the join API to join the compute group, so that every - service that is interested in the information (for example, the scheduler) - can query the group membership or the status of a - particular node. Internally, the ServiceGroup client driver - automatically updates the compute worker status. - The following drivers are implemented: database and - ZooKeeper. Further drivers are in review or development, such as - memcache. -
- Database ServiceGroup driver - Compute uses the database driver, which is the default driver, to track node - liveness. - In a compute worker, this driver periodically sends a db update command - to the database, saying I'm OK with a timestamp. A pre-defined - timeout (service_down_time) - determines if a node is dead. - The driver has limitations, which may or may not be an - issue for you, depending on your setup. The more compute - worker nodes that you have, the more pressure you put on the database. - By default, the timeout is 60 seconds so it might take some time to detect node failures. You could reduce - the timeout value, but you must also make the DB update - more frequently, which again increases the DB workload. - Fundamentally, the data that describes whether the - node is alive is "transient" — After a - few seconds, this data is obsolete. Other data in the database is persistent, such as the entries - that describe who owns which VMs. However, because this data is stored in the same database, - is treated the same way. The - ServiceGroup abstraction aims to treat - them separately. -
-
- ZooKeeper ServiceGroup driver - The ZooKeeper ServiceGroup driver works by using ZooKeeper - ephemeral nodes. ZooKeeper, in contrast to databases, is a - distributed system. Its load is divided among several servers. - At a compute worker node, after establishing a ZooKeeper session, - it creates an ephemeral znode in the group directory. Ephemeral - znodes have the same lifespan as the session. If the worker node - or the nova-compute daemon crashes, or a network - partition is in place between the worker and the ZooKeeper server quorums, - the ephemeral znodes are removed automatically. The driver - gets the group membership by running the ls command in the group directory. - To use the ZooKeeper driver, you must install - ZooKeeper servers and client libraries. Setting - up ZooKeeper servers is outside the scope of this article. - For the rest of the article, assume these servers are installed, - and their addresses and ports are 192.168.2.1:2181, 192.168.2.2:2181, - 192.168.2.3:2181. - - To use ZooKeeper, you must install client-side Python - libraries on every nova node: python-zookeeper - – the official Zookeeper Python binding - and evzookeeper – the library to make the - binding work with the eventlet threading model. - - The relevant configuration snippet in the /etc/nova/nova.conf file on every node is: -servicegroup_driver="zk" - -[zookeeper] -address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181" - To customize the Compute Service groups, use the configuration option - settings documented in . -
-