diff --git a/doc/src/docbkx/openstack-config/ch_blockstorageconfigure.xml b/doc/src/docbkx/openstack-config/ch_blockstorageconfigure.xml
index ad190ea69b..f3970386ed 100644
--- a/doc/src/docbkx/openstack-config/ch_blockstorageconfigure.xml
+++ b/doc/src/docbkx/openstack-config/ch_blockstorageconfigure.xml
@@ -17,8 +17,6 @@
/etc/cinder by default. A default set of options is already configured
in cinder.conf when you install manually.
Here is a simple example cinder.conf file.
-
-
-
+
diff --git a/doc/src/docbkx/openstack-config/ch_computecells.xml b/doc/src/docbkx/openstack-config/ch_computecells.xml
index e552f74881..c96d719201 100644
--- a/doc/src/docbkx/openstack-config/ch_computecells.xml
+++ b/doc/src/docbkx/openstack-config/ch_computecells.xml
@@ -71,7 +71,7 @@
The compute API class must be changed in the API cell so that requests can be proxied
via nova-cells down to the correct cell properly. Add the following to
nova.conf in the API
- cell:[DEFAULT]
+ cell:[DEFAULT]
compute_api_class=nova.compute.cells_api.ComputeCellsAPI
...
@@ -83,7 +83,7 @@ name=api
Configuring the child cellsAdd the following to nova.conf in the child cells, replacing
cell1 with the name of each
- cell:[DEFAULT]
+ cell:[DEFAULT]
# Disable quota checking in child cells. Let API cell do it exclusively.
quota_driver=nova.quota.NoopQuotaDriver
@@ -120,14 +120,14 @@ name=cell1As an example, assume we have an API cell named api and a child
cell named cell1. Within the api cell, we have the following RabbitMQ
server
- info:rabbit_host=10.0.0.10
+ info:rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhostAnd in the child cell named cell1 we have the following RabbitMQ
server
- info:rabbit_host=10.0.1.10
+ info:rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
diff --git a/doc/src/docbkx/openstack-config/ch_computeconfigure.xml b/doc/src/docbkx/openstack-config/ch_computeconfigure.xml
index e6c7a32998..b23dfd8182 100644
--- a/doc/src/docbkx/openstack-config/ch_computeconfigure.xml
+++ b/doc/src/docbkx/openstack-config/ch_computeconfigure.xml
@@ -64,9 +64,7 @@
and RABBIT PASSWORD represents the password to your message
queue installation.
-
-
-
+ Create a nova group, so you can set
permissions on the configuration file:
@@ -109,8 +107,7 @@
Download RC File.
-
-#!/bin/bash
+ #!/bin/bash
# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
# will use the 1.1 *compute api*
export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
@@ -120,15 +117,13 @@ export OS_PASSWORD=$OS_PASSWORD_INPUT
export OS_AUTH_USER=norm
export OS_AUTH_KEY=$OS_PASSWORD_INPUT
export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
-export OS_AUTH_STRATEGY=keystone
-
+export OS_AUTH_STRATEGY=keystoneYou may also want to enable EC2 access for the euca2ools.
Here is an example ec2rc file for
enabling EC2 access with the required credentials.
-
-export NOVA_KEY_DIR=/root/creds/
+ export NOVA_KEY_DIR=/root/creds/
export EC2_ACCESS_KEY="EC2KEY:USER"
export EC2_SECRET_KEY="SECRET_KEY"
export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
@@ -143,8 +138,7 @@ alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_
Lastly, here is an example openrc file that works with
nova client and ec2 tools.
-
-export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
+ export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
export NOVA_VERSION=${NOVA_VERSION:-1.1}
export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
@@ -282,8 +276,7 @@ source ~/.bashrc
edit /etc/network/interfaces with the
following template, updated with your IP information.
-
-# The loopback network interface
+ # The loopback network interface
auto lo
iface lo inet loopback
@@ -300,8 +293,7 @@ iface br100 inet static
broadcast xxx.xxx.xxx.xxx
gateway xxx.xxx.xxx.xxx
# dns-* options are implemented by the resolvconf package, if installed
- dns-nameservers xxx.xxx.xxx.xxx
-
+ dns-nameservers xxx.xxx.xxx.xxxRestart networking:
@@ -464,9 +456,7 @@ $ sudo service nova-compute restartThis example nova.conf file is from
an internal Rackspace test system used for
demonstrations.
-
-
-
+ KVM, Flat, MySQL, and Glance, OpenStack or EC2
API
@@ -488,8 +478,7 @@ $ sudo service nova-compute restartThis example nova.conf file is from
an internal Rackspace test system.
-
-verbose
+ verbose
nodaemon
sql_connection=mysql://root:<password>@127.0.0.1/nova
network_manager=nova.network.manager.FlatManager
@@ -509,8 +498,7 @@ ipv6_backend=account_identifier
ca_path=./nova/CA
# Add the following to your conf file if you're running on Ubuntu Maverick
-xenapi_remap_vbd_dev=true
-
+xenapi_remap_vbd_dev=trueKVM, Flat, MySQL, and Glance, OpenStack or EC2
@@ -753,16 +741,14 @@ xenapi_remap_vbd_dev=true
filter is removed from the pipeline, limiting will be
disabled. There should also be a definition for the rate limit
filter. The lines will appear as follows:
-
-[pipeline:openstack_compute_api_v2]
+ [pipeline:openstack_compute_api_v2]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2
[pipeline:openstack_volume_api_v1]
pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1
[filter:ratelimit]
-paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
-
+paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factoryTo modify the limits, add a 'limits'
specification to the [filter:ratelimit]
@@ -770,11 +756,9 @@ paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.
HTTP method, friendly URI, regex, limit, and interval. The
following example specifies the default rate limiting
values:
-
-[filter:ratelimit]
+ [filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
-limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
-
+limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
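Each entry in the `limits` value is a parenthesized five-tuple of HTTP method, friendly URI, regex, limit, and interval, joined by semicolons. A minimal standalone sketch of a parser for that format (illustrative only; the parser nova actually uses may differ):

```python
import re

# One rule looks like: (POST, "*", .*, 10, MINUTE)
# Fields: HTTP method, friendly URI (quoted), URI regex, limit, interval.
LIMIT_RE = re.compile(
    r'\(\s*([A-Z]+)\s*,\s*"([^"]*)"\s*,\s*([^,]+?)\s*,\s*(\d+)\s*,\s*([A-Z]+)\s*\)')

def parse_limits(spec):
    """Return a list of (method, friendly_uri, regex, limit, interval) tuples."""
    return [(method, uri, regex, int(count), interval)
            for method, uri, regex, count, interval in LIMIT_RE.findall(spec)]

spec = '(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY)'
rules = parse_limits(spec)
```

With the default specification above, `rules` contains one tuple per semicolon-separated entry, with the limit converted to an integer.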
diff --git a/doc/src/docbkx/openstack-config/ch_computescheduler.xml b/doc/src/docbkx/openstack-config/ch_computescheduler.xml
index 255cf8ac9a..af543d1d59 100644
--- a/doc/src/docbkx/openstack-config/ch_computescheduler.xml
+++ b/doc/src/docbkx/openstack-config/ch_computescheduler.xml
@@ -14,14 +14,12 @@
variety of options.
Compute is configured with the following default scheduler
options:
-
-scheduler_driver=nova.scheduler.multi.MultiScheduler
+ scheduler_driver=nova.scheduler.multi.MultiScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
-compute_fill_first_cost_fn_weight=-1.0
-
+compute_fill_first_cost_fn_weight=-1.0Compute is configured by default to use the Multi
Scheduler, which allows the admin to specify different
scheduling behavior for compute requests versus volume
@@ -88,28 +86,22 @@ compute_fill_first_cost_fn_weight=-1.0
that will be used by the scheduler. The default setting
specifies all of the filters that are included with the
Compute service:
-
-scheduler_available_filters=nova.scheduler.filters.all_filters
-
+ scheduler_available_filters=nova.scheduler.filters.all_filters
This configuration option can be specified multiple times.
For example, if you implemented your own custom filter in
Python called myfilter.MyFilter and you
wanted to use both the built-in filters and your custom
filter, your nova.conf file would
contain:
-
-scheduler_available_filters=nova.scheduler.filters.all_filters
-scheduler_available_filters=myfilter.MyFilter
-
+ scheduler_available_filters=nova.scheduler.filters.all_filters
+scheduler_available_filters=myfilter.MyFilterThe scheduler_default_filters
configuration option in nova.conf
defines the list of filters that will be applied by the
nova-scheduler service. As
mentioned above, the default filters are:
-
-scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
-
+ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilterThe available filters are described below.
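The custom-filter mechanism described above can be sketched in standalone Python. A real filter would subclass nova.scheduler.filters.BaseHostFilter; here that base class is stubbed out so the snippet is self-contained, and MyFilter with its 512 MB rule is purely illustrative:

```python
class BaseHostFilter(object):
    """Stand-in for nova.scheduler.filters.BaseHostFilter: a filter keeps
    only the hosts for which host_passes() returns True."""
    def filter_all(self, hosts, filter_properties):
        return [h for h in hosts if self.host_passes(h, filter_properties)]

class MyFilter(BaseHostFilter):
    """Illustrative custom filter: pass hosts with >= 512 MB free RAM."""
    def host_passes(self, host_state, filter_properties):
        return host_state.get('free_ram_mb', 0) >= 512

hosts = [{'name': 'hostA', 'free_ram_mb': 2048},
         {'name': 'hostB', 'free_ram_mb': 128}]
survivors = MyFilter().filter_all(hosts, {})
```

Here only hostA survives the filter; hostB is dropped for having too little free RAM.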
@@ -189,16 +181,12 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
Configuration option in
nova.conf. The default setting
is:
-
- cpu_allocation_ratio=16.0
-
+ cpu_allocation_ratio=16.0
With this setting, if there are 8 vCPUs on a node, the
scheduler will allow instances up to 128 vCPUs to be
run on that node.
To disallow vCPU overcommitment set:
-
- cpu_allocation_ratio=1.0
-
+ cpu_allocation_ratio=1.0
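The overcommit arithmetic above is simply physical vCPUs multiplied by the allocation ratio; a one-line sketch (the helper name is hypothetical, not part of nova):

```python
def virtual_cpu_capacity(physical_vcpus, cpu_allocation_ratio):
    """vCPUs the scheduler will allow on a node at the given ratio."""
    return int(physical_vcpus * cpu_allocation_ratio)

# 8 physical vCPUs at the default ratio of 16.0 allow 128 virtual CPUs;
# a ratio of 1.0 disallows overcommitment entirely.
default_capacity = virtual_cpu_capacity(8, 16.0)
strict_capacity = virtual_cpu_capacity(8, 1.0)
```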
@@ -217,8 +205,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
With the API, use the
os:scheduler_hints key. For
example:
-
- {
+ {
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -228,8 +215,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
'8c19174f-4220-44f0-824a-cd1eeef10287'],
}
-}
-
+}
@@ -278,7 +264,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
require a host that runs an ARM-based processor and
QEMU as the hypervisor. An image can be decorated with
these properties using
- glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
+ $glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemuThe image properties that the filter checks for
are:
@@ -320,10 +306,8 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
using the isolated_hosts and
isolated_images configuration
options. For example:
-
-isolated_hosts=server1,server2
-isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
-
+ isolated_hosts=server1,server2
+isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
@@ -382,8 +366,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
1 --hint query='[">=","$free_ram_mb",1024]' server1
With the API, use the
os:scheduler_hints key:
-
- {
+ {
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -392,8 +375,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
'os:scheduler_hints': {
'query': '[">=","$free_ram_mb",1024]',
}
-}
-
+}
@@ -409,9 +391,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
configuration option in
nova.conf. The default setting
is:
-
-ram_allocation_ratio=1.5
-
+ ram_allocation_ratio=1.5
With this setting, if there is 1GB of free RAM, the
scheduler will allow instances up to size 1.5GB to be
run on that node.
@@ -446,8 +426,7 @@ ram_allocation_ratio=1.5
$nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the
os:scheduler_hints key:
-
- {
+ {
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -457,8 +436,7 @@ ram_allocation_ratio=1.5
'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
'8c19174f-4220-44f0-824a-cd1eeef10287'],
}
-}
-
+}
@@ -493,8 +471,7 @@ ram_allocation_ratio=1.5
With the API, use the
os:scheduler_hints key:
-
- {
+ {
'server': {
'name': 'server-1',
'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -504,8 +481,7 @@ ram_allocation_ratio=1.5
'build_near_host_ip': '192.168.1.1',
'cidr': '24'
}
-}
-
+}
@@ -535,18 +511,14 @@ ram_allocation_ratio=1.5
_weight string appended. Here is an
example of specifying a cost function and its
corresponding weight:
-
-least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
-compute_fill_first_cost_fn_weight=-1.0
-
+ least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
+compute_fill_first_cost_fn_weight=-1.0Multiple cost functions can be specified in the
least_cost_functions configuration
option, separated by commas. For example:
-
-least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn,nova.scheduler.least_cost.noop_cost_fn
+ least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn,nova.scheduler.least_cost.noop_cost_fn
compute_fill_first_cost_fn_weight=-1.0
-noop_cost_fn_weight=1.0
-
+noop_cost_fn_weight=1.0If there are multiple cost functions, then the weighted
cost scores are added together. The scheduler selects the
host that has the minimum weighted cost.
@@ -557,17 +529,13 @@ noop_cost_fn_weight=1.0
memory (RAM) available on the node. Because the
scheduler minimizes cost, if this cost function is
used with a weight of +1, by doing:
-
-compute_fill_first_cost_fn_weight=1.0
-
+ compute_fill_first_cost_fn_weight=1.0
then the scheduler will tend to "fill up" hosts,
scheduling virtual machine instances to the same host
until there is no longer sufficient RAM to service the
request, and then moving to the next node.
If the user specifies a weight of -1 by doing:
-
-compute_fill_first_cost_fn_weight=-1.0
-
+ compute_fill_first_cost_fn_weight=-1.0
then the scheduler will favor hosts that have the most
amount of available RAM, leading to a "spread-first"
behavior.
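The weighted-cost selection described above can be sketched standalone: each cost function scores every host, the scores are multiplied by the function's weight and summed, and the scheduler picks the host with the minimum total. The helper names below are illustrative, not nova's actual internals:

```python
def fill_first_cost_fn(host):
    # Mirrors compute_fill_first_cost_fn: the cost is the host's free RAM.
    return host['free_ram_mb']

def pick_host(hosts, cost_fns_with_weights):
    """Select the host with the minimum sum of weight * cost_fn(host)."""
    def weighted_cost(host):
        return sum(weight * fn(host) for fn, weight in cost_fns_with_weights)
    return min(hosts, key=weighted_cost)

hosts = [{'name': 'hostA', 'free_ram_mb': 4096},
         {'name': 'hostB', 'free_ram_mb': 1024}]

# Weight -1.0: "spread-first" - the host with the most free RAM wins.
spread = pick_host(hosts, [(fill_first_cost_fn, -1.0)])
# Weight +1.0: "fill-first" - the host with the least free RAM wins.
fill = pick_host(hosts, [(fill_first_cost_fn, 1.0)])
```

With a negative weight the cheapest host is the one with the most free RAM (spread-first); flipping the sign makes the scheduler pack instances onto already-loaded hosts (fill-first).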
diff --git a/doc/src/docbkx/openstack-config/ch_objectstorageconfigure.xml b/doc/src/docbkx/openstack-config/ch_objectstorageconfigure.xml
index 530f88a012..f70a93d276 100644
--- a/doc/src/docbkx/openstack-config/ch_objectstorageconfigure.xml
+++ b/doc/src/docbkx/openstack-config/ch_objectstorageconfigure.xml
@@ -7,38 +7,38 @@
This chapter provides sample configuration files for all the related Object Storage services provided through the swift project.Sample Account Server configuration file
-
+ Sample Container Server configuration file
-
+ Sample Dispersion Server configuration file
-
+ Sample Drive Audit configuration file
-
+ Sample Memcache configuration file
-
+ Sample Mime.Types configuration file
-
+ Sample Object Expirer configuration file
-
+ Sample Object Server configuration file
-
+ Sample Proxy Server configuration file
-
+ Sample Rsyncd configuration file
-
+ Sample Swift bench configuration file
-
+ Sample Swift configuration file
-
+
diff --git a/doc/src/docbkx/openstack-config/compute-configure-migrations.xml b/doc/src/docbkx/openstack-config/compute-configure-migrations.xml
index 01c7643040..6040f01371 100644
--- a/doc/src/docbkx/openstack-config/compute-configure-migrations.xml
+++ b/doc/src/docbkx/openstack-config/compute-configure-migrations.xml
@@ -9,16 +9,13 @@
version="5.0">
Configuring Migrations
-
This feature is for cloud administrators only.
-
Migration allows an administrator to move a virtual machine instance from one compute host
to another. This feature is useful when a compute host requires maintenance. Migration can also
be useful to redistribute the load when many VM instances are running on a specific physical machine.
-
There are two types of migration:
@@ -34,7 +31,6 @@
-
There are three types of live migration:
@@ -48,13 +44,11 @@
-
The following sections describe how to configure your hosts and compute nodes
for migrations using the KVM and XenServer hypervisors.
KVM-Libvirt
-
Prerequisites
@@ -90,7 +84,6 @@
not work correctly.
-
Example Nova Installation EnvironmentPrepare at least three servers; for example, HostA, HostB
@@ -179,79 +172,53 @@
Check that "NOVA-INST-DIR/instances/"
directory can be seen at HostA
- $ls -ld NOVA-INST-DIR/instances/
-
-
-
-drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
-
+ $ls -ld NOVA-INST-DIR/instances/
+ drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/Perform the same check at HostB and HostC - paying special
attention to the permissions (nova should be able to write)
- $ls -ld NOVA-INST-DIR/instances/
-
-
-drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
-
- $df -k
-
-
-Filesystem 1K-blocks Used Available Use% Mounted on
+ $ls -ld NOVA-INST-DIR/instances/
+ drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
+ $df -k
+ Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 921514972 4180880 870523828 1% /
none 16498340 1228 16497112 1% /dev
none 16502856 0 16502856 0% /dev/shm
none 16502856 368 16502488 1% /var/run
none 16502856 0 16502856 0% /var/lock
none 16502856 0 16502856 0% /lib/init/rw
-HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( <--- this line is important.)
-
+HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( <--- this line is important.)
Update the libvirt configurations. Modify
/etc/libvirt/libvirtd.conf:
-
-
-before : #listen_tls = 0
+ before : #listen_tls = 0
after : listen_tls = 0
before : #listen_tcp = 1
after : listen_tcp = 1
-add: auth_tcp = "none"
-
+add: auth_tcp = "none"Modify /etc/libvirt/qemu.conf
-
-
-before : #dynamic_ownership = 1
-after : dynamic_ownership = 0
-
+ before : #dynamic_ownership = 1
+after : dynamic_ownership = 0Modify /etc/init/libvirt-bin.conf
-
-
-before : exec /usr/sbin/libvirtd -d
-after : exec /usr/sbin/libvirtd -d -l
-
+ before : exec /usr/sbin/libvirtd -d
+after : exec /usr/sbin/libvirtd -d -lModify /etc/default/libvirt-bin
-
-
-before :libvirtd_opts=" -d"
-after :libvirtd_opts=" -d -l"
-
+ before :libvirtd_opts=" -d"
+after :libvirtd_opts=" -d -l"Restart libvirt. After executing the command, ensure
that libvirt is successfully restarted.$stop libvirt-bin && start libvirt-bin
-$ps -ef | grep libvirt
-
-
-
-root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
-
+$ps -ef | grep libvirt
+ root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -lConfigure your firewall to allow libvirt to communicate between nodes.
@@ -334,13 +301,13 @@ root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
Create a host aggregate
- $ nova aggregate-create <name-for-pool> <availability-zone>
+ $nova aggregate-create <name-for-pool> <availability-zone>
The command displays a table that contains the ID of the newly created aggregate.
Now add special metadata to the aggregate, to mark it as a hypervisor pool
- $ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
-$ nova aggregate-set-metadata <aggregate-id> operational_state=created
+ $nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
+ $nova aggregate-set-metadata <aggregate-id> operational_state=created
Make the first compute node part of that aggregate
- $ nova aggregate-add-host <aggregate-id> <name-of-master-compute>
+ $nova aggregate-add-host <aggregate-id> <name-of-master-compute>
At this point, the host is part of a XenServer pool.
@@ -348,7 +315,7 @@ $ nova aggregate-set-metadata <aggregate-id> operational_state=created
Add additional hosts to the pool:
- $ nova aggregate-add-host <aggregate-id> <compute-host-name>
+ $nova aggregate-add-host <aggregate-id> <compute-host-name>At this point the added compute node and the host will be shut down, in order to
join the host to the XenServer pool. The operation will fail if any server other than the
diff --git a/doc/src/docbkx/openstack-config/compute-configure-xen.xml b/doc/src/docbkx/openstack-config/compute-configure-xen.xml
index bd53112aa7..78da9eb54c 100644
--- a/doc/src/docbkx/openstack-config/compute-configure-xen.xml
+++ b/doc/src/docbkx/openstack-config/compute-configure-xen.xml
@@ -47,13 +47,9 @@ Generally a large timeout is required for Windows instances, but you may want to
Firewall
If using nova-network, IPTables is supported:
-
-firewall_driver=nova.virt.firewall.IptablesFirewallDriver
-
+firewall_driver=nova.virt.firewall.IptablesFirewallDriver
Alternately, doing the isolation in Dom0:
-
-firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
-
+firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriverVNC Proxy Address
@@ -67,14 +63,10 @@ and XenServer is on the address: 169.254.0.1, you can use the following:
You can specify which Storage Repository to use with nova by looking at the
following flag. The default is to use the local-storage setup by the default installer:
-
-sr_matching_filter="other-config:i18n-key=local-storage"
-
+sr_matching_filter="other-config:i18n-key=local-storage"
Another good alternative is to use the "default" storage (for example
if you have attached NFS or any other shared storage):
-
-sr_matching_filter="default-sr:true"
-
+sr_matching_filter="default-sr:true"To use a XenServer pool, you must create the pool
by using the Host Aggregates feature.
diff --git a/doc/src/docbkx/openstack-config/section_compute-configure-service-groups.xml b/doc/src/docbkx/openstack-config/section_compute-configure-service-groups.xml
index 6238b2e2ec..1cee2ad9b0 100644
--- a/doc/src/docbkx/openstack-config/section_compute-configure-service-groups.xml
+++ b/doc/src/docbkx/openstack-config/section_compute-configure-service-groups.xml
@@ -80,12 +80,10 @@
The relevant configuration snippet in /etc/nova/nova.conf on every node is:
-
-servicegroup_driver="zk"
+servicegroup_driver="zk"
[zookeeper]
-address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"
-
+address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"