Merge "cleaning up programlistings (openstack-config)"
@@ -17,8 +17,6 @@
 <filename>/etc/cinder</filename> by default. A default set of options are already configured
 in <filename>cinder.conf</filename> when you install manually.</para>
 <para>Here is a simple example <filename>cinder.conf</filename> file.</para>
-<programlisting>
-<xi:include parse="text" href="../common/samples/cinder.conf"/>
-</programlisting>
+<programlisting><xi:include parse="text" href="../common/samples/cinder.conf"/></programlisting>
 </section>
 </chapter>

@@ -71,7 +71,7 @@
 <para>The compute API class must be changed in the API cell so that requests can be proxied
 via nova-cells down to the correct cell properly. Add the following to
 <filename>nova.conf</filename> in the API
-cell:<programlisting>[DEFAULT]
+cell:<programlisting language="bash">[DEFAULT]
 compute_api_class=nova.compute.cells_api.ComputeCellsAPI
 ...

@@ -83,7 +83,7 @@ name=api</programlisting></para>
 <title>Configuring the child cells</title>
 <para>Add the following to <filename>nova.conf</filename> in the child cells, replacing
 <replaceable>cell1</replaceable> with the name of each
-cell:<programlisting>[DEFAULT]
+cell:<programlisting language="bash">[DEFAULT]
 # Disable quota checking in child cells. Let API cell do it exclusively.
 quota_driver=nova.quota.NoopQuotaDriver

@@ -120,14 +120,14 @@ name=<replaceable>cell1</replaceable></programlisting></para>
 <para>As an example, assume we have an API cell named <literal>api</literal> and a child
 cell named <literal>cell1</literal>. Within the api cell, we have the following RabbitMQ
 server
-info:<programlisting>rabbit_host=10.0.0.10
+info:<programlisting language="bash">rabbit_host=10.0.0.10
 rabbit_port=5672
 rabbit_username=api_user
 rabbit_password=api_passwd
 rabbit_virtual_host=api_vhost</programlisting></para>
 <para>And in the child cell named <literal>cell1</literal> we have the following RabbitMQ
 server
-info:<programlisting>rabbit_host=10.0.1.10
+info:<programlisting language="bash">rabbit_host=10.0.1.10
 rabbit_port=5673
 rabbit_username=cell1_user
 rabbit_password=cell1_passwd

@@ -64,9 +64,7 @@
 and RABBIT PASSWORD represents the password to your message
 queue installation.</para>

-<programlisting>
-<xi:include parse="text" href="../common/samples/nova.conf"/>
-</programlisting>
+<programlisting><xi:include parse="text" href="../common/samples/nova.conf"/></programlisting>

 <para>Create a <literal>nova</literal> group, so you can set
 permissions on the configuration file:</para>
@@ -109,8 +107,7 @@
 Download RC File.</para>

 <para>
-<programlisting language="bash">
-#!/bin/bash
+<programlisting language="bash">#!/bin/bash
 # *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. We
 # will use the 1.1 *compute api*
 export OS_AUTH_URL=http://50.56.12.206:5000/v2.0
@@ -120,15 +117,13 @@ export OS_PASSWORD=$OS_PASSWORD_INPUT
 export OS_AUTH_USER=norm
 export OS_AUTH_KEY=$OS_PASSWORD_INPUT
 export OS_AUTH_TENANT=27755fd279ce43f9b17ad2d65d45b75c
-export OS_AUTH_STRATEGY=keystone
-</programlisting>
+export OS_AUTH_STRATEGY=keystone</programlisting>
 </para>
 <para>You also may want to enable EC2 access for the euca2ools.
 Here is an example <filename>ec2rc</filename> file for
 enabling EC2 access with the required credentials.</para>
 <para>
-<programlisting language="bash">
-export NOVA_KEY_DIR=/root/creds/
+<programlisting language="bash">export NOVA_KEY_DIR=/root/creds/
 export EC2_ACCESS_KEY="EC2KEY:USER"
 export EC2_SECRET_KEY="SECRET_KEY"
 export EC2_URL="http://$NOVA-API-IP:8773/services/Cloud"
@@ -143,8 +138,7 @@ alias ec2-upload-bundle="ec2-upload-bundle -a ${EC2_ACCESS_KEY} -s ${EC2_SECRET_
 </para>
 <para>Lastly, here is an example openrc file that works with
 nova client and ec2 tools.</para>
-<programlisting language="bash">
-export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
+<programlisting language="bash">export OS_PASSWORD=${ADMIN_PASSWORD:-secrete}
 export OS_AUTH_URL=${OS_AUTH_URL:-http://$SERVICE_HOST:5000/v2.0}
 export NOVA_VERSION=${NOVA_VERSION:-1.1}
 export OS_REGION_NAME=${OS_REGION_NAME:-RegionOne}
@@ -282,8 +276,7 @@ source ~/.bashrc</userinput></screen>
 edit <filename>/etc/network/interfaces</filename> with the
 following template, updated with your IP information.</para>

-<programlisting language="bash">
-# The loopback network interface
+<programlisting language="bash"># The loopback network interface
 auto lo
 iface lo inet loopback

@@ -300,8 +293,7 @@ iface br100 inet static
 broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
 gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
 # dns-* options are implemented by the resolvconf package, if installed
-dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable>
-</programlisting>
+dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>

 <para>Restart networking:</para>

@@ -464,9 +456,7 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
 <para>This example <filename>nova.conf</filename> file is from
 an internal Rackspace test system used for
 demonstrations.</para>
-<programlisting>
-<xi:include parse="text" href="../common/samples/nova.conf"/>
-</programlisting>
+<programlisting language="bash"><xi:include parse="text" href="../common/samples/nova.conf"/></programlisting>
 <figure xml:id="Nova_conf_KVM_Flat">
 <title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
 API</title>
@@ -488,8 +478,7 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
 <para>This example <filename>nova.conf</filename> file is from
 an internal Rackspace test system.</para>

-<programlisting>
-verbose
+<programlisting language="bash">verbose
 nodaemon
 sql_connection=mysql://root:<password>@127.0.0.1/nova
 network_manager=nova.network.manager.FlatManager
@@ -509,8 +498,7 @@ ipv6_backend=account_identifier
 ca_path=./nova/CA

 # Add the following to your conf file if you're running on Ubuntu Maverick
-xenapi_remap_vbd_dev=true
-</programlisting>
+xenapi_remap_vbd_dev=true</programlisting>

 <figure xml:id="Nova_conf_XEN_Flat">
 <title>KVM, Flat, MySQL, and Glance, OpenStack or EC2
@@ -753,16 +741,14 @@ xenapi_remap_vbd_dev=true
 filter is removed from the pipeline, limiting will be
 disabled. There should also be a definition for the rate limit
 filter. The lines will appear as follows:</para>
-<programlisting language="bash">
-[pipeline:openstack_compute_api_v2]
+<programlisting language="bash">[pipeline:openstack_compute_api_v2]
 pipeline = faultwrap authtoken keystonecontext ratelimit osapi_compute_app_v2

 [pipeline:openstack_volume_api_v1]
 pipeline = faultwrap authtoken keystonecontext ratelimit osapi_volume_app_v1

 [filter:ratelimit]
-paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
-</programlisting>
+paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory</programlisting>

 <para>To modify the limits, add a '<literal>limits</literal>'
 specification to the <literal>[filter:ratelimit]</literal>
@@ -770,11 +756,9 @@ paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.
 HTTP method, friendly URI, regex, limit, and interval. The
 following example specifies the default rate limiting
 values:</para>
-<programlisting language="bash">
-[filter:ratelimit]
+<programlisting language="bash">[filter:ratelimit]
 paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
-limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)
-</programlisting>
+limits =(POST, "*", .*, 10, MINUTE);(POST, "*/servers", ^/servers, 50, DAY);(PUT, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE);(DELETE, "*", .*, 100, MINUTE)</programlisting>
 </simplesect>
 </section>
 <!--status: good, right place-->

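As an aside on the `limits` value shown in this hunk: each semicolon-separated entry is a tuple of HTTP method, friendly URI, regex, limit, and interval. A minimal illustrative parser (not nova's actual implementation, just a sketch of the format) looks like:

```python
def parse_limits(spec):
    """Parse a ratelimit 'limits' value such as
    (POST, "*", .*, 10, MINUTE);(GET, "*changes-since*", .*changes-since.*, 3, MINUTE)
    into (verb, friendly_uri, regex, limit, interval) tuples.
    Assumes no entry contains embedded commas beyond the five fields."""
    entries = []
    for chunk in spec.split(";"):
        chunk = chunk.strip()
        if not chunk:
            continue
        # Drop the surrounding parentheses, then split the five fields.
        inner = chunk[1:-1] if chunk.startswith("(") and chunk.endswith(")") else chunk
        verb, uri, regex, limit, interval = [p.strip() for p in inner.split(",")]
        entries.append((verb, uri.strip('"'), regex, int(limit), interval))
    return entries
```

Parsing the default value above yields five entries, e.g. `("POST", "*", ".*", 10, "MINUTE")` for the first.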
@@ -14,14 +14,12 @@
 variety of options.</para>
 <para>Compute is configured with the following default scheduler
 options:</para>
-<programlisting>
-scheduler_driver=nova.scheduler.multi.MultiScheduler
+<programlisting language="bash">scheduler_driver=nova.scheduler.multi.MultiScheduler
 compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
 scheduler_available_filters=nova.scheduler.filters.all_filters
 scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
 least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
-compute_fill_first_cost_fn_weight=-1.0
-</programlisting>
+compute_fill_first_cost_fn_weight=-1.0</programlisting>
 <para>Compute is configured by default to use the Multi
 Scheduler, which allows the admin to specify different
 scheduling behavior for compute requests versus volume
@@ -88,28 +86,22 @@ compute_fill_first_cost_fn_weight=-1.0
 that will be used by the scheduler. The default setting
 specifies all of the filter that are included with the
 Compute service:
-<programlisting>
-scheduler_available_filters=nova.scheduler.filters.all_filters
-</programlisting>
+<programlisting>scheduler_available_filters=nova.scheduler.filters.all_filters</programlisting>
 This configuration option can be specified multiple times.
 For example, if you implemented your own custom filter in
 Python called <literal>myfilter.MyFilter</literal> and you
 wanted to use both the built-in filters and your custom
 filter, your <filename>nova.conf</filename> file would
 contain:
-<programlisting>
-scheduler_available_filters=nova.scheduler.filters.all_filters
-scheduler_available_filters=myfilter.MyFilter
-</programlisting>
+<programlisting>scheduler_available_filters=nova.scheduler.filters.all_filters
+scheduler_available_filters=myfilter.MyFilter</programlisting>
 </para>
 <para>The <literal>scheduler_default_filters</literal>
 configuration option in <filename>nova.conf</filename>
 defines the list of filters that will be applied by the
 <systemitem class="service">nova-scheduler</systemitem> service. As
 mentioned above, the default filters are:
-<programlisting>
-scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
-</programlisting>
+<programlisting>scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter</programlisting>
 </para>
 <para>The available filters are described below.</para>

@@ -189,16 +181,12 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
 Configuration option in
 <filename>nova.conf</filename>. The default setting
 is:
-<programlisting>
-cpu_allocation_ratio=16.0
-</programlisting>
+<programlisting>cpu_allocation_ratio=16.0</programlisting>
 With this setting, if there are 8 vCPUs on a node, the
 scheduler will allow instances up to 128 vCPU to be
 run on that node.</para>
 <para>To disallow vCPU overcommitment set:</para>
-<programlisting>
-cpu_allocation_ratio=1.0
-</programlisting>
+<programlisting>cpu_allocation_ratio=1.0</programlisting>
 </section>

 <section xml:id="differenthostfilter">
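The overcommit arithmetic in this hunk's context (8 physical vCPUs × ratio 16.0 allowing 128 virtual CPUs) can be sketched as a one-line helper; this is a hypothetical illustration of the check's semantics, not the filter's actual code:

```python
def max_schedulable_vcpus(physical_vcpus, cpu_allocation_ratio):
    # Virtual CPUs on a node may exceed physical CPUs by the
    # configured overcommit ratio; 1.0 disallows overcommitment.
    return int(physical_vcpus * cpu_allocation_ratio)
```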
@@ -217,8 +205,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
 With the API, use the
 <literal>os:scheduler_hints</literal> key. For
 example:
-<programlisting language="json">
-{
+<programlisting language="json"> {
 'server': {
 'name': 'server-1',
 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -228,8 +215,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
 'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
 '8c19174f-4220-44f0-824a-cd1eeef10287'],
 }
-}
-</programlisting>
+}</programlisting>
 </para>
 </section>
 <section xml:id="diskfilter">
@@ -278,7 +264,7 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
 require a host that runs an ARM-based processor and
 QEMU as the hypervisor. An image can be decorated with
 these properties using
-<programlisting>glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu</programlisting>
+<screen><prompt>$</prompt> <userinput>glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu</userinput></screen>
 </para>
 <para>The image properties that the filter checks for
 are:</para>
@@ -320,10 +306,8 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
 using the <literal>isolated_hosts</literal> and
 <literal>isolated_images</literal> configuration
 options. For example:
-<programlisting>
-isolated_hosts=server1,server2
-isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
-</programlisting>
+<programlisting>isolated_hosts=server1,server2
+isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09</programlisting>
 </para>
 </section>

@@ -382,8 +366,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
 1 --hint query='[">=","$free_ram_mb",1024]' server1</userinput></screen>
 With the API, use the
 <literal>os:scheduler_hints</literal> key:
-<programlisting>
-{
+<programlisting language="json"> {
 'server': {
 'name': 'server-1',
 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -392,8 +375,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
 'os:scheduler_hints': {
 'query': '[">=","$free_ram_mb",1024]',
 }
-}
-</programlisting></para>
+}</programlisting></para>
 </section>

 <section xml:id="ramfilter">
@@ -409,9 +391,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
 configuration option in
 <filename>nova.conf</filename>. The default setting
 is:
-<programlisting>
-ram_allocation_ratio=1.5
-</programlisting>
+<programlisting>ram_allocation_ratio=1.5</programlisting>
 With this setting, if there is 1GB of free RAM, the
 scheduler will allow instances up to size 1.5GB to be
 run on that instance.</para>
@@ -446,8 +426,7 @@ ram_allocation_ratio=1.5
 <prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1</userinput></screen>
 With the API, use the
 <literal>os:scheduler_hints</literal> key:
-<programlisting>
-{
+<programlisting language="json"> {
 'server': {
 'name': 'server-1',
 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -457,8 +436,7 @@ ram_allocation_ratio=1.5
 'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1',
 '8c19174f-4220-44f0-824a-cd1eeef10287'],
 }
-}
-</programlisting>
+}</programlisting>
 </para>
 </section>

@@ -493,8 +471,7 @@ ram_allocation_ratio=1.5
 </screen>
 <para>With the API, use the
 <literal>os:scheduler_hints</literal> key:</para>
-<programlisting language="json">
-{
+<programlisting language="json"> {
 'server': {
 'name': 'server-1',
 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
@@ -504,8 +481,7 @@ ram_allocation_ratio=1.5
 'build_near_host_ip': '192.168.1.1',
 'cidr': '24'
 }
-}
-</programlisting>
+}</programlisting>
 </section>

 </section>
@@ -535,18 +511,14 @@ ram_allocation_ratio=1.5
 <literal>_weight</literal> string appended. Here is an
 example of specifying a cost function and its
 corresponding weight:</para>
-<programlisting>
-least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
-compute_fill_first_cost_fn_weight=-1.0
-</programlisting>
+<programlisting>least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
+compute_fill_first_cost_fn_weight=-1.0</programlisting>
 <para>Multiple cost functions can be specified in the
 <literal>least_cost_functions</literal> configuration
 option, separated by commas. For example:</para>
-<programlisting>
-least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn,nova.scheduler.least_cost.noop_cost_fn
+<programlisting>least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn,nova.scheduler.least_cost.noop_cost_fn
 compute_fill_first_cost_fn_weight=-1.0
-noop_cost_fn_weight=1.0
-</programlisting>
+noop_cost_fn_weight=1.0</programlisting>
 <para>If there are multiple cost functions, then the weighted
 cost scores are added together. The scheduler selects the
 host that has the minimum weighted cost.</para>
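The weighted least-cost selection this hunk documents (sum the weighted cost scores per host, pick the minimum; a fill-first cost function weighted -1.0 gives spread-first behavior) can be sketched as follows. This is an illustrative model of the semantics, not the scheduler's real code; the metric names are hypothetical:

```python
def pick_host(hosts, cost_fns):
    """hosts: dict of host name -> metrics dict.
    cost_fns: list of (cost_fn, weight) pairs.
    Sum the weighted costs for each host and return the cheapest."""
    def total_cost(name):
        return sum(weight * fn(hosts[name]) for fn, weight in cost_fns)
    return min(hosts, key=total_cost)

# Analogue of compute_fill_first_cost_fn: cost equals free RAM, so a
# weight of -1.0 makes RAM-rich hosts cheapest ("spread-first"), while
# +1.0 makes RAM-poor hosts cheapest ("fill-first").
fill_first = lambda metrics: metrics["free_ram_mb"]
```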
@@ -557,17 +529,13 @@ noop_cost_fn_weight=1.0
 memory (RAM) available on the node. Because the
 scheduler minimizes cost, if this cost function is
 used as a weight of +1, by doing:
-<programlisting>
-compute_fill_first_cost_fn_weight=1.0
-</programlisting>
+<programlisting>compute_fill_first_cost_fn_weight=1.0</programlisting>
 then the scheduler will tend to "fill up" hosts,
 scheduling virtual machine instances to the same host
 until there is no longer sufficient RAM to service the
 request, and then moving to the next node</para>
 <para>If the user specifies a weight of -1 by doing:
-<programlisting>
-compute_fill_first_cost_fn_weight=-1.0
-</programlisting>
+<programlisting>compute_fill_first_cost_fn_weight=-1.0</programlisting>
 then the scheduler will favor hosts that have the most
 amount of available RAM, leading to a "spread-first"
 behavior.</para>

@@ -7,38 +7,38 @@
 <para>This chapter provides sample configuration files for all the related Object Storage services provided through the swift project.</para>
 <section xml:id="account-server-conf">
 <title>Sample Account Server configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/account-server.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/account-server.conf-sample"/></programlisting></section>
 <section xml:id="container-server-conf">
 <title>Sample Container Server configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/container-server.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/container-server.conf-sample"/></programlisting></section>
 <section xml:id="dispersion-conf">
 <title>Sample Dispersion Server configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/dispersion.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/dispersion.conf-sample"/></programlisting></section>
 <section xml:id="drive-audit-conf">
 <title>Sample Drive Audit configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/drive-audit.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/drive-audit.conf-sample"/></programlisting></section>
 <section xml:id="memcache-conf">
 <title>Sample Memcache configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/memcache.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/memcache.conf-sample"/></programlisting></section>
 <section xml:id="mime-types-conf">
 <title>Sample Mime.Types configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/mime.types-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/mime.types-sample"/></programlisting></section>
 <section xml:id="object-expirer-conf">
 <title>Sample Object Expirer configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/object-expirer.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/object-expirer.conf-sample"/></programlisting></section>
 <section xml:id="object-server-conf">
 <title>Sample Object Server configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/object-server.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/object-server.conf-sample"/></programlisting></section>
 <section xml:id="proxy-server-conf">
 <title>Sample Proxy Server configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/proxy-server.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/proxy-server.conf-sample"/></programlisting></section>
 <section xml:id="rsyncd-conf">
 <title>Sample Rsyncd configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/rsyncd.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/rsyncd.conf-sample"/></programlisting></section>
 <section xml:id="swift-bench-conf">
 <title>Sample Swift bench configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/swift-bench.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/swift-bench.conf-sample"/></programlisting></section>
 <section xml:id="swift-conf">
 <title>Sample Swift configuration file</title>
-<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/swift.conf-sample"/></programlisting></section>
+<programlisting language="bash"><xi:include parse="text" href="https://raw.github.com/openstack/swift/master/etc/swift.conf-sample"/></programlisting></section>
 </chapter>

@@ -9,16 +9,13 @@
 version="5.0">
 <?dbhtml stop-chunking?>
 <title>Configuring Migrations</title>
-
 <note>
 <para>This feature is for cloud administrators only.
 </para>
 </note>
-
 <para>Migration allows an administrator to move a virtual machine instance from one compute host
 to another. This feature is useful when a compute host requires maintenance. Migration can also
 be useful to redistribute the load when many VM instances are running on a specific physical machine.</para>
-
 <para>There are two types of migration:
 <itemizedlist>
 <listitem>
@@ -34,7 +31,6 @@
 </listitem>
 </itemizedlist>
 </para>
-
 <para>There are three types of <emphasis role="bold">live migration</emphasis>:
 <itemizedlist>
 <listitem>
@@ -48,13 +44,11 @@
 </listitem>
 </itemizedlist>
 </para>
-
 <para>The following sections describe how to configure your hosts and compute nodes
 for migrations using the KVM and XenServer hypervisors.
 </para>
 <section xml:id="configuring-migrations-kvm-libvirt">
 <title>KVM-Libvirt</title>
-
 <para><emphasis role="bold">Prerequisites</emphasis>
 <itemizedlist>
 <listitem>
@@ -90,7 +84,6 @@
 not work correctly.</para>
 </note>
 </para>
-
 <para><emphasis role="bold">Example Nova Installation Environment</emphasis> <itemizedlist>
 <listitem>
 <para>Prepare 3 servers at least; for example, <literal>HostA</literal>, <literal>HostB</literal>
@@ -179,79 +172,53 @@
 <para>Check that "<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>"
 directory can be seen at HostA</para>

-<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput></screen>
-
-
-<programlisting language="bash">
-drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/
-</programlisting>
+<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
+<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/</computeroutput></screen>

 <para>Perform the same check at HostB and HostC - paying special
 attention to the permissions (nova should be able to write)</para>

-<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput></screen>
-
-<programlisting language="bash">
-drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/
-</programlisting>
-<screen><prompt>$</prompt> <userinput>df -k</userinput></screen>
-
-<programlisting language="bash">
-Filesystem 1K-blocks Used Available Use% Mounted on
+<screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
+<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/</computeroutput></screen>
+<screen><prompt>$</prompt> <userinput>df -k</userinput>
+<computeroutput>Filesystem 1K-blocks Used Available Use% Mounted on
 /dev/sda1 921514972 4180880 870523828 1% /
 none 16498340 1228 16497112 1% /dev
 none 16502856 0 16502856 0% /dev/shm
 none 16502856 368 16502488 1% /var/run
 none 16502856 0 16502856 0% /var/lock
 none 16502856 0 16502856 0% /lib/init/rw
-HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( <--- this line is important.)
-</programlisting>
+HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances ( <--- this line is important.)</computeroutput></screen>
 </listitem>

 <listitem>
 <para>Update the libvirt configurations. Modify
 <filename>/etc/libvirt/libvirtd.conf</filename>:</para>

-<programlisting>
-before : #listen_tls = 0
+<programlisting language="bash">before : #listen_tls = 0
 after : listen_tls = 0

 before : #listen_tcp = 1
 after : listen_tcp = 1

-add: auth_tcp = "none"
-</programlisting>
+add: auth_tcp = "none"</programlisting>

 <para>Modify <filename>/etc/libvirt/qemu.conf</filename></para>

-<programlisting>
-before : #dynamic_ownership = 1
-after : dynamic_ownership = 0
-</programlisting>
+<programlisting language="bash">before : #dynamic_ownership = 1
+after : dynamic_ownership = 0</programlisting>

 <para>Modify <filename>/etc/init/libvirt-bin.conf</filename></para>

-<programlisting>
-before : exec /usr/sbin/libvirtd -d
-after : exec /usr/sbin/libvirtd -d -l
-</programlisting>
+<programlisting language="bash">before : exec /usr/sbin/libvirtd -d
+after : exec /usr/sbin/libvirtd -d -l</programlisting>

 <para>Modify <filename>/etc/default/libvirt-bin</filename></para>

-<programlisting>
-before :libvirtd_opts=" -d"
-after :libvirtd_opts=" -d -l"
-</programlisting>
+<programlisting language="bash">before :libvirtd_opts=" -d"
+after :libvirtd_opts=" -d -l"</programlisting>
 <para>Restart libvirt. After executing the command, ensure
 that libvirt is successfully restarted.</para>

 <screen><prompt>$</prompt> <userinput>stop libvirt-bin && start libvirt-bin</userinput>
-<prompt>$</prompt> <userinput>ps -ef | grep libvirt</userinput></screen>
-
-<programlisting language="bash">
-
-root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
-</programlisting>
+<prompt>$</prompt> <userinput>ps -ef | grep libvirt</userinput>
+<computeroutput>root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l</computeroutput></screen>
 </listitem>
 <listitem>
 <para>Configure your firewall to allow libvirt to communicate between nodes.</para>
@@ -334,13 +301,13 @@ root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l
 <listitem>
 <para>
 Create a host aggregate
-<programlisting>$ nova aggregate-create <name-for-pool> <availability-zone></programlisting>
+<screen><prompt>$</prompt> <userinput>nova aggregate-create <name-for-pool> <availability-zone></userinput></screen>
 The command will display a table which contains the id of the newly created aggregate.
 Now add special metadata to the aggregate, to mark it as a hypervisor pool
-<programlisting>$ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
-$ nova aggregate-set-metadata <aggregate-id> operational_state=created</programlisting>
+<screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true</userinput></screen>
+<screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata <aggregate-id> operational_state=created</userinput></screen>
 Make the first compute node part of that aggregate
-<programlisting>$ nova aggregate-add-host <aggregate-id> <name-of-master-compute></programlisting>
+<screen><prompt>$</prompt> <userinput>nova aggregate-add-host <aggregate-id> <name-of-master-compute></userinput></screen>
 At this point, the host is part of a XenServer pool.
 </para>
 </listitem>
@@ -348,7 +315,7 @@ $ nova aggregate-set-metadata <aggregate-id> operational_state=created</pr
 <listitem>
 <para>
 Add additional hosts to the pool:
-<programlisting>$ nova aggregate-add-host <aggregate-id> <compute-host-name></programlisting>
+<screen><prompt>$</prompt> <userinput>nova aggregate-add-host <aggregate-id> <compute-host-name></userinput></screen>
 <note>
 <para>At this point the added compute node and the host will be shut down, in order to
 join the host to the XenServer pool. The operation will fail, if any server other than the
@@ -47,13 +47,9 @@ Generally a large timeout is required for Windows instances, bug you may want to
 <title>Firewall</title>
 <para>
 If using nova-network, IPTables is supported:
-<programlisting>
-firewall_driver=nova.virt.firewall.IptablesFirewallDriver
-</programlisting>
+<programlisting>firewall_driver=nova.virt.firewall.IptablesFirewallDriver</programlisting>
 Alternately, doing the isolation in Dom0:
-<programlisting>
-firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver
-</programlisting>
+<programlisting>firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver</programlisting>
 </para></section>
 <section xml:id="xen-vnc">
 <title>VNC Proxy Address</title>
@@ -67,14 +63,10 @@ and XenServer is on the address: 169.254.0.1, you can use the following:
 <para>
 You can specify which Storage Repository to use with nova by looking at the
 following flag. The default is to use the local-storage setup by the default installer:
-<programlisting>
-sr_matching_filter="other-config:i18n-key=local-storage"
-</programlisting>
+<programlisting>sr_matching_filter="other-config:i18n-key=local-storage"</programlisting>
 Another good alternative is to use the "default" storage (for example
 if you have attached NFS or any other shared storage):
-<programlisting>
-sr_matching_filter="default-sr:true"
-</programlisting>
+<programlisting>sr_matching_filter="default-sr:true"</programlisting>
 <note><para>To use a XenServer pool, you must create the pool
 by using the Host Aggregates feature.</para></note>
 </para></section>
@@ -80,12 +80,10 @@
 </para>
 <para>
 The relevent configuration snippet in /etc/nova/nova.conf on every node is:
-<programlisting>
-servicegroup_driver="zk"
+<programlisting>servicegroup_driver="zk"

 [zookeeper]
-address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"
-</programlisting>
+address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"</programlisting>
 </para>
 <xi:include href="../common/tables/nova-zookeeper.xml"/>
 </section>