Add a section "Create images from running instances"

Update Formatting
Update swift doc
	Fix bug 840016
	Fix bug 891338
	Fix bug 891335
	Fix bug 891337
	Fix bug 882759
	Fix bug 791584

Change-Id: Id94348fa05eb83d3057335226dedb0b87ab816e4
razique 2011-12-04 17:58:09 +01:00
parent 8a022d74b7
commit c845fda972
13 changed files with 5687 additions and 80 deletions


@ -397,7 +397,106 @@ sudo kvm -m 1024 -cdrom win2k8_dvd.iso -drive file=windowsserver.img,if=virtio,b
<literallayout class="monospaced">
nova-manage image image_register windowsserver.img --public=T --arch=x86
</literallayout>
</section>
</section>
<section xml:id="creating-images-from-running-instances">
<title>Creating images from running instances with KVM and Xen</title>
<para>
It is possible to create an image from a running instance on KVM and Xen. This is a convenient way to spawn pre-configured instances, update them according to your needs, and re-image the instances.
The process to create an image from a running instance is quite simple:
<itemizedlist>
<listitem>
<para>
<emphasis role="bold">Pre-requisites</emphasis>
</para>
<para> In order to use the feature properly, you will need qemu-img version 0.14 or
later. The imaging feature copies the image file from a snapshot
(e.g. qemu-img convert -f qcow2 -O qcow2 -s $snapshot_name
$instance-disk).</para>
<para>On Debian-like distros, you can check the version by running:
<literallayout class="monospaced">dpkg -l | grep qemu</literallayout></para>
<programlisting>
ii qemu 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 dummy transitional pacakge from qemu to qemu
ii qemu-common 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 qemu common functionality (bios, documentati
ii qemu-kvm 0.14.0~rc1+noroms-0ubuntu4~ppalucid1 Full virtualization on i386 and amd64 hardwa
</programlisting>
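<para>For illustration, the snapshot-based copy that the imaging feature performs looks like the following; the snapshot name and file paths below are hypothetical, and nova runs this step for you, so there is normally no need to run it by hand:</para>
<literallayout class="monospaced"># copy the contents of snapshot "snap1" out of the instance disk into a standalone qcow2 image (requires qemu-img 0.14 or later)
qemu-img convert -f qcow2 -O qcow2 -s snap1 /var/lib/nova/instances/instance-00000074/disk /tmp/image-116.qcow2</literallayout>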
</listitem>
<listitem>
<para>
<emphasis role="bold">Write data to disk</emphasis></para>
<para>
Before creating the image, we need to make sure we are not missing any
buffered content that would not yet have been written to the instance's disk. To
do so, connect to the instance, run
<command>sync</command>, then exit.
</para>
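<para>For example, assuming you can reach the instance over SSH (the user name and the IP address below are placeholders, taken from the listing in the next step):</para>
<literallayout class="monospaced">ssh ubuntu@20.10.0.14
# flush any buffered writes to the instance's disk, then leave the instance
sync
exit</literallayout>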
</listitem>
<listitem>
<para>
<emphasis role="bold">Create the image</emphasis>
</para>
<para> In order to create the image, we first need to obtain the server ID:
<literallayout class="monospaced">nova list</literallayout><programlisting>
+-----+------------+--------+--------------------+
| ID | Name | Status | Networks |
+-----+------------+--------+--------------------+
| 116 | Server 116 | ACTIVE | private=20.10.0.14 |
+-----+------------+--------+--------------------+
</programlisting>
Based on the output, we run:
<literallayout class="monospaced">nova image-create 116 Image-116</literallayout>
The command then performs the image creation (by creating a qemu snapshot) and automatically uploads the image to your repository.
<note>
<para>
The image that is created will be flagged as "Private" (for Glance: is_public=False). Thus, the image will be available only to the tenant.
</para>
</note>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Check image status</emphasis>
</para>
<para> After a while the image will turn from a "SAVING" state to an "ACTIVE"
one. You can check the progress by running:
<literallayout class="monospaced">nova image-list</literallayout><programlisting>
+----+---------------------------------------------+--------+
| ID | Name | Status |
+----+---------------------------------------------+--------+
| 20 | Image-116 | ACTIVE |
| 6 | ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz | ACTIVE |
| 7 | ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd | ACTIVE |
| 8 | ttylinux-uec-amd64-12.1_2.6.35-22_1.img | ACTIVE |
+----+---------------------------------------------+--------+
</programlisting>
</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Create an instance from the image</emphasis>
</para>
<para>You can now create an instance based on this image as you normally do for other images:<literallayout class="monospaced">nova boot --flavor 1 --image 20 New_server</literallayout>
</para>
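<para>As with any other instance, you can then watch the new server go from the "BUILD" to the "ACTIVE" state with:</para>
<literallayout class="monospaced">nova list</literallayout>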
</listitem>
<listitem>
<para>
<emphasis role="bold">
Troubleshooting
</emphasis>
</para>
<para> Normally, it should not take more than 5 minutes for the image to go from the
"SAVING" to the "ACTIVE" state. If it takes longer than that, here are several hints: </para>
<para>- The feature does not work while a volume is attached (via
nova-volume) to the instance. You should detach the volume first,
create the image, and then re-attach the volume.</para>
<para>- Make sure the version of qemu you are using is not older than
0.14. Older versions raise an "unknown option -s" error in nova-compute.log.</para>
<para>- Look into nova-api.log and nova-compute.log for extra
information.</para>
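<para>For example, the installed qemu version and the relevant logs can be checked quickly as follows; the log locations assume a default packaged installation under /var/log/nova:</para>
<literallayout class="monospaced"># confirm the qemu packages are at least version 0.14
dpkg -l | grep qemu
# look for snapshot-related errors raised during image creation
grep -i "unknown option" /var/log/nova/nova-compute.log
tail -n 50 /var/log/nova/nova-api.log</literallayout>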
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="understanding-the-compute-service-architecture">
<title>Understanding the Compute Service Architecture</title>


@ -137,6 +137,14 @@
--ldap_netadmin=cn=netadmins,$nova_ldap_base_dn
--ldap_developer=cn=developers,$nova_ldap_base_dn
</programlisting>
<figure xml:id="Nova_conf_KVM_LDAP">
<title>KVM, FlatDHCP, MySQL, Glance, LDAP, and optionally sheepdog</title>
<mediaobject>
<imageobject role="html">
<imagedata scale="60" fileref="figures/SCH_5003_V00_NUAC-Network_mode_KVM_LDAP_OpenStack.png"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
<simplesect><title>KVM, Flat, MySQL, and Glance, OpenStack or EC2 API</title><para>This example nova.conf file is from an internal Rackspace test system used for demonstrations. </para>
@ -161,6 +169,14 @@
# first 3 octets of the network your volume service is on, substitute with real numbers
--iscsi_ip_prefix=nnn.nnn.nnn
</programlisting>
<figure xml:id="Nova_conf_KVM_Flat">
<title>KVM, Flat, MySQL, and Glance, OpenStack or EC2 API</title>
<mediaobject>
<imageobject role="html">
<imagedata scale="60" fileref="figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
<simplesect><title>XenServer 5.6, Flat networking, MySQL, and Glance, OpenStack API</title><para>This example nova.conf file is from an internal Rackspace test system. </para>
<programlisting>
@ -187,6 +203,14 @@
# Add the following to your flagfile if you're running on Ubuntu Maverick
--xenapi_remap_vbd_dev=true
</programlisting>
<figure xml:id="Nova_conf_XEN_Flat">
<title>XenServer 5.6, Flat networking, MySQL, and Glance, OpenStack API</title>
<mediaobject>
<imageobject role="html">
<imagedata scale="60" fileref="figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png"/>
</imageobject>
</mediaobject>
</figure>
</simplesect>
</section>
<section xml:id="configuring-logging">


@ -244,34 +244,44 @@ sudo apt-get install glance
<para>First, you can set a few environment variables and feed debconf a "pre-seed" to bypass all
the installation prompts, running this as root: </para>
<para>
<literallayout class="monospaced">bash
<programlisting>
bash
MYSQL_PASS=nova
NOVA_PASS=notnova
cat &lt;&lt;MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password $MYSQL_PASS
mysql-server-5.1 mysql-server/root_password_again password $MYSQL_PASS
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED</literallayout>
MYSQL_PRESEED
</programlisting>
</para>
<para>Next, install MySQL with: <literallayout class="monospaced">sudo apt-get install -y mysql-server</literallayout>
</para>
<para>Edit /etc/mysql/my.cnf to change "bind-address" from localhost
(127.0.0.1) to any (0.0.0.0) and restart the mysql service: </para>
<para>
<literallayout class="monospaced">sudo sed -i 's/127.0.0.1/0.0.0.0/g' /etc/mysql/my.cnf
sudo service mysql restart</literallayout></para>
<para>To configure the MySQL database, create the nova database: </para>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e 'CREATE DATABASE nova;'</literallayout>
<para>Update the DB to give user nova@% full control of the nova
database:</para>
<para>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON *.* TO 'nova'@'%' WITH GRANT OPTION;"</literallayout>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' WITH GRANT OPTION;"</literallayout>
</para>
<para>Set MySQL password for the user "nova@%"</para>
<para>
<literallayout class="monospaced">sudo mysql -u root -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$NOVA_PASS');"</literallayout>
</para>
<para>
For Nova components that require access to this database, the required configuration in /etc/nova/nova.conf should be (replace $NOVA_PASS with the nova user's password set above):
</para>
<literallayout class="monospaced">--sql_connection=mysql://nova:$NOVA_PASS@control.example.com/nova</literallayout>
<para>Finally, populate the nova database:</para>
<para>
<literallayout class="monospaced">nova-manage db sync</literallayout>
</para>
</section>
<section xml:id="setting-up-sql-database-postgresql"><title>Setting Up PostgreSQL as the Database on the Cloud Controller</title>
@ -282,31 +292,35 @@ sudo service mysql restart</literallayout></para>
<para>Edit /etc/postgresql/8.4/main/postgresql.conf and change listen_addresses so that it listens on all appropriate addresses; PostgreSQL listens only on localhost by default. For example:</para>
<para>To listen on a specific IP address:</para>
<literallayout class="monospaced"># - Connection Settings -
listen_addresses = '10.1.1.200,192.168.100.2'</literallayout>
<programlisting>
# - Connection Settings -
listen_addresses = '10.1.1.200,192.168.100.2'
</programlisting>
<para>To listen on all addresses:</para>
<literallayout class="monospaced"># - Connection Settings -
listen_addresses = '*'</literallayout>
<programlisting>
# - Connection Settings -
listen_addresses = '*'
</programlisting>
<para>Add appropriate addresses and networks to /etc/postgresql/8.4/main/pg_hba.conf to allow remote access to PostgreSQL. This should include all servers hosting OpenStack (but not necessarily those hosted by OpenStack). As an example, append the following lines:</para>
<literallayout class="monospaced">host all all 192.168.0.0/16
<programlisting>
host all all 192.168.0.0/16
host all all 10.1.0.0/16
</literallayout>
</programlisting>
<para>Change the default PostgreSQL user's password:</para>
<literallayout class="monospaced">
sudo -u postgres psql template1
template1=#\password
Enter Password:
Enter again:
template1=#\q</literallayout>
template1=#\q
</literallayout>
<para>Restart PostgreSQL:</para>
<literallayout class="monospaced">service postgresql restart</literallayout>
<para>Create nova databases:</para>
<literallayout class="monospaced">sudo -u postgres createdb nova
sudo -u postgres createdb glance</literallayout>
<literallayout class="monospaced">
sudo -u postgres createdb nova
sudo -u postgres createdb glance
</literallayout>
<para>Create the nova database user, which will be used for all OpenStack services. Note that the adduser and createuser steps will prompt for the user's password ($PG_PASS):</para>
<literallayout class="monospaced">
adduser nova
@ -316,17 +330,20 @@ template1=#GRANT ALL PRIVILEGES ON DATABASE nova TO nova
template1=#GRANT ALL PRIVILEGES ON DATABASE glance TO nova
template1=#\q
</literallayout>
<para>For the Cactus version of Nova, the following fix is required for the PostgreSQL database schema. You don't need to do this for Diablo:</para>
<literallayout class="monospaced">
sudo -u postgres psql template1
template1=#alter table instances alter instance_type_id type integer using cast(instance_type_id as integer);
template1=#\q</literallayout>
template1=#\q
</literallayout>
<para>For Nova components that require access to this database, the required configuration in /etc/nova/nova.conf should be (replace $PG_PASS with the password):</para>
<literallayout class="monospaced">--sql_connection=postgresql://nova:$PG_PASS@control.example.com/nova</literallayout>
<para>At this stage the databases are empty and contain no content. These will be initialised when you do the nova-manage db sync command. </para>
</section></section>
<para>Finally, populate the nova and glance databases:</para>
<para>
<literallayout class="monospaced">nova-manage db sync</literallayout>
<literallayout class="monospaced">glance-manage db_sync</literallayout>
</para>
</section></section>
<section xml:id="installing-the-compute-node">
<title>Installing the Compute Node</title>
<para>There are many different ways to perform a multinode install of Compute. In

(Six new image files added for the network-mode figures referenced above; binary contents not shown.)


@ -118,10 +118,12 @@
<para>Install common OpenStack Object Storage software and
pre-requisites:</para>
<para>
<literallayout class="monospaced">apt-get install python-software-properties
<literallayout class="monospaced">
apt-get install python-software-properties
add-apt-repository ppa:swift-core/ppa
apt-get update
apt-get install swift openssh-server</literallayout>
apt-get install swift openssh-server
</literallayout>
</para>
</listitem>
</orderedlist>
@ -132,14 +134,15 @@ apt-get install swift openssh-server</literallayout>
<listitem><para>Create and populate configuration directories on all nodes:</para>
<para><literallayout class="monospaced">mkdir -p /etc/swift
chown -R swift:swift /etc/swift/</literallayout></para>
chown -R swift:swift /etc/swift/
</literallayout></para>
</listitem>
<listitem><para>Create /etc/swift/swift.conf:</para>
<para><literallayout class="monospaced">[swift-hash]
<para><programlisting>
[swift-hash]
# random unique string that can never change, keep it secret and do NOT lose it
swift_hash_path_suffix = changeme
</literallayout></para>
</programlisting></para>
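<para>The suffix only needs to be a long random value that is kept secret; one possible way to generate such a value is:</para>
<literallayout class="monospaced">od -t x8 -N 8 -A n &lt; /dev/random</literallayout>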
</listitem>
</orderedlist>
<note>
@ -159,8 +162,10 @@ swift_hash_path_suffix = changeme
on the proxy server, or on a separate server, but you need to point
to swauth from the proxy-server.conf file in the following
line:</para>
<literallayout>[filter:swauth]
use = egg:swauth#swauth </literallayout>
<programlisting>
[filter:swauth]
use = egg:swauth#swauth
</programlisting>
<para>In the Diablo release, the Keystone project at <link xlink:href="http://github.com/rackspace/keystone">http://github.com/rackspace/keystone</link> should become the auth standard for OpenStack, but swauth may be used as an alternative.</para></section>
<section xml:id="installing-and-configuring-the-proxy-node">
@ -181,7 +186,7 @@ use = egg:swauth#swauth </literallayout>
<para>Create self-signed cert for SSL:</para>
<para>
<literallayout class="monospaced">cd /etc/swift
openssl req -new -x509 -nodes -out cert.crt -keyout cert.key</literallayout>
</para>
</listitem>
<listitem>
@ -202,7 +207,7 @@ to
<listitem>
<para>Create /etc/swift/proxy-server.conf:</para>
<para>
<literallayout class="monospaced">[DEFAULT]
<programlisting>[DEFAULT]
# Enter these next two values if using SSL certifications
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
@ -221,15 +226,20 @@ allow_account_management = true
[filter:swauth]
# the line below points to swauth as a separate project from swift
use = egg:swauth#swauth
# Highly recommended to change this.
super_admin_key = swauthkey
# Local cluster
default_swift_cluster = https://127.0.0.1:8080/v1
[filter:healthcheck]
use = egg:swift#healthcheck
[filter:cache]
use = egg:swift#memcache
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211</literallayout>
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211
</programlisting>
</para>
<note>
<para>If you run multiple memcache servers, put the multiple
@ -242,26 +252,33 @@ memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211</literallayout>
<listitem>
<para>Create the account, container and object rings:</para>
<para>
<literallayout class="monospaced">cd /etc/swift
<literallayout class="monospaced">
cd /etc/swift
swift-ring-builder account.builder create 18 3 1
swift-ring-builder container.builder create 18 3 1
swift-ring-builder object.builder create 18 3 1</literallayout>
swift-ring-builder object.builder create 18 3 1
</literallayout>
</para>
</listitem>
<listitem>
<para>For every storage device on each node add entries to each
ring:</para>
<para>
<literallayout class="monospaced">swift-ring-builder account.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP&gt;:6002/&lt;DEVICE&gt; 100
<literallayout class="monospaced">
swift-ring-builder account.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP&gt;:6002/&lt;DEVICE&gt; 100
swift-ring-builder container.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6001/&lt;DEVICE&gt; 100
swift-ring-builder object.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6000/&lt;DEVICE&gt; 100</literallayout>
swift-ring-builder object.builder add z&lt;ZONE&gt;-&lt;STORAGE_LOCAL_NET_IP_1&gt;:6000/&lt;DEVICE&gt; 100
</literallayout>
</para>
<para>For example, if you were setting up a storage node with a partition of /dev/sdb1 in Zone 1 on IP 10.0.0.1, the DEVICE would be sdb1 and the commands would look like:
<literallayout class="monospaced">swift-ring-builder account.builder add z1-10.0.0.1:6002/sdb1 100
<para>For example, if you were setting up a storage node with a
partition of /dev/sdb1 in Zone 1 on IP 10.0.0.1, the
DEVICE would be sdb1 and the commands would look
like:
<literallayout class="monospaced">
swift-ring-builder account.builder add z1-10.0.0.1:6002/sdb1 100
swift-ring-builder container.builder add z1-10.0.0.1:6001/sdb1 100
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100</literallayout></para>
swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100
</literallayout></para>
<note>
<para>Assuming there are 5 zones with 1 node per zone, ZONE
should start at 1 and increment by one for each
@ -271,20 +288,23 @@ swift-ring-builder object.builder add z1-10.0.0.1:6000/sdb1 100</literallayout><
<listitem>
<para>Verify the ring contents for each ring:</para>
<para>
<literallayout class="monospaced">swift-ring-builder account.builder
<literallayout class="monospaced">
swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder</literallayout>
swift-ring-builder object.builder
</literallayout>
</para>
</listitem>
<listitem>
<para>Rebalance the rings:</para>
<para>
<literallayout class="monospaced">swift-ring-builder account.builder rebalance
<literallayout class="monospaced">
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance</literallayout>
swift-ring-builder object.builder rebalance
</literallayout>
</para>
<note>
<title>Note</title>
<para>Rebalancing rings can take some time.</para>
</note>
</listitem>
@ -310,7 +330,7 @@ swift-ring-builder object.builder rebalance</literallayout>
<section xml:id="installing-and-configuring-storage-nodes">
<title>Installing and Configuring the Storage Nodes</title>
<note><title>Note</title>
<note>
<para>OpenStack Object Storage should work on any modern filesystem that supports
Extended Attributes (XATTRS). We currently recommend XFS as it
demonstrated the best overall performance for the swift use case after
@ -324,15 +344,26 @@ only filesystem that has been thoroughly tested.</para>
</listitem>
<listitem><para>For every device on the node, setup the XFS volume (/dev/sdb is used
as an example):</para>
<para><literallayout class="monospaced">fdisk /dev/sdb (set up a single partition)
<para><literallayout class="monospaced"># We create a partition
fdisk /dev/sdb (set up a single partition)
# We format it
mkfs.xfs -i size=1024 /dev/sdb1
# We add the partition to the /etc/fstab file
echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" &gt;&gt; /etc/fstab
# We mount the partition
mkdir -p /srv/node/sdb1
mount /srv/node/sdb1
chown -R swift:swift /srv/node</literallayout></para>
# We set the rights
chown -R swift:swift /srv/node
</literallayout></para>
</listitem>
<listitem><para>Create /etc/rsyncd.conf:</para>
<para><literallayout class="monospaced">uid = swift
<para><programlisting>
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
@ -354,7 +385,7 @@ lock file = /var/lock/container.lock
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock</literallayout></para>
lock file = /var/lock/object.lock</programlisting></para>
</listitem>
<listitem><para>Edit the following line in /etc/default/rsync:</para>
<para><literallayout class="monospaced">RSYNC_ENABLE = true</literallayout></para>
@ -369,7 +400,8 @@ a local, private network.</para></note>
</listitem>
<listitem><para>Create /etc/swift/account-server.conf:</para>
<para><literallayout class="monospaced">[DEFAULT]
<para><programlisting>
[DEFAULT]
bind_ip = &lt;STORAGE_LOCAL_NET_IP&gt;
workers = 2
@ -383,12 +415,13 @@ use = egg:swift#account
[account-auditor]
[account-reaper]</literallayout></para>
[account-reaper]</programlisting></para>
</listitem>
<listitem><para>Create /etc/swift/container-server.conf:</para>
<para><literallayout class="monospaced">[DEFAULT]
<para><programlisting>
[DEFAULT]
bind_ip = &lt;STORAGE_LOCAL_NET_IP&gt;
workers = 2
@ -402,11 +435,13 @@ use = egg:swift#container
[container-updater]
[container-auditor]</literallayout></para>
[container-auditor]</programlisting></para>
</listitem>
<listitem><para>Create /etc/swift/object-server.conf:</para>
<para><literallayout class="monospaced">[DEFAULT]
<para>
<programlisting>
[DEFAULT]
bind_ip = &lt;STORAGE_LOCAL_NET_IP&gt;
workers = 2
@ -420,12 +455,15 @@ use = egg:swift#object
[object-updater]
[object-auditor]</literallayout></para>
[object-auditor]
</programlisting>
</para>
</listitem>
<listitem><para>Start the storage services:</para>
<para><literallayout class="monospaced">swift-init object-server start
<para><literallayout class="monospaced">
swift-init object-server start
swift-init object-replicator start
swift-init object-updater start
swift-init object-auditor start
@ -435,38 +473,47 @@ swift-init container-updater start
swift-init container-auditor start
swift-init account-server start
swift-init account-replicator start
swift-init account-auditor start</literallayout></para>
swift-init account-auditor start
</literallayout></para>
</listitem>
</orderedlist>
<section xml:id="verify-swift-installation">
<section xml:id="verify-swift-installation">
<title>Create OpenStack Object Storage admin Account and Verify the Installation</title>
<para>You can run these commands from the proxy server if you have installed swauth there. Look for
the default_swift_cluster setting in the proxy-server.conf
and match the URLs (<emphasis role="bold">including http or
https</emphasis>) when issuing swauth
commands.</para>
<orderedlist>
<listitem>
<para>Prepare the system for authorization commands
by telling it the key and the URL for auth. </para><para><literallayout class="monospaced">swauth-prep -K key -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/</literallayout></para>
by telling it the key and the URL for auth. </para>
<note>
<para> The key used with the following command is the super_admin_key you defined in the
proxy-server.conf file.</para>
</note>
<para>
<literallayout class="monospaced">
swauth-prep -K $key -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/
</literallayout>
</para>
</listitem>
<listitem>
<para>Create a user with administrative privileges
(account = system, username = root, password =
testpass). Make sure to replace <code>key</code>
testpass). Make sure to replace $key
in the swauth-add-user command below with whatever
super_admin key you assigned in the
proxy-server.conf file above. None of the values
of account, username, or password are special -
they can be anything.</para>
<para>
<literallayout class="monospaced">swauth-add-user -K key -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/ -a system root testpass</literallayout>
<literallayout class="monospaced">swauth-add-user -K $key -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/ -a system root testpass</literallayout>
</para>
</listitem>
<listitem><para>Get an X-Storage-Url and X-Auth-Token:</para>
<para><literallayout class="monospaced">curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0</literallayout></para>
</listitem>
<listitem><para>Check that you can HEAD the account:</para>
<para><literallayout class="monospaced">curl -k -v -H 'X-Auth-Token: &lt;token-from-x-auth-token-above&gt;' &lt;url-from-x-storage-url-above&gt;</literallayout></para>
@ -496,21 +543,24 @@ swift -A http://&lt;AUTH_HOSTNAME&gt;:8080/auth/v1.0 -U system:root -K testpass
<orderedlist>
<listitem><para>Update the list of memcache servers in /etc/swift/proxy-server.conf for all the added proxy servers. If you run multiple memcache servers, use this pattern for the multiple IP:port listings: <literallayout class="monospaced">10.1.2.3:11211,10.1.2.4:11211</literallayout> in each proxy server&#8217;s conf file:</para>
<para><literallayout class="monospaced">
<para><programlisting>
[filter:cache]
use = egg:swift#memcache
memcache_servers = &lt;PROXY_LOCAL_NET_IP&gt;:11211
</literallayout></para>
</programlisting>
</para>
</listitem>
<listitem><para>Change the default_cluster_url to point to the load balanced url, rather than the first proxy
server you created in
/etc/swift/proxy-server.conf:</para>
<para><literallayout class="monospaced">[app:auth-server]
<para><programlisting>
[app:auth-server]
use = egg:swift#auth
default_cluster_url = https://&lt;LOAD_BALANCER_HOSTNAME&gt;/v1
# Highly recommended to change this key to something else!
super_admin_key = devauth</literallayout></para>
super_admin_key = devauth
</programlisting></para>
</listitem>
<listitem><para>After you change the default_cluster_url setting, you have to delete the auth database and recreate the OpenStack Object Storage users, or manually update the auth database with the correct URL for each account.</para>