Grizzly release prep - old releases, bugs deleted

This patch is the result of searching the documentation for
references to old releases, fixed bugs, and the like. It
removes and updates information that would be obsolete when
these docs are released for Grizzly.

patch2 removes references to deprecated versions of Ubuntu (11.10 being the most recent)

Rebased after merging in changes for Grizzly to the install guide.

patch4 addresses Anne's comments

Rebased again, ready to merge.

Change-Id: Id57808802eed66145aa683240000f4f3e90706d0
This commit is contained in:
Tom Fifield 2013-03-29 22:09:28 +08:00 committed by annegentle
parent 25afe181e9
commit 0e58372b2e
54 changed files with 322 additions and 211 deletions

View File

@ -11,7 +11,7 @@
<para>Time zone : <emphasis role="bold">UTC</emphasis></para>
</listitem>
<listitem>
-<para>Hostname : <emphasis role="bold">folsom-compute</emphasis></para>
+<para>Hostname : <emphasis role="bold">grizzly-compute</emphasis></para>
</listitem>
<listitem>
<para>Packages : <emphasis role="bold">OpenSSH-Server</emphasis></para>
@ -20,10 +20,10 @@
<para>After OS Installation, reboot the server .</para>
</listitem>
<listitem>
-<para>Since Ubuntu 12.04 LTS has OpenStack Essex by default, we are going to use
-Cloud Archives for Folsom :<screen>apt-get install ubuntu-cloud-keyring</screen>Edit
+<para>Since the default OpenStack version in Ubuntu 12.04 LTS is older, we are going to use
+Cloud Archives for Grizzly :<screen>apt-get install ubuntu-cloud-keyring</screen>Edit
<emphasis role="bold">/etc/apt/sources.list.d/cloud-archive.list</emphasis>
-:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main </screen>Upgrade
+:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main </screen>Upgrade
the system (and reboot if you need)
:<screen>apt-get update &amp;&amp; apt-get upgrade</screen></para>
</listitem>
@ -64,9 +64,9 @@ net.ipv4.conf.default.rp_filter = 0 </programlisting>
<itemizedlist>
<listitem>
<para>Edit the <emphasis role="bold">/etc/hosts</emphasis> file and
-add <emphasis role="bold">folsom-controller</emphasis>, <emphasis
-role="bold">folsom-network</emphasis> and <emphasis role="bold"
->folsom-compute</emphasis> hostnames with correct IP.</para>
+add <emphasis role="bold">grizzly-controller</emphasis>, <emphasis
+role="bold">grizzly-network</emphasis> and <emphasis role="bold"
+>grizzly-compute</emphasis> hostnames with correct IP.</para>
</listitem>
</itemizedlist>
</para>
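For reference, the Cloud Archive steps this hunk updates can be sketched as a small script. This is a hedged illustration, not part of the patch: it writes the repository entry into a scratch directory (a stand-in for /etc/apt/sources.list.d) so it can be dry-run without root, with the privileged keyring and upgrade steps left as comments.

```shell
# Sketch of the Ubuntu Cloud Archive setup described above. APT_DIR is a
# scratch stand-in for /etc/apt/sources.list.d so this runs without root.
APT_DIR=$(mktemp -d)
RELEASE=grizzly   # cloud archive pocket to enable

# apt-get install ubuntu-cloud-keyring   # real node only: installs the signing key
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/${RELEASE} main" \
    > "${APT_DIR}/cloud-archive.list"

# confirm the pocket is configured
grep "precise-updates/${RELEASE}" "${APT_DIR}/cloud-archive.list"
# apt-get update && apt-get upgrade      # real node only
```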

View File

@ -51,7 +51,6 @@ ec2_url=http://192.168.0.1:8773/services/Cloud
keystone_ec2_url=http://192.168.0.1:5000/v2.0/ec2tokens
api_paste_config=/etc/nova/api-paste.ini
allow_admin_api=true
-use_deprecated_auth=false
ec2_private_dns_show_ip=True
dmz_cidr=169.254.169.254/32
ec2_dmz_host=192.168.0.1

View File

@ -22,13 +22,12 @@
<para>After OS Installation, reboot the server.</para>
</listitem>
<listitem>
-<para>Since Ubuntu 12.04 LTS has OpenStack Essex
-by default, we are going to use the Ubuntu Cloud
-Archive for Folsom
+<para>Since the default OpenStack release in Ubuntu 12.04 LTS is older, we are going to use the Ubuntu Cloud
+Archive for Grizzly
:<screen>apt-get install ubuntu-cloud-keyring</screen>Edit
<emphasis role="bold"
>/etc/apt/sources.list.d/cloud-archive.list</emphasis>
-:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main </screen>Upgrade
+:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main </screen>Upgrade
the system (and reboot if you need)
:<screen>apt-get update &amp;&amp; apt-get upgrade</screen></para>
</listitem>

View File

@ -44,7 +44,7 @@ rabbit_password = password</userinput></screen></para>
+--------------------------------------+--------+-------------+------------------+-----------+--------+</userinput></screen></para>
</listitem>
<listitem>
-<para>You can also install <link xlink:href="https://review.openstack.org/#/c/7615/">Glance Replicator</link> (new in Folsom).
+<para>You can also install <link xlink:href="https://review.openstack.org/#/c/7615/">Glance Replicator</link>.
More informations about it <link xlink:href="http://www.stillhq.com/openstack/000007.html">here</link>.</para>
</listitem>
</itemizedlist>

View File

@ -65,14 +65,14 @@ echo "source novarc">>.bashrc</userinput></screen></para>
</listitem>
<listitem>
<para>Download the <link
-xlink:href="https://github.com/EmilienM/openstack-folsom-guide/raw/master/scripts/keystone-data.sh"
+xlink:href="https://github.com/EmilienM/openstack-grizzly-guide/raw/master/scripts/keystone-data.sh"
>data script</link> and fill Keystone database
with data (users, tenants, services)
:<screen><userinput>./keystone-data.sh</userinput></screen></para>
</listitem>
<listitem>
<para>Download the <link
-xlink:href="https://github.com/EmilienM/openstack-folsom-guide/raw/master/scripts/keystone-endpoints.sh">endpoint script</link> and create the endpoints (for projects) :<screen><userinput>./keystone-endpoints.sh</userinput></screen>
+xlink:href="https://github.com/EmilienM/openstack-grizzly-guide/raw/master/scripts/keystone-endpoints.sh">endpoint script</link> and create the endpoints (for projects) :<screen><userinput>./keystone-endpoints.sh</userinput></screen>
If an IP address of the management network on the controller node is different from this example, please use the following:<screen><userinput>./keystone-endpoints.sh -K &lt;ip address of the management network&gt;</userinput></screen></para>
</listitem>
</orderedlist>
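As a rough illustration of what the -K option changes (not taken from the scripts themselves; the ports are the conventional service defaults, and the variable names are ours), the management IP simply becomes the host part of each endpoint URL:

```shell
# Hypothetical illustration of how a management IP ends up inside the
# endpoint URLs; ports are the usual defaults, not read from
# keystone-endpoints.sh itself.
MGMT_IP=${MGMT_IP:-192.168.0.1}

KEYSTONE_URL="http://${MGMT_IP}:5000/v2.0"
NOVA_URL="http://${MGMT_IP}:8774/v2/%(tenant_id)s"
GLANCE_URL="http://${MGMT_IP}:9292"

printf '%s\n' "$KEYSTONE_URL" "$NOVA_URL" "$GLANCE_URL"
```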

View File

@ -19,10 +19,7 @@
file and modify
:<screen><userinput>admin_tenant_name = service
admin_user = nova
-admin_password = password</userinput></screen>Since
-we are going to use Cinder for volumes, we should also <emphasis
-role="bold">delete</emphasis> each part concerning "<emphasis
-role="bold">nova-volume</emphasis>" :
+admin_password = password</userinput></screen>
<screen>============================================================
[composite:osapi_volume]
use = call:nova.api.openstack.urlmap:urlmap_factory
@ -72,7 +69,6 @@ ec2_url=http://192.168.0.1:8773/services/Cloud
keystone_ec2_url=http://192.168.0.1:5000/v2.0/ec2tokens
api_paste_config=/etc/nova/api-paste.ini
allow_admin_api=true
-use_deprecated_auth=false
ec2_private_dns_show_ip=True
dmz_cidr=169.254.169.254/32
ec2_dmz_host=192.168.0.1

View File

@ -5,7 +5,7 @@
xml:id="basic-install_intro">
<title>Introduction</title>
<para>This document helps anyone who wants to deploy OpenStack
-Folsom for development purposes with Ubuntu 12.04 LTS (using
+Grizzly for development purposes with Ubuntu 12.04 LTS (using
the Ubuntu Cloud Archive).</para>
<para>We are going to install a three-node setup with one
controller, one network and one compute node.</para>

View File

@ -13,7 +13,7 @@
<para>Time zone : <emphasis role="bold">UTC</emphasis></para>
</listitem>
<listitem>
-<para>Hostname : <emphasis role="bold">folsom-network</emphasis></para>
+<para>Hostname : <emphasis role="bold">grizzly-network</emphasis></para>
</listitem>
<listitem>
<para>Packages : <emphasis role="bold">OpenSSH-Server</emphasis></para>
@ -22,10 +22,10 @@
<para>After OS Installation, reboot the server.</para>
</listitem>
<listitem>
-<para>Since Ubuntu 12.04 LTS has OpenStack Essex by default, we are going to use
-Cloud Archives for Folsom :<screen>apt-get install ubuntu-cloud-keyring</screen>Edit
+<para>Since the default OpenStack version in Ubuntu 12.04 LTS is older, we are going to use
+Cloud Archives for Grizzly :<screen>apt-get install ubuntu-cloud-keyring</screen>Edit
<emphasis role="bold">/etc/apt/sources.list.d/cloud-archive.list</emphasis>
-:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main </screen>Upgrade
+:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main </screen>Upgrade
the system (and reboot if you need)
:<screen>apt-get update &amp;&amp; apt-get upgrade</screen></para>
</listitem>
@ -74,9 +74,9 @@ net.ipv4.conf.default.rp_filter = 0 </programlisting>
<itemizedlist>
<listitem>
<para>Edit the <emphasis role="bold">/etc/hosts</emphasis> file and
-add <emphasis role="bold">folsom-controller</emphasis>, <emphasis
-role="bold">folsom-network</emphasis> and <emphasis role="bold"
->folsom-compute</emphasis> hostnames with correct IP.</para>
+add <emphasis role="bold">grizzly-controller</emphasis>, <emphasis
+role="bold">grizzly-network</emphasis> and <emphasis role="bold"
+>grizzly-compute</emphasis> hostnames with correct IP.</para>
</listitem>
</itemizedlist>
</para>

View File

@ -32,7 +32,7 @@ echo "source novarc">>.bashrc</userinput></screen></para>
</itemizedlist>
</listitem>
<listitem>
-<para>Download the <link xlink:href="https://github.com/EmilienM/openstack-folsom-guide/raw/master/scripts/quantum-networking.sh">Quantum script</link>.
+<para>Download the <link xlink:href="https://github.com/EmilienM/openstack-grizzly-guide/raw/master/scripts/quantum-networking.sh">Quantum script</link>.
We are using the "<emphasis role="bold">Provider Router with Private Networks</emphasis>" use-case.</para></listitem>
<listitem>
<para>Edit the script belong your networking (public network, floatings IP).</para>

View File

@ -23,8 +23,8 @@
spawning a new VM.</para>
</listitem>
<listitem>
-<para>Since Horizon does not manage L3 in Folsom release, we have to configure floating IP from Quantum CLI (using demo tenant).
-To do that, you need to get the ext_net ID and the port_id of your VM :
+<para>We have to configure floating IP (using demo tenant).
+To do that using the CLI, you need to get the ext_net ID and the port_id of your VM :
<screen>quantum net-list -- --router:external True
quantum port-list -- --device_id &lt;vm-uuid&gt;</screen></para>
</listitem>
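The remaining association steps look roughly like the following dry run. The command names are the Grizzly-era quantum CLI; the IDs are placeholders, and each step is echoed rather than executed, since it needs a live cloud:

```shell
# Dry-run sketch of the floating-IP flow from the quantum CLI.
# IDs are placeholders; swap run() for direct execution on a live setup.
EXT_NET_ID="<ext-net-uuid>"   # from: quantum net-list -- --router:external True
VM_PORT_ID="<port-uuid>"      # from: quantum port-list -- --device_id <vm-uuid>

run() { echo "+ $*"; }        # print the command instead of executing it

run quantum floatingip-create "$EXT_NET_ID"
run quantum floatingip-associate "<floatingip-uuid>" "$VM_PORT_ID"
```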

View File

@ -21,9 +21,9 @@
<tr>
<td>
<para>Hostname</para></td>
-<td><para>folsom-controller</para></td>
-<td><para>folsom-network</para></td>
-<td><para>folsom-compute</para>
+<td><para>grizzly-controller</para></td>
+<td><para>grizzly-network</para></td>
+<td><para>grizzly-compute</para>
</td>
</tr>
<tr>

View File

@ -105,14 +105,7 @@ format="SVG" scale="60"/>
<para>Mac OS X
<programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo easy_install pip</programlisting></para>
</listitem>
-<listitem>
-<para>Ubuntu 11.10 and
-earlier</para>
-<para>
-<programlisting language="bash" role="gutter: false"><prompt>$</prompt> aptitude install python-pip </programlisting>
-</para>
-</listitem>
<listitem>
<para>Ubuntu 12.04</para>
<para>A packaged version enables
you to use <command>dpkg</command>

View File

@ -56,7 +56,7 @@
<para>Ensure you have created an image that is OpenStack
compatible. For details, see the <link
-xlink:href="http://docs.openstack.org/folsom/openstack-compute/admin/content/ch_image_mgmt.html"
+xlink:href="../openstack-compute/admin/content/ch_image_mgmt.html"
>Image Management chapter</link> in the
<citetitle>OpenStack Compute Administration
Manual</citetitle>.</para>

View File

@ -40,13 +40,7 @@
<programlisting language="bash" role="gutter: false"><prompt>$</prompt> sudo easy_install pip</programlisting>
</td>
</tr>
-<tr>
-<td>Ubuntu 11.10 and earlier</td>
-<td>
-<programlisting language="bash" role="gutter: false"><prompt>$</prompt> aptitude install python-pip </programlisting>
-</td>
-</tr>
<tr>
<td>Ubuntu 12.04</td>
<td>
<para>There is a packaged version so you can use dpkg or aptitude to install python-keystoneclient.</para>

View File

@ -6,7 +6,7 @@
version="5.0"
xml:id="boot_from_volume">
<title>Launch from a Volume</title>
-<para>The Compute service has preliminary support for booting an instance from a
+<para>The Compute service has support for booting an instance from a
volume.</para>
<simplesect>
<title>Creating a bootable volume</title>
@ -88,16 +88,12 @@
</listitem>
</varlistentry>
</variablelist></para>
-<para><note>
-<para>Because of bug <link
-xlink:href="https://bugs.launchpad.net/nova/+bug/1008622"
->#1008622</link>, you must specify an image when booting from a volume,
-even though this image will not be used.</para>
-</note>The following example will attempt boot from volume with
+<para>
+The following example will attempt boot from volume with
ID=<literal>13</literal>, it will not delete on terminate. Replace the
<literal>--image</literal> flag with a valid image on your system, and the
<literal>--key-name</literal> with a valid keypair
name:<screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>f4addd24-4e8a-46bb-b15d-fae2591f1a35</replaceable> --flavor 2 --key-name <replaceable>mykey</replaceable> \
--block-device-mapping vda=13:::0 boot-from-vol-test</userinput></screen></para>
</simplesect>
</section>
</section>
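The --block-device-mapping shorthand above packs four colon-separated fields after the device name (volume id, type, size, delete-on-terminate). Purely as an illustration of the syntax, not anything nova itself runs, it can be unpacked like this:

```shell
# Unpack the legacy --block-device-mapping shorthand:
#   <dev>=<volume-id>:<type>:<size-GB>:<delete-on-terminate>
# Empty fields fall back to defaults on the nova side.
mapping="vda=13:::0"

dev=${mapping%%=*}      # guest device name (vda)
rest=${mapping#*=}      # "13:::0"
IFS=: read -r vol_id type size delete_flag <<EOF
$rest
EOF

echo "device=$dev volume=$vol_id delete_on_terminate=${delete_flag:-0}"
```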

View File

@ -1350,7 +1350,7 @@ use_cow_images=true
"ACTIVE" state. If this takes longer than five
minutes, here are several hints: </para>
<para>- The feature doesn't work while you have
-attached a volume (via nova-volume) to the
+attached a volume to the
instance. Thus, you should detach the volume
first, create the image, and re-mount the
volume.</para>

View File

@ -405,7 +405,7 @@ xml:id="ch_getting-started-with-openstack">
xlink:href="http://www.rabbitmq.com/">RabbitMQ</link>
today, but could be any AMPQ message queue (such as <link
xlink:href="http://qpid.apache.org/">Apache
-Qpid</link>). New to the Folsom release is support for
+Qpid</link>), or
<link xlink:href="http://www.zeromq.org/">Zero MQ</link>.</para>
</listitem>
<listitem>
@ -479,11 +479,7 @@ xml:id="ch_getting-started-with-openstack">
<section xml:id="overview-image-store-arch">
<title>Image Store</title>
-<para>The Glance architecture has stayed relatively stable since
-the Cactus release. The biggest architectural change has been
-the addition of authentication, which was added in the Diablo
-release. Just as a quick reminder, Glance has four main parts
-to it:</para>
+<para>Glance has four main parts to it:</para>
<itemizedlist>
<listitem>
@ -578,7 +574,7 @@ xml:id="ch_getting-started-with-openstack">
<section xml:id="overview-block-storage-arch">
<title>Block Storage</title>
-<para>Cinder separates out the persistent block storage functionality that was previously part of Openstack Compute into it's own service. The OpenStack Block Storage API allows for manipulation of volumes, volume types (similar to compute flavors) and volume snapshots.</para>
+<para>The OpenStack Block Storage API allows for manipulation of volumes, volume types (similar to compute flavors) and volume snapshots.</para>
<itemizedlist>
<listitem>

View File

@ -4906,7 +4906,7 @@
<glossentry>
<glossterm>volume node</glossterm>
<glossdef>
-<para>A nova node that runs the nova-volume
+<para>A nova node that runs the cinder-volume
daemon.</para>
</glossdef>
</glossentry>
@ -4927,10 +4927,10 @@
<glossentry>
<glossterm>volume worker</glossterm>
<glossdef>
-<para>A nova component that interacts with back-end
+<para>A cinder component that interacts with back-end
storage to manage the creation and deletion of
volumes and the creation of compute volumes,
-provided by the nova-volume daemon.</para>
+provided by the cinder-volume daemon.</para>
</glossdef>
</glossentry>
<glossentry>
@ -4986,7 +4986,7 @@
<glossterm>worker</glossterm>
<glossdef>
<para>A daemon that carries out tasks. For example,
-the nova-volume worker attaches storage to an VM
+the cinder-volume worker attaches storage to an VM
instance. Workers listen to a queue and take
action when new messages arrive.</para>
</glossdef>

View File

@ -65,8 +65,8 @@
</section>
<section xml:id="enable-iscsi-services-hyper-v">
<title>Enable iSCSI Initiator Service</title>
-<para>To prepare the Hyper-V node to be able to attach to volumes provided by nova-volume or
-cinder you must first make sure the Windows iSCSI initiator service is running and
+<para>To prepare the Hyper-V node to be able to attach to volumes provided by cinder
+you must first make sure the Windows iSCSI initiator service is running and
started automatically.</para>
<screen os="windows">
<prompt>C:\</prompt><userinput>sc start MSiSCSI</userinput>

View File

@ -37,7 +37,7 @@ OpenStack with XenAPI supports the following virtual machine image formats:
<para>It is possible to manage Xen using libvirt. This would be
necessary for any Xen-based system that isn't using the XCP
toolstack, such as SUSE Linux or Oracle Linux. Unfortunately,
-this is not well-tested or supported as of the Essex release.
+this is not well-tested or supported.
To experiment using Xen through libvirt add the following
configuration options
<filename>/etc/nova/nova.conf</filename>:

View File

@ -72,8 +72,8 @@
better, one can combine the “network” and “host” parts of sFlow data to provide a complex
monitoring solution
</para>
-<para>With the advent of Quantum in the Folsom release, the virtual network device moved from
-Linux bridge to OpenvSwitch. If we add KVM or Xen to the mix, we will have sFlow as an applicable
+<para>Quantum uses OpenvSwitch for the virtual network device.
+If we add KVM or Xen to the mix, we will have sFlow as an applicable
framework to monitor instances themselves and their virtual network topologies as well.
There are a number of sFlow collectors available. The most widely used seem to be Ganglia and
sFlowTrend, which are free. While Ganglia is focused mainly on monitoring the performance of

View File

@ -33,7 +33,7 @@
| flavor | 8GB Standard Instance |
| hostId | |
| id | d8093de0-850f-4513-b202-7979de6c0d55 |
-| image | Ubuntu 11.10 |
+| image | Ubuntu 12.04 |
| metadata | {} |
| name | myUbuntuServer |
| progress | 0 |
@ -55,4 +55,4 @@
value to log into your server.</para>
</step>
</procedure>
</section>
</section>

View File

@ -71,7 +71,7 @@
in the description. Paste in your command output or stack traces, link to
screenshots, etc. </para></listitem>
<listitem><para>Be sure to include what version of the software you are using.
-This is especially critical if you are using a development branch eg. "Folsom
+This is especially critical if you are using a development branch eg. "Grizzly
release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208. </para></listitem>
<listitem><para>Any deployment specific info is helpful as well. eg.
Ubuntu 12.04, multi-node install.</para></listitem> </itemizedlist>

View File

@ -62,7 +62,7 @@
be port 22 (SSH). </para>
<note>
<para>Make sure the compute node running
-the nova-volume management driver has SSH
+the cinder-volume management driver has SSH
network access to
the storage system. </para>
</note>
@ -397,4 +397,4 @@ volume_driver=nova.volume.storwize_svc.StorwizeSVCDriver
</table>
</simplesect>
</section>
</section>
</section>

View File

@ -100,11 +100,11 @@
<title>Operation</title>
<para>The admin uses the nova-manage command
detailed below to add flavors and backends. </para>
-<para>One or more nova-volume service instances
+<para>One or more cinder-volume service instances
will be deployed per availability zone. When
an instance is started, it will create storage
repositories (SRs) to connect to the backends
-available within that zone. All nova-volume
+available within that zone. All cinder-volume
instances within a zone can see all the
available backends. These instances are
completely symmetric and hence should be able
@ -186,7 +186,7 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
</listitem>
<listitem>
<para>
-<emphasis role="bold">Start nova-volume and nova-compute with the new configuration options.
+<emphasis role="bold">Start cinder-volume and nova-compute with the new configuration options.
</emphasis>
</para>
</listitem>
@ -204,4 +204,4 @@ Note: SR type and config connection parameters are in keeping with the XenAPI Co
detaching volumes. </para>
</simplesect>
</section>
</section>
</section>

View File

@ -23,7 +23,7 @@
<itemizedlist>
<listitem>
<para><emphasis role="bold"><literal>state_path</literal> and <literal>volumes_dir</literal> settings</emphasis></para>
-<para>As of Folsom Cinder is using <command>tgtd</command>
+<para>Cinder uses <command>tgtd</command>
as the default iscsi helper and implements persistent targets.
This means that in the case of a tgt restart or
even a node reboot your existing volumes on that

View File

@ -139,7 +139,7 @@
<para>Additional resources such as persistent volume storage and
public IP address may be added to and removed from running
-instances. The examples below show the nova-volume service
+instances. The examples below show the cinder-volume service
which provide persistent block storage as opposed to the
ephemeral storage provided by the instance flavor.</para>
@ -156,7 +156,7 @@
images. In the cloud there is an available compute
node with available vCPU, memory and local disk
resources. Plus there are a number of predefined
-volumes in the nova-volume service.</para>
+volumes in the cinder-volume service.</para>
<figure xml:id="initial-instance-state-figure">
<title>Base image state with no running instances</title>
@ -176,7 +176,7 @@
flavor provides a root volume (as all flavors do) labeled vda in
the diagram and additional ephemeral storage labeled vdb in the
diagram. The user has also opted to map a volume from the
-nova-volume store to the third virtual disk, vdc, on this
+cinder-volume store to the third virtual disk, vdc, on this
instance.</para>
<figure xml:id="running-instance-state-figure">
@ -197,7 +197,7 @@
second disk (vdb). Be aware that the second disk is an
empty disk with an emphemeral life as it is destroyed
when you delete the instance. The compute node
-attaches to the requested nova-volume using iSCSI and
+attaches to the requested cinder-volume using iSCSI and
maps this to the third disk (vdc) as requested. The
vCPU and memory resources are provisioned and the
instance is booted from the first drive. The instance

View File

@ -215,8 +215,6 @@
connecting to instance vncservers.</para>
</listitem>
</itemizedlist>
-<note><para>The previous vnc proxy implementation, called nova-vncproxy, has
-been deprecated.</para></note>
</section>
<section xml:id="accessing-a-vnc-console-through-a-web-browser">
<info>
@ -293,8 +291,7 @@ vncserver_listen=192.168.1.2
<para>
A: Make sure you have python-numpy installed, which is
required to support a newer version of the WebSocket protocol
-(HyBi-07+). Also, if you are using Diablo's nova-vncproxy, note
-that support for this protocol is not provided.
+(HyBi-07+).
</para>
</listitem>
<listitem>

View File

@ -162,7 +162,7 @@ format="SVG" scale="60"/>
the nova client, the nova-manage command, and the Euca2ools commands. </para>
<para>The nova-manage command may only be run by cloud administrators. Both
novaclient and euca2ools can be used by all users, though specific commands may be
-restricted by Role Based Access Control in the deprecated nova auth system or in the Identity Management service. </para>
+restricted by Role Based Access Control in the Identity Management service. </para>
<simplesect><title>Using the nova command-line tool</title>
<para>Installing the python-novaclient gives you a <code>nova</code> shell command that enables
Compute API interactions from the command line. You install the client, and then provide
@ -555,9 +555,7 @@ nova services or updating the <literal>vm_state</literal> and
after a disaster, and how to easily backup the persistent
storage volumes, which is another approach when you face a
disaster. Even apart from the disaster scenario, backup
-ARE mandatory. While the Diablo release includes the
-snapshot functions, both the backup procedure and the
-utility do apply to the Cactus release. </para>
+ARE mandatory. </para>
<para>For reference, you cand find a DRP definition here : <link
xlink:href="http://en.wikipedia.org/wiki/Disaster_Recovery_Plan"
>http://en.wikipedia.org/wiki/Disaster_Recovery_Plan</link>. </para>

View File

@ -77,9 +77,6 @@ format="SVG" scale="60"/>
<thead>
<tr>
<td><para></para></td>
-<td><para>ubuntu 10.10</para></td>
-<td><para>ubuntu 11.04</para></td>
-<td><para>ubuntu 11.10</para></td>
<td><para>ubuntu 12.04</para></td>
</tr>
</thead>

View File

@ -697,7 +697,7 @@ xenapi_remap_vbd_dev=true
<listitem>
<para><literal>HostA</literal> is the "Cloud Controller", and should be running: <literal>nova-api</literal>,
-<literal>nova-scheduler</literal>, <literal>nova-network</literal>, <literal>nova-volume</literal>,
+<literal>nova-scheduler</literal>, <literal>nova-network</literal>, <literal>cinder-volume</literal>,
<literal>nova-objectstore</literal>.</para>
</listitem>

View File

@ -12,7 +12,7 @@
<title>Selecting a Hypervisor</title>
<para>OpenStack Compute supports many hypervisors, an array of which must provide a bit of
difficulty in selecting a hypervisor unless you are already familiar with one. Most
-installations only use a single hypervisor, however as of the Folsom release, it is
+installations only use a single hypervisor, however it is
possible to use the <link linkend="computefilter">ComputeFilter</link> and
<link linkend="imagepropertiesfilter">ImagePropertiesFilter</link> to allow
scheduling to different hypervisors within the same installation.

View File

@ -30,28 +30,28 @@
on the <link xlink:href="https://build.opensuse.org/">openSUSE Open Build
Server</link>.</para>
-<para>For the Folsom release you can find the packages in the project <link
-xlink:href="https://build.opensuse.org/project/show?project=isv:B1-Systems:OpenStack:release:Folsom"
->isv:B1-Systems:OpenStack:release:Folsom</link>.</para>
+<para>For the Grizzly release you can find the packages in the project <link
+xlink:href="https://build.opensuse.org/project/show?project=isv:B1-Systems:OpenStack:release:Grizzly"
+>isv:B1-Systems:OpenStack:release:Grizzly</link>.</para>
<section xml:id="installing-openstack-compute-on-sles">
<title>SUSE Linux Enterprise Server</title>
<para>
First of all you have to import the signing key of the repository.
<screen>
-<prompt>#</prompt> <userinput>rpm --import http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/SLE_11_SP2/repodata/repomd.xml.key</userinput>
+<prompt>#</prompt> <userinput>rpm --import http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Grizzly/SLE_11_SP2/repodata/repomd.xml.key</userinput>
</screen>
</para>
<para>
Now you can declare the repository to libzypp with <command>zypper ar</command>.
<screen>
-<prompt>#</prompt> <userinput>zypper ar http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/SLE_11_SP2/isv:B1-Systems:OpenStack:release:Folsom.repo</userinput>
-<computeroutput>Adding repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' [done]
-Repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' successfully added
+<prompt>#</prompt> <userinput>zypper ar http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Grizzly/SLE_11_SP2/isv:B1-Systems:OpenStack:release:Grizzly.repo</userinput>
+<computeroutput>Adding repository 'OpenStack Grizzly (latest stable release) (SLE_11_SP2)' [done]
+Repository 'OpenStack Grizzly (latest stable release) (SLE_11_SP2)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
-URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/SLE_11_SP2/</computeroutput>
+URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Grizzly/SLE_11_SP2/</computeroutput>
</screen>
</para>
@ -60,8 +60,8 @@ URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/relea
<screen>
<prompt>#</prompt> <userinput>zypper ref</userinput>
<computeroutput>[...]
-Retrieving repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' metadata [done]
-Building repository 'OpenStack Folsom (latest stable release) (SLE_11_SP2)' cache [done]
+Retrieving repository 'OpenStack Grizzly (latest stable release) (SLE_11_SP2)' metadata [done]
+Building repository 'OpenStack Grizzly (latest stable release) (SLE_11_SP2)' cache [done]
All repositories have been refreshed.</computeroutput>
</screen>
</para>
@ -93,19 +93,19 @@ All repositories have been refreshed.</computeroutput>
<para>
First of all you have to import the signing key of the repository.
<screen>
-<prompt>#</prompt> <userinput>rpm --import http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/openSUSE_12.2/repodata/repomd.xml.key</userinput>
+<prompt>#</prompt> <userinput>rpm --import http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Grizzly/openSUSE_12.2/repodata/repomd.xml.key</userinput>
</screen>
</para>
<para>
Now you can declare the repository to libzypp with <command>zypper ar</command>.
<screen>
-<prompt>#</prompt> <userinput>zypper ar http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/openSUSE_12.2/isv:B1-Systems:OpenStack:release:Folsom.repo</userinput>
-<computeroutput>Adding repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' [done]
-Repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' successfully added
+<prompt>#</prompt> <userinput>zypper ar http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Grizzly/openSUSE_12.2/isv:B1-Systems:OpenStack:release:Grizzly.repo</userinput>
+<computeroutput>Adding repository 'OpenStack Grizzly (latest stable release) (openSUSE_12.2)' [done]
+Repository 'OpenStack Grizzly (latest stable release) (openSUSE_12.2)' successfully added
Enabled: Yes
Autorefresh: No
GPG check: Yes
-URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Folsom/openSUSE_12.2/</computeroutput>
+URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/release:/Grizzly/openSUSE_12.2/</computeroutput>
</screen>
</para>
@ -114,8 +114,8 @@ URI: http://download.opensuse.org/repositories/isv:/B1-Systems:/OpenStack:/relea
<screen>
<prompt>#</prompt> <userinput>zypper ref</userinput>
<computeroutput>[...]
-Retrieving repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' metadata [done]
-Building repository 'OpenStack Folsom (latest stable release) (openSUSE_12.2)' cache [done]
+Retrieving repository 'OpenStack Grizzly (latest stable release) (openSUSE_12.2)' metadata [done]
+Building repository 'OpenStack Grizzly (latest stable release) (openSUSE_12.2)' cache [done]
All repositories have been refreshed.</computeroutput>
</screen>
</para>

View File

@ -11,7 +11,7 @@
<section xml:id="networking-options">
<title>Networking Options</title>
<para>This section offers a brief overview of each concept in
-networking for Compute. With the Folsom release, you can
+networking for Compute. With the Grizzly release, you can
chose either to install and configure nova-network for
networking between VMs or use the Networking service
(quantum) for networking. Refer to the <link
@ -225,9 +225,7 @@
reserve
--address=<replaceable>x.x.x.x</replaceable></code> to
specify the starting point IP address (x.x.x.x) to reserve
-with the DHCP server, replacing the
-flat_network_dhcp_start configuration option that was
-available in Diablo. This reservation only affects which
+with the DHCP server. This reservation only affects which
IP address the VMs start at, not the fixed IP addresses
that the nova-network service places on the bridges.</para>
</section>

View File

@ -28,7 +28,7 @@ compute_fill_first_cost_fn_weight=-1.0
requests. </para>
<para>The volume scheduler is configured by default as a Chance
Scheduler, which picks a host at random that has the
-<command>nova-volume</command> service running.</para>
+<command>cinder-volume</command> service running.</para>
<para> The compute scheduler is configured by default as a Filter
Scheduler, described in detail in the next section. In the
default configuration, this scheduler will only consider hosts
@ -586,7 +586,7 @@ compute_fill_first_cost_fn_weight=-1.0
<literal>nova.scheduler.multi.MultiScheduler</literal>
holds multiple sub-schedulers, one for
<literal>nova-compute</literal> requests and one
-for <literal>nova-volume</literal> requests. It is the
+for <literal>cinder-volume</literal> requests. It is the
default top-level scheduler as specified by the
<literal>scheduler_driver</literal> configuration
option.</para>
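As a sketch, the default top-level scheduler described above corresponds to a `nova.conf` line like the following (the option value is taken from the text; treat the fragment as illustrative):

```ini
# nova.conf -- default top-level scheduler. MultiScheduler dispatches
# nova-compute and cinder-volume requests to their own sub-schedulers.
scheduler_driver=nova.scheduler.multi.MultiScheduler
```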

View File

@ -4,17 +4,6 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_volumes">
<title>Volumes</title>
<section xml:id="cinder-vs-nova-volumes">
<title>Cinder Versus Nova-Volumes</title>
<para>You now have two options in terms of Block Storage.
Currently (as of the Folsom release) both are nearly
identical in terms of functionality, API's and even the
general theory of operation. Keep in mind however that
<literal>nova-volume</literal> is deprecated and will be removed at the
release of Grizzly. </para>
<para>For Cinder-specific install
information, refer to the OpenStack Installation Guide.</para>
</section>
<section xml:id="managing-volumes">
<title>Managing Volumes</title>
<para>The Cinder project provides the service that allows you
@ -219,12 +208,8 @@ iscsi_helper=tgtadm
</listitem>
</varlistentry>
</variablelist></para>
<para><note>
<para>Because of bug <link
xlink:href="https://bugs.launchpad.net/nova/+bug/1008622"
>#1008622</link>, you must specify an image when booting from a volume,
even though this image will not be used.</para>
</note>The following example will attempt boot from volume with
<para>
The following example attempts to boot from the volume with
ID=<literal>13</literal>; the volume will not be deleted on terminate. Replace the
<literal>--image</literal> flag with a valid image on your system, and the
<literal>--key_name</literal> with a valid keypair

View File

@ -5722,7 +5722,7 @@
id="tspan13532"
x="538.51111"
y="684.09662"
style="font-size:16px;stroke:none">nova-volume</tspan></text>
style="font-size:16px;stroke:none">cinder-volume</tspan></text>
<text
xml:space="preserve"
style="font-size:10px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;font-family:DejaVu Sans;-inkscape-font-specification:DejaVu Sans"


View File

@ -11769,7 +11769,7 @@
sodipodi:role="line"
id="tspan7476-3"
x="381.88196"
y="419.97501">(nova-volume)</tspan></text>
y="419.97501">(cinder-volume)</tspan></text>
</g><g
id="g23645"
@ -13019,7 +13019,7 @@
sodipodi:role="line"
id="tspan7476"
x="381"
y="172">(nova-volume)</tspan></text>
y="172">(cinder-volume)</tspan></text>
<text
xml:space="preserve"
@ -13043,6 +13043,6 @@
sodipodi:role="line"
id="tspan7476-3-9"
x="381.88196"
y="614.97498">(nova-volume)</tspan></text>
y="614.97498">(cinder-volume)</tspan></text>
</svg>
</svg>


View File

@ -126,17 +126,8 @@
can be removed. There are 5 drivers in core
openstack: fake.FakeDriver, libvirt.LibvirtDriver,
baremetal.BareMetalDriver, xenapi.XenAPIDriver,
vmwareapi.VMWareESXDriver. If nothing is specified
the older connection_type mechanism will be used.
Be aware that method will be removed after the
Folsom release. </td>
</tr>
<tr>
<td>connection_type='libvirt' (Deprecated)</td>
<td>libvirt, xenapi, hyperv, or fake; Value that
indicates the virtualization connection type.
Deprecated as of Folsom, will be removed in G
release.</td>
vmwareapi.VMWareESXDriver, vmwareapi.VMWareVCDriver.
</td>
</tr>
<tr>
<td><para>

View File

@ -372,7 +372,6 @@ sql_connection=mysql://nova:openstack@10.211.55.20/nova
ec2_url=http://10.211.55.20:8773/services/Cloud
# Auth
use_deprecated_auth=false
auth_strategy=keystone
keystone_ec2_url=http://10.211.55.20:5000/v2.0/ec2tokens
# Imaging service
@ -427,7 +426,7 @@ signing_dirname = /tmp/keystone-signing-nova</programlisting>Populate
network.<screen><prompt>$</prompt><userinput>nova-manage network create private --fixed_range_v4=192.168.4.32/27 --vlan=100 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32</userinput></screen>
Restart everything.</para>
<note><para>The use of <literal>--vlan=100</literal> in the above command is to work
around <link xlink:href="https://bugs.launchpad.net/devstack/+bug/1049869">a bug in OpenStack Folsom</link>.</para></note>
around <link xlink:href="https://bugs.launchpad.net/devstack/+bug/1049869">a bug in OpenStack</link>.</para></note>
<screen><prompt>$</prompt> <userinput>cd /etc/init.d/; for i in $( ls nova-* ); do sudo service $i restart; done </userinput>
<prompt>$</prompt> <userinput>service open-iscsi restart</userinput>
<prompt>$</prompt> <userinput>service nova-novncproxy restart</userinput></screen>

View File

@ -73,9 +73,8 @@
</listitem>
<listitem os="ubuntu">
<para>On Ubuntu, enable the <link
xlink:href="https://wiki.ubuntu.com/ServerTeam/CloudArchive"
>Cloud Archive</link> repository by adding the
following to
xlink:href="https://wiki.ubuntu.com/ServerTeam/CloudArchive">Cloud
Archive</link> repository by adding the following to
/etc/apt/sources.list.d/grizzly.list:<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main</screen></para>
<para>Prior to running apt-get update and apt-get upgrade, install the keyring :</para>
<screen>sudo apt-get install ubuntu-cloud-keyring</screen>
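Put together, the steps above amount to the following shell session (a sketch; run as root or via sudo):

```shell
# Sketch: enable the Grizzly Cloud Archive on Ubuntu 12.04 LTS
sudo apt-get install ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | \
    sudo tee /etc/apt/sources.list.d/grizzly.list
sudo apt-get update && sudo apt-get upgrade
```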

View File

@ -98,18 +98,11 @@
separate OpenStack project, codenamed Quantum.</para>
</simplesect>
<simplesect>
<title>Cinder</title>
<para>By default, <systemitem class="service"
>Cinder</systemitem> service uses <emphasis
role="italic">LVM</emphasis> to create and manage
local volumes, and exports them via iSCSI using <emphasis
role="italic">IET</emphasis> or <emphasis
role="italic">tgt</emphasis>. It can also be
configured to use other iSCSI-based storage technologies.
Functionality previously available through <systemitem
class="service">nova-volume</systemitem> is now
available through a separate OpenStack Block Storage
project, code-named Cinder.</para>
<title>OpenStack Block Storage (Cinder)</title>
<para>By default, the <systemitem class="service">Cinder</systemitem> service uses <emphasis
role="italic">LVM</emphasis> to create and manage local volumes, and exports them
via iSCSI using <emphasis role="italic">IET</emphasis> or <emphasis role="italic">tgt</emphasis>.
It can also be configured to use other iSCSI-based storage technologies.</para>
</simplesect>
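A minimal `cinder.conf` fragment for the default LVM/iSCSI setup described above might look like this (the `iscsi_helper` value is one of the helpers the text names; verify the option names against your release):

```ini
# cinder.conf -- default local-volume backend (sketch)
volume_group = cinder-volumes   # LVM volume group Cinder carves volumes from
iscsi_helper = tgtadm           # export volumes over iSCSI via tgt
```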
<simplesect>
<title>openstack-dashboard</title>

View File

@ -263,7 +263,7 @@
data through the operating system at the device level: users access the data by
mounting the remote device in a similar manner to how they would mount a local,
physical disk (e.g., using the "mount" command in Linux). In OpenStack, the
<systemitem class="service">nova-volume</systemitem> service that forms part of
<systemitem class="service">cinder-volume</systemitem> service that forms part of
the Block Storage service provides this type of functionality, and uses iSCSI to expose
remote data as a SCSI disk that is attached to the network. </para>
<para>Because the data is exposed as a physical device, the end-user is responsible for

View File

@ -25,7 +25,6 @@ sudo start nova-conductor
sudo start nova-network
sudo start nova-scheduler
sudo start nova-novncproxy
sudo start nova-volume
sudo start libvirt-bin
sudo /etc/init.d/rabbitmq-server restart </screen>
</para>

View File

@ -89,7 +89,6 @@ flat_network_bridge=br100</programlisting>
<prompt>#</prompt> <userinput>stop nova-network</userinput>
<prompt>#</prompt> <userinput>stop nova-scheduler</userinput>
<prompt>#</prompt> <userinput>stop nova-novncproxy</userinput>
<prompt>#</prompt> <userinput>stop nova-volume</userinput>
</screen>
<screen os="rhel;fedora;centos">
<prompt>$></prompt> <userinput>for svc in api objectstore conductor network scheduler cert; do sudo service openstack-nova-$svc stop ; sudo chkconfig openstack-nova-$svc on ; done</userinput>

View File

@ -82,18 +82,10 @@
xlink:href="http://wiki.openstack.org/Packaging"
>http://wiki.openstack.org/Packaging</link> for additional
links. <note>
<para os="ubuntu">OpenStack Compute requires Ubuntu 12.04
or later, as the version of libvirt that ships with
Ubuntu 11.10 does not function properly with OpenStack
due to <link
xlink:href="https://bugs.launchpad.net/nova/+bug/1011863"
>bug #1011863</link>.</para>
<para os="fedora">OpenStack Compute requires Fedora 16 or
later, as the version of libvirt that ships with
Fedora 15 does not function properly with OpenStack
due to <link
xlink:href="https://bugs.launchpad.net/nova/+bug/1011863"
>bug #1011863</link>.</para>
<para os="ubuntu">The Grizzly release of OpenStack Compute
requires Ubuntu 12.04 or later.</para>
<para os="fedora">The Grizzly release of OpenStack Compute
requires Fedora 16 or later.</para>
</note></para>
<para><emphasis role="bold">Database</emphasis>: For
OpenStack Compute, you need access to either a PostgreSQL

View File

@ -11,7 +11,6 @@
<screen><computeroutput>Binary Host Zone Status State Updated_At
nova-compute myhost nova enabled :-) 2012-04-02 14:06:15
nova-cert myhost nova enabled :-) 2012-04-02 14:06:16
nova-volume myhost nova enabled :-) 2012-04-02 14:06:14
nova-scheduler myhost nova enabled :-) 2012-04-02 14:06:11
nova-network myhost nova enabled :-) 2012-04-02 14:06:13
nova-consoleauth myhost nova enabled :-) 2012-04-02 14:06:10</computeroutput></screen>
@ -27,6 +26,5 @@ nova-consoleauth myhost nova enabled :-) 2012-04-02 14:06:10</compute
<para>The version number 2013.1 corresponds with the Grizzly
release of Compute.</para>
<literallayout class="monospaced">2013.1 </literallayout>
</section>
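The version check the section describes can be sketched as follows (the exact subcommand name is an assumption about the era's `nova-manage` CLI):

```shell
# Sketch: report the installed Compute release; a Grizzly install shows 2013.1
nova-manage version list
```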

View File

@ -113,18 +113,6 @@ sudo chown -R keystone:keystone /etc/keystone/*</screen>
linkend="setting-up-tenants-users-and-roles-manually">manual
steps</link> or <link linkend="scripted-keystone-setup">use a
script</link>. </para>
<note>
<para>The parameters <literal>--token</literal> and
<literal>--endpoint</literal> are valid for
the keystoneclient available after October 2012. Use
<literal>--token</literal> and <literal>--endpoint</literal>
with the keystoneclient released with the Folsom packaging.
This install guide documents installing the client from packages,
but you can use the client from another computer with the <link
xlink:href="http://docs.openstack.org/cli/quick-start/content/install_openstack_keystone_cli.html"
>CLI Guide instructions for <command>pip install</command></link>.
</para>
</note>
<section xml:id="setting-up-tenants-users-and-roles-manually">
<title>Setting up tenants, users, and roles - manually</title>
<para>You need to minimally define a tenant, user, and role to

View File

@ -6,7 +6,7 @@
and its components for the "Use Case: Provider
Router with Private Networks".</para>
<para> We will follow the <link
xlink:href="http://docs.openstack.org/folsom/basic-install/content/basic-install_intro.html"
xlink:href="http://docs.openstack.org/grizzly/basic-install/content/basic-install_intro.html"
>Basic Install</link> document except for the Quantum, Open-vSwitch and Virtual
Networking sections on each of the nodes. </para>
<para>The <emphasis role="bold">Basic Install</emphasis> document uses gre tunnels. This

View File

@ -30,7 +30,7 @@
attributes on all virtual networks, and are able to
specify these attributes in order to create provider
networks.</para>
<para>As of the Folsom release, the provider extension is
<para>The provider extension is
supported by the openvswitch and linuxbridge plugins.
Configuration of these plugins requires familiarity with
this extension.</para>
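As an illustration, configuring provider networks with the openvswitch plugin typically involves settings along these lines (the file path and exact option names are assumptions; check the documentation for your plugin and release):

```ini
# ovs_quantum_plugin.ini -- sketch of VLAN provider-network settings
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999   # physical network and usable VLAN range
bridge_mappings = physnet1:br-eth1         # map the physical network to an OVS bridge
```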

View File

@ -252,8 +252,9 @@ $ quantum ext-show quotas
| updated | 2012-07-29T10:00:00-00:00 |
+-------------+------------------------------------------------------------+</computeroutput></screen>
<note><para>
In Folsom release, per-tenant quota is supported by Open vSwitch plugin,
Linux Bridge plugin, and Nicira NVP plugin and cannot be used with other plugins.
Per-tenant quotas are only supported by some plugins. At least Open vSwitch,
Linux Bridge, and Nicira NVP are known to work, but newer versions of other plugins
may add support; consult the documentation for each plugin.
</para></note>
<para>
There are four CLI commands to manage per-tenant quota.

View File

@ -23,9 +23,9 @@
xlink:href="http://blog.canonical.com/2012/09/14/now-you-can-have-your-openstack-cake-and-eat-it/"
>http://bit.ly/Q8OJ9M </link></para>
</note>
<para>Point to Folsom PPAs:                        
<para>Point to Grizzly PPAs:</para>
<screen><prompt>#</prompt> <userinput>echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/folsom main >> /etc/apt/sources.list.d/folsom.list</userinput>
<screen><prompt>#</prompt> <userinput>echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main >>/etc/apt/sources.list.d/grizzly.list</userinput>
<prompt>#</prompt> <userinput>apt-get install ubuntu-cloud-keyring </userinput>
<prompt>#</prompt> <userinput>apt-get update</userinput>
<prompt>#</prompt> <userinput>apt-get upgrade</userinput></screen>

www/grizzly/index.html Normal file
View File

@ -0,0 +1,204 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html lang="en" xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta name="generator" content=
"HTML Tidy for Linux/x86 (vers 11 February 2007), see www.w3.org" />
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii" />
<meta name="google-site-verification" content=
"Ip5yk0nd8yQHEo8I7SjzVfAiadlHvTvqQHLGwn1GFyU" />
<title>OpenStack Docs: Grizzly</title><!-- Google Fonts -->
<link href='http://fonts.googleapis.com/css?family=PT+Sans&amp;subset=latin'
rel='stylesheet' type='text/css' /><!-- Framework CSS -->
<link rel="stylesheet" href=
"http://openstack.org/themes/openstack/css/blueprint/screen.css" type=
"text/css" media="screen, projection" />
<link rel="stylesheet" href=
"http://openstack.org/themes/openstack/css/blueprint/print.css" type=
"text/css" media="print" />
<!--[if lt IE 8]><link rel="stylesheet" href="http://openstack.org/themes/openstack/css/blueprint/ie.css" type="text/css" media="screen, projection"><![endif]-->
<!-- OpenStack Specific CSS -->
<link rel="stylesheet" href=
"http://openstack.org/themes/openstack/css/main.css" type="text/css" media=
"screen, projection, print" />
<link rel="stylesheet" type="text/css" href="http://docs.openstack.org/common/css/docblitz.css" />
<!--<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-17511903-1']);
_gaq.push(['_setDomainName', '.openstack.org']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>-->
</head>
<body class="docshome" id="docshome">
<div class="container">
<div id="header">
<div class="span-5">
<h1 id="logo"><a href="/">OpenStack</a></h1>
</div>
<div class="span-19 last blueLine">
<div id="navigation" class="span-19">
<ul id="Menu1">
<li><a href="http://www.openstack.org/" title="Go to the Home page">Home</a></li>
<li><a href="http://www.openstack.org/software/" title="Go to the Software page" class="link">Software</a></li>
<li><a href="http://www.openstack.org/user-stories/" title="Go to the User Stories page" class="link">User Stories</a></li>
<li><a href="http://www.openstack.org/community/" title="Go to the Community page" class="link">Community</a></li>
<li><a href="http://www.openstack.org/profile/" title="Go to the Profile page" class="link">Profile</a></li>
<li><a href="http://www.openstack.org/blog/" title="Go to the OpenStack Blog">Blog</a></li>
<li><a href="http://wiki.openstack.org/" title="Go to the OpenStack Wiki">Wiki</a></li>
<li><a href="http://docs.openstack.org/" title="Go to OpenStack Documentation" class="current">Documentation</a></li>
</ul>
</div>
</div>
</div>
</div>
<!-- Page Content -->
<div class="container">
<div class="span-12">
<h3 class="subhead"><a href="http://docs.openstack.org/">Documentation</a> </h3>
</div>
<div class="searchArea span-10 last">
<div id="cse" style="width: 100%;">
Loading
</div>
<script src="http://www.google.com/jsapi" type="text/javascript">
</script>
<script type="text/javascript">
//<![CDATA[
google.load('search', '1', {language : 'en'});
var _gaq = _gaq || [];
_gaq.push(["_setAccount", "UA-17511903-6"]);
function _trackQuery(control, searcher, query) {
var gaQueryParamName = "q";
var loc = document.location;
var url = [
loc.pathname,
loc.search,
loc.search ? '&' : '?',
gaQueryParamName == '' ? 'q' : encodeURIComponent(gaQueryParamName),
'=',
encodeURIComponent(query)
].join('');
_gaq.push(["_trackPageview", url]);
}
google.setOnLoadCallback(function() {
var customSearchControl = new google.search.CustomSearchControl('011012898598057286222:elxsl505o0o');
customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
customSearchControl.setSearchStartingCallback(null, _trackQuery);
customSearchControl.draw('cse');
}, true);
//]]>
</script>
</div>
</div>
<div class="container">
<div class="span-12">
<h2>These documents are under construction. For a list of known
issues, please refer to <a
href="https://launchpad.net/openstack-manuals/+milestone/grizzly">this site</a>.</h2>
<h2><a href="http://openstack.org/software/start/">Getting Started</a>
</h2>
<p>Get up and running quickly with <a href="http://devstack.org">DevStack</a> or <a href="http://trystack.org">TryStack</a>.
</p>
<h2><a href="/install/">Installing OpenStack</a>
</h2>
<p>Installation and deployment guides for production-sized systems.
</p>
<h2><a href="/run/">Running OpenStack</a>
</h2>
<p>Operational and administration documentation for OpenStack cloud service providers.
</p>
<h2><a href="/developer/">Developing OpenStack</a>
</h2>
<p>Python developer documentation, continuous integration documentation, and language bindings documentation for OpenStack projects.
</p>
</div>
<div class="span-12 last">
<h2><a href="/cli/quick-start/content/index.html">Command Line Interfaces (CLI)</a>
</h2>
<p>The CLI documentation for nova, swift, glance, quantum, and keystone.
</p>
<h2><a href="/api/">API</a>
</h2>
<p>Documentation on the RESTful APIs provided by OpenStack services.
</p>
<h2><a href="/glossary/content/glossary.html">Glossary</a>
</h2>
<p>A list of terms and their definitions.</p>
</div>
</div>
<div class="container">
<div id="footer">
<hr />
<p>Documentation treated like code, powered by the community - interested? Here's <a href="http://wiki.openstack.org/Documentation/HowTo">how to contribute</a>. </p>
<p>The OpenStack project is provided under the Apache 2.0 license.
Openstack.org is powered by <a href=
"http://www.rackspacecloud.com/">Rackspace Cloud Computing</a>.</p>
</div>
</div>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js"></script>
<script src="http://docs.openstack.org/common/jquery/jquery.hoverIntent.minified.js" type="text/javascript" charset="utf-8">
</script>
<script type="text/javascript" charset="utf-8">
//<![CDATA[
$(document).ready(function() {
function addMenu(){
$(".dropDown").addClass("menuHover");
}
function removeMenu(){
$(".dropDown").removeClass("menuHover");
}
var menuConfig = {
interval: 500,
sensitivity: 4,
over: addMenu,
timeout: 500,
out: removeMenu
};
$(".dropDownTrigger").hoverIntent(menuConfig);
});
//]]>
</script>
</body>
</html>

View File

@ -127,7 +127,7 @@
<li class="link"><a href="/trunk/" title="Go to the &quot;Current release&quot; page">Current (master branch)</a></li>
<li class="link"><a href="/folsom/" title="Go to the &quot;Current release&quot; page">Folsom</a></li>
<li class="link"><a href="/folsom/" title="Go to the &quot;Folsom release&quot; page">Folsom</a></li>
<li class="link"><a href="/essex/" title="Go to the &quot;Essex&quot; page">Essex</a></li>