Adds identity management config and VM console config info.

patchset 2 addresses Dianne's excellent editing additions, and rebases

patchset 3 fixes the build

Change-Id: Ic2988d0d065424c4e8efce1b03a6c14442acffbd
This commit is contained in:
annegentle 2013-06-17 10:41:49 -05:00 committed by Tom Fifield
parent 8a46afe1c9
commit e1e1d35aa3
16 changed files with 519 additions and 246 deletions

View File

@ -6,43 +6,36 @@
<title>About the OpenStack dashboard</title>
<para>To install the OpenStack dashboard, complete the
following high-level steps: </para>
<orderedlist>
<listitem>
<para>Meet the system requirements for accessing the
dashboard.</para>
</listitem>
<listitem>
<para>Install the OpenStack Dashboard framework, including
Apache and related modules.</para>
</listitem>
<listitem>
<para>Configure the dashboard.</para>
<para>Then, restart and run the Apache server.</para>
</listitem>
<listitem>
<para>Verify your installation by going to the URL of the
Apache server you configured. </para>
</listitem>
</orderedlist>
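<para>As an illustrative example (the service name assumes a Debian/Ubuntu
Apache package; adjust it for your distribution), restarting Apache after
you configure the dashboard looks like:</para>
<screen><prompt>$</prompt> <userinput>sudo service apache2 restart</userinput></screen>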
<simplesect>
<title>Next steps:</title>
<para>After you install the dashboard, you can complete the following tasks:</para>
<itemizedlist>
<listitem>
<para>To customize your dashboard, see <link xlink:href="http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/dashboard-custom-brand.html">How To Custom Brand The OpenStack Dashboard (Horizon)</link>.
</para>
</listitem>
<listitem>
<para>To set up session storage for the dashboard,
see <link xlink:href="http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/dashboard-sessions.html"
>OpenStack Dashboard Session Storage</link>.</para>
</listitem>
<listitem>
<para>To deploy the

View File

@ -51,115 +51,11 @@ pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body j
-H &quot;X_Auth_Token: &lt;authtokenid&gt;&quot; -d '{&quot;user&quot;: {&quot;password&quot;: &quot;ABCD&quot;, &quot;original_password&quot;: &quot;DCBA&quot;}}'
</screen>
<para>
If the backend is kvs or sql, changing a user's password also
deletes all of that user's existing tokens.
</para>
</section>
<xi:include href="../common/identity-configure.xml"/>
<section xml:id="keystone-logging">
<title>Logging</title>
<para> Logging is configured externally to the rest of Identity,
@ -257,21 +153,7 @@ $ curl -H 'X-Auth-Token: ADMIN' -X DELETE http://localhost:35357/v2.0/OS-STATS/s
</screen>
</section>
<xi:include href="certificates-for-pki.xml"/>
<xi:include href="keystone-sample-conf-files.xml"/>
<section xml:id="running-keystone">
<title>Running</title>

View File

@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="configuring-the-hypervisor"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Understanding the Hypervisor</title>
<para>For production environments, the most tested hypervisors are KVM and Xen-based hypervisors.
KVM runs through libvirt; Xen runs best through XenAPI calls. KVM is selected by default and
requires the least additional configuration. </para>
<xi:include href="../common/kvm.xml" />
<xi:include href="../common/qemu.xml" />
<xi:include href="../common/introduction-to-xen.xml" />
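<para>As a minimal sketch (the option names follow the Grizzly-era flat
<filename>nova.conf</filename> format; treat the values as assumptions, not
defaults to copy), selecting the hypervisor comes down to two options on
each compute node:</para>
<programlisting># Illustrative /etc/nova/nova.conf excerpt
compute_driver=libvirt.LibvirtDriver
# One of: kvm, qemu, xen
libvirt_type=kvm</programlisting>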
</section>

View File

@ -0,0 +1,111 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="keystone-configuration-file">
<title>Configuration Files</title>
<para>
The Identity configuration file uses the INI file format with
sections, extended from
<link xlink:href="http://pythonpaste.org/">Paste</link>, a common
system used to configure Python WSGI-based applications. In
addition to the Paste configuration entries, general configuration
values are stored under <literal>[DEFAULT]</literal>,
<literal>[sql]</literal>, and <literal>[ec2]</literal>, and
drivers for the various services are included in
individual sections.
</para>
<para> The sections include: </para>
<itemizedlist>
<listitem>
<para>
<literal>[DEFAULT]</literal> - general configuration
</para>
</listitem>
<listitem>
<para>
<literal>[sql]</literal> - optional storage backend
configuration
</para>
</listitem>
<listitem>
<para>
<literal>[ec2]</literal> - Amazon EC2 authentication driver
configuration
</para>
</listitem>
<listitem>
<para>
<literal>[s3]</literal> - Amazon S3 authentication driver
configuration.
</para>
</listitem>
<listitem>
<para>
<literal>[identity]</literal> - identity system driver
configuration
</para>
</listitem>
<listitem>
<para>
<literal>[catalog]</literal> - service catalog driver
configuration
</para>
</listitem>
<listitem>
<para>
<literal>[token]</literal> - token driver configuration
</para>
</listitem>
<listitem>
<para>
<literal>[policy]</literal> - policy system driver configuration
for RBAC
</para>
</listitem>
<listitem>
<para>
<literal>[signing]</literal> - cryptographic signatures for PKI
based tokens
</para>
</listitem>
<listitem>
<para>
<literal>[ssl]</literal> - SSL configuration
</para>
</listitem>
</itemizedlist>
<para>
The configuration file is expected to be named
<filename>keystone.conf</filename>. When starting Identity, you
can specify a different configuration file with the
<literal>--config-file</literal> option. If you do
<emphasis role="strong">not</emphasis> specify a configuration
file, keystone looks for one in the following directories, in
order:
</para>
<orderedlist>
<listitem>
<para>
<literal>~/.keystone</literal>
</para>
</listitem>
<listitem>
<para>
<literal>~/</literal>
</para>
</listitem>
<listitem>
<para>
<literal>/etc/keystone</literal>
</para>
</listitem>
<listitem>
<para>
<literal>/etc</literal>
</para>
</listitem>
</orderedlist>
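<para>As an illustrative sketch only (the option values are assumptions
for a simple single-host deployment, not defaults to copy), a minimal
<filename>keystone.conf</filename> using the section layout above might
look like:</para>
<programlisting>[DEFAULT]
admin_token = ADMIN
bind_host = 0.0.0.0
public_port = 5000
admin_port = 35357

[sql]
connection = sqlite:////var/lib/keystone/keystone.db

[identity]
driver = keystone.identity.backends.sql.Identity

[token]
driver = keystone.token.backends.kvs.Token</programlisting>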
</section>

View File

@ -0,0 +1,23 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="sample-configuration-files">
<title>Sample Configuration Files</title>
<itemizedlist>
<listitem>
<para>
<filename>etc/keystone.conf</filename>
</para><para>
<programlisting><xi:include parse="text" href="https://raw.github.com/openstack/keystone/master/etc/keystone.conf.sample"></xi:include></programlisting></para>
</listitem>
<listitem>
<para>
<literal>etc/logging.conf.sample</literal>
</para>
<para><programlisting><xi:include parse="text" href="https://raw.github.com/openstack/keystone/master/etc/logging.conf.sample"/></programlisting></para>
</listitem>
</itemizedlist>
</section>

View File

@ -6,7 +6,8 @@
<title>Using VNC Console</title>
<para> There are several methods to interact with the VNC console:
using a VNC client directly, using a special Java client, or through the
web browser. For information about configuring the console,
see <link linkend="remote-console-access">Access OpenStack
through a remote console</link>.
</para>
<section xmlns="http://docbook.org/ns/docbook"

View File

@ -230,15 +230,14 @@
<xi:include href="../common/getstart.xml"/>
<xi:include href="aboutcompute.xml"/>
<xi:include href="computeinstall.xml"/>
<!--<xi:include href="computeconfigure.xml"/> -->
<!--<xi:include href="compute-options-reference.xml"/> -->
<xi:include href="../openstack-config/ch_computeconfigure.xml"/>
<xi:include href="../openstack-config/ch_compute-options-reference.xml"/>
<xi:include href="../common/ch_identity_mgmt.xml"/>
<xi:include href="../common/ch_image_mgmt.xml"/>
<xi:include href="ch_instance_mgmt.xml"/>
<xi:include href="../openstack-config/ch_computehypervisors.xml"/>
<xi:include href="computenetworking.xml"/>
<xi:include href="computevolumes.xml"/>
<!-- next two files previously commented out - but build fails -->
<xi:include href="computescheduler.xml"/>
<xi:include href="../openstack-config/ch_computecells.xml"/>
<xi:include href="computeadmin.xml"/>

View File

@ -14,24 +14,6 @@
</listitem></itemizedlist></para>
<xi:include href="section_dashboard.xml"/>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="remote-console-access">
<title>Access OpenStack through a remote console</title>
<!--<?dbhtml stop-chunking?>-->
<para>OpenStack has two main methods for providing a remote
console or remote desktop access to guest virtual
machines: VNC, and SPICE HTML5. Both can be used
either through the OpenStack dashboard or the command
line. Best practice is to select one or the other to
run.</para>
<xi:include href="../common/using-vnc-console.xml"/>
<xi:include href="compute-spice-console.xml"/>
</section>
<!--<xi:include href="../openstack-config/compute-configure-console.xml"/>-->
<xi:include href="../common/using-vnc-console.xml"/>
</chapter>

View File

@ -0,0 +1,281 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="installing-moosefs-as-backend"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Installing MooseFS as shared storage for the instances directory</title>
<para> The Block Storage sections show a convenient way to deploy shared storage using
NFS. For better transaction performance, you can deploy MooseFS instead. </para>
<para>MooseFS (Moose File System) is a shared file system; it implements roughly the same
concepts as shared storage solutions such as Ceph, Lustre, or GlusterFS. </para>
<para>
<emphasis role="bold">Main concepts </emphasis>
<itemizedlist>
<listitem>
<para> A metadata server (MDS), also called the master server, which manages file
distribution, file access, and the namespace.</para>
</listitem>
<listitem>
<para>A metalogger server (MLS), which backs up the MDS logs, including objects, chunks,
sessions, and object metadata.</para>
</listitem>
<listitem>
<para>A chunk server (CSS), which stores the data as chunks
and replicates them across the chunk servers.</para>
</listitem>
<listitem>
<para>A client, which talks to the MDS and interacts with the CSS. MooseFS clients mount the
MooseFS file system using FUSE.</para>
</listitem>
</itemizedlist> For more information, see the <link
xlink:href="http://www.moosefs.org/">official project website</link>.
</para>
<para>The setup described here is as follows: </para>
<para>
<itemizedlist>
<listitem>
<para> Two compute nodes running both MooseFS chunkserver and client services. </para>
</listitem>
<listitem>
<para> One MooseFS master server, running the metadata service. </para>
</listitem>
<listitem>
<para> One MooseFS slave server, running the metalogger service. </para>
</listitem>
</itemizedlist> This walkthrough uses the following network layout: </para>
<para>
<itemizedlist>
<listitem>
<para><literal>10.0.10.15</literal> for the MooseFS metadata server admin IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.16</literal> for the MooseFS metadata server main IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.17</literal> for the MooseFS metalogger server admin IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.18</literal> for the MooseFS metalogger server main IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.19</literal> for the MooseFS first chunkserver IP</para>
</listitem>
<listitem>
<para><literal>10.0.10.20</literal> for the MooseFS second chunkserver IP</para>
</listitem>
</itemizedlist>
<figure xml:id="moose-FS-deployment">
<title>MooseFS deployment for OpenStack</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/moosefs/SCH_5008_V00_NUAC-MooseFS_OpenStack.png" scale="60"
/>
</imageobject>
</mediaobject>
</figure>
</para>
<section xml:id="installing-moosefs-metadata-metalogger-servers">
<title> Installing the MooseFS metadata and metalogger servers</title>
<para>You can run these components anywhere as long as the MooseFS chunkservers can reach
the MooseFS master server. </para>
<para>In this deployment, both the MooseFS master and slave run their services inside a virtual
machine; make sure to allocate enough memory to the MooseFS metadata
server, because all the metadata is stored in RAM while the service runs. </para>
<para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Hosts entry configuration</emphasis></para>
<para>In the <filename>/etc/hosts</filename> file, add the following entry:
<programlisting>
10.0.10.16 mfsmaster
</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">Required packages</emphasis></para>
<para>Install the required packages by running the following commands:
<screen os="ubuntu"><prompt>$</prompt> <userinput>apt-get install zlib1g-dev python pkg-config</userinput> </screen>
<screen os="rhel;fedora;centos"><prompt>$</prompt> <userinput>yum install make automake gcc gcc-c++ kernel-devel python26 pkg-config</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">User and group creation</emphasis></para>
<para> Create the required user and group:
<screen><prompt>$</prompt> <userinput>groupadd mfs &amp;&amp; useradd -g mfs mfs </userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Download the sources</emphasis></para>
<para> Go to the <link xlink:href="http://www.moosefs.org/download.html">MooseFS download page</link>
and fill in the download form to obtain the URL for the package.
</para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Extract and configure the sources</emphasis></para>
<para>Extract the package and compile it :
<screen><prompt>$</prompt> <userinput>tar -zxvf mfs-1.6.25.tar.gz &amp;&amp; cd mfs-1.6.25 </userinput></screen>
For the MooseFS master server installation, we disable from the compilation the
mfschunkserver and mfsmount components :
<screen><prompt>$</prompt> <userinput>./configure --prefix=/usr --sysconfdir=/etc/moosefs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount</userinput></screen><screen><prompt>$</prompt> <userinput>make &amp;&amp; make install</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Create configuration files</emphasis></para>
<para> Keep the default settings; for performance tuning, see the <link
xlink:href="http://www.moosefs.org/moosefs-faq.html">MooseFS official FAQ</link>.
</para>
<para><screen><prompt>$</prompt> <userinput>cd /etc/moosefs</userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfsmaster.cfg.dist mfsmaster.cfg </userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfsmetalogger.cfg.dist mfsmetalogger.cfg </userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfsexports.cfg.dist mfsexports.cfg </userinput></screen>
In <filename>/etc/moosefs/mfsexports.cfg</filename>, edit the second line to
restrict access to the private network: </para>
<programlisting>
10.0.10.0/24 / rw,alldirs,maproot=0
</programlisting>
<para>
Create the metadata file:
<screen><prompt>$</prompt> <userinput>cd /var/lib/mfs &amp;&amp; cp metadata.mfs.empty metadata.mfs</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Power up the MooseFS mfsmaster service</emphasis></para>
<para>You can now start the <literal>mfsmaster</literal> and <literal>mfscgiserv</literal> daemons on the MooseFS
metadata server (<literal>mfscgiserv</literal> is a web server that lets you view the
MooseFS status in real time through a web interface):
<screen><prompt>$</prompt> <userinput>/usr/sbin/mfsmaster start &amp;&amp; /usr/sbin/mfscgiserv start</userinput></screen>Open
http://10.0.10.16:9425 in your browser to see the MooseFS status
page.</para>
<para/>
</listitem>
<listitem>
<para><emphasis role="bold">Power up the MooseFS metalogger service</emphasis></para>
<para>
<screen><prompt>$</prompt> <userinput>/usr/sbin/mfsmetalogger start</userinput></screen>
</para>
</listitem>
</orderedlist>
</para>
<para/>
</section>
<section xml:id="installing-moosefs-chunk-client-services">
<title>Installing the MooseFS chunk and client services</title>
<para> First, install the latest version of FUSE; then install the
MooseFS chunk and client services. </para>
<para/>
<para><emphasis role="bold">Installing FUSE</emphasis></para>
<para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Required package</emphasis></para>
<para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>apt-get install util-linux</userinput> </screen>
<screen os="rhel;fedora;centos"><prompt>$</prompt> <userinput>yum install util-linux</userinput></screen></para>
</listitem>
<listitem>
<para><emphasis role="bold">Download the sources and configure them</emphasis></para>
<para> For this setup, retrieve the latest version of FUSE to make sure every
function is available:
<screen><prompt>$</prompt> <userinput>wget http://downloads.sourceforge.net/project/fuse/fuse-2.X/2.9.1/fuse-2.9.1.tar.gz &amp;&amp; tar -zxvf fuse-2.9.1.tar.gz &amp;&amp; cd fuse-2.9.1</userinput></screen><screen><prompt>$</prompt> <userinput>./configure &amp;&amp; make &amp;&amp; make install</userinput></screen>
</para>
</listitem>
</orderedlist>
</para>
<para><emphasis role="bold">Installing the MooseFS chunk and client services</emphasis></para>
<para> To install both services, follow the same steps presented earlier
(steps 1 to 4): <orderedlist>
<listitem>
<para> Hosts entry configuration</para>
</listitem>
<listitem>
<para>Required packages</para>
</listitem>
<listitem>
<para>User and group creation</para>
</listitem>
<listitem>
<para>Download the sources</para>
</listitem>
<listitem>
<para><emphasis role="bold">Extract and configure the sources</emphasis></para>
<para>Extract the package and compile it :
<screen><prompt>$</prompt> <userinput>tar -zxvf mfs-1.6.25.tar.gz &amp;&amp; cd mfs-1.6.25 </userinput></screen>
For the MooseFS chunk server installation, exclude only the
mfsmaster component from the compilation:
<screen><prompt>$</prompt> <userinput>./configure --prefix=/usr --sysconfdir=/etc/moosefs --localstatedir=/var/lib --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster</userinput></screen><screen><prompt>$</prompt> <userinput>make &amp;&amp; make install</userinput></screen>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Create configuration files</emphasis></para>
<para> The chunk server configuration is relatively easy to set up. You only need to
create, on every server, the directories that will store the data of your
cluster.</para>
<para><screen><prompt>$</prompt> <userinput>cd /etc/moosefs</userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfschunkserver.cfg.dist mfschunkserver.cfg</userinput></screen>
<screen><prompt>$</prompt> <userinput>cp mfshdd.cfg.dist mfshdd.cfg</userinput></screen>
<screen><prompt>$</prompt> <userinput>mkdir /mnt/mfschunks{1,2} &amp;&amp; chown -R mfs:mfs /mnt/mfschunks{1,2}</userinput></screen>
Edit <filename>/etc/moosefs/mfshdd.cfg</filename> and add the directories you created
to make them part of the cluster: </para>
<programlisting>
# mount points of HDD drives
#
#/mnt/hd1
#/mnt/hd2
#etc.
/mnt/mfschunks1
/mnt/mfschunks2
</programlisting>
</listitem>
<listitem>
<para><emphasis role="bold">Power up the MooseFS mfschunkserver service</emphasis></para>
<para>
<screen><prompt>$</prompt> <userinput>/usr/sbin/mfschunkserver start</userinput></screen>
</para>
</listitem>
</orderedlist>
</para>
</section>
<section xml:id="access-to-cluster-storage">
<title>Access to your cluster storage</title>
<para> You can now access your cluster space from the compute nodes (both acting as
chunk servers): <screen><prompt>$</prompt> <userinput>mfsmount /var/lib/nova/instances -H mfsmaster</userinput></screen>
<computeroutput> mfsmaster accepted connection with parameters: read-write,restricted_ip ;
root mapped to root:root </computeroutput>
<screen><prompt>$</prompt> <userinput>mount</userinput></screen><programlisting>
/dev/cciss/c0d0p1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /dev type devtmpfs (rw,mode=0755)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /dev/shm type tmpfs (rw,nosuid,nodev)
none on /var/run type tmpfs (rw,nosuid,mode=0755)
none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
none on /var/lib/ureadahead/debugfs type debugfs (rw,relatime)
<emphasis role="bold">mfsmaster:9421 on /var/lib/nova/instances type fuse.mfs (rw,allow_other,default_permissions)</emphasis>
</programlisting>You
can interact with it the way you would with a classical mount, using built-in Linux
commands (cp, rm, and so on).
</para>
<para> The MooseFS client has several tools for managing the objects within the cluster (setting
replication goals, and so on). You can see the list of available tools by running:
<screen><prompt>$</prompt> <userinput>mfs &lt;TAB&gt; &lt;TAB&gt;</userinput> </screen><programlisting>
mfsappendchunks mfschunkserver mfsfileinfo mfsgetgoal mfsmount mfsrsetgoal mfssetgoal mfstools
mfscgiserv mfsdeleattr mfsfilerepair mfsgettrashtime mfsrgetgoal mfsrsettrashtime mfssettrashtime
mfscheckfile mfsdirinfo mfsgeteattr mfsmakesnapshot mfsrgettrashtime mfsseteattr mfssnapshot
</programlisting>You
can read the manual page for every command. You can also see the <link xlink:href="http://linux.die.net/man/1/mfsrgetgoal">online help</link>.
</para>
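<para>For example (a sketch only; the goal value and path are illustrative),
you can set a replication goal of 2 on the instances directory and check it
afterwards:</para>
<screen><prompt>$</prompt> <userinput>mfssetgoal -r 2 /var/lib/nova/instances</userinput></screen>
<screen><prompt>$</prompt> <userinput>mfsgetgoal /var/lib/nova/instances</userinput></screen>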
<para><emphasis role="bold">Add an entry into the fstab file</emphasis></para>
<para>
To make sure the storage is mounted at boot, add an entry to the <filename>/etc/fstab</filename> file on both compute nodes:
<programlisting>
mfsmount /var/lib/nova/instances fuse mfsmaster=mfsmaster,_netdev 0 0
</programlisting>
</para>
</section>
</section>

View File

@ -2,7 +2,7 @@
<book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="bk_block-storage-grizzly">
xml:id="bk-openstack-config-reference">
<title>OpenStack Configuration Reference</title>
<?rax title.font.size="28px" subtitle.font.size="28px"?>
<titleabbrev>OpenStack Configuration Reference</titleabbrev>
@ -20,8 +20,8 @@
<year>2013</year>
<holder>OpenStack Foundation</holder>
</copyright>
<productname>OpenStack Block Storage Service</productname>
<releaseinfo>Grizzly, 2013.1</releaseinfo>
<productname>OpenStack</productname>
<releaseinfo>Havana</releaseinfo>
<pubdate/>
<legalnotice role="apache2">
<annotation>
@ -54,43 +54,22 @@
</revhistory>
</info>
<xi:include href="config-overview.xml"/>
<xi:include href="ch_config-overview.xml"/>
<!-- Identity -->
<xi:include href="ch_identityconfigure.xml"/>
<!-- Compute -->
<xi:include href="ch_computeconfigure.xml"/>
<xi:include href="ch_computehypervisors.xml"/>
<xi:include href="ch_computescheduler.xml"/>
<xi:include href="ch_computecells.xml"/>
<!-- Dashboard -->
<xi:include href="ch_dashboardconfigure.xml"/>
<!-- Object Storage -->
<xi:include href="ch_objectstorageconfigure.xml"/>
<!-- Block Storage -->
<xi:include href="ch_blockstorageconfigure.xml"/>
<!-- Long listings of reference tables -->
<xi:include href="ch_compute-options-reference.xml"/>
<xi:include href="ch_image-options-reference.xml"/>
<xi:include href="ch_networking-options-reference.xml"/>
<xi:include href="ch_dashboardconfigure.xml"/>
<xi:include href="ch_objectstorageconfigure.xml"/>
<xi:include href="ch_blockstorageconfigure.xml"/>
<!--
Outline
Configuring Compute
api
scheduler
volumes
compute
network
Configuring Networking
Networking with nova-network
Networking settings
Networking scheduler settings
Configuring Volumes/Block Storage settings
Volume scheduler settings
Configuring Object Storage
Configuring Identity settings
Configuring Image settings
api
registry
Configuring Metering
-->
</book>

View File

@ -434,7 +434,7 @@ $ <userinput>sudo service nova-compute restart</userinput></screen>
<mediaobject>
<imageobject>
<imagedata fileref="figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png"
<imagedata fileref="../openstack-compute-admin/figures/SCH_5004_V00_NUAC-Network_mode_KVM_Flat_OpenStack.png"
contentwidth="6in"/>
</imageobject>
</mediaobject>
@ -477,7 +477,7 @@ xenapi_remap_vbd_dev=true
<mediaobject>
<imageobject>
<imagedata fileref="figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png"
<imagedata fileref="../openstack-compute-admin/figures/SCH_5005_V00_NUAC-Network_mode_XEN_Flat_OpenStack.png"
scale="60"/>
</imageobject>
</mediaobject>

View File

@ -0,0 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_configuring-dashboard">
<title>Configuring Console Access</title>
<para>This chapter describes how to configure access to running
VMs through a console, either through the dashboard or the
nova CLI.</para>
<xi:include href="../common/compute-vnc-console.xml"/>
</chapter>

View File

@ -0,0 +1,32 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter version="5.0" xml:id="ch_configuring-openstack-identity"
xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:ns5="http://www.w3.org/1999/xhtml"
xmlns:ns4="http://www.w3.org/2000/svg"
xmlns:ns3="http://www.w3.org/1998/Math/MathML"
xmlns:ns="http://docbook.org/ns/docbook">
<title>Configuring OpenStack Identity</title>
<?dbhtml stop-chunking?>
<para>The Identity project has several configuration
options.</para>
<section xml:id="setting-flags-in-keystone-conf-file">
<title>Setting Configuration Options in the
<filename>keystone.conf</filename> File</title>
<para>The configuration file <filename>keystone.conf</filename>
is installed in <filename>/etc/keystone</filename> by default. A
default set of options is already configured in
<filename>keystone.conf</filename> when you install manually. </para>
<para>Here is a simple example
<filename>keystone.conf</filename> file.</para>
<programlisting>
<xi:include parse="text" href="../openstack-install/samples/keystone.conf"/>
</programlisting>
</section>
<xi:include href="../common/identity-configure.xml"/>
<xi:include href="../common/keystone-sample-conf-files.xml"/>
<xi:include href="../common/certificates-for-pki.xml"/>
<xi:include href="../common/keystone-ssl-config.xml"/>
</chapter>

View File

@ -57,7 +57,7 @@
<mediaobject>
<imageobject>
<imagedata
fileref="figures/novnc/SCH_5009_V00_NUAC-VNC_OpenStack.png"
fileref="../common/figures/novnc/SCH_5009_V00_NUAC-VNC_OpenStack.png"
format="PNG" width="5in"/>
</imageobject>
</mediaobject>

View File

@ -1,35 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="cli_help">
<title>OpenStack Configuration Overview</title>
<para>OpenStack is a collection of open source projects that enable you
to set up cloud services. Each project uses similar
configuration techniques and a common framework for INI file
options. This guide pulls together multiple references for
each type of configuration. </para>
<para>.conf files</para>
<para>.ini files</para>
<para><emphasis role="bold">Compute</emphasis></para>
<para>API options</para>
<para>Authentication options</para>
<para>Database backends</para>
<para>Scheduling backends</para>
<para>Messaging backends</para>
<para>Virtualization options</para>
<para>Storage driver options</para>
<para>Networking driver options</para>
<para><emphasis role="bold">Block Storage</emphasis></para>
<para>Database backends</para>
<para>Scheduling backends</para>
<para>Storage driver options</para>
<para><emphasis role="bold">Object Storage</emphasis></para>
<para>.conf files</para>
<para>Authentication options</para>
<para><emphasis role="bold">Image</emphasis>
</para>
<para/>
<para><emphasis role="bold">Identity</emphasis></para>
<para/>
</chapter>