Updates for Kilo release

Many small updates to various sections to:
* remove/reword/redirect old pre-Icehouse cruft
* update links from Juno to Kilo (e.g., config reference)
* generally improve anything that looked dated

Change-Id: I26618be8527cccc1cbb7811272613da7d410fe50
Tom Fifield 2015-04-29 16:09:30 +08:00
parent aed095792f
commit 4b9fe08d94
35 changed files with 79 additions and 111 deletions


@ -38,8 +38,8 @@
<para>OpenStack offers open source software for cloud
administrators to manage and troubleshoot an OpenStack
cloud.</para>
<para>This guide documents OpenStack Juno, OpenStack
Icehouse, and OpenStack Havana releases.</para>
<para>This guide documents OpenStack Kilo, OpenStack
Juno, and OpenStack Icehouse releases.</para>
</abstract>
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->


@ -16,7 +16,7 @@
<para>On the KVM host, run <code>cat /proc/cpuinfo</code>. Make sure the <code>vme</code>
and <code>svm</code> flags are set.</para>
<para>Follow the instructions in the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/kvm.html#section_kvm_enable">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/kvm.html#section_kvm_enable">
enabling KVM section</link> of the <citetitle>Configuration
Reference</citetitle> to enable hardware virtualization
support in your BIOS.</para>


@ -29,7 +29,7 @@
<note>
<para>While most back-ends support this function, not all do.
See the driver documentation in the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/"
><citetitle>OpenStack Configuration
Reference</citetitle></link> for more
details.</para>


@ -121,7 +121,7 @@
</listitem>
</itemizedlist>
<para>For more information about hypervisors, see the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/section_compute-hypervisors.html"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_compute-hypervisors.html"
>Hypervisors</link> section in the
<citetitle>OpenStack Configuration
Reference</citetitle>.</para>
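For illustration, assuming a libvirt/KVM compute node, the hypervisor is typically selected in nova.conf roughly as follows (values are examples, not defaults):

    # /etc/nova/nova.conf -- sketch with assumed values
    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    # qemu, lxc, or parallels are other possible values
    virt_type = kvm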
@ -247,7 +247,7 @@
and its state is maintained, even if the instance is shut down.
For more information about this type of configuration, see
the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/">
xlink:href="http://docs.openstack.org/kilo/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>.
</para>
<note>
@ -268,7 +268,7 @@
an EC2-compatible API. This API allows EC2 legacy workflows
built for EC2 to work with OpenStack. For more information and
configuration options about this compatibility API, see the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/">
xlink:href="http://docs.openstack.org/kilo/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>.
</para>
<para>Numerous third-party tools and language-specific SDKs


@ -27,7 +27,7 @@
</listitem>
<listitem>
<para>For more information about image configuration options,
see the <link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-image-service.html">
see the <link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-image-service.html">
Image services</link> section of the <citetitle>OpenStack
Configuration Reference</citetitle>.</para>
</listitem>
@ -119,7 +119,7 @@
<xi:include href="section_compute-instance-mgt-tools.xml"/>
<section xml:id="section_instance-scheduling-constraints">
<title>Control where instances run</title>
<para>The <link xlink:href="http://docs.openstack.org/juno/config-reference/content/">
<para>The <link xlink:href="http://docs.openstack.org/kilo/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>
provides detailed information on controlling where your instances
run, including ensuring a set of instances run on different compute


@ -224,7 +224,7 @@
<programlisting language="ini">dnsmasq_config_file=/etc/dnsmasq-nova.conf</programlisting>
<para>For more information about creating a
<systemitem>dnsmasq</systemitem> configuration file, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>, and
<link xlink:href="http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq.conf.example">
the dnsmasq documentation</link>.</para>
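As a rough illustration, a dnsmasq configuration file such as /etc/dnsmasq-nova.conf might contain entries like the following (the domain, MTU, and DNS values are assumptions):

    # /etc/dnsmasq-nova.conf -- example values only
    domain=example.lan
    # DHCP option 26: advertise a smaller MTU to instances
    dhcp-option=26,1454
    # hand out a specific DNS server to instances
    dhcp-option=option:dns-server,192.0.2.53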


@ -261,7 +261,7 @@ qualname = nova</programlisting>
Python documentation</link> on logging configuration files.</para>
<para>For an example <filename>logging.conf</filename> file with
various defined handlers, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/">
<citetitle>OpenStack Configuration Reference</citetitle></link>.
</para>
</simplesect>
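As a minimal sketch (handler names and the log path are assumptions, not taken from the Configuration Reference), a logging.conf with two handlers might look like:

    [loggers]
    keys = root, nova

    [handlers]
    keys = consoleHandler, fileHandler

    [formatters]
    keys = simple

    [logger_root]
    level = WARNING
    handlers = consoleHandler

    [logger_nova]
    level = INFO
    handlers = fileHandler
    qualname = nova

    [handler_consoleHandler]
    class = StreamHandler
    args = (sys.stdout,)
    formatter = simple

    [handler_fileHandler]
    class = FileHandler
    args = ('/var/log/nova/nova.log',)
    formatter = simple

    [formatter_simple]
    format = %(asctime)s %(levelname)s %(name)s %(message)s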
@ -457,7 +457,7 @@ ws = websocket.create_connection(
<para>Although the <command>nova</command> command is called
<command>live-migration</command>, under the default Compute
configuration options, the instances are suspended before migration.
For more information, see <link xlink:href="http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html">
For more information, see <link xlink:href="http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html">
Configure migrations</link> in the <citetitle>OpenStack
Configuration Reference</citetitle>.</para>
</note>
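As a hedged example, true live migration is usually enabled by adding VIR_MIGRATE_LIVE to the libvirt migration flags in nova.conf; the exact flag set below is an assumption and should be checked against the Configuration Reference:

    # /etc/nova/nova.conf -- sketch
    [libvirt]
    live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE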


@ -47,7 +47,7 @@
image has been used before it won't necessarily be downloaded
every time. Information on the configuration options for caching
on compute nodes can be found in the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/"><citetitle>Configuration
xlink:href="http://docs.openstack.org/kilo/config-reference/content/"><citetitle>Configuration
Reference</citetitle></link>.
</para>
</section>


@ -38,7 +38,7 @@
<para>Define roles or policies in the
<filename>policy.json</filename> file:</para>
<programlisting language="json"><xi:include parse="text"
href="https://git.openstack.org/cgit/openstack/glance/plain/etc/policy.json?h=stable/juno"/></programlisting>
href="https://git.openstack.org/cgit/openstack/glance/plain/etc/policy.json?h=stable/kilo"/></programlisting>
<para>For each parameter, use <literal>"rule:restricted"</literal> to
restrict access to all users or <literal>"role:admin"</literal>
to limit access to administrator roles. For example:</para>
@ -89,7 +89,7 @@ delete = !</programlisting>
found, the <systemitem role="service">glance-api</systemitem>
service does not start.</para>
<para>To view a sample configuration file, see <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/section_glance-api.conf.html"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_glance-api.conf.html"
>glance-api.conf</link>.</para>
</step>
<step>
@ -99,7 +99,7 @@ delete = !</programlisting>
<programlisting language="ini">property_protection_rule_format = roles</programlisting>
<para>The default is <literal>roles</literal>.</para>
<para>To view a sample configuration file, see <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/section_glance-api.conf.html"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_glance-api.conf.html"
>glance-api.conf</link>.</para>
</step>
</procedure>
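When property_protection_rule_format is set to roles, the property protection file itself is INI-style; a minimal sketch, with assumed property patterns and role names:

    # file referenced by property_protection_file in glance-api.conf
    # (illustrative rules only)
    [^x_internal_.*]
    create = admin
    read = admin
    update = admin
    delete = !

    [.*]
    create = admin,member
    read = admin,member
    update = admin,member
    delete = admin,member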


@ -811,7 +811,7 @@
<title>Basic Load-Balancer-as-a-Service operations</title>
<note>
<para>The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load
balancers. The Havana release offers a reference implementation that is based on the
balancers. The reference implementation is based on the
HAProxy software load balancer.</para>
</note>
<para>This list shows example neutron commands that enable you to complete basic LBaaS


@ -188,9 +188,8 @@
<systemitem class="service"
>neutron-ovs-cleanup</systemitem> service runs
the <command>neutron-ovs-cleanup</command> command
automatically. However, on Debian-based systems
(including Ubuntu in releases earlier than
Icehouse), you must manually run this command or
automatically. However, on Debian-based systems,
you must manually run this command or
write your own system script that runs on boot
before the <systemitem class="service"
>neutron-dhcp-agent</systemitem> service
@ -222,7 +221,7 @@
configure the DHCP agent to automatically detach from a network
when the agent is out of service, or no longer needed.</para>
<para>This feature applies to all plugins that support DHCP scaling. For more information,
see the <link xlink:href="http://docs.openstack.org/juno/config-reference/content/networking-options-dhcp.html">
see the <link xlink:href="http://docs.openstack.org/kilo/config-reference/content/networking-options-dhcp.html">
DHCP agent configuration options</link> listed in the OpenStack Configuration Reference.</para>
<section xml:id="dhcp_agent_ovs">
<title>DHCP agent setup: OVS plug-in</title>
@ -370,8 +369,8 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
<systemitem class="service"
>neutron-ovs-cleanup</systemitem> service runs
the <command>neutron-ovs-cleanup</command> command
automatically. However, on Debian-based systems
(including Ubuntu prior to Icehouse), you must
automatically. However, on Debian-based systems,
you must
manually run this command or write your own system
script that runs on boot before the <systemitem
class="service">neutron-l3-agent</systemitem>
@ -380,8 +379,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
</section>
<section xml:id="install_neutron-metering-agent">
<title>Configure metering agent</title>
<para>Starting with the Havana release, the Neutron
Metering resides beside
<para>The Neutron Metering agent resides beside
<systemitem class="service">neutron-l3-agent</systemitem>.</para>
<procedure>
<title>To install the metering agent and configure the
@ -389,12 +387,6 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
<step>
<para>Install the agent by running:</para>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-metering-agent</userinput></screen>
<note>
<title>Package name prior to Icehouse</title>
<para>In releases of neutron prior to
Icehouse, this package was named
<package>neutron-plugin-metering-agent</package>.</para>
</note>
</step>
<step>
<para>If you use one of the following plugins, you
@ -497,15 +489,6 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
and
<systemitem class="service">neutron-lbaas-agent</systemitem>
services.</para>
<note>
<title>Upgrade from Havana to Icehouse</title>
<para>In the Icehouse release, LBaaS
server-agent communications changed. If
you transition from Havana to Icehouse,
make sure to upgrade both server and agent
sides before you use the load balancing
service.</para>
</note>
</step>
<step>
<para>Enable load balancing in the
@ -536,7 +519,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
<para>Before you install the OpenStack Networking Hyper-V L2 agent on a
Hyper-V compute node, ensure the compute node has been configured
correctly using these <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/hyper-v-virtualization-platform.html"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/hyper-v-virtualization-platform.html"
>instructions</link>.</para>
<procedure>
<title>To install the OpenStack Networking Hyper-V agent and configure the node</title>
@ -557,7 +540,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlist
<step>
<para>Create the <filename>C:\etc\neutron-hyperv-agent.conf</filename> file and add the
proper configuration options and the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/networking-plugin-hyperv_agent.html">Hyper-V
xlink:href="http://docs.openstack.org/kilo/config-reference/content/networking-plugin-hyperv_agent.html">Hyper-V
related options</link>. Here is a sample config file:</para>
<programlisting><xi:include parse="text" href="../../common/samples/neutron-hyperv-agent.conf"/></programlisting>
</step>


@ -3,7 +3,7 @@
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="section_plugin-config">
<title>Plug-in configurations</title>
<para>For configurations options, see <link
xlink:href="http://docs.openstack.org/icehouse/config-reference/content/section_networking-options-reference.html"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_networking-options-reference.html"
>Networking configuration options</link> in <citetitle>Configuration
Reference</citetitle>. These sections explain how to configure specific plug-ins.</para>
<section xml:id="bigswitch_floodlight_plugin">
@ -26,7 +26,7 @@
<systemitem>controller_ip:port</systemitem> pairs:</para>
<programlisting language="ini">server = <replaceable>CONTROLLER_IP</replaceable>:<replaceable>PORT</replaceable></programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
xlink:href="http://docs.openstack.org/kilo/install-guide/install/apt/content/neutron-controller-node.html"
>Install Networking Services</link> in the <citetitle>Installation
Guide</citetitle> in the <link xlink:href="http://docs.openstack.org"
>OpenStack Documentation index</link>. (The link defaults to the Ubuntu
@ -66,7 +66,7 @@ password = <replaceable>PASSWORD</replaceable>
address = <replaceable>SWITCH_MGMT_IP_ADDRESS</replaceable>
ostype = NOS</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
xlink:href="http://docs.openstack.org/kilo/install-guide/install/apt/content/neutron-controller-node.html"
>Install Networking Services</link> in any of the <citetitle>Installation
Guides</citetitle> in the <link xlink:href="http://docs.openstack.org"
>OpenStack Documentation index</link>. (The link defaults to the Ubuntu
@ -106,7 +106,7 @@ local_ip=<replaceable>DATA_NET_IP_NODE_ADDRESS</replaceable></programlisting>
successfully stop the OVS agent.</para>
</note>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
xlink:href="http://docs.openstack.org/kilo/install-guide/install/apt/content/neutron-controller-node.html"
>Install Networking Services</link> in the <citetitle>Installation
Guide</citetitle>.</para>
</step>
@ -198,7 +198,7 @@ nsx_controllers = <replaceable>API_ENDPOINT_LIST</replaceable> # comma-separated
</listitem>
</itemizedlist>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
xlink:href="http://docs.openstack.org/kilo/install-guide/install/apt/content/neutron-controller-node.html"
>Install Networking Services</link> in the <citetitle>Installation
Guide</citetitle>.</para>
</step>
@ -319,7 +319,7 @@ director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/icehouse/install-guide/install/apt/content/neutron-ml2-controller-node.html"
xlink:href="http://docs.openstack.org/kilo/install-guide/install/apt/content/neutron-controller-node.html"
>Install Networking Services</link> in the <citetitle>Installation
Guide</citetitle>.</para>
</step>


@ -499,11 +499,10 @@ enabled = True</programlisting>
plug-ins, the cloud administrator must determine the right
networking technology for the deployment.</para>
<para>The <glossterm baseform="Modular Layer 2 (ML2) neutron plug-in"
>Modular Layer 2 (ML2) plug-in</glossterm> was introduced
in the Havana release, enabling a variety of layer 2 networking
technologies to be created and managed with less effort compared
to earlier plug-ins. It currently works with Open vSwitch, Linux
Bridge, and Hyper-v L2 agents.</para>
>Modular Layer 2 (ML2) plug-in</glossterm> enables a variety
of layer 2 networking technologies to be created and managed with
less effort compared to stand-alone plug-ins. It currently works
with Open vSwitch, Linux Bridge, and Hyper-v L2 agents.</para>
<para>The ML2 framework contains two types of drivers, type drivers and mechanism drivers. Type drivers maintain any needed type-specific network states, perform provider network validation and tenant network allocation. Mechanism drivers ensure the information established by the type driver is properly applied. Multiple mechanism drivers can access the same network simultaneously, which addresses complex requirements in large heterogeneous environments.</para>
<para>You can enable the ML2 plug-in by editing the <literal>core-plugin</literal>
parameter in the <filename>/etc/neutron/neutron.conf</filename>
@ -511,9 +510,8 @@ enabled = True</programlisting>
<programlisting language = "ini">core_plugin = <replaceable>neutron.plugins.ml2.plugin.Ml2Plugin</replaceable></programlisting>
<note>
<title>Plug-in deprecation notice</title>
<para>The Open vSwitch and Linux Bridge plug-ins are
deprecated in the Havana release and will be removed
in the Icehouse release. The features in these
<para>The Open vSwitch and Linux Bridge plug-ins were
removed in the Icehouse release. The features in these
plug-ins are now part of the ML2 plug-in in the form
of mechanism drivers.</para>
</note>
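Alongside the core_plugin setting, ML2 type and mechanism drivers are configured in the ML2 plug-in configuration file; the following is a sketch with assumed driver choices (path and ranges are examples):

    # /etc/neutron/plugins/ml2/ml2_conf.ini -- sketch
    [ml2]
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,linuxbridge

    [ml2_type_vxlan]
    vni_ranges = 1:1000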


@ -43,8 +43,6 @@
the <emphasis>domain admin</emphasis>, and Orchestration uses that user
to manage the lifecycle of the users in the
<emphasis>stack user domain</emphasis>.</para>
<para>Stack domain users functionality is available since Icehouse release.
</para>
<section xml:id="section_orchestration_stack-domain-users-configuration">
<title>Stack domain users configuration</title>
<para>To configure stack domain users the following steps shall be


@ -12,7 +12,7 @@
xlink:href="http://docs.openstack.org/developer/swift/"
>docs.openstack.org/developer/swift/</link>.</para>
<para>See the <link
xlink:href="http://docs.openstack.org/juno/config-reference/content/"
xlink:href="http://docs.openstack.org/kilo/config-reference/content/"
><citetitle>OpenStack Configuration
Reference</citetitle></link> for a list of
configuration options for Object Storage.</para>


@ -88,7 +88,7 @@
<option>evaluation_service</option> option to <literal>
default</literal>. For more information, see the alarm
section in the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
<citetitle>OpenStack Configuration Reference</citetitle></link>.
</para>
</section>
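A hedged ceilometer.conf fragment for the option above (the [alarm] section placement and the interval value are assumptions):

    [alarm]
    evaluation_service = default
    # evaluation period in seconds (example value)
    evaluation_interval = 60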


@ -95,7 +95,7 @@
<para>image.send</para></td>
<td>The required configuration for Image service can be found in the
<link xlink:href=
"http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-agent-glance.html">
"http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-glance.html">
Configure the Image service for Telemetry</link> section
in the <citetitle>OpenStack Installation Guide</citetitle>.</td>
</tr>
@ -143,7 +143,7 @@
<para>snapshot.update.*</para></td>
<td>The required configuration for Block Storage service can be found in the
<link xlink:href=
"http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-agent-cinder.html">
"http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-cinder.html">
Add the Block Storage service agent for Telemetry</link>
section in the <citetitle>OpenStack Installation Guide</citetitle>.</td>
</tr>
@ -169,7 +169,7 @@
"section_telemetry-object-storage-meters"/>,
marked with <literal>notification</literal> as origin.</para>
<para>The instructions on how to install this middleware can be found in <link xlink:href=
"http://docs.openstack.org/juno/install-guide/install/apt/content/ceilometer-agent-swift.html">
"http://docs.openstack.org/kilo/install-guide/install/apt/content/ceilometer-swift.html">
Configure the Object Storage service for Telemetry</link>
section in the <citetitle>OpenStack Installation Guide</citetitle>.
</para>
@ -258,7 +258,7 @@
</listitem>
</itemizedlist>
<para>To install and configure this service use the <link xlink:href=
"http://docs.openstack.org/juno/install-guide/install/apt/content/ch_ceilometer.html">
"http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_ceilometer.html">
Install the Telemetry module</link> section in the <citetitle>OpenStack
Installation Guide</citetitle>.</para>
<para>The central agent does not need direct database connection. The
@ -275,7 +275,7 @@
is placed on the host machines to locally retrieve this information.</para>
<para>A compute agent instance has to be installed on each and every compute node,
installation instructions can be found in the <link xlink:href=
"http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
"http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
Install the Compute agent for Telemetry</link> section in the
<citetitle>OpenStack Installation Guide</citetitle>.
</para>
@ -325,7 +325,7 @@
<para>For information about the required configuration options that have to be set in the
<filename>ceilometer.conf</filename> configuration file for both the central and compute
agents, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
<literal>coordination</literal> section</link>
in the <citetitle>OpenStack Configuration Reference</citetitle>.</para>
<note>
@ -345,7 +345,7 @@
configuration also supports using different configuration files for groups of service
instances of this type that are running in parallel. For enabling this configuration
set a value for the <option>partitioning_group_prefix</option> option in the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
<literal>central</literal> section</link> in the <citetitle>OpenStack Configuration
Reference</citetitle>.</para>
<warning>
@ -359,7 +359,7 @@
<para>To enable the compute agent to run multiple instances simultaneously with
workload partitioning, the
<option>workload_partitioning</option> option has to be set to <literal>True</literal>
under the <link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
under the <link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
compute section</link> in the <filename>ceilometer.conf</filename> configuration
file.</para>
</section>
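For illustration, the coordination and partitioning options discussed above sit in ceilometer.conf; the Redis URL and group prefix below are assumptions, and any supported coordination backend can be used:

    # /etc/ceilometer/ceilometer.conf -- sketch
    [coordination]
    backend_url = redis://controller:6379

    [central]
    partitioning_group_prefix = region-one

    [compute]
    workload_partitioning = True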
@ -496,7 +496,7 @@
<filename>ceilometer.conf</filename> file. The meter pipeline and event pipeline
configuration files can be set by the <option>pipeline_cfg_file</option>
and <option>event_pipeline_cfg_file</option> options listed in the <link xlink:href=
"http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html"
"http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html"
>Description of configuration options for api table</link> section in the
<citetitle>OpenStack Configuration Reference</citetitle> respectively. Multiple pipelines
can be defined in one pipeline configuration file.</para>
@ -827,7 +827,7 @@ sinks:
run at a time. It is also supported to start multiple worker threads per collector process.
The <option>collector_workers</option> configuration option has to be modified in the
<link xlink:href=
"http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
"http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
collector section</link> of the <filename>ceilometer.conf</filename>
configuration file.</para>
<note>
@ -897,7 +897,7 @@ sinks:
<option>dispatcher</option> has to be changed to <literal>http</literal> in the
<filename>ceilometer.conf</filename> configuration file. For the list
of options that you need to set, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
<literal>dispatcher_http</literal> section</link>
in the <citetitle>OpenStack Configuration Reference</citetitle>.</para>
</simplesect>
@ -906,7 +906,7 @@ sinks:
<para>You can store samples in a file by setting the <option>dispatcher</option>
option in <filename>ceilometer.conf</filename> to <literal>file</literal>. For the list
of configuration options, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
<literal>dispatcher_file</literal> section</link>
in the <citetitle>OpenStack Configuration Reference</citetitle>.</para>
</simplesect>
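A short sketch of the dispatcher settings described above (the file path is an assumption):

    # /etc/ceilometer/ceilometer.conf -- sketch
    [DEFAULT]
    # "http" and "file" are the alternatives discussed here
    dispatcher = file

    [dispatcher_file]
    file_path = /var/log/ceilometer/samples.log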


@ -238,7 +238,7 @@
<para>To be able to use the <command>ceilometer</command> command, the
<package>python-ceilometerclient</package> package needs to be installed and configured
properly. For details about the installation process, see the <link xlink:href=
"http://docs.openstack.org/juno/install-guide/install/apt/content/ch_ceilometer.html">
"http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_ceilometer.html">
Telemetry chapter</link> in the <citetitle>OpenStack Installation Guide</citetitle>.</para>
<note>
<para>The Telemetry module captures the user-visible resource usage data. Therefore


@ -19,7 +19,7 @@
<option>store_events</option> option needs to be set to
<literal>True</literal>. For further configuration options, see the event
section in the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
<citetitle>OpenStack Configuration Reference</citetitle></link>.</para>
<note><para>It is advisable to set <option>disable_non_metric_meters</option>
to <literal>True</literal> when enabling events in the Telemetry module.
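A hedged example of the two options mentioned above (their placement in the [notification] section is an assumption to verify against the Configuration Reference):

    [notification]
    store_events = True
    disable_non_metric_meters = True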


@ -21,7 +21,7 @@
services in order to be able to collect all the samples you need.
For further information about configuration requirements see the
<link xlink:href=
"http://docs.openstack.org/juno/install-guide/install/apt/content/ch_ceilometer.html">
"http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_ceilometer.html">
Telemetry chapter</link> in the <citetitle>OpenStack Installation
Guide</citetitle>. Also check the <link xlink:href=
"http://docs.openstack.org/developer/ceilometer/install/manual.html">
@ -167,7 +167,7 @@
<literal>ComputeDriverCPUMonitor</literal> in the <filename>nova.conf</filename>
configuration file. For further information see the Compute configuration
section in the <link xlink:href=
"http://docs.openstack.org/trunk/config-reference/content/list-of-compute-config-options.html">
"http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html">
Compute chapter</link> of the <citetitle>OpenStack Configuration
Reference</citetitle>.</para>
<para>The following host machine related meters are collected


@ -230,7 +230,7 @@
<title>Users, roles and tenants</title>
<para>This module of OpenStack uses OpenStack Identity for authenticating and authorizing
users. The required configuration options are listed in the <link xlink:href=
"http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
"http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
Telemetry section</link> in the <citetitle>OpenStack Configuration Reference</citetitle>.</para>
<para>Two roles are used in the system basically, which are the 'admin' and 'non-admin'. The
authorization happens before processing each API request. The amount of returned data depends


@ -15,7 +15,7 @@
<filename>ceilometer.conf</filename>. The list of configuration
options are listed in the logging configuration options table in
the <link xlink:href=
"http://docs.openstack.org/juno/config-reference/content/ch_configuring-openstack-telemetry.html">
"http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html">
Telemetry section</link> in the <citetitle>OpenStack Configuration
Reference</citetitle>.</para>
<para>By default <literal>stderr</literal> is used as standard


@ -34,14 +34,6 @@
<para>Vendors can add proprietary customization to their distributions. If
an application or architecture makes use of these features, it will be
difficult to migrate to or use other types of environments.</para>
<warning>
<para>Anyone planning to use older versions of OpenStack prior
to Havana should consider carefully before attempting to incorporate
functionality between versions. Internal differences in older
versions may be so great that the best approach might be to
consider the versions to be essentially diverse platforms, as
different as OpenStack is from Amazon Web Services or Microsoft Azure.</para>
</warning>
<para>If an environment includes non-OpenStack clouds, it may experience
compatibility problems. CMP tools must account for the differences in the
handling of operations and implementation of services. Some situations in which these
@ -303,7 +295,7 @@
are not common in other situations:</para>
<itemizedlist>
<listitem>
<para>Image portability: Note that, as of the Icehouse release,
<para>Image portability: Note that, as of the Kilo release,
there is no single common image format that is usable by all
clouds. Conversion or the recreation of images is necessary
if porting between clouds. To make things simpler,


@ -16,7 +16,7 @@
customization of the service catalog for their site either
manually or via customization of the deployment tools in
use.</para>
<note><para>As of the Icehouse release, documentation for
<note><para>As of the Kilo release, documentation for
implementing this feature is in progress. See this bug for
more information:
<link


@ -51,7 +51,7 @@
external overlay manager or controller be used to map these
overlays together. It is necessary to ensure the amount of
possible IDs between the zones are identical. Note that, as of
the Icehouse release, OpenStack Networking was not capable of managing
the Kilo release, OpenStack Networking was not capable of managing
tunnel IDs across installations. This means that if one site
runs out of IDs, but other does not, that tenant's network
is unable to reach the other site.</para>


@ -38,7 +38,7 @@
be required to fill in the functional gaps. Hardware load
balancers are an example of equipment that may be necessary to
distribute workloads or offload certain functions. Note that,
as of the Icehouse release, dynamic routing is currently in
as of the Kilo release, dynamic routing is currently in
its infancy within OpenStack and may need to be implemented
either by an external device or a specialized service instance
within OpenStack. Tunneling is a feature provided by OpenStack Networking,
@ -161,7 +161,7 @@
options are available. This will alter the requirements for
any address plan as single-stacked and transitional IPv6
deployments can alleviate the need for IPv4 addresses.</para>
<para>As of the Icehouse release, OpenStack has limited support
<para>As of the Kilo release, OpenStack has limited support
for dynamic routing, however there are a number of options
available by incorporating third party solutions to implement
routing within the cloud including network equipment, hardware


@ -137,7 +137,7 @@
Service</link> enables you to use
or <package>zypper</package> to install the package.
First, add the Open Build Service repository:
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Icehouse/SLE_11_SP3 Icehouse</userinput></screen>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Kilo/SLE_12 Kilo</userinput></screen>
Then install <package>pip</package> and use it to manage client installation:
<screen><prompt>#</prompt> <userinput>zypper install python-devel python-pip</userinput></screen>
There are also packaged versions of the clients available
@ -261,7 +261,7 @@
<package>zypper</package> to install the clients from
the distribution packages in the Open Build Service. First,
add the Open Build Service repository:
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Icehouse/SLE_11_SP3 Icehouse</userinput></screen>
<screen><prompt>#</prompt> <userinput>zypper addrepo -f obs://Cloud:OpenStack:Kilo/SLE_12 Kilo</userinput></screen>
Then you can install the packages:
<screen><prompt>#</prompt> <userinput>zypper install python-<replaceable>PROJECT</replaceable></userinput></screen>
</para>


@ -2135,10 +2135,12 @@
<para>An official OpenStack project. Currently consists of Compute
(nova), Object Storage (swift), Image service (glance), Identity
(keystone), Dashboard (horizon), Networking (neutron), and Block
Storage (cinder). The Telemetry module (ceilometer) and Orchestration
module (heat) are integrated projects as of the Havana release. In the
Icehouse release, the Database service (trove) gains integrated project
status.</para>
Storage (cinder), the Telemetry module (ceilometer), Orchestration
module (heat), Database service (trove), Bare Metal service (ironic),
Data Processing service (sahara). However, this
definition is changing based on
community discussions about the "Big Tent".
</para>
</glossdef>
</glossentry>


@ -14,7 +14,7 @@
nodes. Depending upon the drivers used, the volume service can run
on controllers, compute nodes, or standalone storage nodes.
For more information, see the
<link xlink:href="http://docs.openstack.org/juno/config-reference/content/section_volume-drivers.html">
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_volume-drivers.html">
<citetitle>Configuration Reference</citetitle></link>.</para>
<note>
<para>This chapter omits the backup manager because it depends on the


@ -812,7 +812,7 @@
<title>Basic Load-Balancer-as-a-Service operations</title>
<note>
<para>The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load
balancers. The Havana release offers a reference implementation that is based on the
balancers. The reference implementation is based on the
HAProxy software load balancer.</para>
</note>
<para>This list shows example neutron commands that enable you to complete basic LBaaS


@ -4,7 +4,7 @@ Configuration
This content is currently under development. For general configuration, see
the `Configuration Reference
<http://docs.openstack.org/juno/config-reference/content/>`_.
<http://docs.openstack.org/kilo/config-reference/content/>`_.
.. toctree::
:maxdepth: 2


@ -73,7 +73,7 @@ The Virtual Private Network as a Service (VPNaaS) is a neutron extension that in
LbaaS
-----
The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The Havana release offers a reference implementation that is based on the HAProxy software load balancer.
The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The reference implementation is based on the HAProxy software load balancer.
FwaaS
-----


@ -111,7 +111,7 @@ command-line clients, and provides installation instructions as needed.
| | .. code:: |
| | |
| | # zypper addrepo -f obs://Cloud:OpenStack: \ |
| | Icehouse/SLE_11_SP3 Icehouse |
| | Kilo/SLE_12 Kilo |
| | |
| | Then install pip and use it to manage client |
| | installation: |
@ -219,7 +219,7 @@ installed without ``pip``.
the distribution packages in the Open Build Service. First, add the Open
Build Service repository::
# zypper addrepo -f obs://Cloud:OpenStack:Icehouse/SLE_11_SP3 Icehouse
# zypper addrepo -f obs://Cloud:OpenStack:Kilo/SLE_12 Kilo
Then you can install the packages::


@ -15,11 +15,6 @@ If you use the bare-metal driver, you must create a network interface
and add it to a bare-metal node. Then, you can launch an instance from a
bare-metal image.
.. note::
Development efforts are focused on moving the driver out of the
Compute code base in the Icehouse release.
You can list and delete bare-metal nodes. When you delete a node, any
associated network interfaces are removed. You can list and remove
network interfaces that are associated with a bare-metal node.


@ -23,9 +23,9 @@ The Static Web filter must be added to the pipeline in your
middleware. You must also add a Static Web middleware configuration
section.
See the Cloud Administrator Guide for an example of the `static web configuration syntax <http://docs.openstack.org/juno/config-reference/content/object-storage-static-web.html>`_.
See the Cloud Administrator Guide for an example of the `static web configuration syntax <http://docs.openstack.org/kilo/config-reference/content/object-storage-static-web.html>`_.
See the Cloud Administrator Guide for a complete example of the `/etc/swift/proxy-server.conf file <http://docs.openstack.org/juno/config-reference/content/proxy-server-conf.html>`_ (including static web).
See the Cloud Administrator Guide for a complete example of the `/etc/swift/proxy-server.conf file <http://docs.openstack.org/kilo/config-reference/content/proxy-server-conf.html>`_ (including static web).
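For orientation, enabling static web in the proxy server usually amounts to a fragment like the following in /etc/swift/proxy-server.conf; the surrounding pipeline entries are assumptions:

    # /etc/swift/proxy-server.conf -- sketch
    [pipeline:main]
    pipeline = healthcheck cache tempauth staticweb proxy-server

    [filter:staticweb]
    use = egg:swift#staticweb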
Your publicly readable containers are checked for two headers,
``X-Container-Meta-Web-Index`` and ``X-Container-Meta-Web-Error``. The