
Remove '.. end' comments

These were used by now-dead tooling. We can remove them.
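A change of this shape is easy to script. The commit does not record how the markers were actually removed, so the following is only a hypothetical sketch (assumes GNU sed): delete every line that consists solely of the ``.. end`` marker from the RST and TXT sources under a docs tree.

```shell
# Hypothetical cleanup sketch (GNU sed); the commit does not record
# the actual command used. Deletes every line consisting only of the
# ".. end" marker from RST/TXT sources under $DOCROOT.
DOCROOT=${DOCROOT:-doc}
find "$DOCROOT" -type f \( -name '*.rst' -o -name '*.txt' \) \
    -exec sed -i '/^[[:space:]]*\.\. end[[:space:]]*$/d' {} + 2>/dev/null || true
```

Because the marker always occupies a line of its own, a line-anchored delete is safe; surrounding code blocks and prose are untouched.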

Change-Id: I4b4ef694206249da8b98589b3026f2a2be501ee3
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
changes/97/605097/1
Stephen Finucane committed 4 years ago
commit 2250b193e4
32 changed files (lines changed in parentheses):

1. doc/common/conventions.rst (4)
2. doc/doc-contrib-guide/source/rst-conv/inline-markups.rst (2)
3. doc/doc-contrib-guide/source/rst-conv/profiling.rst (8)
4. doc/doc-contrib-guide/source/rst-conv/rst2bash.rst (14)
5. doc/doc-contrib-guide/source/rst-conv/source-code.rst (14)
6. doc/install-guide/source/environment-etcd-obs.rst (14)
7. doc/install-guide/source/environment-etcd-rdo.rst (6)
8. doc/install-guide/source/environment-etcd-ubuntu.rst (6)
9. doc/install-guide/source/environment-memcached-obs.rst (6)
10. doc/install-guide/source/environment-memcached-rdo.rst (6)
11. doc/install-guide/source/environment-memcached-ubuntu.rst (6)
12. doc/install-guide/source/environment-messaging-obs.rst (8)
13. doc/install-guide/source/environment-messaging-rdo.rst (8)
14. doc/install-guide/source/environment-messaging-ubuntu.rst (6)
15. doc/install-guide/source/environment-networking-compute.rst (6)
16. doc/install-guide/source/environment-networking-controller.rst (6)
17. doc/install-guide/source/environment-networking-verify.rst (8)
18. doc/install-guide/source/environment-ntp-controller.rst (16)
19. doc/install-guide/source/environment-ntp-other.rst (12)
20. doc/install-guide/source/environment-ntp-verify.rst (4)
21. doc/install-guide/source/environment-packages-obs.rst (5)
22. doc/install-guide/source/environment-security.rst (2)
23. doc/install-guide/source/environment-sql-database-obs.rst (8)
24. doc/install-guide/source/environment-sql-database-rdo.rst (8)
25. doc/install-guide/source/environment-sql-database-ubuntu.rst (8)
26. doc/install-guide/source/launch-instance-cinder.rst (14)
27. doc/install-guide/source/launch-instance-networks-provider.rst (12)
28. doc/install-guide/source/launch-instance-networks-selfservice.rst (26)
29. doc/install-guide/source/launch-instance-provider.rst (24)
30. doc/install-guide/source/launch-instance-selfservice.rst (30)
31. doc/install-guide/source/launch-instance.rst (12)
32. doc/install-guide/source/shared/edit_hosts_file.txt (2)
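After applying a commit like this, a quick sanity check is to confirm no marker survives anywhere in the docs tree. A hypothetical check (not part of the change itself):

```shell
# Hypothetical post-cleanup check: prints the location of any stray
# ".. end" marker under $DOCROOT, or "clean" if none remain.
DOCROOT=${DOCROOT:-doc}
if grep -R -n '^[[:space:]]*\.\. end[[:space:]]*$' "$DOCROOT" 2>/dev/null; then
    echo "stray '.. end' markers remain"
else
    echo "clean"
fi
```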

4
doc/common/conventions.rst

@ -35,8 +35,6 @@ Command prompts
$ command
.. end
Any user, including the ``root`` user, can run commands that are
prefixed with the ``$`` prompt.
@ -44,8 +42,6 @@ prefixed with the ``$`` prompt.
# command
.. end
The ``root`` user must run commands that are prefixed with the ``#``
prompt. You can also prefix these commands with the :command:`sudo`
command, if available, to run them.

2
doc/doc-contrib-guide/source/rst-conv/inline-markups.rst

@ -23,8 +23,6 @@ To insert a semantic markup into your document, use the syntax below.
:markup:`inline text`
.. end
Application
~~~~~~~~~~~

8
doc/doc-contrib-guide/source/rst-conv/profiling.rst

@ -35,8 +35,6 @@ The valid tags for the ``only`` directive are:
# apt-get install chrony
.. end
.. endonly
.. only:: rdo
@ -45,8 +43,6 @@ The valid tags for the ``only`` directive are:
# yum install chrony
.. end
.. endonly
.. only:: obs
@ -58,8 +54,6 @@ The valid tags for the ``only`` directive are:
# zypper addrepo http://download.opensuse.org/repositories/network:time/openSUSE_13.2/network:time.repo
...
.. end
On SLES:
.. code-block:: console
@ -67,8 +61,6 @@ The valid tags for the ``only`` directive are:
# zypper addrepo http://download.opensuse.org/repositories/network:time/SLE_12/network:time.repo
...
.. end
.. endonly
For more details refer to `Including content based on tags

14
doc/doc-contrib-guide/source/rst-conv/rst2bash.rst

@ -28,10 +28,6 @@ syntax format.
$ echo "Hello, World!"
.. end
.. end
* The ``code-block`` tags which rely on path should have ``path``
tag one line above without a line break as shown below. May it
be some code which has to be run from a specific folder or a
@ -47,10 +43,6 @@ syntax format.
$ echo "Run a command from a specific folder"
$ chmod -R +rx bin
.. end
.. end
* Example 2: Configure a configuration file.
.. code-block:: rst
@ -62,10 +54,6 @@ syntax format.
...
debug = True
.. end
.. end
* The ``only`` tags should be closed with ``endonly``.
.. code-block:: rst
@ -75,5 +63,3 @@ syntax format.
All related content.
.. endonly
.. end
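With the ``.. end`` markers gone, the rst2bash convention described above reduces to the ``.. path`` tag sitting directly on the line before the code block, with no closing marker. A reconstructed example (the file name is hypothetical, chosen only for illustration):

```rst
.. path /etc/nova/nova.conf
.. code-block:: ini

   [DEFAULT]
   ...
   debug = True
```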

14
doc/doc-contrib-guide/source/rst-conv/source-code.rst

@ -90,10 +90,6 @@ files, ``console`` for console inputs and outputs, and so on.
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
.. end
.. end
**Output**
.. code-block:: ini
@ -104,8 +100,6 @@ files, ``console`` for console inputs and outputs, and so on.
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
.. end
.. note::
When you write the command example, you should write the input and output
@ -134,10 +128,6 @@ highlight some specific lines with the ``:emphasize-lines:`` parameter:
print '...but this one is.'
print 'This one is highlighted too.'
.. end
.. end
**Output**
.. code-block:: python
@ -151,8 +141,6 @@ highlight some specific lines with the ``:emphasize-lines:`` parameter:
print '...but this one is.'
print 'This one is highlighted too.'
.. end
.. _remote-block:
Literal block using a remote file
@ -184,8 +172,6 @@ content from a remote URL (``http`` or ``https``).
https://git.openstack.org/cgit/openstack/nova/tree/etc/nova/api-paste.ini?h=stable/ocata
.. end
**Output**
.. code-block:: yaml

14
doc/install-guide/source/environment-etcd-obs.rst

@ -22,8 +22,6 @@ Install and configure components
-g etcd \
etcd
.. end
- Create the necessary directories:
.. code-block:: console
@ -33,8 +31,6 @@ Install and configure components
# mkdir -p /var/lib/etcd
# chown etcd:etcd /var/lib/etcd
.. end
- Download and install the etcd tarball:
.. code-block:: console
@ -49,8 +45,6 @@ Install and configure components
# cp /tmp/etcd/etcd /usr/bin/etcd
# cp /tmp/etcd/etcdctl /usr/bin/etcdctl
.. end
2. Create and edit the ``/etc/etcd/etcd.conf.yml`` file
and set the ``initial-cluster``, ``initial-advertise-peer-urls``,
``advertise-client-urls``, ``listen-client-urls`` to the management
@ -69,8 +63,6 @@ Install and configure components
listen-peer-urls: http://0.0.0.0:2380
listen-client-urls: http://10.0.0.11:2379
.. end
3. Create and edit the ``/usr/lib/systemd/system/etcd.service`` file:
.. code-block:: ini
@ -89,16 +81,12 @@ Install and configure components
[Install]
WantedBy=multi-user.target
.. end
Reload systemd service files with:
.. code-block:: console
# systemctl daemon-reload
.. end
Finalize installation
~~~~~~~~~~~~~~~~~~~~~
@ -109,5 +97,3 @@ Finalize installation
# systemctl enable etcd
# systemctl start etcd
.. end

6
doc/install-guide/source/environment-etcd-rdo.rst

@ -15,8 +15,6 @@ Install and configure components
# yum install etcd
.. end
2. Edit the ``/etc/etcd/etcd.conf`` file and set the ``ETCD_INITIAL_CLUSTER``,
``ETCD_INITIAL_ADVERTISE_PEER_URLS``, ``ETCD_ADVERTISE_CLIENT_URLS``,
``ETCD_LISTEN_CLIENT_URLS`` to the management IP address of the controller
@ -36,8 +34,6 @@ Install and configure components
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
.. end
Finalize installation
~~~~~~~~~~~~~~~~~~~~~
@ -48,5 +44,3 @@ Finalize installation
# systemctl enable etcd
# systemctl start etcd
.. end

6
doc/install-guide/source/environment-etcd-ubuntu.rst

@ -15,8 +15,6 @@ Install and configure components
# apt install etcd
.. end
#. Edit the ``/etc/default/etcd`` file and set the ``ETCD_INITIAL_CLUSTER``,
``ETCD_INITIAL_ADVERTISE_PEER_URLS``, ``ETCD_ADVERTISE_CLIENT_URLS``,
``ETCD_LISTEN_CLIENT_URLS`` to the management IP address of the
@ -35,8 +33,6 @@ Install and configure components
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379"
.. end
Finalize installation
~~~~~~~~~~~~~~~~~~~~~
@ -46,5 +42,3 @@ Finalize installation
# systemctl enable etcd
# systemctl start etcd
.. end

6
doc/install-guide/source/environment-memcached-obs.rst

@ -15,8 +15,6 @@ Install and configure components
# zypper install memcached python-python-memcached
.. end
2. Edit the ``/etc/sysconfig/memcached`` file and complete the
following actions:
@ -28,8 +26,6 @@ Install and configure components
MEMCACHED_PARAMS="-l 10.0.0.11"
.. end
.. note::
Change the existing line ``MEMCACHED_PARAMS="-l 127.0.0.1"``.
@ -44,5 +40,3 @@ Finalize installation
# systemctl enable memcached.service
# systemctl start memcached.service
.. end

6
doc/install-guide/source/environment-memcached-rdo.rst

@ -15,8 +15,6 @@ Install and configure components
# yum install memcached python-memcached
.. end
2. Edit the ``/etc/sysconfig/memcached`` file and complete the
following actions:
@ -28,8 +26,6 @@ Install and configure components
OPTIONS="-l 127.0.0.1,::1,controller"
.. end
.. note::
Change the existing line ``OPTIONS="-l 127.0.0.1,::1"``.
@ -44,5 +40,3 @@ Finalize installation
# systemctl enable memcached.service
# systemctl start memcached.service
.. end

6
doc/install-guide/source/environment-memcached-ubuntu.rst

@ -16,8 +16,6 @@ Install and configure components
# apt install memcached python-memcache
.. end
2. Edit the ``/etc/memcached.conf`` file and configure the
service to use the management IP address of the controller node.
This is to enable access by other nodes via the management network:
@ -26,8 +24,6 @@ Install and configure components
-l 10.0.0.11
.. end
.. note::
Change the existing line that had ``-l 127.0.0.1``.
@ -40,5 +36,3 @@ Finalize installation
.. code-block:: console
# service memcached restart
.. end

8
doc/install-guide/source/environment-messaging-obs.rst

@ -23,8 +23,6 @@ Install and configure components
# zypper install rabbitmq-server
.. end
2. Start the message queue service and configure it to start when the
system boots:
@ -33,8 +31,6 @@ Install and configure components
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
.. end
3. Add the ``openstack`` user:
.. code-block:: console
@ -43,8 +39,6 @@ Install and configure components
Creating user "openstack" ...
.. end
Replace ``RABBIT_PASS`` with a suitable password.
4. Permit configuration, write, and read access for the
@ -55,5 +49,3 @@ Install and configure components
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
.. end

8
doc/install-guide/source/environment-messaging-rdo.rst

@ -23,8 +23,6 @@ Install and configure components
# yum install rabbitmq-server
.. end
2. Start the message queue service and configure it to start when the
system boots:
@ -33,8 +31,6 @@ Install and configure components
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
.. end
3. Add the ``openstack`` user:
.. code-block:: console
@ -43,8 +39,6 @@ Install and configure components
Creating user "openstack" ...
.. end
Replace ``RABBIT_PASS`` with a suitable password.
4. Permit configuration, write, and read access for the
@ -55,5 +49,3 @@ Install and configure components
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
.. end

6
doc/install-guide/source/environment-messaging-ubuntu.rst

@ -23,8 +23,6 @@ Install and configure components
# apt install rabbitmq-server
.. end
2. Add the ``openstack`` user:
.. code-block:: console
@ -33,8 +31,6 @@ Install and configure components
Creating user "openstack" ...
.. end
Replace ``RABBIT_PASS`` with a suitable password.
3. Permit configuration, write, and read access for the
@ -45,5 +41,3 @@ Install and configure components
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
.. end

6
doc/install-guide/source/environment-networking-compute.rst

@ -36,8 +36,6 @@ Configure network interfaces
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
For RHEL or CentOS:
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
@ -53,8 +51,6 @@ Configure network interfaces
ONBOOT="yes"
BOOTPROTO="none"
.. end
For SUSE:
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
@ -66,8 +62,6 @@ Configure network interfaces
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution

6
doc/install-guide/source/environment-networking-controller.rst

@ -32,8 +32,6 @@ Configure network interfaces
up ip link set dev $IFACE up
down ip link set dev $IFACE down
.. end
For RHEL or CentOS:
* Edit the ``/etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME`` file
@ -49,8 +47,6 @@ Configure network interfaces
ONBOOT="yes"
BOOTPROTO="none"
.. end
For SUSE:
* Edit the ``/etc/sysconfig/network/ifcfg-INTERFACE_NAME`` file to
@ -62,8 +58,6 @@ Configure network interfaces
STARTMODE='auto'
BOOTPROTO='static'
.. end
#. Reboot the system to activate the changes.
Configure name resolution

8
doc/install-guide/source/environment-networking-verify.rst

@ -20,8 +20,6 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *controller* node, test access to the management interface on the
*compute* node:
@ -39,8 +37,6 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
#. From the *compute* node, test access to the Internet:
.. code-block:: console
@ -57,8 +53,6 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
.. end
#. From the *compute* node, test access to the management interface on the
*controller* node:
@ -76,8 +70,6 @@ among the nodes before proceeding further.
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
.. end
.. note::
RHEL, CentOS and SUSE distributions enable a restrictive :term:`firewall` by

16
doc/install-guide/source/environment-ntp-controller.rst

@ -17,24 +17,18 @@ Install and configure components
# apt install chrony
.. end
For RHEL or CentOS:
.. code-block:: console
# yum install chrony
.. end
For SUSE:
.. code-block:: console
# zypper install chrony
.. end
2. Edit the ``chrony.conf`` file and add, change, or remove the following keys
as necessary for your environment.
@ -45,8 +39,6 @@ Install and configure components
server NTP_SERVER iburst
.. end
For Ubuntu, edit the ``/etc/chrony/chrony.conf`` file:
.. path /etc/chrony/chrony.conf
@ -54,8 +46,6 @@ Install and configure components
server NTP_SERVER iburst
.. end
Replace ``NTP_SERVER`` with the hostname or IP address of a
suitable more accurate (lower stratum) NTP server. The
configuration supports multiple ``server`` keys.
@ -74,8 +64,6 @@ Install and configure components
allow 10.0.0.0/24
.. end
If necessary, replace ``10.0.0.0/24`` with a description of your
subnet.
@ -87,13 +75,9 @@ Install and configure components
# service chrony restart
.. end
For RHEL, CentOS, or SUSE:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end

12
doc/install-guide/source/environment-ntp-other.rst

@ -24,16 +24,12 @@ Install and configure components
# yum install chrony
.. end
For SUSE:
.. code-block:: console
# zypper install chrony
.. end
2. Configure the ``chrony.conf`` file and comment out or remove all
but one ``server`` key. Change it to reference the controller node.
@ -44,8 +40,6 @@ Install and configure components
server controller iburst
.. end
For Ubuntu, edit the ``/etc/chrony/chrony.conf`` file:
.. path /etc/chrony/chrony.conf
@ -53,8 +47,6 @@ Install and configure components
server controller iburst
.. end
3. Comment out the ``pool 2.debian.pool.ntp.org offline iburst`` line.
4. Restart the NTP service.
@ -65,13 +57,9 @@ Install and configure components
# service chrony restart
.. end
For RHEL, CentOS, or SUSE:
.. code-block:: console
# systemctl enable chronyd.service
# systemctl start chronyd.service
.. end

4
doc/install-guide/source/environment-ntp-verify.rst

@ -19,8 +19,6 @@ node, can take several minutes to synchronize.
^- 192.0.2.11 2 7 12 137 -2814us[-3000us] +/- 43ms
^* 192.0.2.12 2 6 177 46 +17us[ -23us] +/- 68ms
.. end
Contents in the *Name/IP address* column should indicate the hostname or IP
address of one or more NTP servers. Contents in the *MS* column should indicate
*\** for the server to which the NTP service is currently synchronized.
@ -36,7 +34,5 @@ node, can take several minutes to synchronize.
===============================================================================
^* controller 3 9 377 421 +15us[ -87us] +/- 15ms
.. end
Contents in the *Name/IP address* column should indicate the hostname of the
controller node.

5
doc/install-guide/source/environment-packages-obs.rst

@ -33,7 +33,7 @@ Enable the OpenStack repository
# zypper addrepo -f obs://Cloud:OpenStack:Rocky/openSUSE_Leap_15.0 Rocky
**On openSUSE for OpenStack Queens:**
**On openSUSE for OpenStack Queens:**
.. code-block:: console
@ -64,7 +64,7 @@ Enable the OpenStack repository
# zypper addrepo -f obs://Cloud:OpenStack:Queens/SLE_12_SP4 Rocky
**On SLES for OpenStack Queens:**
**On SLES for OpenStack Queens:**
.. code-block:: console
@ -88,7 +88,6 @@ Enable the OpenStack repository
Key Created: 2015-12-16T16:48:37 CET
Key Expires: 2018-02-23T16:48:37 CET
Finalize the installation
-------------------------

2
doc/install-guide/source/environment-security.rst

@ -17,8 +17,6 @@ following command:
$ openssl rand -hex 10
.. end
For OpenStack services, this guide uses ``SERVICE_PASS`` to reference
service account passwords and ``SERVICE_DBPASS`` to reference database
passwords.

8
doc/install-guide/source/environment-sql-database-obs.rst

@ -16,8 +16,6 @@ Install and configure components
# zypper install mariadb-client mariadb python-PyMySQL
.. end
2. Create and edit the ``/etc/my.cnf.d/openstack.cnf`` file
and complete the following actions:
@ -39,8 +37,6 @@ Install and configure components
collation-server = utf8_general_ci
character-set-server = utf8
.. end
Finalize installation
---------------------
@ -52,8 +48,6 @@ Finalize installation
# systemctl enable mysql.service
# systemctl start mysql.service
.. end
2. Secure the database service by running the ``mysql_secure_installation``
script. In particular, choose a suitable password for the database
``root`` account:
@ -61,5 +55,3 @@ Finalize installation
.. code-block:: console
# mysql_secure_installation
.. end

8
doc/install-guide/source/environment-sql-database-rdo.rst

@ -16,8 +16,6 @@ Install and configure components
# yum install mariadb mariadb-server python2-PyMySQL
.. end
2. Create and edit the ``/etc/my.cnf.d/openstack.cnf`` file
(backup existing configuration files in ``/etc/my.cnf.d/`` if needed)
and complete the following actions:
@ -40,8 +38,6 @@ Install and configure components
collation-server = utf8_general_ci
character-set-server = utf8
.. end
Finalize installation
---------------------
@ -53,8 +49,6 @@ Finalize installation
# systemctl enable mariadb.service
# systemctl start mariadb.service
.. end
2. Secure the database service by running the ``mysql_secure_installation``
script. In particular, choose a suitable password for the database
``root`` account:
@ -62,5 +56,3 @@ Finalize installation
.. code-block:: console
# mysql_secure_installation
.. end

8
doc/install-guide/source/environment-sql-database-ubuntu.rst

@ -25,8 +25,6 @@ Install and configure components
# apt install mariadb-server python-pymysql
.. end
2. Create and edit the ``/etc/mysql/mariadb.conf.d/99-openstack.cnf`` file
and complete the following actions:
@ -47,8 +45,6 @@ Install and configure components
collation-server = utf8_general_ci
character-set-server = utf8
.. end
Finalize installation
---------------------
@ -58,8 +54,6 @@ Finalize installation
# service mysql restart
.. end
2. Secure the database service by running the ``mysql_secure_installation``
script. In particular, choose a suitable password for the database
``root`` account:
@ -67,5 +61,3 @@ Finalize installation
.. code-block:: console
# mysql_secure_installation
.. end

14
doc/install-guide/source/launch-instance-cinder.rst

@ -13,8 +13,6 @@ Create a volume
$ . demo-openrc
.. end
#. Create a 1 GB volume:
.. code-block:: console
@ -45,8 +43,6 @@ Create a volume
| user_id | 684286a9079845359882afc3aa5011fb |
+---------------------+--------------------------------------+
.. end
#. After a short time, the volume status should change from ``creating``
to ``available``:
@ -60,8 +56,6 @@ Create a volume
| a1e8be72-a395-4a6f-8e07-856a57c39524 | volume1 | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
.. end
Attach the volume to an instance
--------------------------------
@ -71,8 +65,6 @@ Attach the volume to an instance
$ openstack server add volume INSTANCE_NAME VOLUME_NAME
.. end
Replace ``INSTANCE_NAME`` with the name of the instance and ``VOLUME_NAME``
with the name of the volume you want to attach to it.
@ -84,8 +76,6 @@ Attach the volume to an instance
$ openstack server add volume provider-instance volume1
.. end
.. note::
This command provides no output.
@ -102,8 +92,6 @@ Attach the volume to an instance
| a1e8be72-a395-4a6f-8e07-856a57c39524 | volume1 | in-use | 1 | Attached to provider-instance on /dev/vdb |
+--------------------------------------+--------------+--------+------+--------------------------------------------+
.. end
#. Access your instance using SSH and use the ``fdisk`` command to verify
presence of the volume as the ``/dev/vdb`` block storage device:
@ -130,8 +118,6 @@ Attach the volume to an instance
Disk /dev/vdb doesn't contain a valid partition table
.. end
.. note::
You must create a file system on the device and mount it

12
doc/install-guide/source/launch-instance-networks-provider.rst

@ -37,8 +37,6 @@ Create the provider network
$ . admin-openrc
.. end
#. Create the network:
.. code-block:: console
@ -79,8 +77,6 @@ Create the provider network
| updated_at | 2017-03-14T14:37:39Z |
+---------------------------+--------------------------------------+
.. end
The ``--share`` option allows all projects to use the virtual network.
The ``--external`` option defines the virtual network to be external. If
@ -99,8 +95,6 @@ Create the provider network
[ml2_type_flat]
flat_networks = provider
.. end
``linuxbridge_agent.ini``:
.. code-block:: ini
@ -108,8 +102,6 @@ Create the provider network
[linux_bridge]
physical_interface_mappings = provider:eth1
.. end
#. Create a subnet on the network:
.. code-block:: console
@ -119,8 +111,6 @@ Create the provider network
--dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
--subnet-range PROVIDER_NETWORK_CIDR provider
.. end
Replace ``PROVIDER_NETWORK_CIDR`` with the subnet on the provider
physical network in CIDR notation.
@ -175,7 +165,5 @@ Create the provider network
| updated_at | 2017-03-29T05:48:29Z |
+-------------------+--------------------------------------+
.. end
Return to :ref:`Launch an instance - Create virtual networks
<launch-instance-networks>`.

26
doc/install-guide/source/launch-instance-networks-selfservice.rst

@ -43,8 +43,6 @@ Create the self-service network
$ . demo-openrc
.. end
#. Create the network:
.. code-block:: console
@ -78,8 +76,6 @@ Create the self-service network
| updated_at | 2016-11-04T18:20:59Z |
+-------------------------+--------------------------------------+
.. end
Non-privileged users typically cannot supply additional parameters to
this command. The service automatically chooses parameters using
information from the following files:
@ -94,8 +90,6 @@ Create the self-service network
[ml2_type_vxlan]
vni_ranges = 1:1000
.. end
#. Create a subnet on the network:
.. code-block:: console
@ -104,8 +98,6 @@ Create the self-service network
--dns-nameserver DNS_RESOLVER --gateway SELFSERVICE_NETWORK_GATEWAY \
--subnet-range SELFSERVICE_NETWORK_CIDR selfservice
.. end
Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver. In
most cases, you can use one from the ``/etc/resolv.conf`` file on
the host.
@ -156,8 +148,6 @@ Create the self-service network
| updated_at | 2016-11-04T18:30:54Z |
+-------------------+--------------------------------------+
.. end
Create a router
---------------
@ -179,8 +169,6 @@ when creating the ``provider`` network.
$ . demo-openrc
.. end
#. Create the router:
.. code-block:: console
@ -209,24 +197,18 @@ when creating the ``provider`` network.
| updated_at | 2016-11-04T18:32:56Z |
+-------------------------+--------------------------------------+
.. end
#. Add the self-service network subnet as an interface on the router:
.. code-block:: console
$ openstack router add subnet router selfservice
.. end
#. Set a gateway on the provider network on the router:
.. code-block:: console
$ openstack router set router --external-gateway provider
.. end
Verify operation
----------------
@ -241,8 +223,6 @@ creation examples.
$ . admin-openrc
.. end
#. List network namespaces. You should see one ``qrouter`` namespace and two
``qdhcp`` namespaces.
@ -254,8 +234,6 @@ creation examples.
qdhcp-7c6f9b37-76b4-463e-98d8-27e5686ed083
qdhcp-0e62efcd-8cee-46c7-b163-d8df05c3c5ad
.. end
#. List ports on the router to determine the gateway IP address on the
provider network:
@ -270,8 +248,6 @@ creation examples.
| d6fe98db-ae01-42b0-a860-37b1661f5950 | | fa:16:3e:e8:c1:41 | ip_address='203.0.113.102', subnet_id='5cc70da8-4ee7-4565-be53-b9c011fca011' | ACTIVE |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------+--------+
.. end
#. Ping this IP address from the controller node or any host on the physical
provider network:
@ -288,7 +264,5 @@ creation examples.
--- 203.0.113.102 ping statistics ---
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms
.. end
Return to :ref:`Launch an instance - Create virtual networks
<launch-instance-networks>`.

24
doc/install-guide/source/launch-instance-provider.rst

@ -16,8 +16,6 @@ name, network, security group, key, and instance name.
$ . demo-openrc
.. end
#. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
@ -33,8 +31,6 @@ name, network, security group, key, and instance name.
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
.. end
.. note::
You can also reference a flavor by ID.
@ -51,8 +47,6 @@ name, network, security group, key, and instance name.
| 390eb5f7-8d49-41ec-95b7-68c0d5d54b34 | cirros | active |
+--------------------------------------+--------+--------+
.. end
This instance uses the ``cirros`` image.
#. List available networks:
@ -68,8 +62,6 @@ name, network, security group, key, and instance name.
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+--------------+--------------------------------------+
.. end
This instance uses the ``provider`` provider network. However, you must
reference this network using the ID instead of the name.
@ -90,8 +82,6 @@ name, network, security group, key, and instance name.
| dd2b614c-3dad-48ed-958b-b155a3b38515 | default | Default security group | a516b957032844328896baa01e0f906c |
+--------------------------------------+---------+------------------------+----------------------------------+
.. end
This instance uses the ``default`` security group.
Launch the instance
@ -146,8 +136,6 @@ Launch the instance
| volumes_attached | |
+-----------------------------+-----------------------------------------------+
.. end
#. Check the status of your instance:
.. code-block:: console
@ -160,8 +148,6 @@ Launch the instance
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 | cirros |
+--------------------------------------+-------------------+--------+------------------------+------------+
.. end
The status changes from ``BUILD`` to ``ACTIVE`` when the build process
successfully completes.
@ -182,8 +168,6 @@ Access the instance using the virtual console
| url | http://controller:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493 |
+-------+---------------------------------------------------------------------------------+
.. end
.. note::
If your web browser runs on a host that cannot resolve the
@ -211,8 +195,6 @@ Access the instance using the virtual console
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
.. end
#. Verify access to the internet:
.. code-block:: console
@ -229,8 +211,6 @@ Access the instance using the virtual console
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
.. end
Access the instance remotely
----------------------------
@ -251,8 +231,6 @@ Access the instance remotely
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
.. end
#. Access your instance using SSH from the controller node or any
host on the provider physical network:
@ -265,8 +243,6 @@ Access the instance remotely
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.102' (RSA) to the list of known hosts.
.. end
If your instance does not launch or seem to work as you expect, see the
`Troubleshoot Compute documentation for Pike <https://docs.openstack.org/nova/pike/admin/support-compute.html>`_,
the

30
doc/install-guide/source/launch-instance-selfservice.rst

@ -16,8 +16,6 @@ name, network, security group, key, and instance name.
$ . demo-openrc
.. end
#. A flavor specifies a virtual resource allocation profile which
includes processor, memory, and storage.
@ -33,8 +31,6 @@ name, network, security group, key, and instance name.
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
+----+---------+-----+------+-----------+-------+-----------+
.. end
.. note::
You can also reference a flavor by ID.
@ -51,8 +47,6 @@ name, network, security group, key, and instance name.
| 390eb5f7-8d49-41ec-95b7-68c0d5d54b34 | cirros | active |
+--------------------------------------+--------+--------+
.. end
This instance uses the ``cirros`` image.
#. List available networks:
@ -68,8 +62,6 @@ name, network, security group, key, and instance name.
| b5b6993c-ddf9-40e7-91d0-86806a42edb8 | provider | 310911f6-acf0-4a47-824e-3032916582ff |
+--------------------------------------+-------------+--------------------------------------+
.. end
This instance uses the ``selfservice`` self-service network. However, you
must reference this network using the ID instead of the name.
@ -85,8 +77,6 @@ name, network, security group, key, and instance name.
| dd2b614c-3dad-48ed-958b-b155a3b38515 | default | Default security group |
+--------------------------------------+---------+------------------------+
.. end
This instance uses the ``default`` security group.
#. Launch the instance:
@ -132,8 +122,6 @@ name, network, security group, key, and instance name.
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+--------------------------------------+---------------------------------------+
.. end
#. Check the status of your instance:
.. code-block:: console
@ -147,8 +135,6 @@ name, network, security group, key, and instance name.
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 |
+--------------------------------------+----------------------+--------+------------------------+
.. end
The status changes from ``BUILD`` to ``ACTIVE`` when the build process
successfully completes.
@ -169,8 +155,6 @@ Access the instance using a virtual console
| url | http://controller:6080/vnc_auto.html?token=5eeccb47-525c-4918-ac2a-3ad1e9f1f493 |
+-------+---------------------------------------------------------------------------------+
.. end
.. note::
If your web browser runs on a host that cannot resolve the
@ -198,8 +182,6 @@ Access the instance using a virtual console
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
.. end
#. Verify access to the internet:
.. code-block:: console
@ -216,8 +198,6 @@ Access the instance using a virtual console
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
.. end
Access the instance remotely
----------------------------
@ -245,16 +225,12 @@ Access the instance remotely
| updated_at | 2017-01-20T17:29:16Z |
+---------------------+--------------------------------------+
.. end
#. Associate the floating IP address with the instance:
.. code-block:: console
$ openstack server add floating ip selfservice-instance 203.0.113.104
.. end
.. note::
This command provides no output.
@ -272,8 +248,6 @@ Access the instance remotely
| 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf | provider-instance | ACTIVE | provider=203.0.113.103 |
+--------------------------------------+----------------------+--------+---------------------------------------+
.. end
#. Verify connectivity to the instance via floating IP address from
the controller node or any host on the provider physical network:
@ -291,8 +265,6 @@ Access the instance remotely
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
.. end
#. Access your instance using SSH from the controller node or any
host on the provider physical network:
@ -305,8 +277,6 @@ Access the instance remotely
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '203.0.113.104' (RSA) to the list of known hosts.
.. end
If your instance does not launch or seem to work as you expect, see the
`Troubleshoot Compute documentation for Pike <https://docs.openstack.org/nova/pike/admin/support-compute.html>`_,
the

12
doc/install-guide/source/launch-instance.rst

@ -76,8 +76,6 @@ purposes.
| vcpus | 1 |
+----------------------------+---------+
.. end
Generate a key pair
-------------------
@ -91,8 +89,6 @@ must add a public key to the Compute service.