Convert the full documentation to rST

This will let our complete documentation be built by our new gate job.

Change-Id: Ieec33153bb79033951ef8ad7adab7a81edd46748
Closes-Bug: #1614531
Attila Darazs 2016-08-18 16:05:54 +02:00
parent c2fac709c5
commit 7fd8b904c8
13 changed files with 588 additions and 554 deletions


@@ -55,7 +1,6 @@ undercloud. If a release name is not given, ``mitaka`` is used.
Deploying without instructions
------------------------------
::
bash quickstart.sh --tags all $VIRTHOST
@@ -68,7 +67,6 @@ ensure the overcloud is functional.
Deploying on localhost
----------------------
::
bash quickstart.sh localhost
@@ -108,7 +106,7 @@ Copyright 2015-2016 Red Hat, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at <http://www.apache.org/licenses/LICENSE-2.0>
a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,160 +0,0 @@
# Accessing the Overcloud
With the virtual infrastructure provisioned by tripleo-quickstart, the
overcloud hosts are deployed on an isolated network that can only be
accessed from the undercloud host. In many cases, simply logging in
to the undercloud host as documented in [Accessing the Undercloud][]
is sufficient, but there are situations when you may want direct
access to overcloud services from your desktop.
[accessing the undercloud]: accessing-undercloud.md
## Logging in to overcloud hosts
If you need to log in to an overcloud host directly, first log in to
the `undercloud` host. From there, you can access the overcloud hosts
by name:
[stack@undercloud ~]$ ssh overcloud-controller-0
Warning: Permanently added 'overcloud-controller-0,192.168.30.9' (ECDSA) to the list of known hosts.
Last login: Wed Mar 23 21:59:24 2016 from 192.168.30.1
[heat-admin@overcloud-controller-0 ~]$
## SSH Port Forwarding
You can forward specific ports from your localhost to addresses on the
overcloud network. For example, to access the overcloud Horizon
interface, you could run:
ssh -F $HOME/.quickstart/ssh.config.ansible \
-L 8080:overcloud-public-vip:80 undercloud
This uses the ssh `-L` command line option to forward port `8080` on
your local host to port `80` on the `overcloud-public-vip` host (which
is defined in `/etc/hosts` on the undercloud). Once you have
connected to the undercloud like this, you can then point your browser
at `http://localhost:8080` to access Horizon.
You can add multiple `-L` arguments to the ssh command line to expose
multiple services.
## SSH Dynamic Proxy
You can configure ssh as a [SOCKS5][] proxy with the `-D` command line
option. For example, to start a proxy on port 1080:
[socks5]: https://www.ietf.org/rfc/rfc1928.txt
ssh -F $HOME/.quickstart/ssh.config.ansible \
-D 1080 undercloud
You can now use this proxy to access any overcloud resources. With
curl, that would look something like this:
$ curl --socks5-hostname localhost:1080 http://overcloud-public-vip:5000/
{"versions": {"values": [{"status": "stable", "updated": "2016-04-04T00:00:00Z",...
### Using Firefox
You can configure Firefox to use a SOCKS5 proxy. You may want to
[create a new profile][] for this so that you don't impact your
normal browsing.
[create a new profile]: https://support.mozilla.org/en-US/kb/profile-manager-create-and-remove-firefox-profiles
1. Select Edit -> Preferences
1. Select the "Advanced" tab from the list on the left of the window
1. Select the "Network" tab from the list across the top of the window
1. Select the "Settings..." button in the "Connection" section
1. Select "Manual proxy configuration:" in the "Connection Settings"
dialog.
1. Enter `localhost` in the "SOCKS Host" field, and enter `1080` (or
whatever port you supplied to the ssh `-D` option) in the "Port:"
field.
1. Select the "SOCKS5" radio button, and check the "Remote DNS"
checkbox.
Now, if you enter <http://overcloud-public-vip/> in your browser, you
will be able to access the overcloud Horizon instance. Note that you
will probably need to enter the full URL; entering an unqualified
hostname into the location bar will redirect to a search engine rather
than attempting to contact the website.
### Using Chrome
It is not possible to configure a proxy connection using the Chrome UI
without using an extension. You can set things up from the command
line by using [these instructions], which boil down to starting Chrome
like this:
[these instructions]: https://www.chromium.org/developers/design-documents/network-stack/socks-proxy
google-chrome --proxy-server="socks5://localhost:1080" \
--host-resolver-rules="MAP * 0.0.0.0"
## sshuttle
The [sshuttle][] tool is something halfway between a VPN and a proxy
server, and can be used to give your local host direct access to the
overcloud network.
[sshuttle]: https://github.com/apenwarr/sshuttle
1. Note the network range used by the overcloud servers; this will be
the value of `undercloud_network` in your configuration, which as
of this writing defaults for historical reasons to `192.0.2.0/24`.
1. Install the `sshuttle` package if you don't already have it
1. Run `sshuttle`:
sshuttle \
-e "ssh -F $HOME/.quickstart/ssh.config.ansible" \
-r undercloud -v 192.0.2.0/24
(Where `192.0.2.0/24` should be replaced by whatever address range
you noted in the first step.)
With this in place, your local host can access any address on the
overcloud network. Hostname resolution *will not work*, but since the
generated credentials files use ip addresses this should not present a
problem.
## CLI access with tsocks
If you want to use command line tools like the `openstack` integrated
client to access overcloud API services, you can use [tsocks][], which
uses function interposition to redirect all network access to a SOCKS
proxy.
[tsocks]: http://tsocks.sourceforge.net/
1. Install the `tsocks` package if you don't already have it
available.
1. Create a `$HOME/.tsocks` configuration file with the following
content:
server = 127.0.0.1
server_port = 1080
1. Set the `TSOCKS_CONF_FILE` environment variable to point to this
configuration file:
export TSOCKS_CONF_FILE=$HOME/.tsocks
1. Use the `tsocks` command to wrap your command invocations:
$ tsocks openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
This solution is known to work with the `openstack` integrated client,
and known to *fail* with many of the legacy clients (such as the
`nova` or `keystone` commands).


@@ -1,110 +0,0 @@
# Configuring tripleo-quickstart
The virtual environment deployed by tripleo-quickstart is largely
controlled by variables that get their defaults from the `common`
role.
You configure tripleo-quickstart by placing variable definitions in a
YAML file and passing that to ansible using the `-e` command line
option, like this:
ansible-playbook playbook.yml -e @myconfigfile.yml
## Controlling resources
These variables set the resources that will be assigned to a node by
default, unless overridden by a more specific variable:
- `default_disk`
- `default_memory`
- `default_vcpu`
These variables set the resources assigned to the undercloud node:
- `undercloud_disk`
- `undercloud_memory` (defaults to **12288**)
- `undercloud_vcpu` (defaults to **4**)
These variables set the resources assigned to controller nodes:
- `control_disk`
- `control_memory`
- `control_vcpu`
These variables control the resources assigned to compute nodes:
- `compute_disk`
- `compute_memory`
- `compute_vcpu`
These variables control the resources assigned to ceph storage nodes:
- `ceph_disk`
- `ceph_memory`
- `ceph_vcpu`
## Setting number and type of overcloud nodes
The `overcloud_nodes` variable can be used to change the number and
type of nodes deployed in your overcloud. The default looks like
this:
overcloud_nodes:
- name: control_0
flavor: control
- name: control_1
flavor: control
- name: control_2
flavor: control
- name: compute_0
flavor: compute
- name: ceph_0
flavor: ceph
## Specifying custom heat templates
The `overcloud_templates_path` variable can be used to define a different path
from which to get the heat templates. By default this variable will not be set.
The `overcloud_templates_repo` variable can be used to define the remote
repository from where the templates need to be cloned. When this variable is
set, along with `overcloud_templates_path`, the templates will be cloned from
that remote repository into the target specified, and these will be used in
overcloud deployment.
The `overcloud_templates_branch` variable can be used to specify the branch
that needs to be cloned from a specific repository. When this variable is set,
git will clone only the branch specified.
## An example
To create a minimal environment that would be unsuitable for deploying
any real nova instances, you could place something like the
following in `myconfigfile.yml`:
undercloud_memory: 8192
control_memory: 6000
compute_memory: 2048
overcloud_nodes:
- name: control_0
flavor: control
- name: compute_0
flavor: compute
And then pass that to the `ansible-playbook` command as described at
the beginning of this document.
## Explicit Teardown
You can select what to delete prior to the quickstart run by adding the
`--teardown` (or `-T`) option with one of the following parameters:
- nodes: default, remove only undercloud and overcloud nodes
- virthost: same as nodes but network setup is deleted too
- all: same as virthost but user setup in virthost is deleted too
- none: will not tear down anything (useful for testing multiple actions against
a deployed overcloud)


@@ -1,48 +0,0 @@
# Contributing
## Bug reports
If you encounter any problems with `tripleo-quickstart` or if you have feature
suggestions, please feel free to open a bug report in our [issue tracker][].
[issue tracker]: https://bugs.launchpad.net/tripleo-quickstart
## Code
If you *fix* a problem or implement a new feature, you may submit your changes
via Gerrit. The `tripleo-quickstart` project uses the [OpenStack Gerrit
workflow][gerrit].
You can anonymously clone the repository via
`git clone https://git.openstack.org/openstack/tripleo-quickstart.git`
If you wish to contribute, you'll want to get set up by following the
documentation available at [How To Contribute].
Once you've cloned the repository using your account, install the
[git-review][] tool, then from the `tripleo-quickstart` repository run:
git review -s
After you have made your changes locally, commit them to a feature
branch, and then submit them for review by running:
git review
Your changes will be tested by our automated CI infrastructure, and will also
be reviewed by other developers. If you need to make changes (and you probably
will; it's not uncommon for patches to go through several iterations before
being accepted), make the changes on your feature branch, and instead of
creating a new commit, *amend the existing commit*, making sure to retain the
`Change-Id` line that was placed there by `git-review`:
git commit --amend
After committing your changes, resubmit the review:
git review
[gerrit]: http://docs.openstack.org/infra/manual/developers.html#development-workflow
[git-review]: http://docs.openstack.org/infra/manual/developers.html#installing-git-review
[How To Contribute]: https://wiki.openstack.org/wiki/How_To_Contribute


@@ -1,16 +1,18 @@
# Accessing libvirt as an unprivileged user
Accessing libvirt as an unprivileged user
=========================================
The virtual infrastructure provisioned by tripleo-quickstart is created
using an unprivileged account (by default the `stack` user). This
means that logging into your virthost as root and running `virsh list`
using an unprivileged account (by default the ``stack`` user). This
means that logging into your virthost as root and running ``virsh list``
will result in empty output, which can be confusing to someone not
familiar with libvirt's unprivileged mode.
## Where are my guests?
Where are my guests?
--------------------
The easiest way to interact with the unprivileged libvirt instance
used by tripleo-quickstart is to log in as the `stack` user using the
generated ssh key in your quickstart directory:
The easiest way to interact with the unprivileged libvirt instance used
by tripleo-quickstart is to log in as the ``stack`` user using the
generated ssh key in your quickstart directory::
$ ssh -i $HOME/.quickstart/id_rsa_virt_host stack@virthost
[stack@virthost ~]$ virsh list
@@ -20,44 +22,44 @@ generated ssh key in your quickstart directory:
5 compute_0 running
6 control_0 running
You can also log in to the virthost as `root` and then `su - stack` to
access the unprivileged user account. While this won't normally work
"out of the box" because of [this issue][xdg], the quickstart ensures
that the `XDG_RUNTIME_DIR` variable is set correctly.
You can also log in to the virthost as ``root`` and then ``su - stack``
to access the unprivileged user account. While this won't normally work
"out of the box" because of `this
issue <https://www.redhat.com/archives/libvirt-users/2016-March/msg00056.html>`__,
the quickstart ensures that the ``XDG_RUNTIME_DIR`` variable is set
correctly.
[xdg]: https://www.redhat.com/archives/libvirt-users/2016-March/msg00056.html
Where are my networks?
----------------------
## Where are my networks?
While most libvirt operations can be performed as an unprivileged
user, creating bridge devices requires root privileges. We create the
networks used by the quickstart as `root`, so as `root` on your virthost
you can run:
While most libvirt operations can be performed as an unprivileged user,
creating bridge devices requires root privileges. We create the networks
used by the quickstart as ``root``, so as ``root`` on your virthost you
can run::
# virsh net-list
And see:
And see::
Name State Autostart Persistent
----------------------------------------------------------
--------------------------------------------------------
default active yes yes
external active yes yes
overcloud active yes yes
In order to expose these networks to the unprivileged `stack` user, we
whitelist them in `/etc/qemu/bridge.conf` (this file is used by the
[qemu bridge helper][] to proxy unprivileged access to privileged
operations):
In order to expose these networks to the unprivileged ``stack`` user, we
whitelist them in ``/etc/qemu/bridge.conf`` (this file is used by the
`qemu bridge
helper <http://wiki.qemu.org/Features-Done/HelperNetworking>`__ to proxy
unprivileged access to privileged operations)::
# cat /etc/qemu-kvm/bridge.conf
allow virbr0
allow brext
allow brovc
[qemu bridge helper]: http://wiki.qemu.org/Features-Done/HelperNetworking
The guests created by the stack user connect to these bridges by name;
the relevant domain XML ends up looking something like:
the relevant domain XML ends up looking something like::
[stack@virthost ~]$ virsh dumpxml undercloud | xmllint --xpath //interface -
<interface type="bridge">


@@ -0,0 +1,159 @@
Accessing the Overcloud
=======================
With the virtual infrastructure provisioned by tripleo-quickstart, the
overcloud hosts are deployed on an isolated network that can only be accessed
from the undercloud host. In many cases, simply logging in to the undercloud
host as documented in :ref:`accessing-undercloud` is sufficient, but there are
situations when you may want direct access to overcloud services from your
desktop.
Logging in to overcloud hosts
-----------------------------
If you need to log in to an overcloud host directly, first log in to the
``undercloud`` host. From there, you can access the overcloud hosts by
name::
[stack@undercloud ~]$ ssh overcloud-controller-0
Warning: Permanently added 'overcloud-controller-0,192.168.30.9' (ECDSA) to the list of known hosts.
Last login: Wed Mar 23 21:59:24 2016 from 192.168.30.1
[heat-admin@overcloud-controller-0 ~]$
SSH Port Forwarding
-------------------
You can forward specific ports from your localhost to addresses on the
overcloud network. For example, to access the overcloud Horizon
interface, you could run::
ssh -F $HOME/.quickstart/ssh.config.ansible \
-L 8080:overcloud-public-vip:80 undercloud
This uses the ssh ``-L`` command line option to forward port ``8080`` on
your local host to port ``80`` on the ``overcloud-public-vip`` host
(which is defined in ``/etc/hosts`` on the undercloud). Once you have
connected to the undercloud like this, you can then point your browser
at ``http://localhost:8080`` to access Horizon.
You can add multiple ``-L`` arguments to the ssh command line to expose
multiple services.
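As a sketch, combining the Horizon forward above with one for the Keystone
public endpoint (the port numbers follow the defaults mentioned elsewhere in
this document; adjust them for the services you actually need)::
    ssh -F $HOME/.quickstart/ssh.config.ansible \
        -L 8080:overcloud-public-vip:80 \
        -L 5000:overcloud-public-vip:5000 undercloud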
SSH Dynamic Proxy
-----------------
You can configure ssh as a
`SOCKS5 <https://www.ietf.org/rfc/rfc1928.txt>`__ proxy with the ``-D``
command line option. For example, to start a proxy on port 1080::
ssh -F $HOME/.quickstart/ssh.config.ansible \
-D 1080 undercloud
You can now use this proxy to access any overcloud resources. With curl,
that would look something like this::
$ curl --socks5-hostname localhost:1080 http://overcloud-public-vip:5000/
{"versions": {"values": [{"status": "stable", "updated": "2016-04-04T00:00:00Z",...
Using Firefox
^^^^^^^^^^^^^
You can configure Firefox to use a SOCKS5 proxy. You may want to
`create a new
profile <https://support.mozilla.org/en-US/kb/profile-manager-create-and-remove-firefox-profiles>`__
for this so that you don't impact your normal browsing.
#. Select Edit -> Preferences
#. Select the "Advanced" tab from the list on the left of the window
#. Select the "Network" tab from the list across the top of the window
#. Select the "Settings..." button in the "Connection" section
#. Select "Manual proxy configuration:" in the "Connection Settings"
dialog.
#. Enter ``localhost`` in the "SOCKS Host" field, and enter ``1080`` (or
whatever port you supplied to the ssh ``-D`` option) in the "Port:"
field.
#. Select the "SOCKS5" radio button, and check the "Remote DNS"
checkbox.
Now, if you enter http://overcloud-public-vip/ in your browser, you will
be able to access the overcloud Horizon instance. Note that you will
probably need to enter the full URL; entering an unqualified hostname
into the location bar will redirect to a search engine rather than
attempting to contact the website.
Using Chrome
^^^^^^^^^^^^
It is not possible to configure a proxy connection using the Chrome UI
without using an extension. You can set things up from the command line
by using `these
instructions <https://www.chromium.org/developers/design-documents/network-stack/socks-proxy>`__,
which boil down to starting Chrome like this::
google-chrome --proxy-server="socks5://localhost:1080" \
--host-resolver-rules="MAP * 0.0.0.0"
sshuttle
--------
The `sshuttle <https://github.com/apenwarr/sshuttle>`__ tool is
something halfway between a VPN and a proxy server, and can be used to
give your local host direct access to the overcloud network.
#. Note the network range used by the overcloud servers; this will be
the value of ``undercloud_network`` in your configuration, which as
of this writing defaults for historical reasons to ``192.0.2.0/24``.
#. Install the ``sshuttle`` package if you don't already have it
#. Run ``sshuttle``::
sshuttle \
-e "ssh -F $HOME/.quickstart/ssh.config.ansible" \
-r undercloud -v 192.0.2.0/24
(Where ``192.0.2.0/24`` should be replaced by whatever address range
you noted in the first step.)
With this in place, your local host can access any address on the
overcloud network. Hostname resolution *will not work*, but since the
generated credentials files use ip addresses this should not present a
problem.
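For instance, with sshuttle running you can reach overcloud endpoints
directly by IP; the address below is only a placeholder, so substitute
whatever ``OS_AUTH_URL`` in your ``overcloudrc`` points at::
    $ curl http://192.0.2.6:5000/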
CLI access with tsocks
----------------------
If you want to use command line tools like the ``openstack`` integrated
client to access overcloud API services, you can use
`tsocks <http://tsocks.sourceforge.net/>`__, which uses function
interposition to redirect all network access to a SOCKS proxy.
#. Install the ``tsocks`` package if you don't already have it
available.
#. Create a ``$HOME/.tsocks`` configuration file with the following
content::
server = 127.0.0.1
server_port = 1080
#. Set the ``TSOCKS_CONF_FILE`` environment variable to point to this
configuration file::
export TSOCKS_CONF_FILE=$HOME/.tsocks
#. Use the ``tsocks`` command to wrap your command invocations::
$ tsocks openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+----+-----------+-------+------+-----------+-------+-----------+
This solution is known to work with the ``openstack`` integrated client,
and known to *fail* with many of the legacy clients (such as the
``nova`` or ``keystone`` commands).


@@ -1,19 +1,22 @@
# Accessing the Undercloud
.. _accessing-undercloud:
Accessing the Undercloud
========================
When your deployment is complete, you will find a file named
`ssh.config.ansible` located inside your `local_working_dir` (which
defaults to `$HOME/.quickstart`). This file contains configuration
``ssh.config.ansible`` located inside your ``local_working_dir`` (which
defaults to ``$HOME/.quickstart``). This file contains configuration
settings for ssh to make it easier to connect to the undercloud host.
You use it like this:
You use it like this::
ssh -F $HOME/.quickstart/ssh.config.ansible undercloud
This will connect you to the undercloud host as the `stack` user:
This will connect you to the undercloud host as the ``stack`` user::
[stack@undercloud ~]$
Once logged in to the undercloud, you can source the `stackrc` file if
you want to access undercloud services:
Once logged in to the undercloud, you can source the ``stackrc`` file if
you want to access undercloud services::
[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ heat stack-list
@@ -23,8 +26,8 @@ you want to access undercloud services:
| 988ad9c3-...| overcloud | CREATE_COMPLETE | 2016-03-21T14:32:21 | None |
+----------...+------------+-----------------+---------------------+--------------+
And you can source the `overcloudrc` file if you want to interact with
the overcloud:
And you can source the ``overcloudrc`` file if you want to interact with
the overcloud::
[stack@undercloud ~]$ . overcloudrc
[stack@undercloud ~]$ nova service-list


@@ -0,0 +1,118 @@
.. _configuration:
Configuration
=============
The virtual environment deployed by tripleo-quickstart is largely
controlled by variables that get their defaults from the ``common``
role.
You configure tripleo-quickstart by placing variable definitions in a
YAML file and passing that to ansible using the ``-e`` command line
option, like this::
ansible-playbook playbook.yml -e @myconfigfile.yml
Controlling resources
---------------------
These variables set the resources that will be assigned to a node by
default, unless overridden by a more specific variable:
- ``default_disk``
- ``default_memory``
- ``default_vcpu``
These variables set the resources assigned to the undercloud node:
- ``undercloud_disk``
- ``undercloud_memory`` (defaults to **12288**)
- ``undercloud_vcpu`` (defaults to **4**)
These variables set the resources assigned to controller nodes:
- ``control_disk``
- ``control_memory``
- ``control_vcpu``
These variables control the resources assigned to compute nodes:
- ``compute_disk``
- ``compute_memory``
- ``compute_vcpu``
These variables control the resources assigned to ceph storage nodes:
- ``ceph_disk``
- ``ceph_memory``
- ``ceph_vcpu``
Setting number and type of overcloud nodes
------------------------------------------
The ``overcloud_nodes`` variable can be used to change the number and
type of nodes deployed in your overcloud. The default looks like this::
overcloud_nodes:
- name: control_0
flavor: control
- name: control_1
flavor: control
- name: control_2
flavor: control
- name: compute_0
flavor: compute
- name: ceph_0
flavor: ceph
Specifying custom heat templates
--------------------------------
The ``overcloud_templates_path`` variable can be used to define a
different path from which to get the heat templates. By default this variable
will not be set.
The ``overcloud_templates_repo`` variable can be used to define the
remote repository from where the templates need to be cloned. When this
variable is set, along with ``overcloud_templates_path``, the templates
will be cloned from that remote repository into the target specified,
and these will be used in overcloud deployment.
The ``overcloud_templates_branch`` variable can be used to specify the
branch that needs to be cloned from a specific repository. When this
variable is set, git will clone only the branch specified.
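For instance, the three variables might be combined in a configuration file
like this (the path, repository URL, and branch below are placeholders, not
project defaults)::
    overcloud_templates_path: /home/stack/custom-templates
    overcloud_templates_repo: https://git.openstack.org/openstack/tripleo-heat-templates
    overcloud_templates_branch: master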
An example
----------
To create a minimal environment that would be unsuitable for deploying
any real nova instances, you could place something like the
following in ``myconfigfile.yml``::
undercloud_memory: 8192
control_memory: 6000
compute_memory: 2048
overcloud_nodes:
- name: control_0
flavor: control
- name: compute_0
flavor: compute
And then pass that to the ``ansible-playbook`` command as described at
the beginning of this document.
Explicit Teardown
-----------------
You can select what to delete prior to the quickstart run by adding the
``--teardown`` (or ``-T``) option with one of the following parameters (an
example follows the list):
- nodes: default, remove only undercloud and overcloud nodes
- virthost: same as nodes but network setup is deleted too
- all: same as virthost but user setup in virthost is deleted too
- none: will not tear down anything (useful for testing multiple actions
against a deployed overcloud)
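For example, assuming the ``quickstart.sh`` wrapper shown in the README and a
``$VIRTHOST`` variable pointing at your target host, a run that also removes
the network setup might look like::
    bash quickstart.sh --teardown virthost $VIRTHOST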


@@ -0,0 +1,49 @@
Contributing
============
Bug reports
-----------
If you encounter any problems with ``tripleo-quickstart`` or if you have
feature suggestions, please feel free to open a bug report in our `issue
tracker <https://bugs.launchpad.net/tripleo-quickstart>`__.
Code
----
If you *fix* a problem or implement a new feature, you may submit your
changes via Gerrit. The ``tripleo-quickstart`` project uses the
`OpenStack Gerrit
workflow <http://docs.openstack.org/infra/manual/developers.html#development-workflow>`__.
You can anonymously clone the repository via
``git clone https://git.openstack.org/openstack/tripleo-quickstart.git``
If you wish to contribute, you'll want to get set up by following the
documentation available at `How To
Contribute <https://wiki.openstack.org/wiki/How_To_Contribute>`__.
Once you've cloned the repository using your account, install the
`git-review <http://docs.openstack.org/infra/manual/developers.html#installing-git-review>`__
tool, then from the ``tripleo-quickstart`` repository run::
git review -s
After you have made your changes locally, commit them to a feature
branch, and then submit them for review by running::
git review
Your changes will be tested by our automated CI infrastructure, and will
also be reviewed by other developers. If you need to make changes (and
you probably will; it's not uncommon for patches to go through several
iterations before being accepted), make the changes on your feature
branch, and instead of creating a new commit, *amend the existing
commit*, making sure to retain the ``Change-Id`` line that was placed
there by ``git-review``::
git commit --amend
After committing your changes, resubmit the review::
git review


@@ -1 +1,15 @@
.. include:: ../../README.rst
Welcome to tripleo-quickstart's documentation!
==============================================
Contents:
.. toctree::
:maxdepth: 2
readme
configuration
accessing-libvirt
accessing-undercloud
accessing-overcloud
unprivileged
contributing

doc/source/readme.rst (new file)

@@ -0,0 +1 @@
.. include:: ../../README.rst

doc/source/unprivileged.rst (new file)

@@ -0,0 +1,204 @@
Running the quickstart as an unprivileged user
==============================================
It is possible to run the bulk of the quickstart deployment as an
unprivileged user (a user without root access). In order to do this,
there are a few system configuration tasks that must be performed in
advance:
- Making sure required packages are installed
- Configuring the required libvirt networks
Automatic system configuration
------------------------------
If you want to perform the system configuration tasks manually, skip
this section and start reading below at "Configure KVM".
Place the following into ``playbook.yml`` in the ``tripleo-quickstart``
directory::
- hosts: localhost
roles:
- environment/setup
And run it like this (assuming that you have ``sudo`` access on your
local host)::
ansible-playbook playbook.yml
Continue reading at `Deploying Tripleo <#deploying-tripleo>`__.
Configure KVM
-------------
You will need to ensure that the ``kvm`` kernel module is loaded, and
that the appropriate processor-specific module (``kvm_intel`` or
``kvm_amd``) is loaded. Run the appropriate ``modprobe`` command to load
the module::
# modprobe kvm_intel [options...]
Or::
# modprobe kvm_amd [options...]
Where ``[options...]`` in the above is either empty, or ``nested=1`` if
you want to enable `nested
kvm <https://www.kernel.org/doc/Documentation/virtual/kvm/nested-vmx.txt>`__.
To ensure this module will be loaded next time your system boots, create
``/etc/modules-load.d/oooq_kvm.conf`` with the following content on
Intel systems::
kvm_intel
Or on AMD systems::
kvm_amd
If you want to enable `nested
kvm <https://www.kernel.org/doc/Documentation/virtual/kvm/nested-vmx.txt>`__
persistently, create the file ``/etc/modprobe.d/kvm.conf`` with the
following contents::
options kvm_intel nested=1
options kvm_amd nested=1
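To verify that nested virtualization is active after (re)loading the module,
you can read the parameter back from sysfs (Intel path shown; substitute
``kvm_amd`` on AMD systems; the output is ``Y`` or ``1`` depending on the
kernel version)::
    $ cat /sys/module/kvm_intel/parameters/nested
    Y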
Required packages
-----------------
You will need to install the following packages:
- ``qemu-kvm``
- ``libvirt``
- ``libvirt-python``
- ``libguestfs-tools``
- ``python-lxml``
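On a CentOS or RHEL virthost these can usually be installed in a single
command (the package names are the ones listed above; other distributions may
name them differently)::
    # yum -y install qemu-kvm libvirt libvirt-python libguestfs-tools python-lxml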
Once these packages are installed, you need to start ``libvirtd``
::
# systemctl enable libvirtd
# systemctl start libvirtd
Configuring libvirt networks
----------------------------
The quickstart requires two networks. The ``external`` network provides
inbound access into the virtual environment set up by the playbooks. The
``overcloud`` network connects the overcloud hosts to the undercloud,
and is used for provisioning, for inbound access to the overcloud, and
for communication between overcloud hosts.
In the following steps, note that the names you choose for the libvirt
networks are unimportant (because the vms will be wired up to these
networks using bridge names, rather than libvirt network names).
The external network
^^^^^^^^^^^^^^^^^^^^
If you have the standard ``default`` libvirt network, you can just use
that as your ``external`` network. If you would prefer to create a new
one, run something like the following::
# virsh net-define /dev/stdin <<EOF
<network>
<name>external</name>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='brext' stp='on' delay='0'/>
<ip address='192.168.23.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.23.2' end='192.168.23.254'/>
</dhcp>
</ip>
</network>
EOF
# virsh net-start external
# virsh net-autostart external
The overcloud network
^^^^^^^^^^^^^^^^^^^^^
The overcloud network is really just a bridge, so you could simply
configure this through your distribution's standard mechanism for
configuring persistent bridge devices. You can also do it via libvirt
like this::
# virsh net-define /dev/stdin <<EOF
<network>
<name>overcloud</name>
<bridge name="brovc" stp='off' delay='0'/>
</network>
EOF
# virsh net-start overcloud
# virsh net-autostart overcloud
Whitelisting bridges
^^^^^^^^^^^^^^^^^^^^
Once you have started the libvirt networks, you need to enter the bridge
names in the ``/etc/qemu/bridge.conf`` file, which makes these bridges
available to unprivileged users via the `qemu bridge
helper <http://wiki.qemu.org/Features-Done/HelperNetworking>`__. Note
that on some systems this file will be called
``/etc/qemu-kvm/bridge.conf``.
Add an ``allow`` line for each bridge you created in the previous steps::
allow brext
allow brovc
Deploying TripleO
-----------------
With all of the system configuration tasks out of the way, the rest of
the process can be run as an unprivileged user. You will need to create
a YAML document that describes your network configuration and that
optionally changes any of the default values used in the quickstart
deployment. To describe the network resources we created above, I would
create a file called ``config.yml`` with the following content::
networks:
- name: external
bridge: brext
address: 192.168.23.1
netmask: 255.255.255.0
- name: overcloud
bridge: brovc
You must have one network named ``external`` and one network named
``overcloud``. The ``address`` and ``netmask`` values must match the
values you used to create the libvirt networks.
Place the following into a file ``playbook.yml`` in your
``tripleo-quickstart`` directory::
- hosts: localhost
roles:
- libvirt/setup
- tripleo/undercloud
- tripleo/overcloud
And run it like this::
ansible-playbook playbook.yml -e @config.yml
This will deploy the default virtual infrastructure, which includes an
undercloud node, three controllers, one compute node, and one ceph node,
and which requires at least 32GB of memory. If you want to deploy a
smaller environment, you could use the ``minimal.yml`` settings we use
in our CI environment::
ansible-playbook playbook.yml -e @config.yml \
-e playbooks/centosci/minimal.yml
This will create a virtual environment with a single controller and a
single compute node, with a total memory footprint of around 22GB.
See :ref:`configuration` for more information.


@@ -1,196 +0,0 @@
# Running the quickstart as an unprivileged user
It is possible to run the bulk of the quickstart deployment as an
unprivileged user (a user without root access). In order to do this,
there are a few system configuration tasks that must be performed in
advance:
- Making sure required packages are installed
- Configuring the required libvirt networks
## Automatic system configuration
If you want to perform the system configuration tasks manually, skip
this section and start reading below at "Configure KVM".
Place the following into `playbook.yml` in the `tripleo-quickstart`
directory:
- hosts: localhost
roles:
- environment/setup
And run it like this (assuming that you have `sudo` access on your
local host):
ansible-playbook playbook.yml
Continue reading at [Deploying Tripleo](#deploying-tripleo).
## Configure KVM
You will need to ensure that the `kvm` kernel module is loaded, and
that the appropriate processor-specific module (`kvm_intel` or
`kvm_amd`) is loaded. Run the appropriate `modprobe` command to load
the module:
# modprobe kvm_intel [options...]
Or:
# modprobe kvm_amd [options...]
Where `[options...]` in the above is either empty, or `nested=1` if
you want to enable [nested kvm][].
To ensure this module will be loaded next time your system boots,
create `/etc/modules-load.d/oooq_kvm.conf` with the following content
on Intel systems:
kvm_intel
Or on AMD systems:
kvm_amd
If you want to enable [nested kvm][] persistently, create the
file `/etc/modprobe.d/kvm.conf` with the following contents:
options kvm_intel nested=1
options kvm_amd nested=1
[nested kvm]: https://www.kernel.org/doc/Documentation/virtual/kvm/nested-vmx.txt
## Required packages
You will need to install the following packages:
- `qemu-kvm`
- `libvirt`
- `libvirt-python`
- `libguestfs-tools`
- `python-lxml`
Once these packages are installed, you need to start `libvirtd`:
# systemctl enable libvirtd
# systemctl start libvirtd
## Configuring libvirt networks
The quickstart requires two networks. The `external` network provides
inbound access into the virtual environment set up by the playbooks. The
`overcloud` network connects the overcloud hosts to the undercloud,
and is used for provisioning, for inbound access to the overcloud,
and for communication between overcloud hosts.
In the following steps, note that the names you choose for the libvirt
networks are unimportant (because the vms will be wired up to these
networks using bridge names, rather than libvirt network names).
### The external network
If you have the standard `default` libvirt network, you can just use
that as your `external` network. If you would prefer to create a new
one, run something like the following:
# virsh net-define /dev/stdin <<EOF
<network>
<name>external</name>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='brext' stp='on' delay='0'/>
<ip address='192.168.23.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.23.2' end='192.168.23.254'/>
</dhcp>
</ip>
</network>
EOF
# virsh net-start external
# virsh net-autostart external
### The overcloud network
The overcloud network is really just a bridge, so you could simply
configure this through your distribution's standard mechanism for
configuring persistent bridge devices. You can also do it via libvirt
like this:
# virsh net-define /dev/stdin <<EOF
<network>
<name>overcloud</name>
<bridge name="brovc" stp='off' delay='0'/>
</network>
EOF
# virsh net-start overcloud
# virsh net-autostart overcloud
### Whitelisting bridges
Once you have started the libvirt networks, you need to enter the
bridge names in the `/etc/qemu/bridge.conf` file, which makes these
bridges available to unprivileged users via the [qemu bridge
helper][]. Note that on some systems this file will be called
`/etc/qemu-kvm/bridge.conf`.
Add an `allow` line for each bridge you created in the previous steps:
allow brext
allow brovc
[qemu bridge helper]: http://wiki.qemu.org/Features-Done/HelperNetworking
## <a name="deploying-tripleo">Deploying tripleo</a>
With all of the system configuration tasks out of the way, the rest of
the process can be run as an unprivileged user. You will need to
create a YAML document that describes your network configuration and
that optionally changes any of the default values used in the
quickstart deployment. To describe the network resources we created
above, I would create a file called `config.yml` with the following
content:
networks:
- name: external
bridge: brext
address: 192.168.23.1
netmask: 255.255.255.0
- name: overcloud
bridge: brovc
You must have one network named `external` and one network named
`overcloud`. The `address` and `netmask` values must match the values
you used to create the libvirt networks.
Place the following into a file `playbook.yml` in your
`tripleo-quickstart` directory:
- hosts: localhost
roles:
- libvirt/setup
- tripleo/undercloud
- tripleo/overcloud
And run it like this:
ansible-playbook playbook.yml -e @config.yml
This will deploy the default virtual infrastructure, which includes
an undercloud node, three controllers, one compute node, and one ceph
node, and which requires at least 32GB of memory. If you want to
deploy a smaller environment, you could use the `minimal.yml` settings
we use in our CI environment:
ansible-playbook playbook.yml -e @config.yml \
-e playbooks/centosci/minimal.yml
This will create a virtual environment with a single controller and a
single compute node, with a total memory footprint of around 22GB.
Please see [configuring.md](configuring.md) for more information about
configuring tripleo-quickstart.