Merge "Upgrade the rst convention of the Reference Guide [2]"

This commit is contained in:
Zuul 2018-03-21 10:49:43 +00:00 committed by Gerrit Code Review
commit 23db93908c
6 changed files with 398 additions and 342 deletions


@ -9,7 +9,7 @@ cluster instead of deploying it with Kolla. This can be achieved with only a
few configuration steps in Kolla.
Requirements
============
~~~~~~~~~~~~
* An existing installation of Ceph
* Existing Ceph storage pools
@ -17,21 +17,23 @@ Requirements
(Glance, Cinder, Nova, Gnocchi)
Enabling External Ceph
======================
~~~~~~~~~~~~~~~~~~~~~~
Using external Ceph with Kolla means not deploying Ceph via Kolla. Therefore,
disable Ceph deployment in ``/etc/kolla/globals.yml``
::
.. code-block:: yaml
enable_ceph: "no"
.. end
There are flags indicating whether individual services use Ceph, which default
to the value of ``enable_ceph``. These flags now need to be set in order
to activate external Ceph integration. This can be done individually per
service in ``/etc/kolla/globals.yml``:
::
.. code-block:: yaml
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
@ -39,38 +41,42 @@ service in ``/etc/kolla/globals.yml``:
gnocchi_backend_storage: "ceph"
enable_manila_backend_ceph_native: "yes"
.. end
The combination of ``enable_ceph: "no"`` and ``<service>_backend_ceph: "yes"``
triggers the activation of the external Ceph mechanism in Kolla.
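Putting both together, a minimal ``/etc/kolla/globals.yml`` fragment for
external Ceph might look like the following sketch (enable only the flags for
the services you actually deploy):

.. code-block:: yaml

   enable_ceph: "no"
   glance_backend_ceph: "yes"
   cinder_backend_ceph: "yes"

.. end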
Edit the Inventory File
=======================
~~~~~~~~~~~~~~~~~~~~~~~
When using external Ceph, there may be no nodes defined in the storage group.
This will cause Cinder and related services relying on this group to fail.
In this case, the operator should add some nodes to the storage group, all the
nodes where cinder-volume and cinder-backup will run:
nodes where ``cinder-volume`` and ``cinder-backup`` will run:
::
.. code-block:: ini
[storage]
compute01
.. end
Configuring External Ceph
=========================
~~~~~~~~~~~~~~~~~~~~~~~~~
Glance
------
Configuring Glance for Ceph includes three steps:
1) Configure RBD back end in glance-api.conf
2) Create Ceph configuration file in /etc/ceph/ceph.conf
3) Create Ceph keyring file in /etc/ceph/ceph.client.<username>.keyring
#. Configure RBD back end in ``glance-api.conf``
#. Create Ceph configuration file in ``/etc/ceph/ceph.conf``
#. Create Ceph keyring file in ``/etc/ceph/ceph.client.<username>.keyring``
Step 1 is done by using Kolla's INI merge mechanism: Create a file in
``/etc/kolla/config/glance/glance-api.conf`` with the following contents:
::
.. code-block:: ini
[glance_store]
stores = rbd
@ -79,12 +85,13 @@ Step 1 is done by using Kolla's INI merge mechanism: Create a file in
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
.. end
Now put ceph.conf and the keyring file (name depends on the username created in
Ceph) into the same directory, for example:
/etc/kolla/config/glance/ceph.conf
::
.. path /etc/kolla/config/glance/ceph.conf
.. code-block:: ini
[global]
fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
@ -94,15 +101,19 @@ Ceph) into the same directory, for example:
auth_service_required = cephx
auth_client_required = cephx
/etc/kolla/config/glance/ceph.client.glance.keyring
.. end
::
.. code-block:: none
$ cat /etc/kolla/config/glance/ceph.client.glance.keyring
[client.glance]
key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
Kolla will pick up all files named ceph.* in this directory an copy them to the
/etc/ceph/ directory of the container.
.. end
Kolla will pick up all files named ``ceph.*`` in this directory and copy them
to the ``/etc/ceph/`` directory of the container.
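As an illustration, after the steps above the Glance configuration directory
could look like this (the keyring file name depends on the Ceph user you
created):

.. code-block:: console

   $ ls /etc/kolla/config/glance
   ceph.client.glance.keyring  ceph.conf  glance-api.conf

.. end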
Cinder
------
@ -110,9 +121,10 @@ Cinder
Configuring external Ceph for Cinder works very similarly to Glance.
Edit /etc/kolla/config/cinder/cinder-volume.conf with the following content:
Modify ``/etc/kolla/config/cinder/cinder-volume.conf`` file according to
the following configuration:
::
.. code-block:: ini
[DEFAULT]
enabled_backends=rbd-1
@ -126,13 +138,16 @@ Edit /etc/kolla/config/cinder/cinder-volume.conf with the following content:
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
.. end
.. note::
``cinder_rbd_secret_uuid`` can be found in ``/etc/kolla/passwords.yml`` file.
Edit /etc/kolla/config/cinder/cinder-backup.conf with the following content:
Modify ``/etc/kolla/config/cinder/cinder-backup.conf`` file according to
the following configuration:
::
.. code-block:: ini
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
@ -144,10 +159,11 @@ Edit /etc/kolla/config/cinder/cinder-backup.conf with the following content:
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
Next, place the ceph.conf file into
/etc/kolla/config/cinder/ceph.conf:
.. end
::
Next, copy the ``ceph.conf`` file into ``/etc/kolla/config/cinder/``:
.. code-block:: ini
[global]
fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
@ -157,14 +173,16 @@ Next, place the ceph.conf file into
auth_service_required = cephx
auth_client_required = cephx
.. end
Separate configuration options can be configured for
cinder-volume and cinder-backup by adding ceph.conf files to
/etc/kolla/config/cinder/cinder-volume and
/etc/kolla/config/cinder/cinder-backup respectively. They
will be merged with /etc/kolla/config/cinder/ceph.conf.
``/etc/kolla/config/cinder/cinder-volume`` and
``/etc/kolla/config/cinder/cinder-backup`` respectively. They
will be merged with ``/etc/kolla/config/cinder/ceph.conf``.
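For example, the following layout gives ``cinder-volume`` and
``cinder-backup`` their own overrides on top of the shared file (a sketch;
the per-service files are optional):

.. code-block:: none

   /etc/kolla/config/cinder/ceph.conf
   /etc/kolla/config/cinder/cinder-volume/ceph.conf
   /etc/kolla/config/cinder/cinder-backup/ceph.conf

.. end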
Ceph keyrings are deployed per service and placed into
cinder-volume and cinder-backup directories, put the keyring files
``cinder-volume`` and ``cinder-backup`` directories. Put the keyring files
into these directories, for example:
.. note::
@ -172,45 +190,53 @@ to these directories, for example:
``cinder-backup`` requires two keyrings for accessing volumes
and backup pool.
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
.. code-block:: console
::
$ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring
[client.cinder]
key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
/etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
.. end
::
.. code-block:: console
$ cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
[client.cinder-backup]
key = AQC9wNBYrD8MOBAAwUlCdPKxWZlhkrWIDE1J/w==
/etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
.. end
::
.. code-block:: console
$ cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
[client.cinder]
key = AQAg5YRXpChaGRAAlTSCleesthCRmCYrfQVX1w==
It is important that the files are named ceph.client*.
.. end
It is important that the files are named ``ceph.client*``.
Nova
------
----
Put ceph.conf, nova client keyring file and cinder client keyring file into
``/etc/kolla/config/nova``:
::
.. code-block:: console
$ ls /etc/kolla/config/nova
ceph.client.cinder.keyring ceph.client.nova.keyring ceph.conf
.. end
Configure nova-compute to use Ceph as the ephemeral back end by creating
``/etc/kolla/config/nova/nova-compute.conf`` and adding the following
contents:
configurations:
::
.. code-block:: ini
[libvirt]
images_rbd_pool=vms
@ -218,14 +244,19 @@ contents:
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
.. note:: ``rbd_user`` might vary depending on your environment.
.. end
.. note::
``rbd_user`` might vary depending on your environment.
Gnocchi
-------
Edit ``/etc/kolla/config/gnocchi/gnocchi.conf`` with the following content:
Modify ``/etc/kolla/config/gnocchi/gnocchi.conf`` file according to
the following configuration:
::
.. code-block:: ini
[storage]
driver = ceph
@ -233,32 +264,35 @@ Edit ``/etc/kolla/config/gnocchi/gnocchi.conf`` with the following content:
ceph_keyring = /etc/ceph/ceph.client.gnocchi.keyring
ceph_conffile = /etc/ceph/ceph.conf
.. end
Put ceph.conf and gnocchi client keyring file in
``/etc/kolla/config/gnocchi``:
::
.. code-block:: console
$ ls /etc/kolla/config/gnocchi
ceph.client.gnocchi.keyring ceph.conf gnocchi.conf
.. end
Manila
------
Configuring Manila for Ceph includes four steps:
1) Configure CephFS backend, setting enable_manila_backend_ceph_native
2) Create Ceph configuration file in /etc/ceph/ceph.conf
3) Create Ceph keyring file in /etc/ceph/ceph.client.<username>.keyring
4) Setup Manila in the usual way
#. Configure CephFS backend, setting ``enable_manila_backend_ceph_native``
#. Create Ceph configuration file in ``/etc/ceph/ceph.conf``
#. Create Ceph keyring file in ``/etc/ceph/ceph.client.<username>.keyring``
#. Setup Manila in the usual way
Step 1 is done by using setting enable_manila_backend_ceph_native=true
Step 1 is done by setting ``enable_manila_backend_ceph_native=true``
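In ``/etc/kolla/globals.yml`` this corresponds to the following fragment
(``enable_manila`` shown as well, assuming Manila itself is enabled there):

.. code-block:: yaml

   enable_manila: "yes"
   enable_manila_backend_ceph_native: "yes"

.. end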
Now put ceph.conf and the keyring file (name depends on the username created
in Ceph) into the same directory, for example:
/etc/kolla/config/manila/ceph.conf
::
.. path /etc/kolla/config/manila/ceph.conf
.. code-block:: ini
[global]
fsid = 1d89fec3-325a-4963-a950-c4afedd37fe3
@ -267,16 +301,20 @@ in Ceph) into the same directory, for example:
auth_service_required = cephx
auth_client_required = cephx
/etc/kolla/config/manila/ceph.client.manila.keyring
.. end
::
.. code-block:: console
$ cat /etc/kolla/config/manila/ceph.client.manila.keyring
[client.manila]
key = AQAg5YRXS0qxLRAAXe6a4R1a15AoRx7ft80DhA==
For more details on the rest of the Manila setup, such as creating the share
type ``default_share_type``, please see:
https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html
.. end
For more details on the CephFS Native driver, please see:
https://docs.openstack.org/manila/latest/admin/cephfs_driver.html
For more details on the rest of the Manila setup, such as creating the share
type ``default_share_type``, please see `Manila in Kolla
<https://docs.openstack.org/kolla-ansible/latest/reference/manila-guide.html>`__.
For more details on the CephFS Native driver, please see `CephFS driver
<https://docs.openstack.org/manila/latest/admin/cephfs_driver.html>`__.


@ -9,7 +9,7 @@ it might be necessary to use an externally managed database.
This use case can be achieved by simply taking some extra steps:
Requirements
============
~~~~~~~~~~~~
* An existing MariaDB cluster / server, reachable from all of your
nodes.
@ -23,7 +23,7 @@ Requirements
user accounts for all enabled services.
Enabling External MariaDB support
=================================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to enable external mariadb support,
you will first need to disable mariadb deployment,
@ -35,38 +35,28 @@ by ensuring the following line exists within ``/etc/kolla/globals.yml`` :
.. end
There are two ways in which you can use
external MariaDB:
There are two ways in which you can use external MariaDB:
* Using an already load-balanced MariaDB address
* Using an external MariaDB cluster
Using an already load-balanced MariaDB address (recommended)
------------------------------------------------------------
If your external database already has a
load balancer, you will need to do the following:
If your external database already has a load balancer, you will
need to do the following:
* Within your inventory file, just add the hostname
of the load balancer within the mariadb group,
described as below:
Change the following
#. Edit the inventory file, change ``control`` to the hostname of the load
balancer within the ``mariadb`` group as below:
.. code-block:: ini
[mariadb:children]
control
.. end
so that it looks like below:
.. code-block:: ini
[mariadb]
myexternalmariadbloadbalancer.com
.. end
* Define **database_address** within ``/etc/kolla/globals.yml``
#. Define ``database_address`` in ``/etc/kolla/globals.yml`` file:
.. code-block:: yaml
@ -74,30 +64,20 @@ so that it looks like below:
.. end
Please note that if **enable_external_mariadb_load_balancer** is
set to "no" - **default**, the external DB load balancer will need to be
accessible from all nodes within your deployment, which might
connect to it.
.. note::
Using an external MariaDB cluster:
----------------------------------
If ``enable_external_mariadb_load_balancer`` is set to ``no``
(default), the external DB load balancer should be accessible
from all nodes during your deployment.
Then, you will need to adjust your inventory file:
Using an external MariaDB cluster
---------------------------------
Change the following
In this case, you need to adjust the inventory file:
.. code-block:: ini
[mariadb:children]
control
.. end
so that it looks like below:
.. code-block:: ini
[mariadb]
myexternaldbserver1.com
myexternaldbserver2.com
myexternaldbserver3.com
@ -106,12 +86,12 @@ so that it looks like below:
If you choose to use haproxy for load balancing between the
members of the cluster, every node within this group
needs to be resolvable and reachable and resolvable from all
the hosts within the **[haproxy:children]** group
of your inventory (defaults to **[network]**).
needs to be resolvable and reachable from all
the hosts within the ``[haproxy:children]`` group
of your inventory (defaults to ``[network]``).
In addition to that, you also need to set the following within
``/etc/kolla/globals.yml``:
In addition, configure the ``/etc/kolla/globals.yml`` file
according to the following configuration:
.. code-block:: yaml
@ -120,13 +100,12 @@ In addition to that, you also need to set the following within
.. end
Using External MariaDB with a privileged user
=============================================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In case your MariaDB user is root, just leave
everything as it is within ``globals.yml`` (except the
internal MariaDB deployment, which should be disabled),
and set the **database_password** field within
``/etc/kolla/passwords.yml``
and set the ``database_password`` in ``/etc/kolla/passwords.yml`` file:
.. code-block:: yaml
@ -134,8 +113,8 @@ and set the **database_password** field within
.. end
In case your username is other than **root**, you will
need to also set it, within ``/etc/kolla/globals.yml``
If the MariaDB ``username`` is not ``root``, set ``database_username`` in
``/etc/kolla/globals.yml`` file:
.. code-block:: yaml
@ -144,11 +123,10 @@ need to also set it, within ``/etc/kolla/globals.yml``
.. end
Using preconfigured databases / users:
======================================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first step you need to take is the following:
Within ``/etc/kolla/globals.yml``, set the following:
The first step you need to take is to set ``use_preconfigured_databases`` to
``yes`` in the ``/etc/kolla/globals.yml`` file:
.. code-block:: yaml
@ -156,16 +134,18 @@ Within ``/etc/kolla/globals.yml``, set the following:
.. end
.. note:: Please note that when the ``use_preconfigured_databases`` flag
is set to ``"yes"``, you need to have the ``log_bin_trust_function_creators``
mysql variable set to ``1`` by your database administrator before running the
``upgrade`` command.
.. note::
When the ``use_preconfigured_databases`` flag is set to ``"yes"``, you need
to make sure the MySQL variable ``log_bin_trust_function_creators``
is set to ``1`` by the database administrator before running the
:command:`upgrade` command.
Using External MariaDB with separated, preconfigured users and databases
------------------------------------------------------------------------
In order to achieve this, you will need to define the user names within
``/etc/kolla/globals.yml``, as illustrated by the example below:
In order to achieve this, you will need to define the user names in the
``/etc/kolla/globals.yml`` file, as illustrated by the example below:
.. code-block:: yaml
@ -175,21 +155,18 @@ In order to achieve this, you will need to define the user names within
.. end
You will need to also set the passwords for all databases within
``/etc/kolla/passwords.yml``
However, fortunately, using a common user across
all databases is also possible.
Also, you will need to set the passwords for all databases in the
``/etc/kolla/passwords.yml`` file.

Fortunately, using a common user across all databases is also possible.
Using External MariaDB with a common user across databases
----------------------------------------------------------
In order to use a common, preconfigured user across all databases,
all you need to do is the following:
all you need to do is follow these steps:
* Within ``/etc/kolla/globals.yml``, add the following:
#. Edit the ``/etc/kolla/globals.yml`` file and add the following:
.. code-block:: yaml
@ -197,7 +174,7 @@ all you need to do is the following:
.. end
* Set the database_user within ``/etc/kolla/globals.yml`` to
#. Set the ``database_user`` within ``/etc/kolla/globals.yml`` to
the one provided to you:
.. code-block:: yaml
@ -206,7 +183,7 @@ all you need to do is the following:
.. end
* Set the common password for all components within ``/etc/kolla/passwords.yml```.
#. Set the common password for all components within ``/etc/kolla/passwords.yml``.
In order to achieve that you could use the following command:
.. code-block:: console


@ -5,7 +5,7 @@ Nova-HyperV in Kolla
====================
Overview
========
~~~~~~~~
Currently, Kolla can deploy the following OpenStack services for Hyper-V:
* nova-compute
@ -33,8 +33,7 @@ virtual machines from Horizon web interface.
HyperV services do not currently support out-of-the-box upgrades. Manual
upgrades are required for this process. MSI release versions can be found
`here
<https://cloudbase.it/openstack-hyperv-driver/>`__.
`here <https://cloudbase.it/openstack-hyperv-driver/>`__.
To upgrade an existing MSI to a newer version, simply uninstall the current
MSI and install the newer one. This will not delete the configuration files.
To preserve the configuration files, check the Skip configuration checkbox
@ -42,12 +41,11 @@ virtual machines from Horizon web interface.
Preparation for Hyper-V node
============================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ansible communicates with Hyper-V host via WinRM protocol. An HTTPS WinRM
listener needs to be configured on the Hyper-V host, which can be easily
created with
`this PowerShell script
created with `this PowerShell script
<https://github.com/ansible/ansible/blob/devel/examples/scripts/ConfigureRemotingForAnsible.ps1>`__.
@ -60,10 +58,13 @@ Virtual Interface the following PowerShell may be used:
PS C:\> $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
PS C:\> New-VMSwitch -NetAdapterName $if.ifAlias -Name YOUR_BRIDGE_NAME -AllowManagementOS $false
.. end
.. note::
It is very important to make sure that when you are using a Hyper-V node with only 1 NIC the
-AllowManagementOS option is set on True, otherwise you will lose connectivity to the Hyper-V node.
It is very important to make sure that when you are using a Hyper-V node
with only 1 NIC the ``-AllowManagementOS`` option is set on ``True``,
otherwise you will lose connectivity to the Hyper-V node.
To prepare the Hyper-V node to be able to attach to volumes provided by
@ -75,42 +76,47 @@ running and started automatically.
PS C:\> Set-Service -Name MSiSCSI -StartupType Automatic
PS C:\> Start-Service MSiSCSI
.. end
Preparation for Kolla deployer node
===================================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Hyper-V role is required; enable it in ``/etc/kolla/globals.yml``:
.. code-block:: console
.. code-block:: yaml
enable_hyperv: "yes"
.. end
Hyper-V options are also required in ``/etc/kolla/globals.yml``:
.. code-block:: console
.. code-block:: yaml
hyperv_username: <HyperV username>
hyperv_password: <HyperV password>
vswitch_name: <HyperV virtual switch name>
nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
.. end
If tenant networks are to be built using VLAN, add the corresponding type in
``/etc/kolla/globals.yml``:
.. code-block:: console
.. code-block:: yaml
neutron_tenant_network_types: 'flat,vlan'
.. end
The virtual switch is the same one created in the Hyper-V setup part.
For ``nova_msi_url``, different Nova MSI (Mitaka/Newton/Ocata) versions can
be found on `Cloudbase website
<https://cloudbase.it/openstack-hyperv-driver/>`__.
Add the Hyper-V node to the ``ansible/inventory`` file:
.. code-block:: console
.. code-block:: none
[hyperv]
<HyperV IP>
@ -122,6 +128,8 @@ Add the Hyper-V node in ``ansible/inventory`` file:
ansible_connection=winrm
ansible_winrm_server_cert_validation=ignore
.. end
``pywinrm`` package needs to be installed in order for Ansible to work
on the HyperV node:
@ -129,16 +137,20 @@ on the HyperV node:
pip install "pywinrm>=0.2.2"
.. end
.. note::
In case of a test deployment with controller and compute nodes as virtual machines
on Hyper-V, if VLAN tenant networking is used, trunk mode has to be enabled on the
VMs:
In case of a test deployment with controller and compute nodes as
virtual machines on Hyper-V, if VLAN tenant networking is used,
trunk mode has to be enabled on the VMs:
.. code-block:: console
Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList <VLAN ID> -NativeVlanId 0 <VM name>
.. end
The ``networking-hyperv`` mechanism driver is needed for neutron-server to
communicate with HyperV nova-compute. It is included in source
images by default. Manually it can be installed in the neutron-server
@ -148,13 +160,15 @@ container with pip:
pip install "networking-hyperv>=4.0.0"
.. end
For ``neutron_extension_drivers``, ``port_security`` and ``qos`` are
currently supported by the networking-hyperv mechanism driver.
By default only ``port_security`` is set.
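Should you need ``qos`` as well, one way is Kolla's INI merge mechanism, for
example via a ``/etc/kolla/config/neutron/ml2_conf.ini`` override (a sketch;
verify the option against your Neutron release):

.. code-block:: ini

   [ml2]
   extension_drivers = port_security,qos

.. end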
Verify Operations
=================
~~~~~~~~~~~~~~~~~
OpenStack HyperV services can be inspected and managed from PowerShell:
@ -163,11 +177,14 @@ OpenStack HyperV services can be inspected and managed from PowerShell:
PS C:\> Get-Service nova-compute
PS C:\> Get-Service neutron-hyperv-agent
.. end
.. code-block:: console
PS C:\> Restart-Service nova-compute
PS C:\> Restart-Service neutron-hyperv-agent
.. end
For more information on OpenStack HyperV, see
`Hyper-V virtualization platform


@ -1,12 +1,12 @@
Reference
=========
Projects Deployment References
==============================
.. toctree::
:maxdepth: 1
:maxdepth: 2
ceph-guide
central-logging-guide
external-ceph-guide
central-logging-guide
external-mariadb-guide
cinder-guide
cinder-guide-hnas


@ -5,7 +5,7 @@ Ironic in Kolla
===============
Overview
========
~~~~~~~~
Currently Kolla can deploy the Ironic services:
- ironic-api
@ -16,52 +16,62 @@ Currently Kolla can deploy the Ironic services:
As well as a required PXE service, deployed as ironic-pxe.
Current status
==============
~~~~~~~~~~~~~~
The Ironic implementation is "tech preview", so currently instances can only be
deployed on baremetal. Further work will be done to allow scheduling for both
virtualized and baremetal deployments.
Pre-deployment Configuration
============================
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enable Ironic role in ``/etc/kolla/globals.yml``:
.. code-block:: console
.. code-block:: yaml
enable_ironic: "yes"
Beside that an additional network type 'vlan,flat' has to be added to a list of
.. end
Besides that, an additional network type ``vlan,flat`` has to be added to the list of
tenant network types:
.. code-block:: console
.. code-block:: yaml
neutron_tenant_network_types: "vxlan,vlan,flat"
.. end
Configuring Web Console
=======================
Configuration based off upstream web_console_documentation_.
~~~~~~~~~~~~~~~~~~~~~~~
Configuration based off upstream `Node web console
<https://docs.openstack.org/ironic/latest/admin/console.html#node-web-console>`__.
Serial speed must be the same as the serial configuration in the BIOS settings.
The default value is 115200 bps, 8 bit, non-parity. If you have a different
serial speed, set ``ironic_console_serial_speed`` in ``/etc/kolla/globals.yml``:
::
.. code-block:: yaml
ironic_console_serial_speed: 9600n8
.. _web_console_documentation: https://docs.openstack.org/ironic/latest/admin/console.html#node-web-console
.. end
Post-deployment configuration
=============================
Configuration based off upstream documentation_.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Configuration based off upstream `Ironic installation Documentation
<https://docs.openstack.org/ironic/latest/install/index.html>`__.
Again, remember that enabling Ironic reconfigures nova compute (driver and
scheduler) as well as changes neutron network settings. Further neutron setup
is required as outlined below.
Create the flat network to launch the instances:
::
.. code-block:: console
neutron net-create --tenant-id $TENANT_ID sharednet1 --shared \
--provider:network_type flat --provider:physical_network physnet1
@ -70,7 +80,8 @@ Create the flat network to launch the instances:
--ip-version=4 --gateway=$GATEWAY_IP --allocation-pool \
start=$START_IP,end=$END_IP --enable-dhcp
And then the above ID is used to set cleaning_network in the neutron
section of ironic.conf.
.. end
And then the above ID is used to set ``cleaning_network`` in the neutron
section of ``ironic.conf``.
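This can again be done via Kolla's INI merge mechanism, for example in
``/etc/kolla/config/ironic.conf`` (the value below is a placeholder for the
ID of the network created above):

.. code-block:: ini

   [neutron]
   cleaning_network = <network ID>

.. end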
.. _documentation: https://docs.openstack.org/ironic/latest/install/index.html


@ -1,26 +1,29 @@
==============
Kuryr in Kolla
==============
"Kuryr is a Docker network plugin that uses Neutron to provide networking
services to Docker containers. It provides containerized images for the common
Neutron plugins" [1]. Kuryr requires at least Keystone and neutron. Kolla makes
Neutron plugins. Kuryr requires at least Keystone and neutron. Kolla makes
kuryr deployment faster and accessible.
Requirements
------------
~~~~~~~~~~~~
* A minimum of 3 hosts for a vanilla deploy
Preparation and Deployment
--------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~
To allow the Docker daemon to connect to etcd, add the following to the
``docker.service`` file.
::
.. code-block:: none
ExecStart= -H tcp://172.16.1.13:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://172.16.1.13:2379 --cluster-advertise=172.16.1.13:2375
.. end
The IP address is the host running the etcd service. ``2375`` is the port that
allows the Docker daemon to be accessed remotely, and ``2379`` is the etcd
listening port.
@ -29,36 +32,46 @@ By default etcd and kuryr are disabled in the ``group_vars/all.yml``.
In order to enable them, you need to edit the ``globals.yml`` file and set the
following variables:
::
.. code-block:: yaml
enable_etcd: "yes"
enable_kuryr: "yes"
.. end
Deploy the OpenStack cloud and the kuryr network plugin:
::
.. code-block:: console
kolla-ansible deploy
Create a Virtual Network
--------------------------------
.. end
::
Create a Virtual Network
~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
docker network create -d kuryr --ipam-driver=kuryr --subnet=10.1.0.0/24 --gateway=10.1.0.1 docker-net1
.. end
To list the created network:
::
.. code-block:: console
docker network ls
.. end
The created network is also available from OpenStack CLI:
::
.. code-block:: console
openstack network list
.. end
For more information about how kuryr works, see
`kuryr, OpenStack Containers Networking
`kuryr (OpenStack Containers Networking)
<https://docs.openstack.org/kuryr/latest/>`__.