
Merge "Revise installation guides"

changes/87/743287/2
Zuul 1 week ago
committed by Gerrit Code Review
parent
commit
f670adec3a
8 changed files with 872 additions and 1193 deletions
  1. doc/source/install/deploy_openwrt.rst (+99, -263)
  2. doc/source/install/devstack.rst (+61, -136)
  3. doc/source/install/getting_started.rst (+95, -92)
  4. doc/source/install/kolla.rst (+119, -120)
  5. doc/source/install/kubernetes_vim_installation.rst (+206, -205)
  6. doc/source/install/manual_installation.rst (+193, -252)
  7. doc/source/install/openstack_vim_installation.rst (+97, -123)
  8. samples/vim/vim_config.yaml (+2, -2)

doc/source/install/deploy_openwrt.rst (+99, -263)

@@ -21,305 +21,141 @@ Deploying OpenWRT as VNF
Once tacker is installed successfully, follow the steps given below to get
started with deploying OpenWRT as VNF.

1. Ensure Glance already contains OpenWRT image.
#. Ensure Glance already contains OpenWRT image.

Normally, Tacker tries to add OpenWRT image to Glance while installing
via devstack. By running **openstack image list** to check OpenWRT image
if exists. If not, download the customized image of OpenWRT 15.05.1
[#f1]_. Unzip the file by using the command below:
Normally, Tacker tries to add the OpenWRT image to Glance while installing
via devstack. Run ``openstack image list`` to check whether the OpenWRT
image exists.

.. code-block:: console
.. code-block:: console
:emphasize-lines: 5

gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz
$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 8cc2aaa8-5218-49e7-9a57-ddb97dc68d98 | OpenWRT | active |
| 32f875b0-9e24-4971-b82d-84d6ec620136 | cirros-0.4.0-x86_64-disk | active |
| ab0abeb8-f73c-467b-9743-b17083c02093 | cirros-0.5.1-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+

..
If not, you can get the customized image of OpenWRT 15.05.1 in your tacker repository,
or download the image from [#f1]_. Unzip the file by using the command below:

And then upload this image into Glance by using the command specified below:
.. code-block:: console

.. code-block:: console
$ cd /path/to/tacker/samples/images/
$ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz

openstack image create OpenWRT --disk-format qcow2 \
--container-format bare \
--file /path_to_image/openwrt-x86-kvm_guest-combined-ext4.img \
--public
..
Then upload the image into Glance by using the command below:

2. Configure OpenWRT

The example below shows how to create the OpenWRT-based Firewall VNF.
First, we have a yaml template which contains the configuration of
OpenWRT as shown below:

*tosca-vnfd-openwrt.yaml* [#f2]_

.. code-block:: yaml

tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: OpenWRT with services

metadata:
template_name: OpenWRT

topology_template:
node_templates:

VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
num_cpus: 1
mem_size: 512 MB
disk_size: 1 GB
properties:
image: OpenWRT
config: |
param0: key1
param1: key2
mgmt_driver: openwrt
monitoring_policy:
name: ping
parameters:
count: 3
interval: 10
actions:
failure: respawn

CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
order: 0
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1

CP2:
type: tosca.nodes.nfv.CP.Tacker
properties:
order: 1
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL2
- virtualBinding:
node: VDU1

CP3:
type: tosca.nodes.nfv.CP.Tacker
properties:
order: 2
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL3
- virtualBinding:
node: VDU1

VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker

VL2:
type: tosca.nodes.nfv.VL
properties:
network_name: net0
vendor: Tacker

VL3:
type: tosca.nodes.nfv.VL
properties:
network_name: net1
vendor: Tacker firewall
.. code-block:: console

..
$ openstack image create OpenWRT --disk-format qcow2 \
--container-format bare \
--file /path/to/openwrt-x86-kvm_guest-combined-ext4.img \
--public

We also have another configuration yaml template with some firewall rules of
OpenWRT.

*tosca-config-openwrt-firewall.yaml* [#f3]_

.. code-block:: yaml

vdus:
VDU1:
config:
firewall: |
package firewall
config defaults
option syn_flood '1'
option input 'ACCEPT'
option output 'ACCEPT'
option forward 'REJECT'
config zone
option name 'lan'
list network 'lan'
option input 'ACCEPT'
option output 'ACCEPT'
option forward 'ACCEPT'
config zone
option name 'wan'
list network 'wan'
list network 'wan6'
option input 'REJECT'
option output 'ACCEPT'
option forward 'REJECT'
option masq '1'
option mtu_fix '1'
config forwarding
option src 'lan'
option dest 'wan'
config rule
option name 'Allow-DHCP-Renew'
option src 'wan'
option proto 'udp'
option dest_port '68'
option target 'ACCEPT'
option family 'ipv4'
config rule
option name 'Allow-Ping'
option src 'wan'
option proto 'icmp'
option icmp_type 'echo-request'
option family 'ipv4'
option target 'ACCEPT'
config rule
option name 'Allow-IGMP'
option src 'wan'
option proto 'igmp'
option family 'ipv4'
option target 'ACCEPT'
config rule
option name 'Allow-DHCPv6'
option src 'wan'
option proto 'udp'
option src_ip 'fe80::/10'
option src_port '547'
option dest_ip 'fe80::/10'
option dest_port '546'
option family 'ipv6'
option target 'ACCEPT'
config rule
option name 'Allow-MLD'
option src 'wan'
option proto 'icmp'
option src_ip 'fe80::/10'
list icmp_type '130/0'
list icmp_type '131/0'
list icmp_type '132/0'
list icmp_type '143/0'
option family 'ipv6'
option target 'ACCEPT'
config rule
option name 'Allow-ICMPv6-Input'
option src 'wan'
option proto 'icmp'
list icmp_type 'echo-request'
list icmp_type 'echo-reply'
list icmp_type 'destination-unreachable'
list icmp_type 'packet-too-big'
list icmp_type 'time-exceeded'
list icmp_type 'bad-header'
list icmp_type 'unknown-header-type'
list icmp_type 'router-solicitation'
list icmp_type 'neighbour-solicitation'
list icmp_type 'router-advertisement'
list icmp_type 'neighbour-advertisement'
option limit '190/sec'
option family 'ipv6'
option target 'REJECT'
#. Configure OpenWRT

..
The example below shows how to create the OpenWRT-based Firewall VNF.
First, we have a yaml template which contains the configuration of
OpenWRT as shown below:

In this template file, we specify the **mgmt_driver: openwrt** which means
this VNFD is managed by openwrt driver [#f4]_. This driver can inject
firewall rules which defined in VNFD into OpenWRT instance by using SSH
protocol. We can run**cat /etc/config/firewall** to confirm the firewall
rules if inject succeed.
*tosca-vnfd-openwrt.yaml* [#f2]_

3. Create a sample vnfd
.. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
:language: yaml

.. code-block:: console

openstack vnf descriptor create --vnfd-file tosca-vnfd-openwrt.yaml <VNFD_NAME>
..
We also have another configuration yaml template with some firewall rules of
OpenWRT.

4. Create a VNF
*tosca-config-openwrt-firewall.yaml* [#f3]_

.. code-block:: console
.. literalinclude:: ../../../samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
:language: yaml

openstack vnf create --vnfd-name <VNFD_NAME> \
--config-file tosca-config-openwrt-firewall.yaml <NAME>
..
In this template file, we specify ``mgmt_driver: openwrt``, which means
this VNFD is managed by the openwrt driver [#f4]_. This driver can inject
the firewall rules defined in the VNFD into the OpenWRT instance over SSH.
We can run ``cat /etc/config/firewall`` to confirm whether the rules were
injected successfully.
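
For instance, assuming the VNF's management address on ``net_mgmt`` is
``192.168.120.3`` (the address is only an example; use the CP1 address
reported by ``openstack vnf show``), the check might look like this. The
root account has no password by default, as described in the Notes section
below.

.. code-block:: console

   $ ssh root@192.168.120.3
   # cat /etc/config/firewall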

5. Check the status
#. Create a sample vnfd

.. code-block:: console
.. code-block:: console

openstack vnf list
openstack vnf show <VNF_ID>
..
$ openstack vnf descriptor create \
--vnfd-file tosca-vnfd-openwrt.yaml <VNFD_NAME>

We can replace the firewall rules configuration file with
tosca-config-openwrt-vrouter.yaml [#f5]_, tosca-config-openwrt-dnsmasq.yaml
[#f6]_, or tosca-config-openwrt-qos.yaml [#f7]_ to deploy the router, DHCP,
DNS, or QoS VNFs. The openwrt VNFM management driver will do the same way to
inject the desired service rules into the OpenWRT instance. You can also do the
same to check if the rules are injected successful: **cat /etc/config/network**
to check vrouter, **cat /etc/config/dhcp** to check DHCP and DNS, and
**cat /etc/config/qos** to check the QoS rules.
#. Create a VNF

6. Notes
.. code-block:: console

6.1. OpenWRT user and password
$ openstack vnf create --vnfd-name <VNFD_NAME> \
--config-file tosca-config-openwrt-firewall.yaml <NAME>

The user account is 'root' and password is '', which means there is no
password for root account.
#. Check the status

6.2. Procedure to customize the OpenWRT image
.. code-block:: console

The OpenWRT is modified based on KVM OpenWRT 15.05.1 to be suitable forTacker.
The procedure is following as below:
$ openstack vnf list
$ openstack vnf show <VNF_ID>

.. code-block:: console
We can replace the firewall rules configuration file with
tosca-config-openwrt-vrouter.yaml [#f5]_, tosca-config-openwrt-dnsmasq.yaml
[#f6]_, or tosca-config-openwrt-qos.yaml [#f7]_ to deploy the router, DHCP,
DNS, or QoS VNFs. The openwrt VNFM management driver injects the desired
service rules into the OpenWRT instance in the same way. You can also check
whether the rules were injected successfully: ``cat /etc/config/network``
for the vrouter, ``cat /etc/config/dhcp`` for DHCP and DNS, and
``cat /etc/config/qos`` for the QoS rules.
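
For example, a DHCP/DNS flavor of the VNF could be created from the same
VNFD just by switching the configuration file. The VNF name
``openwrt-dnsmasq`` below is only an illustration:

.. code-block:: console

   $ openstack vnf create --vnfd-name <VNFD_NAME> \
       --config-file tosca-config-openwrt-dnsmasq.yaml openwrt-dnsmasq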

cd ~
wget https://archive.openwrt.org/chaos_calmer/15.05.1/x86/kvm_guest/openwrt-15.05.1-x86-kvm_guest-combined-ext4.img.gz \
-O openwrt-x86-kvm_guest-combined-ext4.img.gz
gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz
#. Notes

mkdir -p imgroot
#. OpenWRT user and password

sudo kpartx -av openwrt-x86-kvm_guest-combined-ext4.img
The user account is 'root' and the password is '', which means there is no
password for the root account.

# Replace the loopXp2 with the result of above command, e.g., loop0p2
sudo mount -o loop /dev/mapper/loopXp2 imgroot
sudo chroot imgroot /bin/ash
#. Procedure to customize the OpenWRT image

# Set password of this image to blank, type follow command and then enter two times
passwd
The OpenWRT image is modified based on KVM OpenWRT 15.05.1 to be suitable
for Tacker. The procedure is as follows:

# Set DHCP for the network of OpenWRT so that the VNF can be ping
uci set network.lan.proto=dhcp; uci commit
exit
.. code-block:: console

sudo umount imgroot
sudo kpartx -dv openwrt-x86-kvm_guest-combined-ext4.img
$ cd ~
$ wget https://archive.openwrt.org/chaos_calmer/15.05.1/x86/kvm_guest/openwrt-15.05.1-x86-kvm_guest-combined-ext4.img.gz \
-O openwrt-x86-kvm_guest-combined-ext4.img.gz
$ gunzip openwrt-x86-kvm_guest-combined-ext4.img.gz

$ mkdir -p imgroot

$ sudo kpartx -av openwrt-x86-kvm_guest-combined-ext4.img

# Replace the loopXp2 with the result of above command, e.g., loop0p2
$ sudo mount -o loop /dev/mapper/loopXp2 imgroot
$ sudo chroot imgroot /bin/ash

# Set password of this image to blank, type follow command and then enter two times
$ passwd

# Set DHCP for the network of OpenWRT so that the VNF can be ping
$ uci set network.lan.proto=dhcp; uci commit
$ exit

$ sudo umount imgroot
$ sudo kpartx -dv openwrt-x86-kvm_guest-combined-ext4.img

..

.. rubric:: Footnotes

.. [#] https://github.com/openstack/tacker/blob/master/samples/images/openwrt-x86-kvm_guest-combined-ext4.img.gz
.. [#] https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
.. [#] https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
.. [#] https://github.com/openstack/tacker/blob/master/tacker/vnfm/mgmt_drivers/openwrt/openwrt.py
.. [#] https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnfd/tosca-config-openwrt-vrouter.yaml
.. [#] https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnfd/tosca-config-openwrt-dnsmasq.yaml
.. [#] https://github.com/openstack/tacker/blob/master/samples/tosca-templates/vnfd/tosca-config-openwrt-qos.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/images/openwrt-x86-kvm_guest-combined-ext4.img.gz
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-vnfd-openwrt.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-firewall.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/tacker/vnfm/mgmt_drivers/openwrt/openwrt.py
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-vrouter.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-dnsmasq.yaml
.. [#] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd/tosca-config-openwrt-qos.yaml

doc/source/install/devstack.rst (+61, -136)

@@ -19,167 +19,92 @@
Install via Devstack
====================

The Devstack supports installation from different code branch by specifying
<branch-name> below. If there is no preference, it is recommended to install
Tacker from master branch, i.e. the <branch-name> is master. If pike branch
is the target branch, the <branch-name> is stable/pike.
Devstack should be run as a non-root with sudo enabled(standard logins to
cloud images such as "ubuntu" or "cloud-user" are usually fine). Creating a
separate user and granting relevant privileges please refer [#f0]_.
Overview
--------

1. Download DevStack:
Tacker provides some examples, or templates, of ``local.conf`` used for
Devstack. You can find them in ``${TACKER_ROOT}/devstack`` directory in the
tacker repository.

.. code-block:: console
Devstack supports installation from a different code branch by specifying
the branch name in your ``local.conf`` as described below.
To install the latest version, use the ``master`` branch.
On the other hand, to install a specific release, for example ``ussuri``,
the branch name must be ``stable/ussuri``.

$ git clone https://opendev.org/openstack-dev/devstack -b <branch-name>
$ cd devstack
For installation, the ``stack.sh`` script in Devstack should be run as a
non-root user with sudo enabled.
Adding a separate user ``stack`` and granting it relevant privileges is a
good way to install via Devstack [#f0]_.
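
For example, such a user can be prepared as described in the Devstack
documentation [#f0]_. The commands below are a typical sketch, not a
mandatory procedure:

.. code-block:: console

   $ sudo useradd -s /bin/bash -d /opt/stack -m stack
   $ echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
   $ sudo su - stack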

..

2. Enable tacker related Devstack plugins in **local.conf** file:

First, the **local.conf** file needs to be created by manual or copied from
Tacker Repo [#f1]_ and renamed to **local.conf**. We have two Tacker
configuration installation files. First, it is the all-in-one mode that
installs full Devstack environment including Tacker in one PC or Laptop.
Second, it is the standalone mode which only will install a standalone
Tacker environment with some mandatory OpenStack services.

2.1. All-in-one mode

The **local.conf** file of all-in-one mode from [#f2]_ is shown as below:

.. code-block:: ini

[[local|localrc]]
############################################################
# Customize the following HOST_IP based on your installation
############################################################
HOST_IP=127.0.0.1

ADMIN_PASSWORD=devstack
MYSQL_PASSWORD=devstack
RABBIT_PASSWORD=devstack
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=devstack
Install
-------

############################################################
# Customize the following section based on your installation
############################################################
Devstack expects ``local.conf`` to be provided before running the install
script. The first step of installing tacker is to clone Devstack and
prepare your ``local.conf``.
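
For example, assuming both repositories are cloned side by side, one of the
sample files mentioned above can be copied in as a starting point (the
paths are illustrative):

.. code-block:: console

   $ git clone https://opendev.org/openstack/tacker
   $ cp tacker/devstack/local.conf.example devstack/local.conf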

# Pip
PIP_USE_MIRRORS=False
USE_GET_PIP=1
#. Download DevStack

#OFFLINE=False
#RECLONE=True
Get Devstack via git, optionally specifying a branch if you prefer,
and go into the directory.

# Logging
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
ENABLE_DEBUG_LOG_LEVEL=True
ENABLE_VERBOSE_LOG_LEVEL=True
.. code-block:: console

# Neutron ML2 with OpenVSwitch
Q_PLUGIN=ml2
Q_AGENT=openvswitch
$ git clone https://opendev.org/openstack-dev/devstack -b <branch-name>
$ cd devstack

# Disable security groups
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
#. Enable tacker related Devstack plugins in ``local.conf`` file

# Enable heat, networking-sfc, barbican and mistral
enable_plugin heat https://opendev.org/openstack/heat master
enable_plugin networking-sfc https://opendev.org/openstack/networking-sfc master
enable_plugin barbican https://opendev.org/openstack/barbican master
enable_plugin mistral https://opendev.org/openstack/mistral master
``local.conf`` needs to be created manually, or copied from the Tacker
repo [#f1]_ and renamed to ``local.conf``. We basically have two choices
for configuration. The first one is the ``all-in-one`` mode that installs
a full Devstack environment including Tacker on one PC or laptop.
The second is the ``standalone`` mode, which installs only the Tacker
environment with some mandatory OpenStack services. Nova, Neutron and other
essential components are not included in this mode.

# Ceilometer
#CEILOMETER_PIPELINE_INTERVAL=300
enable_plugin ceilometer https://opendev.org/openstack/ceilometer master
enable_plugin aodh https://opendev.org/openstack/aodh master
#. All-in-one mode

# Blazar
enable_plugin blazar https://github.com/openstack/blazar.git master
There are two examples for ``all-in-one`` mode, targeting OpenStack
or Kubernetes as the VIM.

# Tacker
enable_plugin tacker https://opendev.org/openstack/tacker master
``local.conf`` for ``all-in-one`` mode with OpenStack [#f2]_
is shown below.

enable_service n-novnc
enable_service n-cauth
.. literalinclude:: ../../../devstack/local.conf.example
:language: ini

disable_service tempest
The difference in ``all-in-one`` mode with Kubernetes [#f3]_ is that it
also deploys kuryr-kubernetes and octavia.

# Enable kuryr-kubernetes, docker, octavia
KUBERNETES_VIM=True
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes master
enable_plugin octavia https://opendev.org/openstack/octavia master
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container master
#KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
.. literalinclude:: ../../../devstack/local.conf.kubernetes
:language: ini
:emphasize-lines: 60-65

[[post-config|/etc/neutron/dhcp_agent.ini]]
[DEFAULT]
enable_isolated_metadata = True
#. Standalone mode

[[post-config|$OCTAVIA_CONF]]
[controller_worker]
amp_active_retries=9999
The ``local.conf`` file of standalone mode from [#f4]_ is shown below.

..


2.2. Standalone mode

The **local.conf** file of standalone mode from [#f3]_ is shown as below:

.. code-block:: ini

[[local|localrc]]
############################################################
# Customize the following HOST_IP based on your installation
############################################################
HOST_IP=127.0.0.1
SERVICE_HOST=127.0.0.1
SERVICE_PASSWORD=devstack
ADMIN_PASSWORD=devstack
SERVICE_TOKEN=devstack
DATABASE_PASSWORD=root
RABBIT_PASSWORD=password
ENABLE_HTTPD_MOD_WSGI_SERVICES=True
KEYSTONE_USE_MOD_WSGI=True

# Logging
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
ENABLE_DEBUG_LOG_LEVEL=True
ENABLE_VERBOSE_LOG_LEVEL=True
GIT_BASE=${GIT_BASE:-https://opendev.org}

TACKER_MODE=standalone
USE_BARBICAN=True
TACKER_BRANCH=<branch-name>
enable_plugin networking-sfc ${GIT_BASE}/openstack/networking-sfc $TACKER_BRANCH
enable_plugin barbican ${GIT_BASE}/openstack/barbican $TACKER_BRANCH
enable_plugin mistral ${GIT_BASE}/openstack/mistral $TACKER_BRANCH
enable_plugin tacker ${GIT_BASE}/openstack/tacker $TACKER_BRANCH

..

3. Installation
.. literalinclude:: ../../../devstack/local.conf.standalone
:language: ini

After saving the **local.conf**, we can run **stack.sh** in the terminal
to start setting up:
#. Installation

.. code-block:: console
After saving the ``local.conf``, we can run ``stack.sh`` in the terminal
to start setting up.

$ ./stack.sh
.. code-block:: console

..
$ ./stack.sh

.. rubric:: Footnotes

.. [#f0] https://docs.openstack.org/devstack/latest/
.. [#f1] https://github.com/openstack/tacker/tree/master/devstack
.. [#f2] https://github.com/openstack/tacker/blob/master/devstack/local.conf.kubernetes
.. [#f3] https://github.com/openstack/tacker/blob/master/devstack/local.conf.standalone

.. [#f1] https://opendev.org/openstack/tacker/src/branch/master/devstack
.. [#f2]
https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.example
.. [#f3]
https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.kubernetes
.. [#f4]
https://opendev.org/openstack/tacker/src/branch/master/devstack/local.conf.standalone

doc/source/install/getting_started.rst (+95, -92)

@@ -23,126 +23,129 @@ started with Tacker and validate the installation.


Registering default OpenStack VIM
=================================
1. Get one account on the OpenStack VIM.

In Tacker MANO system, the VNF can be on-boarded to one target OpenStack, which
is also called VIM. Get one account on this OpenStack. For example, the below
is the account information collected in file `vim_config.yaml` [1]_:

.. code-block:: yaml

auth_url: 'http://127.0.0.1/identity'
username: 'nfv_user'
password: 'mySecretPW'
project_name: 'nfv'
project_domain_name: 'Default'
user_domain_name: 'Default'
cert_verify: 'True'
..
---------------------------------

.. note::
#. Get one account on the OpenStack VIM

In Keystone, port `5000` is enabled for authentication service [2]_, so the
end users can use `auth_url: 'http://127.0.0.1:5000/v3'` instead of
`auth_url: 'http://127.0.0.1/identity'` as above mention.
In the Tacker MANO system, VNFs can be on-boarded to a target OpenStack,
which is also called a VIM. Get one account on your OpenStack, such as
``admin`` if you deploy your OpenStack via devstack. Here is an example of
a user named ``nfv_user`` with a project ``nfv`` on OpenStack used for the
VIM configuration. It is described in ``vim_config.yaml`` [1]_:

By default, cert_verify is set as `True`. To disable verifying SSL
certificate, user can set cert_verify parameter to `False`.
.. literalinclude:: ../../../samples/vim/vim_config.yaml
:language: yaml

2. Register the VIM that will be used as a default VIM for VNF deployments.
This will be required when the optional argument `--vim-id` is not provided by
the user during VNF creation.
.. note::

.. code-block:: console
In Keystone, port ``5000`` is enabled for the authentication service [2]_,
so the end users can use ``auth_url: 'http://127.0.0.1:5000/v3'`` instead
of ``auth_url: 'http://127.0.0.1/identity'`` as mentioned above.

By default, ``cert_verify`` is set as ``True``. To disable verifying the
SSL certificate, the user can set the ``cert_verify`` parameter to ``False``.

#. Register VIM

Register the default VIM with the config file for VNF deployment.
This will be required when the optional argument ``--vim-id`` is not
provided by the user during VNF creation.

.. code-block:: console

$ openstack vim register --config-file vim_config.yaml \
--description 'my first vim' --is-default hellovim

openstack vim register --config-file vim_config.yaml \
--description 'my first vim' --is-default hellovim
..

Onboarding sample VNF
=====================
---------------------

1. Create a `sample-vnfd.yaml` file with the following template:
#. Create a ``sample-vnfd.yaml`` file with the following template

.. code-block:: yaml
.. code-block:: yaml

tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo example
description: Demo example

metadata:
template_name: sample-tosca-vnfd
metadata:
template_name: sample-tosca-vnfd

topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
topology_template:
node_templates:
VDU1:
type: tosca.nodes.nfv.VDU.Tacker
capabilities:
nfv_compute:
properties:
num_cpus: 1
mem_size: 512 MB
disk_size: 1 GB
properties:
num_cpus: 1
mem_size: 512 MB
disk_size: 1 GB
properties:
image: cirros-0.4.0-x86_64-disk
availability_zone: nova
mgmt_driver: noop
config: |
param0: key1
param1: key2

CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
order: 0
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1

VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker
..
image: cirros-0.4.0-x86_64-disk
availability_zone: nova
mgmt_driver: noop
config: |
param0: key1
param1: key2

CP1:
type: tosca.nodes.nfv.CP.Tacker
properties:
management: true
order: 0
anti_spoofing_protection: false
requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1

VL1:
type: tosca.nodes.nfv.VL
properties:
network_name: net_mgmt
vendor: Tacker

.. note::
.. note::

You can find more sample tosca templates for VNFD at [3]_
You can find several samples of tosca template for VNFD at [3]_.


2. Create a sample VNFD
#. Create a sample VNFD

.. code-block:: console
.. code-block:: console

openstack vnf descriptor create --vnfd-file sample-vnfd.yaml samplevnfd
..
$ openstack vnf descriptor create --vnfd-file sample-vnfd.yaml samplevnfd

3. Create a VNF
#. Create a VNF

.. code-block:: console
.. code-block:: console

openstack vnf create --vnfd-name samplevnfd samplevnf
..
$ openstack vnf create --vnfd-name samplevnfd samplevnf

4. Some basic Tacker commands
#. Some basic Tacker commands

.. code-block:: console
You can find each of the VIM, VNFD and VNF created in the previous steps
by using the ``list`` subcommand.

openstack vim list
openstack vnf descriptor list
openstack vnf list
openstack vnf show samplevnf
..
.. code-block:: console

$ openstack vim list
$ openstack vnf descriptor list
$ openstack vnf list

To inspect attributes of the instances, use the ``show`` subcommand with
a name or ID. For example, you can inspect the VNF named ``samplevnf``
as below.

.. code-block:: console

$ openstack vnf show samplevnf

References
==========
----------

.. [1] https://github.com/longkb/tacker/blob/master/samples/vim/vim_config.yaml
.. [1] https://opendev.org/openstack/tacker/src/branch/master/samples/vim/vim_config.yaml
.. [2] https://docs.openstack.org/keystoneauth/latest/using-sessions.html#sessions-for-users
.. [3] https://github.com/openstack/tacker/tree/master/samples/tosca-templates/vnfd
.. [3] https://opendev.org/openstack/tacker/src/branch/master/samples/tosca-templates/vnfd

doc/source/install/kolla.rst (+119, -120)

@@ -19,9 +19,21 @@
Install via Kolla Ansible
=========================

Please refer to "Install dependencies" part of kolla ansible quick start at
https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html to set
up the docker environment that is used by kolla ansible.
.. note::

This installation guide explains how to install Tacker. Other components,
such as nova or neutron, are not covered here.

.. note::

This installation guide is a bit dated and is written for Red Hat distributions.


Please refer to
`Install dependencies
<https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html#install-dependencies>`_
of kolla ansible installation [1]_ to set up the docker environment that is
used by kolla ansible.

To install via Kolla Ansible, the version of Kolla Ansible should be consistent
with the target Tacker system. For example, stable/pike branch of Kolla Ansible
@@ -34,164 +46,151 @@ installed in this document.


Install Kolla Ansible
~~~~~~~~~~~~~~~~~~~~~

1. Get the stable/pike version of kolla ansible:

.. code-block:: console
---------------------

$ git clone https://github.com/openstack/kolla-ansible.git -b stable/pike
$ cd kolla-ansible
$ sudo yum install python-devel libffi-devel gcc openssl-devel libselinux-python
$ sudo pip install -r requirements.txt
$ sudo python setup.py install

..
#. Get the stable/pike version of kolla ansible:

.. code-block:: console

If the needed version has already been published at pypi site
'https://pypi.org/project/kolla-ansible', the command below can be used:
$ git clone https://github.com/openstack/kolla-ansible.git -b stable/pike
$ cd kolla-ansible
$ sudo yum install python-devel libffi-devel gcc openssl-devel libselinux-python
$ sudo pip install -r requirements.txt
$ sudo python setup.py install

.. code-block:: console
If the needed version has already been published at pypi site
'https://pypi.org/project/kolla-ansible', the command below can be used:

$ sudo pip install "kolla-ansible==5.0.0"
.. code-block:: console

..
$ sudo pip install "kolla-ansible==5.0.0"


Install Tacker
~~~~~~~~~~~~~~

1. Edit kolla ansible's configuration file /etc/kolla/globals.yml:

.. code-block:: ini

---
kolla_install_type: "source"
# openstack_release can be determined by version of kolla-ansible tool.
# But if needed, it can be specified.
#openstack_release: 5.0.0
kolla_internal_vip_address: <one IP address of local nic interface>
# The Public address used to communicate with OpenStack as set in the
# public_url for the endpoints that will be created. This DNS name
# should map to kolla_external_vip_address.
#kolla_external_fqdn: "{{ kolla_external_vip_address }}"
# define your own registry if needed
#docker_registry: "127.0.0.1:4000"
# If needed OpenStack kolla images are published, docker_namespace should be
# kolla
#docker_namespace: "kolla"
docker_namespace: "gongysh"
enable_glance: "no"
enable_haproxy: "no"
enable_keystone: "yes"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "no"
enable_nova: "no"
enable_barbican: "yes"
enable_mistral: "yes"
enable_tacker: "yes"
enable_heat: "no"
enable_openvswitch: "no"
enable_horizon: "yes"
enable_horizon_tacker: "{{ enable_tacker | bool }}"
--------------

..
#. Edit kolla ansible's configuration file ``/etc/kolla/globals.yml``:

.. note::
.. code-block:: ini

To determine version of kolla-ansible, the following commandline can be
used:
---
kolla_install_type: "source"
# openstack_release can be determined by version of kolla-ansible tool.
# But if needed, it can be specified.
#openstack_release: 5.0.0
kolla_internal_vip_address: <one IP address of local nic interface>
# The Public address used to communicate with OpenStack as set in the
# public_url for the endpoints that will be created. This DNS name
# should map to kolla_external_vip_address.
#kolla_external_fqdn: "{{ kolla_external_vip_address }}"
# define your own registry if needed
#docker_registry: "127.0.0.1:4000"
# If needed OpenStack kolla images are published, docker_namespace should be
# kolla
#docker_namespace: "kolla"
docker_namespace: "gongysh"
enable_glance: "no"
enable_haproxy: "no"
enable_keystone: "yes"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "no"
enable_nova: "no"
enable_barbican: "yes"
enable_mistral: "yes"
enable_tacker: "yes"
enable_heat: "no"
enable_openvswitch: "no"
enable_horizon: "yes"
enable_horizon_tacker: "{{ enable_tacker | bool }}"

$ python -c "import pbr.version; print(pbr.version.VersionInfo('kolla-ansible'))"
.. note::

To determine the version of kolla-ansible, the following command line
can be used:

2. Run kolla-genpwd to generate system passwords:
.. code-block:: console

.. code-block:: console
$ python -c \
"import pbr.version; print(pbr.version.VersionInfo('kolla-ansible'))"

$ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
$ sudo kolla-genpwd

..
#. Run kolla-genpwd to generate system passwords:

.. note::
.. code-block:: console

If the pypi version is used to install kolla-ansible the skeleton passwords
file maybe under '/usr/share/kolla-ansible/etc_examples/kolla'.
$ sudo cp etc/kolla/passwords.yml /etc/kolla/passwords.yml
$ sudo kolla-genpwd

.. note::

With this command, /etc/kolla/passwords.yml will be populated with
generated passwords.
If the pypi version is used to install kolla-ansible, the skeleton
passwords file may be under
``/usr/share/kolla-ansible/etc_examples/kolla``.


3. Run kolla ansible deploy to install tacker system:
With this command, ``/etc/kolla/passwords.yml`` will be populated with
generated passwords.
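
For example, you can confirm that a password was generated; the key name
``keystone_admin_password`` below is just one common entry and may vary by
release:

.. code-block:: console

   $ sudo grep keystone_admin_password /etc/kolla/passwords.yml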

.. code-block:: console
#. Run kolla ansible deploy to install tacker system:

$ sudo kolla-ansible deploy
.. code-block:: console

..
$ sudo kolla-ansible deploy


4. Run kolla ansible post-deploy to generate tacker access environment file:
#. Run kolla ansible post-deploy to generate tacker access environment file:

.. code-block:: console
.. code-block:: console

$ sudo kolla-ansible post-deploy
$ sudo kolla-ansible post-deploy

..
With this command, ``admin-openrc.sh`` will be generated at
``/etc/kolla/admin-openrc.sh``.

With this command, the "admin-openrc.sh" will be generated at
/etc/kolla/admin-openrc.sh.


5. Check the related containers are started and running:

Tacker system consists of some containers. Following is a sample output.
The containers fluentd, cron and kolla_toolbox are from kolla, please see
kolla ansible documentation for their usage. Others are from Tacker system
components.

.. code-block:: console

$ sudo docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
CONTAINER ID IMAGE NAMES
78eafed848a8 gongysh/centos-source-tacker-server:5.0.0 tacker_server
00bbecca5950 gongysh/centos-source-tacker-conductor:5.0.0 tacker_conductor
19eddccf8e8f gongysh/centos-source-barbican-worker:5.0.0 barbican_worker
6434b1d8236e gongysh/centos-source-barbican-keystone-listener:5.0.0 barbican_keystone_listener
48be088643f8 gongysh/centos-source-barbican-api:5.0.0 barbican_api
50b9a9a0e542 gongysh/centos-source-mistral-executor:5.0.0 mistral_executor
07c28d845311 gongysh/centos-source-mistral-engine:5.0.0 mistral_engine
196bbcc592a4 gongysh/centos-source-mistral-api:5.0.0 mistral_api
d5511b195a58 gongysh/centos-source-horizon:5.0.0 horizon
62913ec7c056 gongysh/centos-source-keystone:5.0.0 keystone
552b95e82f98 gongysh/centos-source-rabbitmq:5.0.0 rabbitmq
4d57d7735514 gongysh/centos-source-mariadb:5.0.0 mariadb
4e1142ff158d gongysh/centos-source-cron:5.0.0 cron
000ba4ca1974 gongysh/centos-source-kolla-toolbox:5.0.0 kolla_toolbox
0fe21b1ad18c gongysh/centos-source-fluentd:5.0.0 fluentd
a13e45fc034f gongysh/centos-source-memcached:5.0.0 memcached
#. Check the related containers are started and running:

..
The Tacker system consists of several containers. The following is a
sample output. The containers fluentd, cron and kolla_toolbox are from
kolla; please see the kolla ansible documentation for their usage. The
others are Tacker system components.

.. code-block:: console

6. Install tacker client:
$ sudo docker ps --format "table {{.ID}}\t{{.Image}}\t{{.Names}}"
CONTAINER ID IMAGE NAMES
78eafed848a8 gongysh/centos-source-tacker-server:5.0.0 tacker_server
00bbecca5950 gongysh/centos-source-tacker-conductor:5.0.0 tacker_conductor
19eddccf8e8f gongysh/centos-source-barbican-worker:5.0.0 barbican_worker
6434b1d8236e gongysh/centos-source-barbican-keystone-listener:5.0.0 barbican_keystone_listener
48be088643f8 gongysh/centos-source-barbican-api:5.0.0 barbican_api
50b9a9a0e542 gongysh/centos-source-mistral-executor:5.0.0 mistral_executor
07c28d845311 gongysh/centos-source-mistral-engine:5.0.0 mistral_engine
196bbcc592a4 gongysh/centos-source-mistral-api:5.0.0 mistral_api
d5511b195a58 gongysh/centos-source-horizon:5.0.0 horizon
62913ec7c056 gongysh/centos-source-keystone:5.0.0 keystone
552b95e82f98 gongysh/centos-source-rabbitmq:5.0.0 rabbitmq
4d57d7735514 gongysh/centos-source-mariadb:5.0.0 mariadb
4e1142ff158d gongysh/centos-source-cron:5.0.0 cron
000ba4ca1974 gongysh/centos-source-kolla-toolbox:5.0.0 kolla_toolbox
0fe21b1ad18c gongysh/centos-source-fluentd:5.0.0 fluentd
a13e45fc034f gongysh/centos-source-memcached:5.0.0 memcached

.. code-block:: console
#. Install tacker client:

$ sudo pip install python-tackerclient
.. code-block:: console

..
$ sudo pip install python-tackerclient

#. Check the Tacker server is running well:

7. Check the Tacker server is running well:
.. code-block:: console

.. code-block:: console
$ . /etc/kolla/admin-openrc.sh
$ openstack vim list

$ . /etc/kolla/admin-openrc.sh
$ openstack vim list

..
References
----------

.. [1] https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html

doc/source/install/kubernetes_vim_installation.rst (+206, -205)

@@ -27,242 +27,243 @@ creating Kubernetes cluster and setting up native Neutron-based networking
between Kubernetes and OpenStack VIMs. Features from Kuryr-Kubernetes will
bring VMs and Pods (and other Kubernetes resources) on the same network.

1. Edit local.conf file by adding the following content
#. Edit local.conf file by adding the following content

.. code-block:: console
.. code-block:: console

# Enable kuryr-kubernetes, docker, octavia
KUBERNETES_VIM=True
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes master
enable_plugin octavia https://opendev.org/openstack/octavia master
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container master
KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"
# Enable kuryr-kubernetes, docker, octavia
KUBERNETES_VIM=True
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes master
enable_plugin octavia https://opendev.org/openstack/octavia master
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container master
KURYR_K8S_CLUSTER_IP_RANGE="10.0.0.0/24"

The public network will be used to launched LoadBalancer for Services in
Kubernetes. The example for setting public subnet is described in [#first]_
The public network will be used to launch LoadBalancers for Services in
Kubernetes. The example for setting the public subnet is described in [#first]_

For more details, users also see the same examples in [#second]_ and [#third]_.
For more details, users can also see the same examples in [#second]_ and [#third]_.

2. Run stack.sh
#. Run stack.sh

.. code-block:: console
.. code-block:: console

$ ./stack.sh
$ ./stack.sh

3. Get Kubernetes VIM configuration
#. Get Kubernetes VIM configuration

* After successful installation, user can get "Bearer Token":
* After successful installation, user can get "Bearer Token":

.. code-block:: console
.. code-block:: console

$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')

In the Hyperkube folder /yourdirectory/data/hyperkube/, user can get more
information for authenticating to Kubernetes cluster.
In the Hyperkube folder /yourdirectory/data/hyperkube/, the user can get
more information for authenticating to the Kubernetes cluster.
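
For example, on a default Devstack setup the folder can simply be listed to
find the credential files such as ``ca.crt`` and ``basic_auth.csv`` used in
the following steps (the path matches the examples below):

.. code-block:: console

   $ ls /opt/stack/data/hyperkube/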

* Get ssl_ca_cert:
* Get ssl_ca_cert:

.. code-block:: console
.. code-block:: console

$ sudo cat /opt/stack/data/hyperkube/ca.crt
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAI+laRsxtQQMMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzU1NTc4MzAeFw0xNzEwMDkxMzI5NDNaFw0yNzEw
MDcxMzI5NDNaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzU1NTc4MzCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALfJ+Lsq8VmXBfZC4OPm96Y1Ots2
Np/fuGLEhT+JpHGCK65l4WpBf+FkcNDIb5Jn1EBr5XDEVN1hlzcPdCHu1sAvfTNB
AJkq/4TzkenEusxiQ8TQWDnIrAo73tkYPyQMAfXHifyM20gCz/jM+Zy2IoQDArRq
MItRdoFa+7rRJntFk56y9NZTzDqnziLFFoT6W3ZdU3BElX6oWarbLWxNNpYlVEbI
YdfooLqKTH+25Fh3TKsMVxOdc7A5MggXRHYYkbbDgDAVln9ki9x/c6U+5bQQ9H8+
+Lhzdova4gjq/RBJCtiISN7HvLuq+VenArFREgAqr/r/rQZckeAD/4mzQNECAwEA
AaOBjzCBjDAdBgNVHQ4EFgQU1zZHXIHhmPDe+ajaNqsOdu5QfbswUAYDVR0jBEkw
R4AU1zZHXIHhmPDe+ajaNqsOdu5QfbuhJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzU1NTc4M4IJAI+laRsxtQQMMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAr8ARlYpIbeML8fbxdAARuZ/dJpbKvyNHC
GXJI/Uh4xKmj3LrdDYQjHb1tbRSV2S/gQld+En0L92XGUl/x1pG/GainDVpxpTdt
FwA5SMG5HLHrudZBRW2Dqe1ItKjx4ofdjz+Eni17QYnI0CEdJZyq7dBInuCyeOu9
y8BhzIOFQALYYL+K7nERKsTSDUnTwgpN7p7CkPnAGUj51zqVu2cOJe48SWoO/9DZ
AT0UKTr/agkkjHL0/kv4x+Qhr/ICjd2JbW7ePxQBJ8af+SYuKx7IRVnubnqVMEN6
V/kEAK/h2NAKS8OnlBgUMXIojSInmGXJfM5l1GUlQiqiBTv21Fm6
-----END CERTIFICATE-----
$ sudo cat /opt/stack/data/hyperkube/ca.crt
-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJAI+laRsxtQQMMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzU1NTc4MzAeFw0xNzEwMDkxMzI5NDNaFw0yNzEw
MDcxMzI5NDNaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzU1NTc4MzCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBALfJ+Lsq8VmXBfZC4OPm96Y1Ots2
Np/fuGLEhT+JpHGCK65l4WpBf+FkcNDIb5Jn1EBr5XDEVN1hlzcPdCHu1sAvfTNB
AJkq/4TzkenEusxiQ8TQWDnIrAo73tkYPyQMAfXHifyM20gCz/jM+Zy2IoQDArRq
MItRdoFa+7rRJntFk56y9NZTzDqnziLFFoT6W3ZdU3BElX6oWarbLWxNNpYlVEbI
YdfooLqKTH+25Fh3TKsMVxOdc7A5MggXRHYYkbbDgDAVln9ki9x/c6U+5bQQ9H8+
+Lhzdova4gjq/RBJCtiISN7HvLuq+VenArFREgAqr/r/rQZckeAD/4mzQNECAwEA
AaOBjzCBjDAdBgNVHQ4EFgQU1zZHXIHhmPDe+ajaNqsOdu5QfbswUAYDVR0jBEkw
R4AU1zZHXIHhmPDe+ajaNqsOdu5QfbuhJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzU1NTc4M4IJAI+laRsxtQQMMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQAr8ARlYpIbeML8fbxdAARuZ/dJpbKvyNHC
GXJI/Uh4xKmj3LrdDYQjHb1tbRSV2S/gQld+En0L92XGUl/x1pG/GainDVpxpTdt
FwA5SMG5HLHrudZBRW2Dqe1ItKjx4ofdjz+Eni17QYnI0CEdJZyq7dBInuCyeOu9
y8BhzIOFQALYYL+K7nERKsTSDUnTwgpN7p7CkPnAGUj51zqVu2cOJe48SWoO/9DZ
AT0UKTr/agkkjHL0/kv4x+Qhr/ICjd2JbW7ePxQBJ8af+SYuKx7IRVnubnqVMEN6
V/kEAK/h2NAKS8OnlBgUMXIojSInmGXJfM5l1GUlQiqiBTv21Fm6
-----END CERTIFICATE-----

* Get basic authentication username and password:
* Get basic authentication username and password:

.. code-block:: console
.. code-block:: console

$ sudo cat /opt/stack/data/hyperkube/basic_auth.csv
admin,admin,admin
$ sudo cat /opt/stack/data/hyperkube/basic_auth.csv
admin,admin,admin

The basic auth file is a csv file with a minimum of 3 columns: password,
user name, user id. If there are more than 3 columns, see the following
example:
The basic auth file is a csv file with a minimum of 3 columns: password,
user name, user id. If there are more than 3 columns, see the following
example:

.. code-block:: console
.. code-block:: console

password,user,uid,"group1,group2,group3"
password,user,uid,"group1,group2,group3"

In this example, the user belongs to group1, group2 and group3.
In this example, the user belongs to group1, group2 and group3.

* Get Kubernetes server url
* Get Kubernetes server url

By default Kubernetes server listens on https://127.0.0.1:6443 and
https://{HOST_IP}:6443
By default Kubernetes server listens on https://127.0.0.1:6443 and
https://{HOST_IP}:6443

.. code-block:: console
.. code-block:: console

$ curl http://localhost:8080/api/
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.11.110:6443"
    }
  ]
}

4. Check Kubernetes cluster installation

By default, after set KUBERNETES_VIM=True, Devstack creates a public network
called net-k8s, and two extra ones for the kubernetes services and pods under
the project k8s:

.. code-block:: console

$ openstack network list --project admin
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 28361f77-1875-4070-b0dc-014e26c48aeb | public | 28c51d19-d437-46e8-9b0e-00bc392c57d6 |
| 71c20650-6295-4462-9219-e0007120e64b | k8s-service-net | f2835c3a-f567-44f6-b006-a6f7c52f2396 |
| 97c12aef-54f3-41dc-8b80-7f07c34f2972 | k8s-pod-net | 7759453f-6e8a-4660-b845-964eca537c44 |
| 9935fff9-f60c-4fe8-aa77-39ba7ac10417 | net0 | 92b2bd7b-3c14-4d32-8de3-9d3cc4d204cb |
| c2120b78-880f-4f28-8dc1-3d33b9f3020b | net_mgmt | fc7b3f32-5cac-4857-83ab-d3700f4efa60 |
| ec194ffc-533e-46b3-8547-6f43d92b91a2 | net1 | 08beb9a1-cd74-4f2d-b2fa-0e5748d80c27 |
+--------------------------------------+-----------------+--------------------------------------+

To check Kubernetes cluster works well, please see some tests in
kuryr-kubernetes to get more information [#fourth]_.

5. Register Kubernetes VIM

In vim_config.yaml, project_name is fixed as "default", that will use to
support multi tenant on Kubernetes in the future.

* Create vim_config.yaml file for Kubernetes VIM as the following examples:

.. code-block:: console

auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "None"
project_name: "default"
type: "kubernetes"

* Or vim_config.yaml with ssl_ca_cert enabled:

.. code-block:: console

auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"

* You can also specify username and password for Kubernetes VIM configuration:

.. code-block:: console

auth_url: "https://192.168.11.110:6443"
username: "admin"
password: "admin"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"

User can change the authentication like username, password, etc. Please see
Kubernetes document [#fifth]_ to read more information about Kubernetes
authentication.

* Run Tacker command for register vim:

.. code-block:: console

$ openstack vim register --config-file vim_config.yaml vim-kubernetes

$ openstack vim list
+--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+
| id | tenant_id | name | type | is_default | placement_attr | status |
+--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+
| 45456bde-6179-409c-86a1-d8cd93bd0c6d | a6f9b4bc9a4d439faa91518416ec0999 | vim-kubernetes | kubernetes | False | {u'regions': [u'default', u'kube-public', u'kube-system']} | REACHABLE |
+--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+

In ``placement_attr``, there are three regions: 'default', 'kube-public',
'kube-system', that map to ``namespace`` in Kubernetes environment.

* Other related commands to Kubernetes VIM

.. code-block:: console

$ cat kubernetes-VIM-update.yaml
username: "admin"
password: "admin"
project_name: "default"
ssl_ca_cert: "None"
type: "kubernetes"


$ tacker vim-update vim-kubernetes --config-file kubernetes-VIM-update.yaml
$ tacker vim-show vim-kubernetes
$ tacker vim-delete vim-kubernetes

When update Kubernetes VIM, user can update VIM information (such as username,
password, bearer_token and ssl_ca_cert) except auth_url and type of VIM.

#. Check Kubernetes cluster installation

By default, after setting KUBERNETES_VIM=True, Devstack creates a public
network called net-k8s, and two extra ones for the kubernetes services and
pods under the project k8s:

.. code-block:: console

$ openstack network list --project admin
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 28361f77-1875-4070-b0dc-014e26c48aeb | public | 28c51d19-d437-46e8-9b0e-00bc392c57d6 |
| 71c20650-6295-4462-9219-e0007120e64b | k8s-service-net | f2835c3a-f567-44f6-b006-a6f7c52f2396 |
| 97c12aef-54f3-41dc-8b80-7f07c34f2972 | k8s-pod-net | 7759453f-6e8a-4660-b845-964eca537c44 |
| 9935fff9-f60c-4fe8-aa77-39ba7ac10417 | net0 | 92b2bd7b-3c14-4d32-8de3-9d3cc4d204cb |
| c2120b78-880f-4f28-8dc1-3d33b9f3020b | net_mgmt | fc7b3f32-5cac-4857-83ab-d3700f4efa60 |
| ec194ffc-533e-46b3-8547-6f43d92b91a2 | net1 | 08beb9a1-cd74-4f2d-b2fa-0e5748d80c27 |
+--------------------------------------+-----------------+--------------------------------------+

To check that the Kubernetes cluster works well, please see some tests in
kuryr-kubernetes for more information [#fourth]_.

#. Register Kubernetes VIM

In vim_config.yaml, project_name is fixed as "default", which will be used
to support multi-tenancy on Kubernetes in the future.

Create vim_config.yaml file for Kubernetes VIM as the following examples:

.. code-block:: console

auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "None"
project_name: "default"
type: "kubernetes"

Or vim_config.yaml with ssl_ca_cert enabled:

.. code-block:: console

auth_url: "https://192.168.11.110:6443"
bearer_token: "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRlZmF1bHQtdG9rZW4tc2ZqcTQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBiMzZmYTQ2LWFhOTUtMTFlNy05M2Q4LTQwOGQ1Y2Q0ZmJmMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.MBjFA18AjD6GyXmlqsdsFpJD_tgPfst2faOimfVob-gBqnAkAU0Op2IEauiBVooFgtvzm-HY2ceArftSlZQQhLDrJGgH0yMAUmYhI8pKcFGd_hxn_Ubk7lPqwR6GIuApkGVMNIlGh7LFLoF23S_yMGvO8CHPM-UbFjpbCOECFdnoHjz-MsMqyoMfGEIF9ga7ZobWcKt_0A4ge22htL2-lCizDvjSFlAj4cID2EM3pnJ1J3GXEqu-W9DUFa0LM9u8fm_AD9hBKVz1dePX1NOWglxxjW4KGJJ8dV9_WEmG2A2B-9Jy6AKW83qqicBjYUUeAKQfjgrTDl6vSJOHYyzCYQ"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"

You can also specify username and password for Kubernetes VIM configuration:

.. code-block:: console

auth_url: "https://192.168.11.110:6443"
username: "admin"
password: "admin"
ssl_ca_cert: "-----BEGIN CERTIFICATE-----
MIIDUzCCAjugAwIBAgIJANPOjG38TA+fMA0GCSqGSIb3DQEBCwUAMCAxHjAcBgNV
BAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTAeFw0xNzEwMDYxMjUxMDVaFw0yNzEw
MDQxMjUxMDVaMCAxHjAcBgNVBAMMFTE3Mi4xNy4wLjJAMTUwNzI5NDI2NTCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKlPwd5Dp484Fb+SjBZeV8qF4k8s
Z06NPdlHKuXaxz7+aReGSwz09JittlqQ/2CwSd5834Ll+btfyTyrB4bv+mr/WD3b
jxEhnWrUK7oHObzZq0i60Ard6CuiWnv5tP0U5tVPWfNBoHEEPImVcUmgzGSAWW1m
ZzGdcpwkqE1NznLsrqYqjT5bio7KUqySRe13WNichDrdYSqEEQwFa+b+BO1bRCvh
IYSI0/xT1CDIlPmVucKRn/OVxpuTQ/WuVt7yIMRKIlApsZurZSt7ypR7SlQOLEx/
xKsVTbMvhcKIMKdK8pHUJK2pk8uNPAKd7zjpiu04KMa3WsUreIJHcjat6lMCAwEA
AaOBjzCBjDAdBgNVHQ4EFgQUxINzbfoA2RzXk584ETZ0agWDDk8wUAYDVR0jBEkw
R4AUxINzbfoA2RzXk584ETZ0agWDDk+hJKQiMCAxHjAcBgNVBAMMFTE3Mi4xNy4w
LjJAMTUwNzI5NDI2NYIJANPOjG38TA+fMAwGA1UdEwQFMAMBAf8wCwYDVR0PBAQD
AgEGMA0GCSqGSIb3DQEBCwUAA4IBAQB7zNVRX++hUXs7+Fg1H2havCkSe63b/oEM
J8LPLYWjqdFnLgC+usGq+nhJiuVCqqAIK0dIizGaoXS91hoWuuHWibSlLFRd2wF2
Go2oL5pgC/0dKW1D6V1Dl+3mmCVYrDnExXybWGtOsvaUmsnt4ugsb+9AfUtWbCA7
tepBsbAHS62buwNdzrzjJV+GNB6KaIEVVAdZdRx+HaZP2kytOXqxaUchIhMHZHYZ
U0/5P0Ei56fLqIFO3WXqVj9u615VqX7cad4GQwtSW8sDnZMcQAg8mnR4VqkF8YSs
MkFnsNNkfqE9ck/D2auMwRl1IaDPVqAFiWiYZZhw8HsG6K4BYEgk
-----END CERTIFICATE-----"
project_name: "default"
type: "kubernetes"

Users can change authentication parameters such as the username and password.
Please see the Kubernetes documentation [#fifth]_ for more information about
Kubernetes authentication.
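For reference, one way to obtain a service account's bearer token and CA
certificate from the Kubernetes master is via ``kubectl``; this is only a
sketch, and the secret name (``<SECRET_NAME>``) is environment specific:

.. code-block:: console

   $ kubectl get secrets
   $ kubectl get secret <SECRET_NAME> -o jsonpath="{.data.token}" | base64 --decode
   $ kubectl get secret <SECRET_NAME> -o jsonpath="{.data.ca\.crt}" | base64 --decode

The decoded token can be used as ``bearer_token`` and the decoded certificate
as ``ssl_ca_cert`` in ``vim_config.yaml``.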

Run the Tacker command to register the VIM:

.. code-block:: console

$ openstack vim register --config-file vim_config.yaml vim-kubernetes

$ openstack vim list
+--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+
| id | tenant_id | name | type | is_default | placement_attr | status |
+--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+
| 45456bde-6179-409c-86a1-d8cd93bd0c6d | a6f9b4bc9a4d439faa91518416ec0999 | vim-kubernetes | kubernetes | False | {u'regions': [u'default', u'kube-public', u'kube-system']} | REACHABLE |
+--------------------------------------+----------------------------------+----------------+------------+------------+------------------------------------------------------------+-----------+

In ``placement_attr``, there are three regions: 'default', 'kube-public' and
'kube-system', which map to namespaces in the Kubernetes environment.
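These namespaces can be listed on the Kubernetes master, for example:

.. code-block:: console

   $ kubectl get namespaces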

Other commands related to the Kubernetes VIM:

.. code-block:: console

$ cat kubernetes-VIM-update.yaml
username: "admin"
password: "admin"
project_name: "default"
ssl_ca_cert: "None"
type: "kubernetes"


$ tacker vim-update vim-kubernetes --config-file kubernetes-VIM-update.yaml
$ tacker vim-show vim-kubernetes
$ tacker vim-delete vim-kubernetes

When updating a Kubernetes VIM, users can update VIM information (such as
username, password, bearer_token and ssl_ca_cert), but not auth_url or the
type of VIM.
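If you prefer the ``openstack`` client used for registration above, recent
releases of python-tackerclient should provide equivalent commands; the
following is only a sketch, so check the command help on your installation:

.. code-block:: console

   $ openstack vim set --config-file kubernetes-VIM-update.yaml vim-kubernetes
   $ openstack vim show vim-kubernetes
   $ openstack vim delete vim-kubernetes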


References
==========
----------

.. [#first] https://github.com/openstack-dev/devstack/blob/master/doc/source/networking.rst#shared-guest-interface
.. [#second] https://github.com/openstack/tacker/blob/master/doc/source/install/devstack.rst
.. [#third] https://github.com/openstack/tacker/blob/master/devstack/local.conf.kubernetes


+ 193
- 252
doc/source/install/manual_installation.rst View File

@@ -21,369 +21,310 @@ Manual Installation

This document describes how to install and run Tacker manually.

Pre-requisites
==============

1). Ensure that OpenStack components Keystone, Mistral, Barbican and
Horizon are installed. Refer the list below for installation of
these OpenStack projects on different Operating Systems.

* https://docs.openstack.org/keystone/latest/install/index.html
* https://docs.openstack.org/mistral/latest/admin/install/index.html
* https://docs.openstack.org/barbican/latest/install/install.html
* https://docs.openstack.org/horizon/latest/install/index.html

2). one admin-openrc.sh file is generated. one sample admin-openrc.sh file
is like the below:

.. code-block:: ini
.. note::

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=KTskN5eUMTpeHLKorRcZBBbH0AM96wdvgQhwENxY
export OS_AUTH_URL=http://localhost:5000/identity
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
This guide assumes installation on Ubuntu. Some examples are invalid on other
distributions; for example, you should replace ``/usr/local/bin/`` with
``/usr/bin/`` on CentOS.

Pre-requisites
--------------

Installing Tacker server
========================
#. Install required components.

.. note::
Ensure that the OpenStack components Keystone, Mistral, Barbican and
Horizon are installed. Refer to the list below for installation of
these OpenStack projects on different operating systems.

The paths we are using for configuration files in these steps are with reference to
Ubuntu Operating System. The paths may vary for other Operating Systems.
* https://docs.openstack.org/keystone/latest/install/index.html
* https://docs.openstack.org/mistral/latest/admin/install/index.html
* https://docs.openstack.org/barbican/latest/install/install.html
* https://docs.openstack.org/horizon/latest/install/index.html

The branch_name which is used in commands, specify the branch_name as
"stable/<branch>" for any stable branch installation.
For eg: stable/ocata, stable/newton. If unspecified the default will be
"master" branch.
#. Create ``admin-openrc.sh`` for environment variables.

.. code-block:: shell

1). Create MySQL database and user.
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=KTskN5eUMTpeHLKorRcZBBbH0AM96wdvgQhwENxY
export OS_AUTH_URL=http://localhost:5000/identity
export OS_INTERFACE=internal
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne

.. code-block:: console

mysql -uroot -p
CREATE DATABASE tacker;
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' \
IDENTIFIED BY '<TACKERDB_PASSWORD>';
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' \
IDENTIFIED BY '<TACKERDB_PASSWORD>';
exit;
..
Installing Tacker Server
------------------------

.. note::

Replace ``TACKERDB_PASSWORD`` with your password.
The ``<branch_name>`` in the command examples should be replaced with a
specific branch name, such as ``stable/ussuri``.

2). Create users, roles and endpoints:
#. Create MySQL database and user.

a). Source the admin credentials to gain access to admin-only CLI commands:
.. code-block:: console

.. code-block:: console
$ mysql -uroot -p

. admin-openrc.sh
..
Create the ``tacker`` database and grant all privileges on it to the
``tacker`` user, identified by the password ``<TACKERDB_PASSWORD>``.

b). Create tacker user with admin privileges.
.. code-block::

.. note::
CREATE DATABASE tacker;
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'localhost' \
IDENTIFIED BY '<TACKERDB_PASSWORD>';
GRANT ALL PRIVILEGES ON tacker.* TO 'tacker'@'%' \
IDENTIFIED BY '<TACKERDB_PASSWORD>';
exit;

Project_name can be "service" or "services" depending on your
OpenStack distribution.
..
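To confirm the database and grants, you can, for example, log in as the
``tacker`` user and list the visible databases:

.. code-block:: console

   $ mysql -utacker -p -e "SHOW DATABASES;"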
#. Create OpenStack user, role and endpoint.

.. code-block:: console
#. Set admin credentials to gain access to admin-only CLI commands.

openstack user create --domain default --password <PASSWORD> tacker
openstack role add --project service --user tacker admin
..
.. code-block:: console

c). Create tacker service.
$ . admin-openrc.sh

.. code-block:: console
#. Create ``tacker`` user with admin privileges.

openstack service create --name tacker \
--description "Tacker Project" nfv-orchestration
..
.. code-block:: console

d). Provide an endpoint to tacker service.
$ openstack user create --domain default --password <PASSWORD> tacker
$ openstack role add --project service --user tacker admin

If you are using keystone v3 then,
.. note::

.. code-block:: console
Project name can be ``service`` or ``services`` depending on your
OpenStack distribution.

openstack endpoint create --region RegionOne nfv-orchestration \
public http://<TACKER_NODE_IP>:9890/
openstack endpoint create --region RegionOne nfv-orchestration \
internal http://<TACKER_NODE_IP>:9890/
openstack endpoint create --region RegionOne nfv-orchestration \
admin http://<TACKER_NODE_IP>:9890/
..
#. Create ``tacker`` service.

If you are using keystone v2 then,
.. code-block:: console

.. code-block:: console
$ openstack service create --name tacker \
--description "Tacker Project" nfv-orchestration

openstack endpoint create --region RegionOne \
--publicurl 'http://<TACKER_NODE_IP>:9890/' \
--adminurl 'http://<TACKER_NODE_IP>:9890/' \
--internalurl 'http://<TACKER_NODE_IP>:9890/' <SERVICE-ID>
..

3). Clone tacker repository.

.. code-block:: console

cd ~/
git clone https://github.com/openstack/tacker -b <branch_name>
..
#. Provide an endpoint to tacker service.

4). Install all requirements.
For keystone v3:

.. code-block:: console
.. code-block:: console

cd tacker
sudo pip install -r requirements.txt
..
$ openstack endpoint create --region RegionOne nfv-orchestration \
public http://<TACKER_NODE_IP>:9890/
$ openstack endpoint create --region RegionOne nfv-orchestration \
internal http://<TACKER_NODE_IP>:9890/
$ openstack endpoint create --region RegionOne nfv-orchestration \
admin http://<TACKER_NODE_IP>:9890/

Or keystone v2:

5). Install tacker.

.. code-block:: console
.. code-block:: console

sudo python setup.py install
..
$ openstack endpoint create --region RegionOne \
--publicurl 'http://<TACKER_NODE_IP>:9890/' \
--adminurl 'http://<TACKER_NODE_IP>:9890/' \
--internalurl 'http://<TACKER_NODE_IP>:9890/' <SERVICE-ID>

..
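You can verify the created endpoints afterwards, for example:

.. code-block:: console

   $ openstack endpoint list --service nfv-orchestration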
#. Clone tacker repository.

6). Create 'tacker' directory in '/var/log', and create directories for vnf
package and zip csar file(for glance store).
You can optionally use ``-b`` to check out a specific release branch.

.. code-block:: console
.. code-block:: console

sudo mkdir /var/log/tacker
sudo mkdir -p /var/lib/tacker/vnfpackages
sudo mkdir -p /var/lib/tacker/csar_files
$ cd ${HOME}
$ git clone https://opendev.org/openstack/tacker.git -b <branch_name>

.. note::
#. Install required packages and tacker itself.

In case of multi node deployment, we recommend to configure
/var/lib/tacker/csar_files on a shared storage.
.. code-block:: console

..
$ cd ${HOME}/tacker
$ sudo pip3 install -r requirements.txt
$ sudo python3 setup.py install

7). Generate the tacker.conf.sample using tools/generate_config_file_sample.sh
or 'tox -e config-gen' command. Rename the "tacker.conf.sample" file at
"etc/tacker/" to tacker.conf. Then edit it to ensure the below entries:
#. Create directories for tacker.

.. note::
Directories for logs, VNF packages and CSAR files are required.

Ignore any warnings generated while using the
"generate_config_file_sample.sh".
.. code-block:: console

..
$ sudo mkdir -p /var/log/tacker \
/var/lib/tacker/vnfpackages \
/var/lib/tacker/csar_files

.. note::
.. note::

project_name can be "service" or "services" depending on your
OpenStack distribution in the keystone_authtoken section.
..
In case of a multi-node deployment, we recommend configuring
``/var/lib/tacker/csar_files`` on shared storage.

.. note::
#. Generate ``tacker.conf.sample`` using
   ``tools/generate_config_file_sample.sh`` or the ``tox -e config-gen`` command.
   Rename the generated ``tacker.conf.sample`` file at ``etc/tacker/`` to
   ``tacker.conf``. Then edit it to ensure the entries described below.
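One way to do this, assuming the repository was cloned to ``${HOME}/tacker``
as in the previous steps, is:

.. code-block:: console

   $ cd ${HOME}/tacker
   $ ./tools/generate_config_file_sample.sh
   $ cp etc/tacker/tacker.conf.sample etc/tacker/tacker.conf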

The path of tacker-rootwrap varies according to the operating system,
e.g. it is /usr/bin/tacker-rootwrap for CentOS, therefore the configuration for
[agent] should be like:
.. note::

.. code-block:: ini
Ignore any warnings generated while using the
"generate_config_file_sample.sh".

[agent]
root_helper = sudo /usr/bin/tacker-rootwrap /usr/local/etc/tacker/rootwrap.conf
..
..
.. note::

.. code-block:: ini

[DEFAULT]
auth_strategy = keystone
policy_file = /usr/local/etc/tacker/policy.json
debug = True
use_syslog = False
bind_host = <TACKER_NODE_IP>
bind_port = 9890
service_plugins = nfvo,vnfm

state_path = /var/lib/tacker
...

[nfvo_vim]
vim_drivers = openstack

[keystone_authtoken]
memcached_servers = 11211
region_name = RegionOne
auth_type = password
project_domain_name = <DOMAIN_NAME>
user_domain_name = <DOMAIN_NAME>
username = <TACKER_USER_NAME>
project_name = service
password = <TACKER_SERVICE_USER_PASSWORD>
auth_url = http://<KEYSTONE_IP>:5000
www_authenticate_uri = http://<KEYSTONE_IP>:5000
...

[agent]
root_helper = sudo /usr/local/bin/tacker-rootwrap /usr/local/etc/tacker/rootwrap.conf
...

[database]
connection = mysql+pymysql://tacker:<TACKERDB_PASSWORD>@<MYSQL_IP>:3306/tacker?charset=utf8
...

[tacker]
monitor_driver = ping,http_ping
project_name can be "service" or "services" depending on your
OpenStack distribution in the keystone_authtoken section.

..
.. note::

8). Copy the tacker.conf file to "/usr/local/etc/tacker/" directory
The path of tacker-rootwrap varies according to the operating system.
For example, it is /usr/bin/tacker-rootwrap on CentOS, so the [agent]
configuration should be:

.. code-block:: console
.. code-block:: ini

sudo su
cp etc/tacker/tacker.conf /usr/local/etc/tacker/
[agent]
root_helper = sudo /usr/bin/tacker-rootwrap /usr/local/etc/tacker/rootwrap.conf

..
.. code-block:: ini

9). Populate Tacker database:
[DEFAULT]
auth_strategy = keystone
policy_file = /usr/local/etc/tacker/policy.json
debug = True
use_syslog = False
bind_host = <TACKER_NODE_IP>
bind_port = 9890
service_plugins = nfvo,vnfm

state_path = /var/lib/tacker
...

[nfvo_vim]
vim_drivers = openstack

[keystone_authtoken]
memcached_servers = 11211
region_name = RegionOne
auth_type = password
project_domain_name = <DOMAIN_NAME>
user_domain_name = <DOMAIN_NAME>
username = <TACKER_USER_NAME>
project_name = service
password = <TACKER_SERVICE_USER_PASSWORD>
auth_url = http://<KEYSTONE_IP>:5000
www_authenticate_uri = http://<KEYSTONE_IP>:5000
...

.. note::
[agent]
root_helper = sudo /usr/local/bin/tacker-rootwrap /usr/local/etc/tacker/rootwrap.conf
...

The path of tacker-db-manage varies according to the operating system,
e.g. it is /usr/bin/tacker-bin-manage for CentOS
[database]
connection = mysql+pymysql://tacker:<TACKERDB_PASSWORD>@<MYSQL_IP>:3306/tacker?charset=utf8
...

..
[tacker]
monitor_driver = ping,http_ping

.. code-block:: console
#. Copy ``tacker.conf`` to the ``/usr/local/etc/tacker/`` directory.

/usr/local/bin/tacker-db-manage --config-file /usr/local/etc/tacker/tacker.conf upgrade head
.. code-block:: console

..
$ sudo su
$ cp etc/tacker/tacker.conf /usr/local/etc/tacker/

10). To support systemd, copy tacker.service and tacker-conductor.service file to
"/etc/systemd/system/" directory, and restart systemctl daemon.
#. Populate Tacker database.

.. code-block:: console

sudo su
cp etc/systemd/system/tacker.service /etc/systemd/system/
cp etc/systemd/system/tacker-conductor.service /etc/systemd/system/
systemctl daemon-reload
.. code-block:: console

..
$ /usr/local/bin/tacker-db-manage \
--config-file /usr/local/etc/tacker/tacker.conf \
upgrade head

.. note::
#. To control tacker from systemd, copy the ``tacker.service`` and
   ``tacker-conductor.service`` files to the ``/etc/systemd/system/`` directory,
   and reload the systemd daemon.

Needs systemd support.
By default Ubuntu16.04 onward is supported.
..
.. code-block:: console

$ sudo su
$ cp etc/systemd/system/tacker.service /etc/systemd/system/
$ cp etc/systemd/system/tacker-conductor.service /etc/systemd/system/
$ systemctl daemon-reload
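Once the configuration is in place, you can enable and start the services,
for example:

.. code-block:: console

   $ sudo systemctl enable tacker.service tacker-conductor.service
   $ sudo systemctl start tacker.service tacker-conductor.service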

Install Tacker client
=====================
Install Tacker Client
---------------------

1). Clone tacker-client repository.
#. Clone ``tacker-client`` repository.

.. code-block:: console
.. code-block:: console

cd ~/
git clone https://github.com/openstack/python-tackerclient -b <branch_name>
..
$ cd ~/
$ git clone https://opendev.org/openstack/python-tackerclient.git -b <branch_name>

2). Install tacker-client.
#. Install ``tacker-client``.

.. code-block:: console
.. code-block:: console

cd python-tackerclient
sudo python setup.py install
..
$ cd ${HOME}/python-tackerclient
$ sudo python3 setup.py install
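You can verify that the client is installed, for example:

.. code-block:: console

   $ pip3 show python-tackerclient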

Install Tacker horizon
======================
----------------------

#. Clone ``tacker-horizon`` repository.

1). Clone tacker-horizon repository.
.. code-block:: console

.. code-block:: console
$ cd ~/